
Vibe analytics: faster than specialized tools?
What if building a data app was as simple as describing it in plain English?
AI Summary
- Vibe analytics lets AI generate data apps from plain English, speeding up development with minimal technical input.
- Open-source technologies like Python and JavaScript enable efficient, flexible, and secure app creation without vendor lock-in.
- Prototypes can be rapidly deployed, scaled, and secured on-premise, reducing costs and improving compliance.
What is vibe analytics?
Tools like Jupyter or Marimo are great for analytics, prototyping, and quick experimentation. But sometimes, it's simply faster to do “vibe analytics” - letting AI generate a working data app directly from your instructions.
Plotly's proprietary approach
Recently, Plotly introduced its idea of vibe analytics, which lets you build data apps just by instructing AI. The limitation? It only runs inside Plotly's proprietary ecosystem.
Our experiment
At LovelyData, we've been experimenting with the same idea - but using open-source technologies.
Python and JavaScript work especially well: they are efficient and, depending on the use case, require only minimal hardware.
There are also countless resources for Python and JavaScript: articles, tutorials, libraries, and examples. These technologies are well known to LLMs, so the probability that AI-generated code will run correctly is high. And when mistakes do appear, they can usually be resolved quickly.
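To make this concrete, here is a minimal sketch of the kind of Python app an LLM will typically draft from a short plain-English spec ("upload a CSV, show summary statistics, plot a chosen column"). We use Streamlit as an assumed framework here; the actual stack depends on what you ask for.

```python
# Minimal AI-draftable data app (sketch): upload a CSV, summarize it, plot a column.
# Assumes Streamlit and pandas are installed; run with: streamlit run app.py
import pandas as pd
import streamlit as st

st.title("CSV quick-look")

uploaded = st.file_uploader("Upload a CSV file", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)

    st.subheader("Summary statistics")
    st.dataframe(df.describe())

    numeric_cols = df.select_dtypes("number").columns.tolist()
    if numeric_cols:
        column = st.selectbox("Column to plot", numeric_cols)
        st.bar_chart(df[column])
    else:
        st.info("No numeric columns found to plot.")
```

Roughly twenty lines, yet already a prototype someone can open, review, and refine.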
Deployment and scalability
One strength of these technologies is that they can be packaged into containers (using Docker or Podman). For a business, this has two main benefits:
- Low-friction deployment - apps move smoothly from prototype to production without major rework.
- Scalability on demand - containerized apps can be replicated or shut down automatically, optimizing resources and controlling costs.
This means an AI-generated prototype can start as a data-driven concept, be refined for deployment, handed over to IT, and scaled as needed - all without vendor lock-in.
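To make the deployment side concrete, here is a minimal sketch of how a prototype like the Streamlit example above could be containerized. The file names, base image, and port are assumptions, not a prescribed setup.

```dockerfile
# Sketch: containerize the prototype so IT can run it anywhere Docker/Podman runs.
FROM python:3.12-slim

WORKDIR /app

# Install only what the app needs (assumed requirements.txt with streamlit + pandas).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

# Streamlit serves on port 8501 by default.
EXPOSE 8501
CMD ["streamlit", "run", "app.py", "--server.address=0.0.0.0"]
```

From there, replication is straightforward: with a compose file that defines an `app` service (an assumed setup), `docker compose up --scale app=3` runs three replicas that can sit behind whatever load balancer the team already uses.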
On-premise and security benefits
Not every organization is comfortable running data apps in the cloud, especially when sensitive data or proprietary models are involved. The approach we're testing makes it straightforward to deploy apps on-premise or in a private cloud. That includes running custom LLMs inside the same environment - keeping both data and models fully under company control (see the sketch after the list below).
For management, this translates into two clear benefits:
- Cost control - no unpredictable cloud bills for each query or app request.
- Security and compliance - data stays inside company infrastructure, reducing exposure and easing regulatory requirements.
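As a sketch of what "models under company control" can look like in practice: many self-hosted model servers (for example vLLM or Ollama) expose an OpenAI-compatible HTTP API, so an on-premise app can call a local model with a few lines of Python. The base URL and model name below are placeholders for whatever runs inside your own infrastructure.

```python
# Sketch: call a self-hosted LLM over an OpenAI-compatible endpoint on the internal network.
# The base URL and model name are placeholders; no data leaves company infrastructure.
import requests

BASE_URL = "http://llm.internal:8000/v1"   # assumed internal host running vLLM, Ollama, etc.
MODEL = "local-model"                      # whatever model is deployed on-premise

def ask(prompt: str) -> str:
    response = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize last week's sales CSV in three bullet points."))
```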
Key takeaways
What we've observed firsthand:
- Clear specifications matter. Tell the AI which technologies to use, which UI elements to include, and what the app should do (an example specification follows this list).
- This is not “no-code.” Technical and business knowledge is still needed to check, fix, and improve what the AI generates.
- With the right setup, the workflow becomes highly productive: humans write the specification, AI drafts the app, humans refine it, and the cycle repeats efficiently.
- For technically grounded teams, this can be faster than specialized tools.
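For illustration, a specification in the spirit of the first takeaway might look like this (the wording is our own example, not a template from any particular tool):

```text
Build a single-page data app in Python using Streamlit and pandas.
Features:
- A file uploader that accepts one CSV file.
- A table showing summary statistics for all numeric columns.
- A dropdown to pick a numeric column and a bar chart of its values.
Constraints:
- No external services or API calls; everything runs locally.
- Keep the app in a single app.py file so it can be containerized easily.
```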
Conclusion
We're seeing promising results - not a full replacement for specialized analytics tools, but a practical shortcut. For some teams, “vibe analytics” may accelerate the journey from idea to usable app.
Explore some examples here: LovelyData – Data Apps