
The Promise of MCP-Powered Data Workflows
At Preset, we’re making a big bet on MCP (Model Context Protocol), and it’s not just us. We’ve been hearing the same excitement from LLM providers, partners, and early adopters: this protocol has the potential to reshape how we work with data, tools, and AI. The old model of copy/pasting between browser tabs and chatbot sessions is giving way to something far more powerful: a mesh of interoperable services where LLMs can reason and act across systems on your behalf.
We're writing this post to share our vision, grounded in the kinds of real-world workflows we believe will soon be not just possible, but expected. With MCP, we’re entering a world where AI can operate fluidly across systems, tools, and services. Where users don’t just ask questions—they co-create, debug, explore, and automate in ways that would’ve been unthinkable just a year ago.
For all the hype around RAG (Retrieval-Augmented Generation), let’s be honest: it’s starting to feel like trying to clean a modern data stack with a dirty rag. It works in limited contexts, but it can’t see across tools or take action beyond the scope of a single interface. It’s constrained by what it can retrieve and by where it’s allowed to act. MCP breaks that boundary.
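To make "reason and act across systems" concrete: MCP clients and servers exchange JSON-RPC 2.0 messages, and a tool invocation is a `tools/call` request. A minimal sketch in Python; the `create_chart` tool and its arguments are hypothetical, stand-ins for whatever a BI server's MCP endpoint actually exposes:

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool on a BI server's MCP endpoint.
request = build_tool_call(1, "create_chart", {
    "dataset": "orders",
    "metric": "revenue",
    "groupby": ["region"],
})
```

The point is not the wire format itself but that any tool, on any system, becomes callable through the same envelope, which is what lets an LLM chain actions across vendors.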
Below are a series of imagined, but entirely plausible, MCP-enabled sessions. They’re grounded in today’s tools and challenges, and they show what’s coming as Preset and other ecosystem players continue to invest in this shared layer of intelligence.
1. Business Question → Insights → Dashboard Creation
Actors: Marketing Analyst, Customer-provided LLM, Snowflake MCP, Preset MCP
2. Root Cause Analysis → Visualization
3. Semantic Layer Evolution → Collaboration
As we discuss in The Semantic Layer Is Back, AI agents benefit enormously from structure. This example shows how MCP enables that.
Actors: Business Analyst, LLM, Preset MCP, Semantic Owner via Slack
age_group isn't in the current Cube semantic model, but it's available in customer_details.age_group which can easily be joined. I've reverse-engineered the semantic view and built an extended SQL version as a workaround:
Which approach would you prefer?
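The workaround the assistant proposes above, extending a reverse-engineered semantic view with a join, can be sketched as SQL generation. The `customer_details.age_group` column comes from the example; the `customer_id` join key and the base view's shape are assumptions:

```python
def extend_with_age_group(base_view_sql: str) -> str:
    """Wrap a reverse-engineered semantic view in a CTE and join in
    customer_details.age_group as an ad-hoc extension (join key assumed)."""
    return (
        "WITH base AS (\n"
        f"{base_view_sql}\n"
        ")\n"
        "SELECT base.*, cd.age_group\n"
        "FROM base\n"
        "JOIN customer_details AS cd ON cd.customer_id = base.customer_id"
    )

extended = extend_with_age_group("SELECT customer_id, revenue FROM semantic_orders")
```

The appeal of the second path in the dialogue, updating the Cube model instead, is that this join logic lives in one governed place rather than in a throwaway query.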
4. Proactive Assistance from the LLM
Actors: Data Analyst, LLM, Preset MCP
Session duration: down 18% on average
Checkout abandonment: up 45%
- Add device type and browser filters
- Include page load time metrics
- Monitor payment gateway errors
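Proactive flags like these reduce to threshold checks over metric deltas that an LLM can run via an MCP metrics tool. A minimal sketch; the metric names and the 15% threshold are illustrative, while the two deltas come from the example above:

```python
def flag_anomalies(deltas: dict, threshold: float = 0.15) -> list:
    """Return metric names whose absolute relative change exceeds the threshold."""
    return [name for name, change in deltas.items() if abs(change) > threshold]

# Relative changes: session duration down 18%, checkout abandonment up 45%.
observed = {
    "session_duration": -0.18,
    "checkout_abandonment": 0.45,
    "page_views": 0.03,
}
flagged = flag_anomalies(observed)
```

What the LLM adds on top of this trivial check is the follow-through: proposing the filters, metrics, and monitors listed above instead of just raising an alert.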
5. From SQL to Workflow
Actors: SQL-savvy Analyst, LLM, Preset MCP, dbt MCP
customer_orders_enhanced dbt model, except for a few fields you added. Want to augment the dbt model to add in:
- acquisition_channel
- geographic_region
📁 models/marts/customer_orders_enhanced.sql
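Promoting those ad-hoc fields into the dbt model, as proposed above, amounts to adding the columns to the model's SELECT. A sketch of the generated model body; the `dim_customers` source and the `customer_id` join key are assumptions, not part of the example:

```python
def augment_model_sql(extra_columns: list) -> str:
    """Sketch an augmented customer_orders_enhanced model that pulls extra
    customer attributes from an assumed dim_customers dimension table."""
    cols = ",\n    ".join(f"dim.{c}" for c in extra_columns)
    return (
        "SELECT\n"
        "    base.*,\n"
        f"    {cols}\n"
        "FROM {{ ref('customer_orders') }} AS base\n"
        "JOIN {{ ref('dim_customers') }} AS dim\n"
        "    ON dim.customer_id = base.customer_id"
    )

model_sql = augment_model_sql(["acquisition_channel", "geographic_region"])
```

With a dbt MCP in the mesh, the LLM could open this change as a pull request rather than handing the analyst a snippet to paste.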
6. Embedded LLM → Preset Extension
Actors: Embedded App User, Host LLM, Preset MCP
7. Audit Trail + Governance
Actors: Data Steward, Admin LLM, Preset MCP
8. Multi-Agent Collaboration
Actors: Product Analyst, LLM, Preset MCP, GitHub MCP, Slack MCP
9. Cross-Stack Intelligence with Expanded MCP Mesh
Actors: LLM, Preset MCP, DataHub MCP, dbt MCP, Airflow MCP, Governance MCP
10. End-to-End Debugging Across GitHub, dbt Cloud, and Superset
Actors: Data Analyst, LLM, Preset MCP, dbt MCP, GitHub MCP, Airflow MCP
Future Possibilities
- Multi-user sessions: Teams co-pilot with an LLM in shared sessions
- Audit bots: LLM agents watching for anomalous changes to critical dashboards
- Compliance watchdogs: Monitoring for PII exposure via semantic metadata + lineage tracking
- MCP search portal: Global, natural-language search over all charts, views, dashboards, and SQL snippets across the org
This is just the beginning. MCP turns chat-based AI from a reactive assistant into a proactive collaborator that understands your tools and workflows. With the right mesh of services coming online, a new era of fluid, AI-augmented work is finally within reach.
Our Role: Building the Preset MCP
At Preset, we see our role in the MCP ecosystem as building a robust, high-fidelity interface that lets LLMs do nearly everything a user can do in Preset—securely, auditably, and with full governance. Our goal is to make Preset not just LLM-compatible, but LLM-native.
We’re designing the Preset MCP to:
- Expose rich functionality across charting, dashboards, metadata, alerts, and lineage—letting LLMs build, update, and navigate Superset objects the way a power user would.
- Respect user context and security: every interaction is scoped to the user's identity and permissions, with fine-grained audit logs and safety rails in place.
- Interoperate across the stack: by using global identifiers (e.g. UUIDs) and shared metadata conventions, we enable LLMs to fluidly reference the same dashboards, metrics, and models across Preset, dbt, DataHub, and more.
- Integrate with your LLM(s) of choice: whether you’re self-hosting, using a cloud provider, or routing via a broker, our vision includes support for plug-and-play LLM connectivity with optional access controls and observability.
- Embed AI in context: our in-product chatbot knows what you're doing—it sees your dashboard, your filters, your recent error messages, your history—and it can guide, automate, or escalate with full awareness of your workflow.
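As a sketch of how the first two design goals compose: every call the LLM makes can be wrapped so it carries the user's identity and leaves an audit record, while actual permission enforcement stays server-side. All names below are illustrative, not the Preset MCP's actual API:

```python
import time

def scoped_call(user: str, tool: str, arguments: dict, audit_log: list) -> dict:
    """Build a tools/call request issued on a user's behalf and record an
    audit entry; the server enforces that user's permissions on receipt."""
    request = {
        "jsonrpc": "2.0",
        "id": len(audit_log) + 1,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    audit_log.append({"ts": time.time(), "user": user, "tool": tool})
    return request

log = []
req = scoped_call("analyst@example.com", "update_dashboard", {"uuid": "abc-123"}, log)
```

Keeping the audit trail at this choke point is what makes LLM actions reviewable in the same way human actions are.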
As governance-focused MCPs emerge (e.g. for access control, PII scanning, policy enforcement), we aim to be a clean endpoint they can interface with—logging actions, enforcing boundaries, and participating in organization-wide decision logic.
Note that with a robust MCP in place, building an in-product chatbot becomes almost trivial. The chatbot can be pre-configured and context-rich—aware of the user’s current view, filters, active SQL query, or even a recent error message. If you're in SQL Lab debugging a failed query, it already knows. If you're exploring a dataset and ask a question, it knows which one.
There are tradeoffs: the in-product bot may not be wired into your broader mesh of MCP services, or it might use a Preset-provided model instead of your org’s preferred one. But that’s the point—it’s about giving you options. Organizations can configure which LLMs are used, which users can access which assistants, and which MCP services are exposed. Whether it’s embedded, external, or both, the foundation is the same.
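One way that context awareness can work: before each turn, the host app assembles the current view state into a prompt fragment the assistant sees alongside the user's message. A minimal sketch with assumed field names:

```python
def build_chat_context(view: dict) -> str:
    """Assemble in-product state (current page, filters, last error)
    into a system-prompt fragment for the embedded assistant."""
    parts = [f"User is viewing: {view['page']}"]
    if view.get("filters"):
        parts.append("Active filters: " + ", ".join(view["filters"]))
    if view.get("last_error"):
        parts.append(f"Most recent error: {view['last_error']}")
    return "\n".join(parts)

# Example: the user is in SQL Lab and their last query just failed.
context = build_chat_context({
    "page": "SQL Lab",
    "filters": [],
    "last_error": "relation 'ordes' does not exist",
})
```

This is why "If you're in SQL Lab debugging a failed query, it already knows": the error rides into the conversation without the user pasting anything.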
We’re not trying to own the LLM. We’re trying to be the best possible service an LLM could call when it needs to understand or act within Preset.
Until All MCPs Unite
This post paints a vision where all your tools speak MCP, and your LLMs have seamless access across the full stack. That’s the future—and we’re moving toward it. But even before every system joins the mesh, there’s a ton of value in wiring up just one or two MCPs to your favorite LLM.
Even with limited context, the workflows unlocked by a single, well-integrated MCP are game-changing. In our experiments with superset-mcp, we've already seen how a focused integration can unlock rich, tool-contained use cases that feel like magic compared to the old copy/paste world. And our new sup! CLI makes these workflows accessible to anyone working in the terminal.
Here are just a few powerful examples:
- Conversational charting: generate charts from scratch using natural language—no menus, no clicks.
- Dashboard crafting & augmentation: ask to add a new metric, breakdown, or filter, and the LLM updates your dashboard on the fly.
- Workflow assistance: go straight from a prompt to the right data exploration, a prefilled SQL Lab session, or alerts and reports set up without touching a form.
- Anomaly detection & root cause analysis: LLMs can spot issues and guide users through exploration to uncover what happened and why.
- Help & explanations: ask how something works or what you’re looking at, and get a clear, contextual answer.
- Semantic search: find the chart, dashboard, or dataset you saw last week—even if you forgot the name. Ask open-ended questions like “Do we have any reliable dashboards around customer satisfaction?” and let the LLM surface the most relevant content with a reliability assessment based on the related metadata.
- Onboarding assistant: walk new users through your workspace, highlight what matters, and help them self-serve insights from day one.
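The semantic-search idea above, relevance combined with a metadata-based reliability assessment, can be sketched as a simple weighted ranking. The scoring scheme and asset fields are illustrative, not how superset-mcp actually scores results:

```python
def rank_assets(query_terms: set, assets: list) -> list:
    """Rank assets by keyword overlap with the query, weighted by a
    reliability score derived from metadata (e.g. certification, recency)."""
    def score(asset: dict) -> float:
        overlap = len(query_terms & set(asset["tags"]))
        return overlap * asset.get("reliability", 0.5)
    return sorted(assets, key=score, reverse=True)

dashboards = [
    {"name": "CSAT Overview", "tags": ["customer", "satisfaction"], "reliability": 0.9},
    {"name": "Ad-hoc NPS scratchpad", "tags": ["satisfaction"], "reliability": 0.2},
]
results = rank_assets({"customer", "satisfaction"}, dashboards)
```

In practice the LLM does the fuzzy matching itself; the MCP's job is to surface the candidate assets and the metadata that feeds the reliability judgment.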
Even one good MCP turns your data app into something far more powerful: a programmable, assistive surface. And the more MCPs join the mesh, the more that surface becomes a canvas for true cross-system reasoning and automation.
Related Reading
- AI in BI: the Path to Full Self-Driving Analytics — Where AI fits in analytics workflows today and tomorrow
- Building Better BI Chatbots — How context-aware AI assistants transform data exploration
- Meet 'sup!: Superset's New CLI — The command-line tool enabling AI agents to work with Superset
- Building Preset AI Assist — How we built our text-to-SQL solution using RAG