Thu, July 24, 2025
Although ChatGPT appeared less than three years ago, it has already significantly changed how we behave online. More precisely, not just ChatGPT but LLMs in general: Gemini and Claude have, of course, also contributed greatly to this shift in behavior. And I believe the market hasn’t yet fully appreciated the contribution of Anthropic (the developers of Claude). After all, they were the ones who came up with the Model Context Protocol (MCP), which we’ll be discussing today.
A quick note: although MCP was originally developed by Anthropic, it is now an open standard. That means everything I describe below is supported by any of the major LLMs.
Our plan for today:
Model Context Protocol (MCP), simply put, is an open protocol that allows any LLM to communicate with any of your systems. Or in other words, as stated on the official project page, it’s a “USB-C port for AI applications.”
So essentially, it gives the LLM the ability to search for answers not in its existing training data, and not across the internet, but in a specific place that you define. For example, in Google Analytics.
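On the practical side, connecting an MCP server is mostly client-side configuration. As a rough illustration, many MCP-aware clients (Claude Desktop, for example) register servers in a JSON config like the one below; the exact file location and schema depend on the client, and the command, arguments, and credentials path here are hypothetical placeholders, not a specific product’s documented setup:

```json
{
  "mcpServers": {
    "analytics-mcp": {
      "command": "npx",
      "args": ["-y", "analytics-mcp"],
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/service-account.json"
      }
    }
  }
}
```

Once the client knows how to launch the server, the LLM can call its tools (for example, GA4 report queries) on demand, with no per-question plumbing.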
At first, this idea might seem a bit strange: “Why should I ask, say, ChatGPT something about my Google Analytics data if I can just ask it directly using the ‘Ask Chat’ function in GA4?” The very same one shown in the screenshot below:
I could give several arguments here, but as the saying goes, seeing once is better than hearing a hundred times. So let’s look at it in practice.
To understand why MCP is better — and why it’s the future — let’s try to solve a few practical tasks, both through MCP and the built-in GA4 chat.
I started with a simple question that any GA4 user can answer quickly: “What are the top 5 most viewed pages by visitors?”
We’ll compare the responses from:
And of course, I’ll add my comments and evaluations.
First, I must say the answer genuinely matched the task. Then I compared it with GA4 data, and everything aligned. The response was nicely visualized, which made it easier to process. So I can say that “Ask Chat” completed the task successfully. Subjectively, I would have preferred to see the data by Page Path instead of Page Title, but that’s a detail I could have specified in the query.
During the test, “Ask Chat” gave me data not for the most recent 7 days (July 17–23) but for its own 7-day period (July 16–22). I don’t consider that a critical issue. In real use, I hope every user specifies the desired time frame explicitly.
Now let’s try it via MCP. Here’s the response shown on the screen:
We can see it’s just a plain table, and the chatbot returned 30-day data. But the result is still accurate — and shown using Page Path, which is a plus.
In summary: both systems handled this task well. But the task was a simple one. Let’s try something more interesting.
“Analyze which traffic source/medium has the best and worst conversion rates from add-to-cart to purchase.”
This task is more challenging, as GA4 doesn’t provide a ready-made metric that answers it directly.
As seen in the screenshots, “Ask Chat” gave me a visualization and a long block of text — but not an actual answer.
Meanwhile, GPT-4o (the model I used via Copilot — though it could’ve been any other model, the key point is using MCP) did an excellent job.
It pulled data on the number of add_to_cart and purchase events and calculated the required conversion rate manually. As you can see, the numbers match GA4’s.
Don’t mind the traffic sources — this is a test project.
If you thought “this looks impressive” — I agree. But the truth is, using MCP here wasn’t entirely fair play. And no, I don’t mean I got the right answer after ten tries or manipulated anything. It’s about the technology itself: with MCP, we can add context.
If you look closely at the screenshot, you’ll notice I had a file open with detailed instructions on what to do.
Here’s the full instruction:
```yaml
task: |
  Analyze which traffic source/medium has the best and worst conversion
  rates from add-to-cart to purchase over the last 30 days using GA4
  event-level data.
steps:
  - Use analytics-mcp to get data from GA4.
  - Query GA4 event-level data to get the number of `add_to_cart` events
    grouped by `session_source_medium` within the last 30 days.
  - Query GA4 event-level data to get the number of `purchase` events
    grouped by `session_source_medium` within the same 30-day period.
  - Join both results on `session_source_medium`.
  - Calculate the conversion rate as
    `purchase_count / add_to_cart_count`
    (handle division by zero gracefully).
  - Return a table sorted in descending order of conversion rate, including:
    - session_source_medium
    - add_to_cart_count
    - purchase_count
    - conversion_rate
  - Highlight the top 3 and bottom 3 performing sources based on conversion rate.
```
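The computation those steps describe is straightforward once the two event counts are pulled from GA4. Here is a minimal Python sketch of the join and the zero-safe conversion-rate calculation; the sample counts at the bottom are made up purely for illustration:

```python
def conversion_rates(add_to_cart, purchases):
    """Join add_to_cart and purchase counts by session_source_medium and
    compute purchase_count / add_to_cart_count, guarding against
    division by zero. Returns rows sorted by conversion rate, descending."""
    rows = []
    for source_medium, atc_count in add_to_cart.items():
        p_count = purchases.get(source_medium, 0)
        # Handle division by zero gracefully, as the instruction file asks
        rate = p_count / atc_count if atc_count else 0.0
        rows.append((source_medium, atc_count, p_count, rate))
    return sorted(rows, key=lambda r: r[3], reverse=True)

# Made-up sample data keyed by session_source_medium
add_to_cart = {"google / organic": 120, "(direct) / (none)": 80, "newsletter / email": 40}
purchases = {"google / organic": 30, "newsletter / email": 16}

for source, atc, pur, rate in conversion_rates(add_to_cart, purchases):
    print(f"{source:22} {atc:>5} {pur:>5} {rate:7.2%}")
```

This is essentially what the agent did for me, except it fetched the two count tables itself through the MCP server instead of being handed them.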
That context helped my agent complete the task correctly, following a clear action plan. And MCP enabled access to GA4 data via API.
Looking at the execution flow, the model:
As you may have noticed, there were no explicit instructions to use this file in the prompt. Still, the agent leveraged the open context.
To be fair, I provided the same instruction in the prompt to GA4’s “Ask Chat” — but it failed the task.
You may have noticed “Ask Chat” in GA4 still carries a “Beta” label. Could that be the reason? Let’s test another system — and another MCP server.
Here we’ll once again compare the native chat (the one shown in the screenshot below) with the MCP-based approach. And I promise — this time there will be no preloaded or attached instruction files. Well, almost none. Keep reading to see a bit of magic ;)
This task is more advanced — we’ll be writing an SQL query. I’ll use several different prompts (with the native chat) to try and get the result I need. The goal: to get a query that returns the landing pages that generated the most revenue in June 2025.
As you might guess, both the native chat and Copilot perform well on simple tasks, so I won’t go over them here. Let’s get straight to the interesting part.
I started by describing the task as a business user might. The dataset clarification was needed only because the project contains multiple GA4 test datasets.
As you can see, I received an SQL query. Let’s take a closer look:
```sql
-- Which landing pages generated the most revenue for us in June 2025?
-- Use data from the analytics_399190266 dataset.
SELECT
  (
    SELECT ep.value.string_value
    FROM UNNEST(events.event_params) AS ep
    WHERE ep.key = 'page_location'
  ) AS landing_page,
  SUM(events.ecommerce.purchase_revenue_in_usd) AS total_revenue_usd
FROM
  `learning-platform-370121`.`analytics_399190266`.`events_20250624` AS events
WHERE
  events.event_date BETWEEN '20250601' AND '20250630'
  AND events.ecommerce.purchase_revenue_in_usd IS NOT NULL
  AND events.ecommerce.purchase_revenue_in_usd > 0
GROUP BY
  landing_page
ORDER BY
  total_revenue_usd DESC;
```
If you look closely, this isn’t really the landing page at all, but simply the page on which the purchase event happened, even though the column is labeled landing_page. (Notice too that it reads from a single day’s table, events_20250624, while the WHERE clause filters for the whole of June.)
Okay, I tried to explain the task in more detail, and as you’ll see, the query it returned immediately threw an error.
Final attempt: I added part of a sample SQL query, a session-building query that could help with part of the task. Of course, as the screenshot shows, it contains extra parameters and is only a partial query. But then, if I already had the final query, why use AI at all? So I considered it a good hint and hoped to finally get a correct result. Again, the result was an error.
At this point, I gave up on the native chat and switched to Copilot. On the first try, it wrote a query that fully matched my goal. I’d even say it sounded like something I’d write myself.
So how did it get there with no instructions?
Let’s break it down.
Although, as shown in the screenshots above, I didn’t give any direct additional instructions, I did provide guidance in another form. Take a closer look at how the request was executed.
Before sending the query to BigQuery, Copilot first scanned my local knowledge base via another MCP and found the same example query I had used earlier to try and guide the native chat. It then analyzed that example and used it to construct its response.
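This multi-server flow is, again, just configuration: the client launches every registered server, and the model decides which tools to call. A config registering both a BigQuery server and a filesystem server for the local knowledge base might look roughly like the sketch below; the BigQuery package name is an illustrative placeholder (several community BigQuery MCP servers exist), while `@modelcontextprotocol/server-filesystem` is the reference filesystem server:

```json
{
  "mcpServers": {
    "bigquery": {
      "command": "npx",
      "args": ["-y", "some-bigquery-mcp-server", "--project", "learning-platform-370121"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/knowledge-base"]
    }
  }
}
```

With both servers available, a single prompt can trigger a read of local example queries first and a BigQuery call second, which is exactly the behavior described above.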
One important note: you might notice it read several files. I had intentionally added some extra, unrelated files to test its ability to sort things out — and it passed that test successfully.
And this is exactly the key advantage of MCP: an LLM can use multiple MCP servers to complete a single task. There are no limits. The only goal is to get the job done.
And having the agent first dip into a local or even cloud-based knowledge base before doing so is no problem at all. Just mention it in the prompt.
Using MCP isn’t just better than talking to a native chat within a specific system — as you’ve seen in this article. It’s the ability to connect any system you need, or even a specific file, with an LLM. It’s the ability to add as much context as necessary. It’s the ability to automate even more routine tasks. And it’s the ability to orchestrate multiple systems through a single chat with your favorite LLM.
Of course, the technology is still new — and I wouldn’t let it delete files just yet. But for analytical tasks, where you need to ask a question in natural language and get a response based on your company’s data (read-only access, of course), it’s an excellent solution.
And don’t limit yourself to just the MCPs I’ve shown here. This link leads to a fairly extensive (though not complete) list of systems that already offer their own MCP servers. And if that’s not enough — you can always build your own, tailored to your specific needs.
Web Analyst, Marketer