
Something significant happened in AI over the last 16 months. It didn't come with a splashy product launch or a benchmark headline. It came in the form of a plumbing standard.
In November 2024, Anthropic published the Model Context Protocol. Not a product, not a model. A specification for how AI agents talk to the rest of the world. The MCP protocol was designed to be disarmingly simple: instead of every AI company building its own bespoke connector to every tool and data source, define one universal interface. Build it once on each side. Let anything plug into anything.
Sixteen months later, that bet has paid off faster than almost anyone predicted. MCP is now the de facto standard for how AI agents connect to the outside world. OpenAI adopted it. Google adopted it. The Linux Foundation now governs it. As of early 2026, the Linux Foundation confirmed over 10,000 active public MCP servers and tens of millions of monthly SDK downloads. The protocol Anthropic open-sourced on a Tuesday afternoon is now the connective tissue of agentic AI.
This article covers what MCP is, which AI agents support it today, how custom remote servers actually work, and most concretely, what Truthifi's MCP connector means for anyone who wants an AI that knows their actual financial situation rather than a hypothetical one.
What MCP is and why it solves a real problem
MCP is an open standard that lets any AI agent connect to any external tool or data source through a single, universal interface. That one sentence sounds dry. What it actually means is that the era of custom-built AI integrations is ending.
Put simply: what is MCP in AI? It is the protocol that determines whether an AI agent can access live external data or is limited to its training knowledge. And what is an MCP server? It is a lightweight service that exposes tools, data, and prompts to any compatible AI agent through a standard URL. The Model Context Protocol defines that standard interface: build one MCP server, and every compatible AI agent can use it.
Before MCP, connecting an AI assistant to an external tool meant writing bespoke code from scratch: code that spoke the tool's API, handled its authentication quirks, normalised its data formats, and broke the moment the API changed. Multiply that by every tool and every AI platform, and you get what Anthropic called an "N×M integration problem": N tools × M AI platforms = an ungovernable matrix of brittle connectors that nobody wanted to maintain.
MCP collapses that matrix to N+M. Build one MCP server for your tool, and every compatible AI agent can use it. Build one MCP client in your agent, and it can reach every compatible tool in the ecosystem.
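The arithmetic is easy to sketch with hypothetical ecosystem sizes:

```python
# Illustrative only: connector counts under bespoke integrations vs. MCP.
tools, platforms = 50, 10      # hypothetical ecosystem sizes
bespoke = tools * platforms    # every tool-platform pair needs its own connector
mcp = tools + platforms        # one server per tool + one client per platform
print(bespoke, mcp)            # 500 connectors vs. 60
```

The gap widens as the ecosystem grows: double both sides and the bespoke matrix quadruples while the MCP total merely doubles.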
The protocol runs over JSON-RPC 2.0 and supports three transport types: stdio for local processes, SSE (an older streaming standard being phased out), and Streamable HTTP, the current standard for anything running on the internet. Servers expose three types of objects: tools the AI can call, resources it can read, and prompt templates it can invoke. The AI discovers what's available, decides when to use it, and the server handles the rest.
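Under the hood, every MCP call is an ordinary JSON-RPC 2.0 message. Here is a sketch of what a tool invocation looks like on the wire; the tool name get_weather and its arguments are hypothetical, while the envelope fields (jsonrpc, id, method, params) and the tools/call method come from the protocol:

```python
import json

# Hypothetical tools/call exchange. The agent sends a request naming the tool;
# the server replies with a result whose id matches the request's.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Oslo"}},
}
response = {
    "jsonrpc": "2.0",
    "id": 1,  # must match the request id
    "result": {"content": [{"type": "text", "text": "4°C, overcast"}]},
}
print(json.dumps(request, indent=2))
```

The agent never sees transport details; whether this travels over stdio or Streamable HTTP, the message shape is the same.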
For remote servers, the ones that live on the internet rather than your laptop, MCP OAuth uses the 2.1 standard with PKCE and Dynamic Client Registration. The agent initiates the auth flow, you approve access in your browser, and tokens are issued and refreshed automatically. No API keys in config files. No secrets hardcoded anywhere. A connection you can audit and revoke.
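The PKCE half of that flow is small enough to sketch. In Python, an agent would derive a one-time verifier/challenge pair along these lines (per RFC 7636, the mechanism OAuth 2.1 makes mandatory) before opening the browser:

```python
import base64
import hashlib
import secrets

# One-time PKCE pair. The challenge goes in the authorization URL; the
# verifier is revealed only when exchanging the authorization code for tokens,
# so an intercepted code is useless on its own.
verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
challenge = (
    base64.urlsafe_b64encode(hashlib.sha256(verifier.encode()).digest())
    .rstrip(b"=")
    .decode()
)
print(len(verifier), len(challenge))  # both 43 characters, within RFC limits
```

This is why no shared secret ever needs to sit in a config file: the secret material is generated per-authorization and discarded.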
The ecosystem as of March 2026
Ten AI agents support custom remote MCP servers with native OAuth 2.1 today. Four more work with remote URLs but need a bridge package for OAuth. One major IDE, Google Antigravity, doesn't support MCP at all yet, which is worth noting given its multi-agent ambitions.
What's striking about the adoption curve isn't the speed. It's the breadth. This isn't one vendor's ecosystem. The agents that now support native remote MCP include products from Anthropic, OpenAI, Microsoft, Amazon, and multiple independent open-source projects. That consensus is what makes MCP different from every previous AI tool integration attempt. Worth noting: the open-source entries (OpenCode, Docker MCP Toolkit) often have the most thoughtful OAuth tooling, because they had to earn adoption rather than bundle it.
Agent tiers, pricing, and version requirements below are verified as of March 2026. This is a fast-moving space. Check vendor documentation for current details.
✅ Agents with native remote MCP + OAuth
Claude and Claude Desktop remain the most mature MCP client. Anthropic has been iterating on this since day one, and it shows: the Settings > Connectors panel gives non-technical users a clean UI for adding any remote MCP server by URL, full OAuth flow included. Both SSE and Streamable HTTP transports are supported. The OAuth callback URL is https://claude.ai/api/mcp/auth_callback. Token refresh is automatic. Available on Pro, Max, Team, and Enterprise plans. The free tier supports local MCP servers only via JSON config.
Claude Code, Anthropic's terminal agent, supports Claude Code MCP configuration via claude mcp add --transport http <url>. It also supports Client ID Metadata Documents for servers that don't implement Dynamic Client Registration, and enterprise-grade URL-pattern allowlists and denylists for org-level governance.
ChatGPT added custom MCP support in September 2025 via its Apps & Connectors settings. The auth requirements are strict: OAuth 2.1 and Dynamic Client Registration are both mandatory. Bearer tokens are not accepted. Developer mode (Plus plan and above) gives full read/write MCP client access.
VS Code with GitHub Copilot supports native remote MCP from version 1.101. Configure in .vscode/mcp.json or user profile settings. All Copilot tiers including the free plan can use MCP in agent mode, subject to monthly usage limits.
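As a sketch, a minimal .vscode/mcp.json entry for a remote server might look like the following; the server name and URL are placeholders, not a real endpoint:

```json
{
  "servers": {
    "example-remote": {
      "type": "http",
      "url": "https://mcp.example.com/mcp"
    }
  }
}
```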
Zed, the high-performance Rust-based editor, recently shipped native OAuth for remote MCP servers in its preview release. Servers that require authentication now show an "Authenticate" button that redirects to the provider's browser flow. No tier gating, available on all plans including free.
Kiro (Amazon Web Services' agentic IDE, which shares its MCP config format with Amazon Q Developer CLI) supports HTTP remote MCP with a native OAuth config block. Enterprise administrators can enforce org-level MCP allowlists by hosting a JSON registry file and pointing Kiro to its URL.
Amazon Q Developer CLI supports remote HTTP servers with OAuth on both its free and Pro tiers. Q CLI users are being progressively migrated to Kiro. Both tools share the same MCP config format.
OpenCode, the open-source terminal agent with 95K+ GitHub stars, has the most polished OAuth tooling of any client: dedicated CLI commands to authenticate, list status, revoke credentials, and debug connection issues. Fully free under Apache-2.0.
Docker MCP Toolkit, available in Docker Desktop since mid-2025, runs each MCP server in an isolated container with no host filesystem access by default. Authorize with one CLI command or one click in Docker Desktop. Available on all Docker Desktop plans including the free Personal tier.
Cursor supports remote MCP URLs with native OAuth: one-click install for curated servers, with a browser-based OAuth flow for any OAuth-enabled remote server. Configure in Cursor Settings > Tools & MCP. Available on the free Hobby plan and above.
⚠️ Agents that support remote URLs but need a bridge for OAuth
Windsurf, Cline, and Continue.dev handle remote MCP URLs natively but do not implement a full OAuth browser flow. They rely on the mcp-remote npm package as a bridge: it spawns a local stdio process, handles the OAuth redirect, and proxies calls to the remote endpoint. This works reliably, but it's an extra step and the tokens live locally rather than being managed by the client.
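A typical bridge configuration, sketched here with a placeholder server name and URL, wraps the remote endpoint in an mcp-remote invocation so the client sees an ordinary local stdio server:

```json
{
  "mcpServers": {
    "example-remote": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.example.com/mcp"]
    }
  }
}
```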
Gemini CLI supports MCP but its remote and OAuth support is not yet on par with Claude or Codex. It's evolving rapidly.
❌ No MCP support yet
Google Antigravity is the notable gap. The multi-agent IDE is built on Gemini 3 Ultra and has strong full-stack development capabilities, but MCP support is still on the roadmap as of March 2026.
What "custom remote MCP server" actually means
A custom remote MCP server is an internet-hosted service that exposes tools, data, and prompts to any compatible AI agent through a URL. No local installation. No SDK for the agent to bundle. Just a URL, an OAuth flow if the server is protected, and a connection that works across every compatible client simultaneously.
That last part is the point. Under the old function-calling model, a tool integration was locked to one AI platform. You built for ChatGPT, and Cursor couldn't use it. You built for Claude, and VS Code couldn't see it. MCP breaks that lock.
A custom remote MCP server is also fundamentally different from the older model in how it behaves and how it handles identity. The server is persistent and self-describing: it tells the agent what it can do without any per-client negotiation, and it maintains state across calls. And it authenticates with a principled OAuth flow rather than a shared secret sitting in a config file, which is why pasting API keys everywhere was always a bad idea even when everyone did it.
The practical result: if a service publishes an MCP server, every agent in the table below gains access to that service's data and capabilities instantly. That's the network effect Anthropic was betting on in November 2024. It has clearly arrived.
Quick reference: where every major agent stands
Verified March 27, 2026. Agent pricing and tier requirements change frequently. Check vendor documentation for current details.
The table below covers every major MCP client available today, grouped by OAuth support tier.
| Agent | Remote URL | OAuth | Min. tier | Docs / notes |
|---|---|---|---|---|
| Claude / Claude Desktop | ✅ | ✅ Native | Pro ($20/mo) | |
| Claude Code | ✅ | ✅ Native | Pro ($20/mo) | |
| ChatGPT | ✅ | ✅ Native (DCR required) | Plus ($20/mo) | |
| VS Code + GitHub Copilot | ✅ | ✅ Native | Free (v1.101+) | code.visualstudio.com/docs/copilot/customization/mcp-servers |
| Zed | ✅ | ✅ Native | Free | |
| Kiro (AWS) | ✅ | ✅ Native | Free tier | |
| Amazon Q Developer CLI | ✅ | ✅ Native | Free tier | |
| OpenCode | ✅ | ✅ Native | Free (open source) | |
| Docker MCP Toolkit | ✅ | ✅ Native | Free (Personal) | |
| Cursor | ✅ | ✅ Native | Free (Hobby) | |
| Windsurf | ✅ | ⚠️ Bridge | Free | |
| Cline | ✅ | ⚠️ Bridge | Free (open source) | |
| Gemini CLI | ⚠️ | ⚠️ Limited | Free | |
| Continue.dev | ✅ | ⚠️ Bridge | Free (open source) | |
| Google Antigravity | ❌ | ❌ | — | No MCP support yet |

The smartest money move you can make? Run a wellness check.
Truthifi® tests your finances for 100+ risks and opportunities—automatically. Unlock plain-English insights that drive smarter financial decisions today.

Where MCP is headed
The Linux Foundation's Agentic AI Foundation, co-founded by Anthropic, OpenAI, and Block, with Google, Microsoft, AWS, and Cloudflare as platinum members, now governs the protocol. That governance structure is what makes MCP sticky in a way prior standards weren't. This isn't an Anthropic feature that OpenAI could decide to abandon. It's an industry standard with multi-vendor commitments behind it, and those commitments are structurally protected from any single company's roadmap changes.
Three things are worth watching in 2026, and they're not all moving at the same speed.
The Streamable HTTP transport is replacing SSE as the standard for remote servers, and this one is urgent: if you're building a remote MCP server today and you're still targeting SSE, you're building to a deprecated spec. Most major clients will drop SSE support in the coming months. Streamable HTTP is where new builds should land.
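For a sense of what Streamable HTTP means in practice, here is a Python sketch of the request shape: a single JSON-RPC POST to the server's MCP endpoint, with an Accept header that lets the server answer with either a plain JSON body or an SSE stream. The URL is a placeholder; the request is constructed but not sent:

```python
import json
import urllib.request

# One Streamable HTTP request: the whole JSON-RPC message goes in a POST body.
body = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode()
req = urllib.request.Request(
    "https://mcp.example.com/mcp",  # hypothetical endpoint
    data=body,
    headers={
        "Content-Type": "application/json",
        # Advertise both response modes; the server picks per-request.
        "Accept": "application/json, text/event-stream",
    },
    method="POST",
)
print(req.get_method(), req.get_header("Accept"))
```

Contrast with the deprecated SSE transport, which required a separate long-lived GET stream plus a distinct POST endpoint; collapsing both onto one URL is what makes Streamable HTTP simpler to host.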
MCP Apps, the first official MCP extension, allows tool calls to return interactive UI components directly in the agent conversation. Dashboards, forms, drag-and-drop interfaces. VS Code shipped first-class support already. This matters because it changes MCP from a pure data-and-function protocol into something that can also render. The implications for complex workflows are significant.
The agent-to-agent layer is the furthest out. The Agent-to-Agent (A2A) protocol and the Agent Communication Protocol (ACP) are beginning to standardise how MCP-connected agents hand tasks to each other. When that layer matures, MCP becomes the tool layer of interconnected agent systems rather than just the tool layer of a single agent. The horizon is 2026–2027 at the earliest, but the architecture decisions being made now will determine whether that future is interoperable or fragmented again.
What this all means if you're not a developer
Most MCP coverage is aimed at the people building it: how to stand up a server, configure the transport, handle the OAuth flow. That framing has made MCP feel like infrastructure. It's more than that.
MCP is the thing that determines whether an AI can actually know anything about your specific situation, your finances, your codebase, your calendar, rather than reasoning from assumptions about what someone like you probably has. The agents are capable now. What limits them isn't intelligence. It's data. AI agent MCP integration is the layer that closes that gap.
Ask Claude about your investment fees without a data connector, and you get a discussion of what advisory fees typically look like: 0.5% to 1.5%, depends on the account size, here are some questions to ask. Useful, generic, and entirely disconnected from your actual situation. Add the right MCP connector, and Claude knows your advisor's exact fee, your account's return over the past 14 months, the benchmark your allocation most resembles, and whether you're ahead of or behind it. The conversation changes.
That's not a feature. It's a different category of AI interaction, one that only exists when the connector between the agent and your real data is built correctly.
Truthifi: the MCP connector built for your financial life
Most financial MCP servers are built for developers. They expose stock prices, earnings statements, or market data through an API that requires keys, custom configuration, and a degree of comfort with JSON config files. Useful if you're building an application. Not useful if you're an investor who wants to talk to Claude about your actual portfolio.
Truthifi approaches this differently.
Truthifi is an independent investment monitoring platform that runs 100+ continuous checks on your portfolio. Fees, performance against relevant benchmarks, concentration risk, advisor value. Results in plain English. No conflicts of interest, no product recommendations, a flat subscription fee. The goal is to give individual investors the same quality of oversight that institutional investors have always had access to.
The Truthifi MCP connector connects your live portfolio data to Claude, and to any other compatible AI agent, in under two minutes with no developer setup. Once connected, Claude can see your real holdings, your actual fees, your performance history, and the 100+ findings Truthifi has already run on your behalf, across every linked account simultaneously.
Truthifi connects to 18,000+ financial institutions, which means the connector doesn't see a single brokerage account in isolation. It surfaces the complete picture: every brokerage, every advisor relationship, every retirement account in one place, continuously refreshed.
Here's what that looks like in practice. A real multi-account portfolio connected to Truthifi, spanning TIAA, Merrill, Raymond James, and a self-directed Chase account, surfaces the following in a single session:
Total return from January 2025 through March 2026 was 13.1% against a 56/44 equity/fixed income benchmark, representing -0.7% alpha over that period. Looking at asset class breakdown, 31.3% sits in cash or cash equivalents and only 14.9% in equity, with 37.9% classified as "Other": insurance products, annuities, and complex instruments that most portfolio trackers simply ignore. That last number is the one most investors have never seen clearly.
Sector exposure in the classified holdings runs 6.8% Electronic Technology, 4.4% Finance, 3.9% Technology Services. The top five equity concentrations by weight are NVDA at 4.5%, MSFT at 3.9%, GOOGL at 3.2%, AMZN at 3.0%, and AAPL at 2.7%, a familiar large-cap tech cluster appearing across both direct holdings and fund look-throughs. Within the fund sleeve, 43.5% sits in mega-cap stocks through Nuveen growth and value index funds.
That's the data Claude sees when you ask "Am I too concentrated in tech?" or "How is my TIAA allocation performing?" It answers those questions from your actual numbers, not a hypothetical. You can read more about what those conversations look like in our full guide.
On security: the connector is strictly read-only. Your bank credentials never leave your financial institution. Claude receives a scoped token that can read data but cannot initiate transactions or withdrawals, and you can revoke access any time from your Truthifi settings. Pasting account balances into a chat window is a fundamentally different arrangement: that data enters Anthropic's conversation log and goes stale the moment your portfolio moves. The MCP approach avoids both problems.
Getting started
If you're already a Truthifi subscriber on Claude Pro or Max, the setup takes under two minutes:
1. In Claude, go to Settings > Connectors > Add custom connector.
2. Enter Name: Truthifi and URL: https://api.truthifi.com/mcp, then click Add.
After that, toggling Truthifi on at the start of any conversation gives Claude full read access to your portfolio data. There's nothing to maintain. Truthifi refreshes the underlying data automatically.
If you're not yet a Truthifi subscriber, start at truthifi.com. Connect your accounts, run the initial 100+ health checks, and then connect Claude once you have a live picture worth talking about.
The bigger picture
MCP started as a developer protocol. It's becoming something more fundamental: the difference between an AI that knows what's happening in your world and one that has to guess.
The agents are capable. The connectors exist. The question, in finance, in medicine, in law, in any domain where personal data matters, is whether the right data sources are building servers designed for individuals rather than for developers.
In financial services, Truthifi is building that server. For anyone who wants an AI that works from their actual situation, not a reasonable approximation of it.
Truthifi is an independent wealth monitoring platform. This article is for informational purposes only and does not constitute financial or investment advice. Always consult a qualified financial professional before making investment decisions.
Where MCP is headed
The Linux Foundation's Agentic AI Foundation, co-founded by Anthropic, OpenAI, and Block, with Google, Microsoft, AWS, and Cloudflare as platinum members, now governs the protocol. That governance structure is what makes MCP sticky in a way prior standards weren't. This isn't an Anthropic feature that OpenAI could decide to abandon. It's an industry standard with multi-vendor commitments behind it, and those commitments are structurally protected from any single company's roadmap changes.
MCP isn't an Anthropic feature that OpenAI could abandon. It's an industry standard with multi-vendor commitments, structurally protected from any single company's roadmap changes.
Three things are worth watching in 2026, and they're not all moving at the same speed.
The Streamable HTTP transport is replacing SSE as the standard for remote servers, and this one is urgent: if you're building a remote MCP server today and you're still targeting SSE, you're building to a deprecated spec. Most major clients will drop SSE support in the coming months. Streamable HTTP is where new builds should land.
MCP Apps, the first official MCP extension, allows tool calls to return interactive UI components directly in the agent conversation. Dashboards, forms, drag-and-drop interfaces. VS Code shipped first-class support already. This matters because it changes MCP from a pure data-and-function protocol into something that can also render. The implications for complex workflows are significant.
The agent-to-agent layer is the furthest out. The Agent-to-Agent (A2A) protocol and the Agent Communication Protocol (ACP) are beginning to standardize how MCP-connected agents hand tasks to each other. When that layer matures, MCP becomes the tool layer of interconnected agent systems rather than just the tool layer of a single agent. The horizon is 2026–2027 at the earliest, but the architecture decisions being made now will determine whether that future is interoperable or fragmented again.
What this all means if you're not a developer
Most MCP coverage is aimed at the people building it: how to stand up a server, configure the transport, handle the OAuth flow. That framing has made MCP feel like infrastructure. It's more than that.
MCP is the thing that determines whether an AI can actually know anything about your specific situation, your finances, your codebase, your calendar, rather than reasoning from assumptions about what someone like you probably has. The agents are capable now. What limits them isn't intelligence. It's data. AI agent MCP integration is the layer that closes that gap.
Ask Claude about your investment fees without a data connector, and you get a discussion of what advisory fees typically look like: 0.5% to 1.5%, depends on the account size, here are some questions to ask. Useful, generic, and entirely disconnected from your actual situation. Add the right MCP connector, and Claude knows your advisor's exact fee, your account's return over the past 14 months, the benchmark your allocation most resembles, and whether you're ahead of or behind it. The conversation changes.
That's not a feature. It's a different category of AI interaction, one that only exists when the connector between the agent and your real data is built correctly.
Truthifi: the MCP connector built for your financial life
Most financial MCP servers are built for developers. They expose stock prices, earnings statements, or market data through an API that requires keys, custom configuration, and a degree of comfort with JSON config files. Useful if you're building an application. Not useful if you're an investor who wants to talk to Claude about your actual portfolio.
Truthifi approaches this differently.
Truthifi is an independent investment monitoring platform that runs 100+ continuous checks on your portfolio. Fees, performance against relevant benchmarks, concentration risk, advisor value. Results in plain English. No conflicts of interest, no product recommendations, a flat subscription fee. The goal is to give individual investors the same quality of oversight that institutional investors have always had access to.
The Truthifi MCP connector connects your live portfolio data to Claude, and to any other compatible AI agent, in under two minutes with no developer setup. Once connected, Claude can see your real holdings, your actual fees, your performance history, and the 100+ findings Truthifi has already run on your behalf, across every linked account simultaneously.
Truthifi connects to 18,000+ financial institutions, which means the connector doesn't see a single brokerage account in isolation. It surfaces the complete picture: every brokerage, every advisor relationship, every retirement account in one place, continuously refreshed.
Here's what that looks like in practice. A real multi-account portfolio connected to Truthifi, spanning TIAA, Merrill, Raymond James, and a self-directed Chase account, surfaces the following in a single session:
Total return from January 2025 through March 2026 was 13.1% against a 56/44 equity/fixed income benchmark, representing -0.7% alpha over that period. Looking at asset class breakdown, 31.3% sits in cash or cash equivalents and only 14.9% in equity, with 37.9% classified as "Other": insurance products, annuities, and complex instruments that most portfolio trackers simply ignore. That last number is the one most investors have never seen clearly.
Sector exposure in the classified holdings runs 6.8% Electronic Technology, 4.4% Finance, 3.9% Technology Services. The top five equity concentrations by weight are NVDA at 4.5%, MSFT at 3.9%, GOOGL at 3.2%, AMZN at 3.0%, and AAPL at 2.7%, a familiar large-cap tech cluster appearing across both direct holdings and fund look-throughs. Within the fund sleeve, 43.5% sits in mega-cap stocks through Nuveen growth and value index funds.
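The alpha figure above follows from simple excess-return arithmetic, assuming alpha here means portfolio return minus benchmark return over the same window (Truthifi's exact methodology may differ):

```python
portfolio_return = 0.131   # total return, Jan 2025 through Mar 2026
alpha = -0.007             # reported alpha vs the 56/44 blended benchmark

# Under the simple excess-return definition, the benchmark's return
# over the same period is implied by the two reported figures:
benchmark_return = portfolio_return - alpha
print(f"implied benchmark return: {benchmark_return:.1%}")  # → 13.8%
```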
The Truthifi MCP connector shows Claude your actual return, alpha, equity concentrations, and sector breakdown across every linked account. Claude answers from your real numbers, not a hypothetical.
That's the data Claude sees when you ask "Am I too concentrated in tech?" or "How is my TIAA allocation performing?" You can read more about what those conversations look like in our full guide.
On security: the connector is strictly read-only. Your bank credentials never leave your financial institution. Claude receives a scoped token that can read data but cannot initiate transactions or withdrawals, and you can revoke access any time from your Truthifi settings. Pasting account balances into a chat window is a fundamentally different arrangement: that data enters Anthropic's conversation log and goes stale the moment your portfolio moves. The MCP approach avoids both problems.
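Conceptually, a read-only guarantee like this can be enforced server-side as a simple allow-list check. The sketch below is illustrative only; the tool names and scope string are hypothetical, not Truthifi's actual API.

```python
# Tools that read state only; anything not listed here is refused.
READ_ONLY_TOOLS = {"get_holdings", "get_fees", "get_performance"}

def authorize(tool_name: str, token_scopes: set) -> bool:
    """Allow a tool call only if it is a read tool AND the token
    carries the (hypothetical) read scope."""
    return tool_name in READ_ONLY_TOOLS and "portfolio:read" in token_scopes

# A read call succeeds with a valid read token; a hypothetical
# transaction tool is refused even with that same token.
authorize("get_holdings", {"portfolio:read"})    # allowed
authorize("transfer_funds", {"portfolio:read"})  # refused
```

The point of the design is that the refusal happens on the server, not in the agent: even a misbehaving or prompt-injected client cannot escalate a read token into a write.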
Getting started
If you're already a Truthifi subscriber on Claude Pro or Max, the setup takes under two minutes:
In Claude, go to Settings > Connectors > Add custom connector
Enter Name: Truthifi and URL: https://api.truthifi.com/mcp and click Add
After that, toggling Truthifi on at the start of any conversation gives Claude full read access to your portfolio data. There's nothing to maintain. Truthifi refreshes the underlying data automatically.
If you're not yet a Truthifi subscriber, start at truthifi.com. Connect your accounts, run the initial 100+ health checks, and then connect Claude once you have a live picture worth talking about.
The bigger picture
MCP started as a developer protocol. It's becoming something more fundamental: the difference between an AI that knows what's happening in your world and one that has to guess.
The agents are capable. The connectors exist. The question, in finance, in medicine, in law, in any domain where personal data matters, is whether the right data sources are building servers designed for individuals rather than for developers.
In financial services, Truthifi is building that server. For anyone who wants an AI that works from their actual situation, not a reasonable approximation of it.
Truthifi is an independent wealth monitoring platform. This article is for informational purposes only and does not constitute financial or investment advice. Always consult a qualified financial professional before making investment decisions.