Skills vs MCP vs Plugins: AI Agent Ecosystem Explained
Skills provide knowledge. MCP provides connectivity. Plugins provide distribution. Learn how the three layers of the AI agent stack work together in 2026.
Every AI company spent 2024 inventing its own way for agents to talk to outside services. The result was predictable: incompatible plugins, proprietary APIs, integrations that broke whenever somebody shipped an update. A mess. A total, stupid mess.
We're in a better place now. But the cleanup left behind three terms that everyone uses interchangeably even though they mean completely different things: skills, MCP, and plugins.
I get why they're confusing. They all live in the "how do I make my AI agent do more stuff" category. But they solve different problems, and if you pick the wrong one for your situation you'll waste a couple of months building something that doesn't quite work. I've watched teams do this. It's painful.
So here's the actual breakdown: what each one does, where one ends and the next begins, how they fit together, and which one you should probably start with. Most teams don't need all three on day one, by the way.
Skills: it's literally a markdown file
I love how underwhelming the explanation is. A skill is a folder. Inside that folder is a SKILL.md file with instructions. Maybe some reference files and scripts too. That's the whole thing. It teaches an AI agent how to do a specific task consistently, the same way every time.
Think of it like an SOP you'd hand to a new hire. You don't want your quarterly reports to look different every time someone writes one. You want a template, a checklist, standards. Skills give agents that same repeatability.
How the loading works (this part is actually clever)
The interesting bit isn't the concept, it's the architecture underneath. Skills use something called progressive disclosure, which is a fancy way of saying "don't dump everything into the context window at once."
When a session starts, the agent loads just the names and one-line descriptions of available skills. Tiny. A few tokens each. You can have 500 skills registered and the agent barely notices.
When a skill actually gets triggered, the full SKILL.md loads in. All the step-by-step procedures, formatting rules, quality checks, examples. And if the skill includes scripts or templates, those only load when the agent starts executing.
So you get this nice layered thing where the agent knows about hundreds of capabilities but only the relevant one takes up context space. Pretty elegant for what amounts to a folder with a markdown file.
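The layering is easy to mimic in code. Here's a rough Python sketch of the idea; the class and function names are mine, not from the spec:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A skill on disk: cheap metadata plus a lazily used body."""
    name: str
    description: str              # one line; always in context
    body: str                     # full SKILL.md; read only on trigger
    scripts: dict = field(default_factory=dict)  # loaded only at execution

def session_preamble(skills):
    """Level 1: only names + one-liners go into the context window."""
    return "\n".join(f"- {s.name}: {s.description}" for s in skills)

def activate(skill):
    """Level 2: the full SKILL.md enters context when the skill triggers."""
    return skill.body
```

Register 500 skills and the preamble stays a few hundred tokens; the expensive part only loads for the one skill that fires.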
It became a real standard, somehow
Anthropic open-sourced the skills format in late 2025 at agentskills.io. The spec is tiny. You can read it in a few minutes.
What surprised me was how fast everyone else adopted it. Microsoft put it in VS Code and Copilot. OpenAI built the same thing into ChatGPT and Codex. Cursor, Goose, Amp picked it up. By early 2026 there were 34,000+ published skills floating around various aggregator sites. For something that started as "a folder with a markdown file," that's kind of wild.
When to use them
Skills are for when the problem is quality, not connectivity. Your PRDs keep coming out inconsistent. Your financial reconciliations need GAAP formatting and the agent keeps winging it. Your brand guidelines exist but nobody follows them and you want the agent to actually internalize them.
What skills can't do is connect to anything. A skill that teaches an agent to write SQL is great. But it can't run that SQL against your database. That's a different layer entirely.
MCP: the part that connects to everything
MCP stands for Model Context Protocol, and it's how agents talk to the outside world. Open standard. One protocol for connecting AI agents to tools, APIs, databases, services — whatever.
Here's the problem it replaced. Before MCP, if you wanted your AI assistant to talk to Jira, Slack, your Postgres database, GitHub, and some analytics dashboard, you needed five separate integrations. Each with its own auth, error handling, serialization. And every other AI app that wanted the same connections had to build its own versions too. Ten apps, ten services, a hundred integrations. Developers call it the N × M problem.
MCP collapses that to N + M. Ten servers plus ten clients. One protocol. Done.
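The arithmetic in that last paragraph, as a one-function sketch:

```python
def integrations_needed(apps: int, services: int, shared_protocol: bool) -> int:
    """Without a shared protocol, every app pairs with every service (N x M).
    With one, each side implements the protocol exactly once (N + M)."""
    return apps + services if shared_protocol else apps * services
```

At ten apps and ten services that's 100 integrations versus 20, and the gap only widens as the ecosystem grows.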
The USB-C thing
Yeah, everyone compares MCP to USB-C. Because it works. Before USB-C you had Micro-USB, Lightning, Mini-USB, and a drawer full of cables that each fit exactly one device. USB-C made the connector universal. MCP does the same for AI-to-service communication. JSON-RPC 2.0 under the hood. Write your server once, any client can use it.
How the server-client stuff works
In practice it goes like this.
A developer writes an MCP server that wraps some service. Project management tool, database, whatever. The server defines tools — create_task, list_projects, assign_user — with input schemas. It can also declare resources, which is structured data the agent can read.
On the other side, the AI host app (Claude, VS Code, ChatGPT) runs an MCP client. The client finds available servers, reads their tool definitions, and makes them available to the agent.
When the agent decides it needs to, say, create a Jira ticket, it calls the tool with the right parameters. Client routes it to the server. Server hits the Jira API. Results come back. Same dance regardless of which service is on the other end.
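A stripped-down Python sketch of that round trip. The real protocol envelope has more to it (error objects, capability negotiation, resource listing), so treat the registry and handler here as illustrative, with `create_task` borrowed from the example above:

```python
import json

def create_task(args):
    # In a real server this would hit the project tracker's API.
    return {"id": 101, "title": args["title"]}

# Hypothetical registry: tool name -> handler plus the input schema
# the client advertises to the agent.
TOOLS = {
    "create_task": {
        "handler": create_task,
        "schema": {"type": "object", "required": ["title"]},
    },
}

def handle(request_json: str) -> str:
    """Route one JSON-RPC 2.0 tools/call request to the right tool."""
    req = json.loads(request_json)
    assert req["method"] == "tools/call"
    tool = TOOLS[req["params"]["name"]]
    result = tool["handler"](req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

Swap `create_task` for a database query or a Slack message and the dispatch logic doesn't change; that's the whole appeal.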
Everyone adopted it. Like, everyone.
OpenAI: ChatGPT and Agents SDK, March 2025. Google: Gemini, Android Dev Kit, Cloud databases. Microsoft: VS Code, Copilot, Agent Framework. AWS: Bedrock and AgentCore.
By February 2026, 10,000+ active MCP servers, 97 million monthly SDK downloads. Anthropic donated the whole thing to the Linux Foundation. It's not really "Anthropic's protocol" anymore in any meaningful sense.
I was talking to a backend engineer at a mid-sized SaaS company, and he told me this story that I think about a lot. His team spent three months building custom integrations between their AI assistant and five tools: Jira, Slack, Postgres, GitHub, internal dashboard. Five Python wrappers, each with its own auth flow, error handling, serialization. Then Jira updated their API and the whole thing broke. Two days to fix one integration.
They switched to MCP servers. Cut the integration code by 70%. Now when something breaks it's an hour, not two days. I asked him what the biggest difference was and he said "I stopped dreading API changelogs." Which, honestly, might be the best endorsement of any protocol I've heard.
MCP Apps: okay this is cool
January 2026 brought MCP Apps, and this one actually made me sit up. MCP tools can now return interactive UI components that render inside the conversation. Charts. Forms. Dashboards. Multi-step workflows. Right there in the chat window.
Before this, you'd ask an agent to analyze some data and get back a markdown table. Fine, whatever. Now you get an interactive chart where you can filter date ranges, toggle metrics, drill into segments. All without opening a new tab or copy-pasting into a spreadsheet.
The tech side is sandboxed iframes with JSON-RPC between the UI and the host. Claude, ChatGPT, VS Code, Goose, and JetBrains have all shipped or are building support.
I keep going back and forth on whether this is "a nice feature" or "a fundamental shift in how agents work." On one hand, it's just tools rendering HTML. On the other, it means your agent conversation can now contain a working budget allocator, a 3D map, a customer segmentation dashboard. That feels like a different kind of interaction than "agent sends text, you read text."
Plugins: the packaging layer (this is where people get confused)
Plugins are bundles. That's it.
Skills give knowledge. MCP gives connectivity. Plugins take some combination of skills, MCP servers, commands, and agents, wrap them up, and make the whole thing installable with one click. Packaging and distribution.
The old way was bad
If you tried ChatGPT plugins when OpenAI launched them in 2023, you saw the first attempt. Developers could build extensions that gave ChatGPT access to Expedia, Wolfram Alpha, whatever. But each plugin only worked in ChatGPT. Proprietary format. Required OpenAI's approval. Zero portability. Discovery was so bad that most users never found anything beyond the top 20.
Want the same integration in Claude? Build it again from scratch. Copilot? Again. The whole thing was kind of doomed.
OpenAI killed the original plugin system in favor of GPTs and eventually MCP-compatible stuff. But I think the real lesson was simpler: locking integrations to one platform doesn't work. Nobody wants to maintain five versions of the same tool, and users don't want to relearn their setup every time they switch providers.
The new way is better
Modern plugins are closer to VS Code extensions or browser extensions. Bundled, versioned, installable. A plugin might include:
- A couple of related skills (SEO writing, keyword research, technical audits)
- An MCP server for a specific API (Google Search Console)
- Some slash commands
- Maybe a specialized subagent
You install the plugin, everything lands where it needs to go. Skills register. MCP server connects. Commands appear. It just works. That's the whole point.
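Here's what "everything lands where it needs to go" implies structurally. Since plugin formats still differ between platforms, this layout is purely illustrative; every file name below is made up:

```
seo-content-suite/
├── plugin.json               # name, version, what's inside
├── skills/
│   ├── seo-writing/SKILL.md
│   └── keyword-research/SKILL.md
├── mcp/
│   └── search-console.json   # server config: command, auth
└── commands/
    └── weekly-report.md      # slash command definition
```

The installer walks the bundle and registers each piece with the right layer. The user sees none of it.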
Some real examples because abstract descriptions are annoying
Let me make this concrete.
A "Sales Intelligence" plugin for a revenue team. Ships with: a skill for writing call prep briefs, another for post-call summaries, an MCP server hooked into the CRM and call recording platform, a command that pulls your calendar and auto-generates prep materials each morning. The sales rep installs one thing and their pre-call prep drops from 20 minutes to maybe 2. They don't know what's inside the box and they'd rather not.
A "Code Review" plugin for an engineering team. Bundles the team's review standards as a skill, an MCP server wired to GitHub for PR access, a subagent that flags security issues, and some convenience commands. The promise is that a new hire can review code with the rigor of a senior engineer from day one. Reality check: they'll still miss things. But fewer things, and the security subagent catches stuff that even experienced reviewers skip when they're tired on a Friday afternoon.
A "Marketing Analytics" plugin for a content team. Skill that knows how to structure a weekly performance report in the CMO's preferred format. MCP servers pulling from Google Analytics, Search Console, and the email platform. Slash command that generates the report every Monday morning. The marketing coordinator used to spend half a day on this. Now it's a one-liner plus 15 minutes of review.
Okay so what's actually different
This is the part where I see eyes glaze over so I'm going to be really blunt.
Skills are the brain. They're knowledge. "Here's how to do this thing well." They can't connect to anything.
MCP is the hands. It grabs data, pushes buttons, talks to APIs. It doesn't know or care whether the output is good.
Plugins are the packaging tape. They bundle brains and hands together so someone who isn't technical can install a working workflow and get on with their life.
Who makes each one
Skills: domain experts. The PM who knows the PRD template, the accountant who knows the close process, the content person who knows the SEO checklist. It's markdown. If you can write a Google Doc, you can write a skill.
MCP servers: developers. You're writing Python or TypeScript, wrapping APIs, handling auth. Real code.
Plugins: someone who gets both layers plus packaging and distribution. Writing a song vs. building a recording studio vs. producing an album and getting it on Spotify. Different jobs.
Standardization status, since people ask
MCP: the gold standard (pun intended). Open from day one. Linux Foundation. SDKs in every language. Every major AI company on board.
Skills: close behind. Started Claude-only, open-sourced as agentskills.io in late 2025. Microsoft, OpenAI, and others adopted it.
Plugins: still a bit of a mess. Formats differ between Claude Code, Copilot, and ChatGPT. Getting better. Not there yet.
The layer cake
Plugins → packaging and distribution
Skills → domain knowledge and procedures
MCP → connectivity to external stuff
Agent runtime → the LLM doing its thing
Each layer builds on the ones below. None of them replaces the others.
Here's why it matters in practice. Sarah runs content marketing at a B2B SaaS startup. Doesn't code. Doesn't want to. She installed an "SEO Content Suite" plugin from her team's internal marketplace. That plugin has a skill with her company's content standards baked in, an MCP server that grabs keyword data from Ahrefs, and a slash command that kicks off the whole workflow. One command, three minutes, she's got a keyword-researched, brand-consistent blog draft. She has no idea three extensibility layers are involved, and honestly, she'd probably be annoyed if you tried to explain it to her.
Mixing and matching
Here's how they actually get used together.
Skill + MCP server is the most common combo. "Monthly Financial Close" skill knows the steps: journal entries, reconciliations, variance analysis. MCP servers connect to the ERP and bank. Skill says what to do. MCP gets the data. They're a team.
Plugin wrapping everything is for distribution. "Product Management Suite" ships with skills for PRDs, sprint planning, research synthesis, and metrics. MCP servers for Jira, Linear, Mixpanel. One install, full workflow. The PM doesn't have to think about it.
MCP server solo works for simple stuff. Google Drive search, for instance. The agent already knows how to search. It just needs the connection. No skill necessary. Honestly this is where most teams start. Hook the agent up to Slack or your database, see how it handles basic queries. The limits show up when you start caring about how well the agent does the work, not just whether it can access the data. That's when you add a skill.
Skill solo is for when you need expertise but no external data. Cold email frameworks. Brand voice guidelines. Content formatting standards. It's also the easiest way to start if you've never built any of this before. Open a markdown file, describe the process, add some good-output examples, save it. You'll see results immediately and start building intuition for what agents can do with pure instruction.
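Here's roughly what that first skill looks like, following the agentskills.io shape: YAML frontmatter with a name and one-line description, then the procedure in plain markdown. The content below is invented for illustration:

```markdown
---
name: cold-email-framework
description: Writes cold outreach emails following our three-touch framework
---

# Cold Email Framework

## Process
1. Open with a specific, researched observation about the prospect.
2. One sentence on the problem we solve. No feature lists.
3. End with a low-friction ask (15 minutes, not "a demo").

## Good example
> Saw your team just shipped the new billing flow. Most teams at that
> stage start losing deals to invoice disputes...
```

The description line is what the agent sees at session start; everything below it only loads when the skill fires.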
Picking the right one
Don't overthink this.
Your agent's output quality sucks? The reports are sloppy, the tone is wrong, the formatting is inconsistent? Skill. Write a SKILL.md this afternoon. Seriously, you can do it in an hour.
The agent needs data from somewhere else? Needs to read from a database, create tickets, send messages, interact with your tool stack? MCP. Get a developer to write a server (or find one of the 10,000+ that already exist).
You've got skills and MCP servers that work well and you want other people on your team to use them without a 12-step setup guide? Plugin. Package it. One-click install. The adoption rate difference between a README and a plugin is night and day.
Where this goes from here
Some things I'm paying attention to.
MCP Apps are interesting. Now that tools can spit out interactive UIs, the line between "tool" and "app" starts dissolving. A skill orchestrating a workflow that includes a rendered dashboard, a user input form, data processing, and interactive visualization, all inside a chat window? That's not a tool call. That's closer to an application. Nobody's figured out the full implications yet. I'm watching.
Portability actually works now. A skill for Claude runs in VS Code, Copilot, Codex, anything implementing agentskills.io. An MCP server works with any MCP host. Eighteen months ago this wasn't true. The formation of the Agentic AI Foundation (Anthropic, OpenAI, Block) suggests the big players want to keep it this way. Good.
Subagents and hooks exist too. Subagents are specialist workers that handle isolated subtasks. Agent hits a security review step, delegates to a security-focused subagent, gets findings back, continues. Hooks are event triggers at specific lifecycle points — run a linter before a commit, log output to analytics after a response. Neither one competes with skills/MCP/plugins. They're extra tools in the box.
Plugin marketplaces are showing up. Think app stores but for agent capabilities. Developers publish workflows, organizations curate approved plugins for their teams. Skills and MCP are the raw materials. Plugins are the products people browse and install.
Enterprises are splitting it by team. IT owns MCP servers (they already manage integrations). Domain teams write skills (they know the processes). Platform teams package plugins and manage the internal catalog. Everyone stays in their lane. It works because nobody has to learn all three layers. The finance team writes markdown. IT writes Python. Platform handles packaging.
Mistakes that keep happening
I've seen these enough times to just list them.
Building an MCP server when the real problem is that your agent writes garbage. If output quality is the issue, you need a skill. A data connection won't fix bad formatting.
Writing one 3,000-word megaskill instead of five focused ones. The context window has limits. Your agent will get confused. Break it up.
Emailing your team a folder with a setup guide instead of packaging a plugin. Nobody does the 12-step configuration. Everyone installs a one-click plugin. Every single time.
Treating MCP servers as permanent. They wrap APIs. APIs change. If you don't build in versioning and health checks, you're signing up for emergency debugging at 2am.
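One cheap defense is a version guard: check what the wrapped API reports before calling it, so a breaking upstream change fails loudly with a clear message instead of mysteriously at 2am. A Python sketch; the function names and version scheme here are assumptions, not from any real SDK:

```python
def call_with_version_guard(fetch_version, do_call, supported_majors):
    """Probe the wrapped API's version before calling it.

    fetch_version:    callable returning the API's version string, e.g. "3.2.1"
    do_call:          the actual API call to make if the version checks out
    supported_majors: set of major versions this server was built against
    """
    major = fetch_version().split(".")[0]
    if major not in supported_majors:
        raise RuntimeError(
            f"API v{major} not supported (built for {sorted(supported_majors)})"
        )
    return do_call()
```

Now a Jira-style breaking change surfaces as one readable error on the next call, not a cascade of malformed responses you have to trace backwards.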

AI Agent & RAG Developer with 10+ years of software engineering experience. Specialized in intelligent AI solutions for enterprises in the DACH & Nordic region.