Journal of Distributed Software Engineering, Architecture and Design
APIs, MCP, and Skills for Agent Tool Use
<p class="wp-block-paragraph">When I started going deeper into agent design, I thought the hard problems would be memory, reasoning, and model choice. They were not. The first real design problem was much more practical – <strong>How should the agent use tools, and how do I keep that simple, extensible, and not full of slop?</strong></p>
<p class="wp-block-paragraph">I had three options for enabling agent tool use – raw APIs, MCP, and Agent Skills.</p>
<p class="wp-block-paragraph">And after working through those layers, the mental model I landed on was this – <strong>APIs expose business capabilities, MCPs expose agent-usable tools, and Skills orchestrate the work.</strong></p>
<p class="wp-block-paragraph">This post is a simple write-up of how I got there.</p>
<h2 class="wp-block-heading">The question that started it</h2>
<p class="wp-block-paragraph">A simple question kept coming up in my head – <strong>What is the best technology for agent tool use: APIs, MCP, Skills, or something else?</strong></p>
<p class="wp-block-paragraph">At first glance, they can look similar. They all help an agent do work. They all connect the model to something outside itself. They all sound like part of the same pattern, but they do different jobs, and that distinction matters a lot once you start building real agent workflows.</p>
<h2 class="wp-block-heading">The first thing I noticed: agents need tools, but not all tool patterns are equal</h2>
<p class="wp-block-paragraph">An agent without tools is limited. It can reason over text, but it cannot safely and reliably do much in the real world unless it can read information, call systems, execute steps, validate outputs, work across multiple sources, and so on.</p>
<p class="wp-block-paragraph">So the real question becomes<strong> how do you expose external capability to the agent in a way that is clean and scalable?</strong></p>
<p class="wp-block-paragraph">This is where the journey usually starts.</p>
<h2 class="wp-block-heading">Stage 1: We let the agent use APIs directly</h2>
<p class="wp-block-paragraph">The most obvious answer is <strong>give the agent an API spec and let it call the API</strong>. This sounds reasonable: if an agent can read an OpenAPI file, Swagger document, GraphQL schema, or some other interface description, then surely it should be able to call the system directly.</p>
<p class="wp-block-paragraph">And yes, technically it can, and for simple cases this works. But this was the first place where the gap between theory and practice became obvious. Direct API use is harder than it looks: if the agent is calling APIs directly, it has to understand the system interface itself. That means it needs to reason about things like endpoint paths, parameters, authentication, schemas, pagination, error handling, rate limits, workflow order, and so on.</p>
<p class="wp-block-paragraph">While an API spec can help with some of that, reading an API spec is not the same as reliably operating a real system, and the problems show up quickly.</p>
<p class="wp-block-paragraph">There are several challenges with using API specs directly, including incomplete specs, syntax without operational meaning, weak recovery and error semantics, real-world authentication complexity, excessive provider-specific surface area for the agent to reason over, and stale or divergent documentation versus the live implementation.</p>
<p class="wp-block-paragraph">The API spec might tell the agent what can be called, but it often does not tell the agent how to use that interface safely and consistently in a messy real-world environment.</p>
<p class="wp-block-paragraph">An endpoint may exist, but the actual business logic around that endpoint may still be unclear. The documentation may say one thing, but the live implementation may behave differently. The authentication story may look simple in the spec but become complex in practice. Error handling may not be explicit, the order of steps may matter, and the retry logic may be dangerous.</p>
<p class="wp-block-paragraph">At that point, the agent is no longer just consuming a clean interface. It is acting like an integration developer that has to work out provider-specific details on the fly and that becomes brittle very quickly.</p>
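<p class="wp-block-paragraph">To make that concrete, here is a minimal sketch of the kind of pagination-plus-retry plumbing an agent would have to carry for every API it calls directly. The provider, its cursor scheme, and the endpoint are entirely hypothetical, and the transport is stubbed with a dictionary instead of real HTTP:</p>

```python
import time

# Stubbed transport standing in for HTTP GETs against a hypothetical
# /v1/orders endpoint with cursor-based pagination.
PAGES = {
    None: {"items": [1, 2], "next_cursor": "p2"},
    "p2": {"items": [3], "next_cursor": None},
}

def fetch_page(cursor):
    """Stand-in for a real network call; returns one page of results."""
    return PAGES[cursor]

def list_all(max_retries=3, backoff_s=0.0):
    """Walk the pagination with retry and backoff: logic the agent would
    otherwise have to re-derive, per provider, from the spec alone."""
    items, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                page = fetch_page(cursor)
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise  # give up after the final attempt
                time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
        items.extend(page["items"])
        cursor = page["next_cursor"]
        if cursor is None:
            return items

print(list_all())  # → [1, 2, 3]
```

Every provider has its own variant of this loop – different cursor names, different error envelopes, different retry rules – which is exactly the surface area you do not want the model reasoning about on the fly.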
<figure class="wp-block-image size-full"><a href="https://alok-mishra.com/wp-content/uploads/2026/03/Agent-API.png"><img src="https://alok-mishra.com/wp-content/uploads/2026/03/Agent-API.png" alt="" class="wp-image-3206"/></a><figcaption class="wp-element-caption"><em>Direct API use works, but it pushes too much provider-specific complexity into the agent.</em></figcaption></figure>
<p class="wp-block-paragraph">Direct API use by an agent does not scale cleanly. If the agent needs to use ten different APIs, then it has to understand ten different sets of shapes, conventions, auth models, and quirks.</p>
<p class="wp-block-paragraph">What we want is to stop forcing the agent to understand every provider interface directly, and this leads to MCP.</p>
<h2 class="wp-block-heading">Stage 2: Use MCP as the tool layer</h2>
<p class="wp-block-paragraph">The second step in my mental model was understanding MCP properly. <strong>An agent calling an API directly means the agent understands systems. An agent calling MCP means the agent understands tools.</strong></p>
<p class="wp-block-paragraph">That is the real shift. Instead of giving the agent raw system contracts and expecting it to reason through each one, MCP gives the agent a more standardised tool-facing interface. The agent no longer needs to know every API in detail. It can work with named tools and structured inputs.</p>
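<p class="wp-block-paragraph">As a sketch of what “named tools and structured inputs” means in practice, here is a toy tool registry in plain Python. This is not the real MCP SDK – the tool name, schema shape, and handler are all invented – but it shows the flat, uniform call surface the agent sees instead of a raw API:</p>

```python
# Toy tool layer (illustrative only, not the MCP protocol itself).
TOOLS = {}

def tool(name, input_schema):
    """Register a handler under a tool name with a declared input shape."""
    def register(fn):
        TOOLS[name] = {"schema": input_schema, "handler": fn}
        return fn
    return register

@tool("search_orders", {"customer_id": "string"})
def search_orders(args):
    # The handler, not the agent, knows about endpoints, auth, paging.
    return [{"order_id": "A-1", "customer_id": args["customer_id"]}]

def call_tool(name, args):
    """What the agent works with: name + structured input, nothing else."""
    entry = TOOLS[name]
    missing = set(entry["schema"]) - set(args)
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return entry["handler"](args)

print(call_tool("search_orders", {"customer_id": "c42"}))
# → [{'order_id': 'A-1', 'customer_id': 'c42'}]
```

The point of the sketch is the shape of the boundary: the agent supplies a tool name and structured arguments, and all provider-specific detail lives behind the handler.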
<p class="wp-block-paragraph">I now think of MCP as sitting mostly on the <strong>provider side</strong>. That does not mean it is the provider service itself. It means it is the provider-facing gateway or adapter that exposes systems, tools, and data in a way that is easier for agents to consume.</p>
<p class="wp-block-paragraph">So if I describe the layers simply:</p>
<ul class="wp-block-list">
<li>the <strong>API</strong> is the business or system contract</li>
<li>the <strong>MCP</strong> is the agent-facing tool gateway over that capability</li>
</ul>
<p class="wp-block-paragraph">That framing made MCP make much more sense to me.</p>
<h3 class="wp-block-heading">Why MCP is better than raw API use</h3>
<p class="wp-block-paragraph">MCP is better than raw API use because it provides standardised agent-facing tool access, abstracts provider-specific API complexity, and encapsulates authentication, schemas, retries, pagination, and version differences. That enables safer, more consistent, and reusable capability exposure across agents. But it still lacks the higher-level task orchestration, multi-step execution method, and cross-tool workflow coordination that a Skill provides.</p>
<p class="wp-block-paragraph">That last part is important: MCP improves access. It improves abstraction. It improves consistency. But it does not automatically give you a reusable working method.</p>
<p class="wp-block-paragraph">It exposes capabilities. It does not orchestrate the work.</p>
<h2 class="wp-block-heading">But where do MCP servers run?</h2>
<p class="wp-block-paragraph">Another question that naturally came up was – <strong>Where do MCP servers actually run?</strong></p>
<p class="wp-block-paragraph">MCP servers are normal services that run outside the agent and sit between the agent runtime and the underlying systems. They can run locally on a developer machine, in an enterprise environment, in cloud infrastructure, inside a broader integration layer, or near the systems they expose.</p>
<p class="wp-block-paragraph">The key point is that the MCP server absorbs complexity that you do not want the agent to carry itself, and that can include tool-to-API mapping, authentication handling, schema translation, pagination, retries, version differences, and workflow logic. Sounds a lot like the <strong>consumer adapter pattern</strong>, right?</p>
<p class="wp-block-paragraph">This is one of the strengths of MCP: as a consumer adapter over the API, it gives the agent a more stable and agent-friendly access layer.</p>
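<p class="wp-block-paragraph">The consumer adapter idea can be sketched in a few lines. Both classes here are hypothetical – the “legacy” client stands in for a real provider SDK with its own naming and envelope conventions, and the adapter is the agent-facing layer that normalises them:</p>

```python
class LegacyCrmClient:
    """Stand-in for a provider SDK with its own quirks: camelCase,
    cryptic field names, and an odd response envelope."""
    def getCustRecs(self, qry):
        return {"d": {"results": [{"CustID": qry["id"], "Nm": "Ada"}]}}

class CrmAdapter:
    """Agent-facing adapter: stable names and a flat, predictable shape.
    Provider quirks are absorbed here, not in the agent."""
    def __init__(self, client):
        self.client = client

    def get_customer(self, customer_id: str) -> dict:
        raw = self.client.getCustRecs({"id": customer_id})
        rec = raw["d"]["results"][0]  # unwrap the provider's envelope
        return {"customer_id": rec["CustID"], "name": rec["Nm"]}

adapter = CrmAdapter(LegacyCrmClient())
print(adapter.get_customer("c7"))  # → {'customer_id': 'c7', 'name': 'Ada'}
```

If the provider ships a v2 with different field names, only the adapter changes; the agent-facing surface stays put, which is exactly the stability MCP is buying you.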
<p class="wp-block-paragraph">But I also found a practical limitation – writing MCP servers, especially local ones, can become slow. You start with good intent, you want a clean standardised interface and you want to make tools discoverable and reusable – but if every new problem leads to another MCP wrapper, another local server, another setup step, another config surface, and another round of testing, you can end up creating friction and slop in a different place.</p>
<p class="wp-block-paragraph">That was the moment where I started appreciating Skills much more.</p>
<figure class="wp-block-image size-large"><a href="https://alok-mishra.com/wp-content/uploads/2026/03/Agent-MCP.png"><img src="https://alok-mishra.com/wp-content/uploads/2026/03/Agent-MCP-1024x599.png" alt="" class="wp-image-3213"/></a><figcaption class="wp-element-caption"><em>MCP is cleaner than raw API use, but it still solves capability exposure more than execution orchestration.</em></figcaption></figure>
<h2 class="wp-block-heading">Stage 3: Skills as the execution layer</h2>
<p class="wp-block-paragraph">This is the part that made the whole stack feel more practical for me, in late 2025/early 2026. What I needed was not just access to systems but a repeatable way for the agent to perform work properly. That is where Skills became useful.</p>
<p class="wp-block-paragraph">I now see Skills mostly on the <strong>client or consumer side</strong>. They sit close to the agent runtime, though they are not their own runtime, and they help the agent decide how to execute a task. In simple terms, <strong>MCP</strong> is more provider-side, for capability exposure (a consumer adapter), while a <strong>Skill</strong> is client-side, for execution orchestration.</p>
<h3 class="wp-block-heading">What a Skill really is</h3>
<p class="wp-block-paragraph"><strong>A Skill is a reusable execution playbook.</strong> It can include instructions, scripts, local resources, references, APIs, MCP tool usage, validation steps, execution rules, and so on. So a Skill is not just another integration surface; it is a structured method for doing work.</p>
<p class="wp-block-paragraph">More formally, a Skill is a reusable execution playbook structured as a folder containing a <code>SKILL.md</code> file plus optional scripts, supporting markdown resources, and other local assets. It offers task-specific, on-demand guidance for repeatable work: orchestrating steps, invoking scripts, using APIs and MCP tools, applying standards, and performing validation. That makes it different from custom instructions: instructions are always-on contextual guidance that broadly shapes Copilot’s behaviour within a scope, whereas Skills are modular, more procedural, and loaded only when relevant for specialised tasks.</p>
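<p class="wp-block-paragraph">As an illustration of that folder layout, here is a hypothetical <code>SKILL.md</code> for a code review Skill. The frontmatter fields follow the common Skill pattern of a name plus a description the agent uses to decide when to load it, but every file name, script, tool name, and step below is invented for the example:</p>

```markdown
---
name: code-review
description: Reviews a pull request against team standards and reports findings.
---

# Code Review Skill

## When to use
When asked to review a PR or a diff in this repository.

## Steps
1. Read the diff and `docs/standards.md`.
2. Run `scripts/lint.sh` and capture any failures.
3. Call the `search_issues` MCP tool to check for related open issues.
4. Summarise findings as blocking / non-blocking, with file and line references.

## Validation
- Every blocking finding must cite a rule from `docs/standards.md`.
- Do not approve if `scripts/lint.sh` reported failures.
```

Notice that the Skill freely mixes local scripts, documents, and MCP tools into one procedure – that mix is what makes it an execution playbook rather than another integration surface.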
<figure class="wp-block-image size-large"><a href="https://alok-mishra.com/wp-content/uploads/2026/03/Screenshot-2026-03-16-at-3.40.22-pm.png"><img src="https://alok-mishra.com/wp-content/uploads/2026/03/Screenshot-2026-03-16-at-3.40.22-pm-1024x560.png" alt="" class="wp-image-3217"/></a></figure>
<p class="wp-block-paragraph">That distinction matters a lot.</p>
<h3 class="wp-block-heading">Skills versus custom instructions</h3>
<p class="wp-block-paragraph">One of my engineers asked why we should use Skills when we had custom instructions – this was in the context of a Code Review Skill we were building. I found it helpful to simplify the comparison like this: <strong>Custom instructions</strong> (as in GitHub Copilot) are standing guidance, whereas <strong>Skills</strong> are reusable playbooks.</p>
<p class="wp-block-paragraph">Instructions shape behaviour broadly, whereas Skills package a repeatable method for a specific class of work. That is why Skills felt more powerful in practice: they gave me a place to put procedure, structure, scripts, and validation without having to keep bloating always-on context.</p>
<p class="wp-block-paragraph">Skills also feel easier to extend. When I needed a new repeatable flow, I often found it faster to extend or create a Skill than to build another MCP server. Why? Because a Skill lets me package the method directly. It can say – “inspect these files”, “run this script”, “call this MCP tool”, “use this API”, “apply this standard”, and so on.</p>
<p class="wp-block-paragraph">That is a much lighter and more direct way to extend the agent’s working method.</p>
<p class="wp-block-paragraph">By contrast, creating another MCP server often meant more setup, more plumbing, more testing, more local infrastructure, and more chance of creating messy tool surfaces too early. That does not make MCP bad; it just means MCP and Skills solve different problems.</p>
<p class="wp-block-paragraph">If the problem is <strong>provider capability exposure</strong>, MCP is strong. If the problem is <strong>repeatable execution method</strong>, Skills are often the faster and cleaner answer.</p>
<figure class="wp-block-image size-large"><a href="https://alok-mishra.com/wp-content/uploads/2026/03/Agent-Skill.png"><img src="https://alok-mishra.com/wp-content/uploads/2026/03/Agent-Skill-1024x608.png" alt="" class="wp-image-3221"/></a><figcaption class="wp-element-caption"><em>Skills orchestrate scripts, APIs, and MCPs into a reusable method for getting work done.</em></figcaption></figure>
<h2 class="wp-block-heading">My mental model </h2>
<p class="wp-block-paragraph"><strong>APIs expose business capabilities, MCPs expose agent-usable tools, and Skills orchestrate the work.</strong></p>
<ul class="wp-block-list">
<li><strong>API</strong> = system contract</li>
<li><strong>MCP</strong> = tool gateway for agents or consumer adapter</li>
<li><strong>Skill</strong> = execution playbook</li>
</ul>
<p class="wp-block-paragraph">Or even more simply:</p>
<ul class="wp-block-list">
<li><strong>API</strong> tells you what exists</li>
<li><strong>MCP</strong> gives the agent a cleaner way to access it</li>
<li><strong>Skill</strong> tells the agent how to use it well</li>
</ul>
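<p class="wp-block-paragraph">The three layers above can be composed into one tiny end-to-end sketch. Every name here is hypothetical and the API call is stubbed; the point is only where each concern lives:</p>

```python
# API layer: the system contract (stubbed instead of a real HTTP call).
def api_get_orders(customer_id):
    return {"data": [{"id": "o1", "status": "shipped"}]}

# MCP layer: an agent-usable tool over the API, absorbing its envelope.
def mcp_tool_list_orders(args):
    return api_get_orders(args["customer_id"])["data"]

# Skill layer: the playbook - which tool to call, in what order, and
# how to validate the result before reporting it.
def skill_order_status_report(customer_id):
    orders = mcp_tool_list_orders({"customer_id": customer_id})
    assert all("status" in o for o in orders), "validation step failed"
    return f"{len(orders)} order(s); latest status: {orders[-1]['status']}"

print(skill_order_status_report("c1"))
# → 1 order(s); latest status: shipped
```

Read bottom-up: the API says what exists, the tool gives the agent clean access to it, and the Skill decides how to use it well.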
<h2 class="wp-block-heading">Let’s talk about security</h2>
<p class="wp-block-paragraph">What about security in these use cases? </p>
<p class="wp-block-paragraph">Raw API use – Direct API use can be fine, but it tends to push more provider-specific auth and runtime complexity closer to the agent. That can work, but it spreads integration details more widely.</p>
<p class="wp-block-paragraph">MCP – MCP can be better from a control and governance perspective because it gives you a defined boundary between the agent and the provider systems. You can centralise access patterns, auth handling, abstraction, tool definitions, provider-specific complexity. That often makes the environment cleaner and easier to govern.</p>
<p class="wp-block-paragraph">Skills – Skills are not really a security boundary by themselves. A Skill is an orchestration layer, and its security depends on what it is allowed to invoke. If a Skill can call a script, an API, or an MCP tool, then the risk sits in those underlying execution surfaces and in how well the Skill is reviewed and controlled.</p>
<p class="wp-block-paragraph">I noticed there are several online “skills app stores”, like skills.sh, where you can find trojanised scripts and malware – Skills that hide instructions which can leak information or, worse, give an unauthorised user access to your services, data, and systems.</p>
<figure class="wp-block-image size-full"><a href="https://alok-mishra.com/wp-content/uploads/2026/03/skills-malware.png"><img src="https://alok-mishra.com/wp-content/uploads/2026/03/skills-malware.png" alt="" class="wp-image-3224"/></a></figure>
<p class="wp-block-paragraph">In summary:</p>
<ul class="wp-block-list">
<li><strong>MCP</strong> often improves control over access</li>
<li><strong>Skills</strong> improve control over execution method, but be careful where you get your Skills from</li>
<li><strong>Neither replaces the need for good security design</strong></li>
</ul>
<h2 class="wp-block-heading">Are Skills more shareable?</h2>
<p class="wp-block-paragraph">In practice, yes, often they are. Skills tend to be lightweight and easier to package and move around because they are mostly made of instructions, supporting resources, and optional scripts.</p>
<p class="wp-block-paragraph">MCP servers are shareable too, but they are operationally heavier: someone still has to host them, configure them, and secure them. For me, the difference matters when you are trying to move quickly.</p>
<h2 class="wp-block-heading">Ok what are the risks with Skills?</h2>
<p class="wp-block-paragraph">I like Skills, but they are not risk-free. Their biggest strength is also their biggest risk: they are easy to create. That means they can become too broad, stale, overly procedural, weakly validated, dependent on fragile scripts, and very difficult to govern if they multiply without discipline.</p>
<p class="wp-block-paragraph">A bad Skill can package poor process just as easily as a good Skill can package strong process. So Skills still need engineering discipline: curation, review, versioning, boundaries, and testing. Otherwise they can become another source of slop.</p>
<p class="wp-block-paragraph">Also, Skills are now shared widely online, meaning there is inherent security risk in downloading Skills from public forums and portals.</p>
<h2 class="wp-block-heading">Summary</h2>
<p class="wp-block-paragraph">The biggest lesson for me was that these layers should not be treated as competitors. They are not alternatives in the strict sense. They are parts of a stack.</p>
<ul class="wp-block-list">
<li>use <strong>APIs</strong> for real business capability exposure</li>
<li>use <strong>MCP</strong> when you want cleaner, agent-usable capability access</li>
<li>use <strong>Skills</strong> when you want repeatable execution and orchestration</li>
</ul>
<p class="wp-block-paragraph">That structure feels much more stable than forcing everything into one abstraction.</p>
<p class="wp-block-paragraph">When I started this, I thought I was trying to work out the best technology for tool use. What I was actually trying to work out was something deeper: <strong>Where should complexity live?</strong></p>
<p class="wp-block-paragraph">Should it live inside the agent? Should it live in provider wrappers? Should it live in reusable execution methods? That is the question that helped me separate APIs, MCPs, and Skills properly.</p>
<p class="wp-block-paragraph">My key takeaway has been <strong>APIs expose business capabilities, MCPs expose agent-usable tools, and Skills orchestrate the work.</strong></p>
<p class="wp-block-paragraph">That is the model I will keep building on in 2026, at least 😉</p>
<figure class="wp-block-image size-full"><a href="https://alok-mishra.com/wp-content/uploads/2026/03/Screenshot-2026-03-16-at-5.23.54-pm.png"><img src="https://alok-mishra.com/wp-content/uploads/2026/03/Screenshot-2026-03-16-at-5.23.54-pm.png" alt="" class="wp-image-3196"/></a></figure>
Alok brings over 20 years of experience engineering and architecting distributed software systems across industry and consulting. His posts focus on systems integration, API design, microservices and event-driven systems, modern enterprise architecture, and related topics.