Everyone wants to “add AI” to their systems — but few succeed.
Not because AI models aren’t powerful — they are.
But because exposing enterprise systems to AI is still too hard.
- APIs are scattered.
- Schemas are inconsistent.
- Every integration feels bespoke.
- And publishing capabilities to AI often requires deep expertise, time, and organizational buy-in.
So for most teams, the friction is simply too high.
A Useful Analogy: AI as a Researcher
Think of AI as a researcher.
At first, the researcher had only their own brain — impressive, but limited to what it could remember. Then the researcher learned to read books — browsing the web. Later, they learned to use tools — calling APIs and running functions.
But here’s the catch:
A researcher is only as good as the lab they’re allowed to work in.
And most enterprise systems are not built like accessible labs.
MCP: A Standard Language for Capability
The Model Context Protocol (MCP) changes this.
MCP provides a standard way to describe what a system can do — its data, actions, and behaviors — so that any large language model can understand and use it.
Instead of teaching each AI how to talk to each system, MCP creates a shared language.
That alone is powerful.
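To make this concrete, here is roughly what an MCP tool description looks like on the wire, per the MCP specification: a name, a description, and a JSON Schema for its inputs. The `list_waves` tool shown here is a hypothetical example, not an actual published service.

```python
# Shape per the MCP specification; the tool itself is a hypothetical example.
tool = {
    "name": "list_waves",
    "description": "List the pick waves in a warehouse, with their status.",
    "inputSchema": {
        "type": "object",
        "properties": {"wh_id": {"type": "string"}},
        "required": ["wh_id"],
    },
}
```

Because every tool is described this same way, any MCP-aware model can discover and call it without bespoke integration code.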
But in practice, publishing MCP services still carries inertia:
- You need to understand MCP deeply
- You need to write adapters
- You need to commit engineering time
And that’s where most efforts stall.
Our Insight: The Real Bottleneck Is Publishing
We asked a simple question:
What if exposing a system to AI felt less like engineering… and more like explaining?
That question led to our MCP framework.
Intent-First MCP Publishing
Our framework flips the problem around. Instead of starting with protocols and schemas, we start with intent. To expose a capability, you simply:
- Describe what the service does (in plain English)
- Define its inputs
- Define its outputs
- Implement it using what you already know, e.g. Python (REST APIs, logic), SQL (databases), or MOCA (Blue Yonder's native scripting language)
That’s it.
No reinvention.
No model-specific glue.
No heavyweight onboarding.
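The steps above can be sketched as follows. The field names and the `list_waves` implementation are illustrative only, not our framework's actual schema:

```python
# Hypothetical sketch of an intent definition; field names are illustrative,
# not the framework's actual schema.
intent = {
    "name": "list_waves",
    "description": "List the pick waves in a warehouse, with their status.",
    "inputs": {
        "wh_id": {"type": "string", "description": "Warehouse identifier"},
    },
    "outputs": {
        "waves": {"type": "array", "description": "Waves with id and status"},
    },
    # The implementation can be Python, SQL, or MOCA; here, plain Python.
    "implementation": "python",
}

def list_waves(wh_id: str) -> dict:
    """Illustrative implementation body: reuse whatever you already have."""
    # e.g. query a REST API, run SQL, or execute a MOCA command here
    return {"waves": [{"wave_id": "W-001", "status": "RELEASED"}]}
```

The point is the shape of the work: describe the intent in plain language, declare inputs and outputs, and fill in a body using skills you already have.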
Why This Matters
This does something subtle but profound:
- It removes the psychological barrier to entry
- It lowers the technical cost of participation
- It turns system owners into capability publishers
From Intent to Execution: What Publishing MCP Looks Like in Practice
Once intent becomes the center of the model, the technical execution becomes secondary.
Our framework provides an integrated development environment for defining, documenting, and publishing these intents, regardless of how the underlying system is implemented. We natively support:
- MOCA, to expose native Blue Yonder WMS functionality with minimal effort
- Python, to integrate with any REST API
- SQL, to let legacy systems participate without refactoring

This shows the effort involved in exposing a single intent (showing waves) to AI. As you can see, the technical aspect, i.e. how MOCA is called, is trivial. The key is to provide good documentation of the intent and to describe the inputs and the outputs.
Our solution allows for integrating any system that is available as a REST API. For example, the following calls a REST API to get users from Blue Yonder WMS:

We have simplified this by abstracting the authentication and the connection-specific protocols. Furthermore, we provide a "sql" primitive function to enhance the data returned from the underlying API.
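The pattern can be sketched as below. The helper names `framework_http_get` and `framework_sql` are hypothetical stand-ins for the framework primitives, shown only to illustrate the division of labor:

```python
# Hedged sketch of a REST-backed intent. The helpers below are hypothetical
# stand-ins for framework primitives that hide authentication and connections.

def framework_http_get(path: str) -> list[dict]:
    """Stand-in for the framework's authenticated HTTP helper."""
    # In the real framework this would call the Blue Yonder WMS REST API
    # with credentials resolved from configuration.
    return [{"usr_id": "jdoe", "locale": "en_US"}]

def framework_sql(rows: list[dict], query: str) -> list[dict]:
    """Stand-in for the 'sql' primitive that post-processes API results."""
    # Illustration only: filter rows the way a SQL WHERE clause would.
    return [r for r in rows if r.get("locale") == "en_US"]

def get_users(wh_id: str) -> list[dict]:
    """The intent body: fetch raw rows, then enhance them with 'sql'."""
    rows = framework_http_get(f"/api/users?wh_id={wh_id}")
    return framework_sql(rows, "select * from rows where locale = 'en_US'")
```

The intent author writes only the last function; authentication, retries, and connection details stay in the framework.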
We also support calling SQL natively. This is a powerful way to include legacy systems that may not be easily accessible otherwise:

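A minimal sketch of a SQL-backed intent follows. It uses an in-memory sqlite3 database purely for illustration; the real framework targets the enterprise database, and the table and column names here are hypothetical:

```python
# Sketch: a SQL-backed intent against a legacy table. sqlite3 is used here
# only so the example is self-contained; table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ord (ordnum TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO ord VALUES (?, ?)",
    [("ORD-1", "OPEN"), ("ORD-2", "SHIPPED")],
)

def list_open_orders() -> list[str]:
    """Intent body: plain SQL, no refactoring of the legacy system."""
    cur = conn.execute("SELECT ordnum FROM ord WHERE status = 'OPEN'")
    return [row[0] for row in cur]
```

The legacy system needs no new API layer; a single query is enough to make it a published capability.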
To ensure consistency and predictability, we introduce the concept of enterprise domains.
Domains provide a shared vocabulary for the enterprise — the same way humans naturally think about orders, waves, inventory, and statuses.

Domains also support "enums", which are critical in enabling end users to query the data without knowing the underlying coded values.
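For illustration, an enum in a domain is essentially a mapping from coded values to human-readable names. The codes shown here are hypothetical, not Blue Yonder's actual status codes:

```python
# Illustrative enum within a domain; the codes are hypothetical examples,
# not Blue Yonder's actual status values.
WAVE_STATUS = {
    "P": "Pending",
    "R": "Released",
    "C": "Complete",
}

def decode_status(code: str) -> str:
    """Let a user ask for 'Released' waves without knowing the code 'R'."""
    return WAVE_STATUS.get(code, code)
```

With the enum in the domain, the AI can translate "show me released waves" into the coded value on the user's behalf.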

This abstraction greatly improves the quality of the intents we create. To bring consistency across intents, our framework also provides the capability to create input and output collections:


Every object mentioned above supports an elaborate tagging framework that greatly simplifies the overall management of these objects.
These objects are maintained directly in a Git repository, allowing our solution to take advantage of all the core capabilities offered there.
How Our Solution Comes Alive
Once this basic infrastructure is set up, every intent is automatically published per the MCP protocol. I can then use it from any tool that supports MCP, for example:
- OpenAI
- Microsoft Copilot
- Claude
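Under the hood, any of these clients invokes a published intent with a standard JSON-RPC 2.0 "tools/call" request, as defined by the MCP specification. The tool name and arguments below reuse this post's warehouse example:

```python
import json

# Request shape per the MCP specification (JSON-RPC 2.0, method "tools/call");
# the tool name and arguments are this post's warehouse example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_waves",
        "arguments": {"wh_id": "SMARTWH001"},
    },
}
print(json.dumps(request, indent=2))
```

Because the request shape is standard, publishing once makes the intent reachable from every one of these clients.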
Additionally, note that because we completely abstract MCP, we also support other routes to access these capabilities, such as:
- Blue Yonder native screens
- The Smart IS chat solution
- Microsoft Teams
The following shows some of these use cases with MCP-compatible clients:


Don’t Want to Share Data With AI — No Problem!
For organizations with data-sharing concerns, our abstraction layer enables an alternative path, where LLMs generate executable intent code instead of directly accessing enterprise data.
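The flow can be sketched as follows. Everything here is a simplified stand-in: `llm_generate_code` is a hypothetical stub for the model call, and `call_intent` stands in for the framework's local executor. The key property is that the model sees only the intent catalog, never the data:

```python
# Hedged sketch of the "no data sharing" path. The LLM sees only the intent
# catalog (names, inputs, outputs) and returns code; the framework executes
# that code locally, so enterprise data never leaves the boundary.

CATALOG = {"list_waves": {"inputs": ["wh_id"], "outputs": ["waves"]}}

def llm_generate_code(prompt: str, catalog: dict) -> str:
    """Hypothetical stub for the model: it returns code, not data."""
    return "result = call_intent('list_waves', wh_id='SMARTWH001')"

def call_intent(name: str, **kwargs):
    """Stand-in for the framework's executor, run inside the enterprise."""
    return [{"wave_id": "W-001", "status": "Released"}]

code = llm_generate_code("show today's waves", CATALOG)
scope = {"call_intent": call_intent}
exec(code, scope)            # run the generated code locally
waves = scope["result"]      # data was produced here, never sent to the LLM
```

Only the generated code and the user's question cross the boundary; the query results stay inside the enterprise.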
Our extension for the Blue Yonder client uses this capability. We provide a simple screen within the Blue Yonder application:

I can then ask it "What did you do?", to which it will respond with the code that it ran:

We have also created our own client for interacting with these systems. It provides a richer experience by allowing users to save their interactions and run them on a schedule.

We can access these intents from Microsoft Teams as well:

Why Sharing with AI Unlocks More Value
This is where the real value of MCP becomes visible.
For example, I used Claude to answer the following prompt:
"What is today's situation? Any risks, priority orders, etc.? Also share any recommendations to optimize anything. Let me know your thoughts on risks, recommendations, etc."
In order to respond to this open-ended query, it utilized the following services that we had exposed via MCP:
- get_warehouse_id (MCP)
- list_waves (MCP)(wh_id: “SMARTWH001”)
- list_shipments (MCP)(wh_id: “SMARTWH001”)
- list_orders (MCP)(wh_id: “SMARTWH001”, order_filter_type: “all”)
- list_picks (MCP)(wh_id: “SMARTWH001”)
- list_trailers (MCP)(wh_id: “SMARTWH001”)
- list_outbound_loads (MCP)(wh_id: “SMARTWH001”)
- list_inbound_receipt_data (MCP)(wh_id: “SMARTWH001”, receipt_view: “truck”)
It analyzed the results based on our rule matrix to provide a comprehensive response.
From AI Experiments to AI Ecosystems
When publishing becomes easy:
- More systems participate
- More capabilities become visible
- AI stops being a demo and starts being useful
MCP gives AI a standard way to consume capabilities. Our framework gives organizations a frictionless way to publish them.
That’s how ecosystems form.
What's Next
MCP gives AI a standard way to consume capabilities; our framework removes the friction of publishing them. That combination is what turns AI experiments into AI ecosystems. In the next post, I will describe how we carry this concept forward to make enterprise systems cognitive.
You can contact me at saad.ahmad@smart-is.com or visit our company website, Smart IS.