AI Endpoints - Model Context Protocol (MCP) avec LangChain4j
AI Endpoints is covered by the OVHcloud AI Endpoints Conditions and the OVHcloud Public Cloud Special Conditions.
Objective
OVHcloud AI Endpoints allows developers to easily add AI features to their day-to-day developments.
In this article, we will explore how to create a Model Context Protocol (MCP) server and client using Quarkus and LangChain4J to interact with OVHcloud AI Endpoints.
Combined with OVHcloud AI Endpoints which offers both LLM and embedding models, it becomes easy to create advanced, production-ready assistants.

Definition
- Model Context Protocol: MCP is a protocol that allows your LLM to ask for additional context or data from external sources during the generation process. If you want more information about MCP, please refer to the official documentation.
- LangChain4j: Java-based framework inspired by LangChain, designed to simplify the integration of LLMs (Large Language Models) into applications. Note that LangChain4j is not officially maintained by the LangChain team, despite the similar name.
- AI Endpoints: A serverless platform by OVHcloud providing easy access to a variety of world-renowned AI models including Mistral, LLaMA, and more. This platform is designed to be simple, secure, and intuitive with data privacy as a top priority.
Requirements
- A Public Cloud project in your OVHcloud account.
- An access token for OVHcloud AI Endpoints. To create an API token, follow the instructions in the AI Endpoints - Getting Started guide.
- This code example uses JBang, a Java-based tool for creating and running Java programs as scripts. For more information on JBang, please refer to the JBang documentation.
Instructions
In this tutorial, we will explore how to easily create, in Java, an MCP server using Quarkus and a client using LangChain4J.
Creating a Server with Quarkus
The goal of this MCP server is to allow the LLM to ask for information about OVHcloud public cloud projects.
ℹ️ The code used to call the OVHcloud API is in the GitHub repository and will not be detailed here.
Thanks to Quarkus, the only thing you need to do to create an MCP server is define the tools that you want to expose to the LLM.
⚠️ The description is very important, as it will be used by the LLM to choose the right tool for the task. ⚠️
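A tool definition can be sketched as follows. This is a minimal illustration, assuming the `@Tool` and `@ToolArg` annotations from the Quarkus MCP server extension; the tool names, descriptions, and the placeholder bodies standing in for the OVHcloud API calls are hypothetical (the real client code lives in the GitHub repository).

```java
import io.quarkiverse.mcp.server.Tool;
import io.quarkiverse.mcp.server.ToolArg;

public class OvhCloudProjectTools {

    // The description is what the LLM reads to pick the right tool,
    // so keep it precise and unambiguous.
    @Tool(description = "List the Public Cloud projects of the authenticated OVHcloud account")
    public String listProjects() {
        // Call the OVHcloud API here (see the GitHub repository for the real client code).
        return "...";
    }

    @Tool(description = "Get the details of a given OVHcloud Public Cloud project")
    public String projectDetails(
            @ToolArg(description = "The ID of the Public Cloud project") String projectId) {
        // Placeholder: fetch and return the project details for projectId.
        return "...";
    }
}
```

Quarkus discovers the annotated methods at build time and exposes them as MCP tools; no extra wiring is needed on the server side.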
At the time of writing, there are two transport types for MCP servers: stdio and Streamable HTTP. This article uses the streamable mode, provided by Quarkus through the quarkus-mcp-server-sse extension.
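Adding the extension is a one-line dependency. The snippet below is a sketch of the Maven declaration, assuming the `io.quarkiverse.mcp` group id used by Quarkiverse extensions; check the extension's page for the current version.

```xml
<dependency>
    <groupId>io.quarkiverse.mcp</groupId>
    <artifactId>quarkus-mcp-server-sse</artifactId>
    <!-- Replace with the latest released version -->
    <version>1.0.0</version>
</dependency>
```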
Run your server with the quarkus dev command. Your MCP server will be available at http://localhost:8080.
Using the MCP server with LangChain4J
You can now use the MCP server with LangChain4J to create a powerful chatbot that can interact with your OVHcloud account!
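Wiring the client together can be sketched as below, assuming the langchain4j-mcp and langchain4j-open-ai modules. The SSE URL, the AI Endpoints base URL, the model name, and the `Assistant` interface are illustrative placeholders, not values confirmed by this article; AI Endpoints exposes an OpenAI-compatible API, which is why the OpenAI chat model builder is used here.

```java
import dev.langchain4j.mcp.McpToolProvider;
import dev.langchain4j.mcp.client.DefaultMcpClient;
import dev.langchain4j.mcp.client.McpClient;
import dev.langchain4j.mcp.client.transport.McpTransport;
import dev.langchain4j.mcp.client.transport.http.HttpMcpTransport;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

public class McpChatbot {

    // A minimal AI service interface; LangChain4j generates the implementation.
    interface Assistant {
        String chat(String question);
    }

    public static void main(String[] args) {
        // Connect to the local Quarkus MCP server over SSE (URL is an assumption).
        McpTransport transport = new HttpMcpTransport.Builder()
                .sseUrl("http://localhost:8080/mcp/sse")
                .build();

        McpClient mcpClient = new DefaultMcpClient.Builder()
                .transport(transport)
                .build();

        // Expose the MCP server's tools to the LLM.
        McpToolProvider toolProvider = McpToolProvider.builder()
                .mcpClients(mcpClient)
                .build();

        // AI Endpoints base URL and model name are examples; adapt to your setup.
        OpenAiChatModel model = OpenAiChatModel.builder()
                .baseUrl("https://oai.endpoints.kepler.ai.cloud.ovh.net/v1")
                .apiKey(System.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"))
                .modelName("Mistral-7B-Instruct-v0.3")
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .toolProvider(toolProvider)
                .build();

        System.out.println(assistant.chat("List my OVHcloud Public Cloud projects"));
    }
}
```

When the LLM decides a tool is needed, LangChain4j forwards the call to the MCP server, feeds the result back into the conversation, and returns the final answer.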
If you run the code you can see your MCP server and client in action:
Conclusion
In this article, we have seen how to create a Model Context Protocol (MCP) server and client using Quarkus and LangChain4J to interact with OVHcloud AI Endpoints.
Go further
You can find the full code example in the GitHub repository.
Browse the full AI Endpoints documentation to further understand the main concepts and get started.
To discover how to build complete and powerful applications using AI Endpoints, explore our dedicated AI Endpoints guides.
If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.
Feedback
Please feel free to send us your questions, feedback, and suggestions regarding AI Endpoints and its features:
- In the #ai-endpoints channel of the OVHcloud Discord server, where you can engage with the community and OVHcloud team members.