AI Endpoints - Using Structured Output with LangChain4j
AI Endpoints is covered by the OVHcloud AI Endpoints Conditions and the OVHcloud Public Cloud Special Conditions.
Objective
In this tutorial, we will explore how to use Structured Output with OVHcloud AI Endpoints.
To do this, we will use LangChain4j, a Java-based framework inspired by LangChain, designed to simplify the integration of LLMs (Large Language Models) into applications. Note that LangChain4j is not officially maintained by the LangChain team, despite the similar name.
Combined with OVHcloud AI Endpoints, which offers both LLM and embedding models, it becomes easy to create advanced, production-ready assistants.

Definition
- Structured Output: Structured output lets you constrain a model's response to a defined format, such as a JSON schema, making the output easier for machines to parse and process.
- LangChain4j: a Java-based framework inspired by LangChain, designed to simplify the integration of LLMs (Large Language Models) into applications. Note that LangChain4j is not officially maintained by the LangChain team, despite the similar name.
- AI Endpoints: A serverless platform by OVHcloud providing easy access to a variety of world-renowned AI models including Mistral, LLaMA, and more. This platform is designed to be simple, secure, and intuitive with data privacy as a top priority.
Requirements
- A Public Cloud project in your OVHcloud account.
- An access token for OVHcloud AI Endpoints. To create an API token, follow the instructions in the AI Endpoints - Getting Started guide.
- This code example uses JBang, a Java-based tool for creating and running Java programs as scripts. For more information on JBang, please refer to the JBang documentation.
Instructions
Here is an excerpt of code that shows how to define a structured output format for the responses of the language model:
In this example, we define a JSON output format with a schema that specifies the name, age, height, and married properties as required.
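With LangChain4j 1.x, such a format can be declared through the `ResponseFormat` and `JsonSchema` builders from `dev.langchain4j.model.chat.request`. A minimal sketch (the schema name `Person` is illustrative):

```java
import dev.langchain4j.model.chat.request.ResponseFormat;
import dev.langchain4j.model.chat.request.ResponseFormatType;
import dev.langchain4j.model.chat.request.json.JsonObjectSchema;
import dev.langchain4j.model.chat.request.json.JsonSchema;

// JSON response format: the model must return an object with the
// four properties below, all marked as required.
ResponseFormat responseFormat = ResponseFormat.builder()
        .type(ResponseFormatType.JSON)
        .jsonSchema(JsonSchema.builder()
                .name("Person")
                .rootElement(JsonObjectSchema.builder()
                        .addStringProperty("name")
                        .addIntegerProperty("age")
                        .addNumberProperty("height")
                        .addBooleanProperty("married")
                        .required("name", "age", "height", "married")
                        .build())
                .build())
        .build();
```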
This example uses the Mistral AI model hosted on OVHcloud AI Endpoints.
To configure the model, you need to set the API key, base URL, and model name as environment variables. Feel free to use another model; see the AI Endpoints catalog.
You can find your access token, model URL, and model name in the OVHcloud AI Endpoints model dashboard.
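Since AI Endpoints exposes an OpenAI-compatible API, one way to configure the model is through the `langchain4j-open-ai` module, pointing its base URL at your AI Endpoints model. The environment variable names below are illustrative, not official:

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

// Build the chat model from environment variables
// (variable names are examples; use your own naming).
ChatModel model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"))
        .baseUrl(System.getenv("OVH_AI_ENDPOINTS_MODEL_URL"))
        .modelName(System.getenv("OVH_AI_ENDPOINTS_MODEL_NAME"))
        .build();
```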
Calling the language model
Thanks to the JSON mode of the LLM, the response from the language model is received as a JSON string:
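A call might look like the following sketch, which assumes the `responseFormat` and `model` objects defined earlier; the prompt is illustrative:

```java
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;

// Attach the structured output format to the request.
ChatRequest chatRequest = ChatRequest.builder()
        .messages(UserMessage.from(
                "Describe John Doe: 42 years old, 1.80 m tall, married."))
        .responseFormat(responseFormat)
        .build();

ChatResponse chatResponse = model.chat(chatRequest);

// With JSON mode enabled, the text is a JSON document matching the schema.
String json = chatResponse.aiMessage().text();
System.out.println(json);
```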
The full source code
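A complete JBang script combining the steps above could look like this sketch; the dependency version, file name, and environment variable names are assumptions, so refer to the GitHub repository for the actual source:

```java
///usr/bin/env jbang "$0" "$@" ; exit $?
//DEPS dev.langchain4j:langchain4j-open-ai:1.0.1

import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.request.ResponseFormat;
import dev.langchain4j.model.chat.request.ResponseFormatType;
import dev.langchain4j.model.chat.request.json.JsonObjectSchema;
import dev.langchain4j.model.chat.request.json.JsonSchema;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class StructuredOutput {

    public static void main(String[] args) {
        // JSON schema describing the expected answer.
        ResponseFormat responseFormat = ResponseFormat.builder()
                .type(ResponseFormatType.JSON)
                .jsonSchema(JsonSchema.builder()
                        .name("Person")
                        .rootElement(JsonObjectSchema.builder()
                                .addStringProperty("name")
                                .addIntegerProperty("age")
                                .addNumberProperty("height")
                                .addBooleanProperty("married")
                                .required("name", "age", "height", "married")
                                .build())
                        .build())
                .build();

        // Model configuration from environment variables (names are illustrative).
        ChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"))
                .baseUrl(System.getenv("OVH_AI_ENDPOINTS_MODEL_URL"))
                .modelName(System.getenv("OVH_AI_ENDPOINTS_MODEL_NAME"))
                .build();

        // Send the prompt with the structured output format attached.
        ChatRequest chatRequest = ChatRequest.builder()
                .messages(UserMessage.from(
                        "Describe John Doe: 42 years old, 1.80 m tall, married."))
                .responseFormat(responseFormat)
                .build();

        ChatResponse chatResponse = model.chat(chatRequest);
        System.out.println(chatResponse.aiMessage().text());
    }
}
```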
Running the application
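With JBang installed, the script can be run directly, without a separate build step. The variable names and file name below are illustrative placeholders:

```shell
# Export your AI Endpoints settings (variable names are illustrative)
export OVH_AI_ENDPOINTS_ACCESS_TOKEN=<your-token>
export OVH_AI_ENDPOINTS_MODEL_URL=<model-base-url>
export OVH_AI_ENDPOINTS_MODEL_NAME=<model-name>

# Run the script with JBang; dependencies are resolved automatically
jbang StructuredOutput.java
```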
Conclusion
In this article, we have seen how to use Structured Output with OVHcloud AI Endpoints and LangChain4j.
Go further
You can find the full code example in the GitHub repository.
Browse the full AI Endpoints documentation to further understand the main concepts and get started.
To discover how to build complete and powerful applications using AI Endpoints, explore our dedicated AI Endpoints guides.
If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.
Feedback
Please feel free to send us your questions, feedback, and suggestions regarding AI Endpoints and its features:
- In the #ai-endpoints channel of the OVHcloud Discord server, where you can engage with the community and OVHcloud team members.