AI Endpoints - Create a chatbot with memory using LangChain4j
AI Endpoints is covered by the OVHcloud AI Endpoints Conditions and the OVHcloud Public Cloud Special Conditions.
Introduction
In our other tutorials, chatbots answered one question at a time without remembering anything from the conversation. That’s fine for simple Q&A, but not ideal for real-world interactions.
Indeed, here’s a simple conversation with no memory:
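For illustration, a memoryless exchange typically looks like this (hypothetical transcript):

```
User: My name is Alice.
Bot:  Nice to meet you, Alice! How can I help you today?
User: What is my name?
Bot:  I'm sorry, I don't have access to that information.
```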
That’s not very helpful.
In this tutorial, we’ll use LangChain4j to build a chatbot with memory support, allowing it to remember previous messages and provide more natural, contextual answers.
Definitions
- LangChain4j: Java-based framework inspired by LangChain, designed to simplify the integration of LLMs (Large Language Models) into applications. Note that LangChain4j is not officially maintained by the LangChain team, despite the similar name.
- AI Endpoints: A serverless platform by OVHcloud providing easy access to a variety of world-renowned AI models including Mistral, LLaMA, and more. This platform is designed to be simple, secure, and intuitive, with data privacy as a top priority.
Requirements
- A Public Cloud project in your OVHcloud account.
- An access token for OVHcloud AI Endpoints. To create an API token, follow the instructions in the AI Endpoints - Getting Started guide.
Instructions
Configure pom.xml
Add the following configuration to your Maven pom.xml file:
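A minimal dependency set might look like the following. The version numbers are examples only; check the latest LangChain4j release before copying. The `langchain4j-open-ai` module is used here because AI Endpoints exposes an OpenAI-compatible API.

```xml
<dependencies>
  <!-- Core LangChain4j API: AiServices, chat memory, message types -->
  <dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j</artifactId>
    <version>1.0.1</version> <!-- example version; use the latest release -->
  </dependency>
  <!-- OpenAI-compatible chat model client -->
  <dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>1.0.1</version> <!-- example version; use the latest release -->
  </dependency>
</dependencies>
```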
Create a memory-enabled chatbot
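A minimal sketch of a memory-enabled chatbot is shown below. It assumes the model name (`Mistral-7B-Instruct-v0.3`) and the two environment variables (`OVH_AI_ENDPOINTS_MODEL_URL`, `OVH_AI_ENDPOINTS_ACCESS_TOKEN`) are set for your own AI Endpoints model; adjust them to match your setup. The key ingredient is `MessageWindowChatMemory`, which keeps the last N messages and replays them with every request so the model sees the conversation history.

```java
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

public class MemoryChatbot {

    // A high-level assistant interface; LangChain4j generates the implementation.
    interface Assistant {
        String chat(String message);
    }

    public static void main(String[] args) {
        // Chat model pointing at the OpenAI-compatible AI Endpoints API.
        OpenAiChatModel model = OpenAiChatModel.builder()
                .baseUrl(System.getenv("OVH_AI_ENDPOINTS_MODEL_URL"))
                .apiKey(System.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"))
                .modelName("Mistral-7B-Instruct-v0.3") // example model name
                .build();

        // Attach a sliding-window memory: the last 10 messages are
        // automatically resent with each new user message.
        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
                .build();

        // Thanks to the memory, the second answer can use the first message.
        System.out.println(assistant.chat("My name is Alice."));
        System.out.println(assistant.chat("What is my name?"));
    }
}
```

With the memory attached, the second call has access to "My name is Alice." and can answer accordingly, which is exactly what the memoryless chatbot above could not do.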
Run your chatbot
Make sure your environment variables are set:
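For example, in a Unix shell (the variable names below follow the convention used in this guide; the model URL placeholder is yours to fill in from the AI Endpoints catalog):

```bash
export OVH_AI_ENDPOINTS_ACCESS_TOKEN="<your-access-token>"
export OVH_AI_ENDPOINTS_MODEL_URL="<your-model-endpoint-url>"
```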
Make sure to replace the token value (OVH_AI_ENDPOINTS_ACCESS_TOKEN) with your own. If you do not have one yet, follow the instructions in the AI Endpoints - Getting Started guide.
Then run your Java application:
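For instance, with the Maven Exec plugin (assuming your main class is named `MemoryChatbot`; adjust to your own class name):

```bash
mvn compile exec:java -Dexec.mainClass="MemoryChatbot"
```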
Conclusion
In just a few steps, you have created your own AI chatbot powered by LangChain4j, Quarkus, and OVHcloud AI Endpoints.
Going further
If you want to go further and deploy your web app in the cloud, making your interface accessible to everyone, refer to the following articles and tutorials:
- AI Deploy – Tutorial – Build & use a custom Docker image
- AI Deploy – Tutorial – Deploy a Gradio app for sketch recognition
If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.
Feedback
Please feel free to send us your questions, feedback, and suggestions regarding AI Endpoints and its features:
- In the #ai-endpoints channel of the OVHcloud Discord server, where you can engage with the community and OVHcloud team members.