AI Endpoints - Create your own AI chatbot using LangChain4j and Quarkus



19.12.2025 - AI Endpoints

Introduction

Looking to build an AI-powered chatbot with Java? In this tutorial, you will learn how to create a chatbot using LangChain4j, Quarkus, and AI Endpoints.

Objective

This tutorial demonstrates how to:

  • Set up a Quarkus application using LangChain4j
  • Connect your chatbot to AI Endpoints
  • Customize prompts with annotations
  • Expose your chatbot as an API
  • Test your chatbot using a cURL request

Definitions

  • LangChain4j: A Java-based framework inspired by LangChain, designed to simplify the integration of LLMs (Large Language Models) into applications. It offers abstractions and annotations for building intelligent agents and chatbots. Note that LangChain4j is not officially maintained by the LangChain team, despite the similar name.
  • Quarkus: A Kubernetes-native Java framework designed to optimize Java applications for containers and the cloud. In this tutorial we will use the quarkus-langchain4j extension.
  • AI Endpoints: A serverless platform by OVHcloud providing easy access to a variety of world-renowned AI models including Mistral, LLaMA, and more. This platform is designed to be simple, secure, and intuitive, with data privacy as a top priority.

Requirements

Instructions

Create a Quarkus project

Generate your Quarkus project using the CLI with the LangChain4j Mistral AI extension:

$ quarkus create app com.ovhcloud.examples.aiendpoints:quarkus-langchain4j \
                   --extension='quarkus-langchain4j-mistral-ai'

Here is the tree structure after running the previous command:

.
├── .dockerignore
├── .gitignore
├── .mvn
│   └── wrapper
│       ├── .gitignore
│       ├── MavenWrapperDownloader.java
│       └── maven-wrapper.properties
├── README.md
├── mvnw
├── mvnw.cmd
├── pom.xml
└── src
    └── main
        ├── docker
        │   ├── Dockerfile.jvm
        │   ├── Dockerfile.legacy-jar
        │   ├── Dockerfile.native
        │   └── Dockerfile.native-micro
        ├── java
        └── resources
            └── application.properties

After running this command, check your pom.xml for the following dependency:

<!-- ... -->

<dependency>
  <groupId>io.quarkiverse.langchain4j</groupId>
  <artifactId>quarkus-langchain4j-mistral-ai</artifactId>
  <version>0.10.4</version>
</dependency>

<!-- ... -->

AI service creation

Let’s code our service to create a chatbot, using an annotation:

import io.quarkiverse.langchain4j.RegisterAiService;

@RegisterAiService
public interface ChatBotService {

}

Now add system and user prompts to your service:

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

@RegisterAiService
public interface ChatBotService {
  // Scope / context passed to the LLM
  @SystemMessage("You are a virtual AI assistant.")
  // Prompt (with detailed instructions and variable section) passed to the LLM
  @UserMessage("Answer as best possible to the following question: {question}. The answer must be in a style of a virtual assistant and add some emojis to make the answer more fun.")
  String askAQuestion(String question);
}

💡 For more details about @SystemMessage and @UserMessage, refer to the Quarkus LangChain4j extension documentation.
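To make the prompt templating concrete, here is a minimal plain-Java sketch (not LangChain4j's actual implementation) of how the {question} placeholder in @UserMessage is conceptually filled in with the method argument before the prompt is sent to the model:

```java
// Sketch only: illustrates the placeholder substitution that LangChain4j
// performs for you when askAQuestion(question) is called.
public class PromptTemplateSketch {

    // Replace the {question} placeholder with the actual method argument
    static String render(String template, String question) {
        return template.replace("{question}", question);
    }

    public static void main(String[] args) {
        String template = "Answer as best possible to the following question: {question}.";
        System.out.println(render(template, "What is OVHcloud?"));
        // prints: Answer as best possible to the following question: What is OVHcloud?.
    }
}
```

In the real extension, the rendered user message is combined with the @SystemMessage content and sent to the configured chat model; you never call the substitution yourself.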

Configure AI Endpoints access

Adapt and add the following configuration to your application.properties file to enable AI Endpoints access:

### Global configurations
# Base URL for Mistral AI endpoints
quarkus.langchain4j.mistralai.base-url=${OVH_AI_ENDPOINTS_URL}
# Whether to log requests
quarkus.langchain4j.mistralai.log-requests=true
# Whether to log responses
quarkus.langchain4j.mistralai.log-responses=true
# Delay before raising a timeout exception
quarkus.langchain4j.mistralai.timeout=60s
# Your AI Endpoints access token
quarkus.langchain4j.mistralai.api-key=${OVH_AI_ENDPOINTS_ACCESS_TOKEN}

# Whether to enable the Mistral AI embedding model
quarkus.langchain4j.mistralai.embedding-model.enabled=false

### Chat model configurations
# Whether to enable the Mistral AI chat model
quarkus.langchain4j.mistralai.chat-model.enabled=true
# Name of the chat model to use
quarkus.langchain4j.mistralai.chat-model.model-name=${OVH_AI_ENDPOINTS_MODEL_NAME}
# Maximum number of tokens to generate
quarkus.langchain4j.mistralai.chat-model.max-tokens=1024

Make sure to replace the token value (OVH_AI_ENDPOINTS_ACCESS_TOKEN) with your own. If you do not have one yet, follow the instructions in the AI Endpoints - Getting Started guide.

You will also have to set two other environment variables, related to the model you want to use. You can find these model-specific values in the documentation tab of each model. For example, for the Mistral-7B-Instruct-v0.3 model, the expected environment variables are:

  • OVH_AI_ENDPOINTS_MODEL_NAME: Mistral-7B-Instruct-v0.3
  • OVH_AI_ENDPOINTS_URL: https://oai.endpoints.kepler.ai.cloud.ovh.net/v1
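Assuming a Unix-like shell, the three variables can be exported before starting the application. The token value below is a placeholder to replace with your own:

```shell
# Placeholder: paste your real token obtained via the AI Endpoints - Getting Started guide
export OVH_AI_ENDPOINTS_ACCESS_TOKEN="<your-access-token>"
# Values for the Mistral-7B-Instruct-v0.3 model, as listed in its documentation tab
export OVH_AI_ENDPOINTS_MODEL_NAME="Mistral-7B-Instruct-v0.3"
export OVH_AI_ENDPOINTS_URL="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1"
```

Quarkus resolves the ${...} references in application.properties against these environment variables at startup.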

Build a REST API to interact with the chatbot

First, install the Quarkus REST extension:

$ quarkus ext add io.quarkus:quarkus-rest

Looking for the newly published extensions in registry.quarkus.io
[SUCCESS] ✅  Extension io.quarkus:quarkus-rest has been installed

Then create a REST endpoint:

import com.ovhcloud.examples.aiendpoints.services.ChatBotService;
import jakarta.inject.Inject;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;

// Endpoint root path
@Path("ovhcloud-ai")    
public class AIEndpointsResource {
  // AI Service injection to use it later
  @Inject                                            
  ChatBotService chatBotService;

  // ask resource exposition with POST method
  @Path("ask")                                       
  @POST                                              
  public String ask(String question) {
    // Call the Mistral AI chat model
    return chatBotService.askAQuestion(question);
  }
}

Run and test

Now it is time to test the AI chatbot API!

To start your application and run your API, use the Quarkus dev mode:

$ quarkus dev

 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \  
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/  
2024-04-12 08:51:33,515 INFO  [io.quarkus] (Quarkus Main Thread) quarkus-langchain4j 1.0.0-SNAPSHOT on JVM (powered by Quarkus 3.9.3) started in 3.163s. Listening on: http://localhost:8080

2024-04-12 08:51:33,517 INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
2024-04-12 08:51:33,518 INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [awt, cdi, langchain4j, langchain4j-mistralai, poi, qute, rest, rest-client, rest-client-jackson, smallrye-context-propagation, vertx]

--
Tests paused
Press [e] to edit command line args (currently ''), [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>

Then, use cURL to interact with your API:

$ curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"question": "What is OVHcloud?"}' \
  http://localhost:8080/ovhcloud-ai/ask

For the previous prompt example, here is the result:

Answer: «OVHcloud is a global, integrated cloud hosting platform, offering Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). »

This answer describes OVHcloud as a global, integrated cloud hosting platform offering IaaS and PaaS. As instructed by the user prompt, the model answers in the style of a virtual assistant and may also add emojis to make the response more fun and engaging.

Conclusion

In just a few steps, you have created your own AI chatbot powered by LangChain4j, Quarkus, and OVHcloud AI Endpoints.

Going further

If you want to go further and deploy your web app in the cloud, making your interface accessible to everyone, refer to the following articles and tutorials:

If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.

Feedback

Please feel free to send us your questions, feedback, and suggestions regarding AI Endpoints and its features:

  • In the #ai-endpoints channel of the OVHcloud Discord server, where you can engage with the community and OVHcloud team members.