AI Deploy - Tutorial - Deploy a tumor segmentation app (EN)
AI Deploy is covered by OVHcloud Public Cloud Special Conditions.
Objective
The purpose of this tutorial is to show how to deploy an image segmentation application that uses the U-Net model we trained in this notebook. This will allow us to visualize and segment brain tumors.
In order to do this, we will use Streamlit, a Python framework that turns scripts into a shareable web application. You will also learn how to build and use a custom Docker image for a Streamlit application.
For more information on how to train U-Net on the BraTS2020 Dataset, refer to the following documentation.
Overview of our image segmentation app:

Requirements
To deploy your app, you need:
- Access to the OVHcloud Control Panel
- An AI Deploy Project created inside a Public Cloud project in your OVHcloud account
- A user for AI Deploy
- The OVHcloud AI CLI installed on your local computer
- Docker installed on your local computer, or access to a Debian Docker Instance, which is available on the Public Cloud.
- Some knowledge about building images and writing a Dockerfile
- The BraTS2020 dataset. Download it here
- Your weights obtained from training the U-Net model on this dataset (refer to the "Step 6 - Train and Save the model" part of the image segmentation notebook tutorial).
- The full code of the application, which can be found in this GitHub repository; we invite you to clone it.
Instructions
We are going to follow different steps to deploy our brain tumor segmentation application:
- Write the requirements.txt that contains the required libraries that need to be installed so that our application can work.
- Write the Dockerfile that contains all the commands to launch our image segmentation app.
- Upload the dataset and the model's weights to an Object Storage (volume) so that this data can be used within the app's environment.
- Build the Docker image from the Dockerfile.
- Push the image into a registry.
- Deploy your app.
If you have cloned the app's repository, you will not need to rewrite the files (requirements.txt and Dockerfile) since you already have them. In that case, you can go directly to the "Build the Docker image" step, although it is still worth understanding the overall process.
Write the requirements.txt file for the application
The requirements.txt file will allow us to write all the modules needed by our application. This file will be useful for the Dockerfile.
Put this file (and the next ones) in the same directory as your Python scripts.
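As a sketch, the requirements.txt could list the following packages (the exact names and versions are assumptions based on what a Streamlit segmentation app typically needs; the authoritative list is the requirements.txt file in the cloned repository):

```
streamlit
numpy
nibabel
matplotlib
```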
Write the Dockerfile for the application
A Dockerfile is a text document that contains all the commands a user could call on the command line to build an image.
This file should start with the FROM instruction, indicating the parent image to use. In our case, we choose to start from the official tensorflow/tensorflow image:
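For example, assuming a GPU-enabled TensorFlow image (the exact tag is an assumption; pick the version you used for training):

```dockerfile
FROM tensorflow/tensorflow:2.9.1-gpu
```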
Then, define the home directory and add all your files (python scripts, requirements.txt and the Dockerfile) to it thanks to the following commands:
With AI Deploy, workspace will be your home directory.
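A minimal sketch of these instructions, using the /workspace path mentioned above:

```dockerfile
# Define the home directory and copy the application files into it
WORKDIR /workspace
ADD . /workspace
```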
Now, let's indicate that we must install the requirements.txt file which contains our needed Python modules, by using a pip install ... command:
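For instance:

```dockerfile
# Install the Python modules listed in requirements.txt
RUN pip install -r requirements.txt
```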
Once our environment is set up, we must define our default launching command to start the application:
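A sketch of this instruction, assuming the app's entry point is a script named main.py (a placeholder; adapt the file name to your repository):

```dockerfile
# Start the Streamlit app, listening on all interfaces
CMD [ "streamlit", "run", "/workspace/main.py", "--server.address=0.0.0.0" ]
```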
Finally, give correct access rights to the OVHcloud user (42420:42420):
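This can be done with a chown instruction; the ENV HOME line is a common companion on AI Deploy so that the OVHcloud user has a writable home directory:

```dockerfile
# Give the OVHcloud user ownership of the workspace
RUN chown -R 42420:42420 /workspace
ENV HOME=/workspace
```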
Upload the dataset and the model's weights in an Object Storage
In order to use our trained model on the dataset images, we need to create two container objects, one for the model's weights, one for the BraTS2020 dataset. We can either:
- upload the dataset as a .zip file (4 GB), which will be unzipped by the app's Python code when the Streamlit application starts.
- upload the whole dataset. Although it is much heavier to upload (40 GB), it will not require an unzip step.
In this tutorial, we will upload the .zip format.
If your model's weights and/or your .zip file are in a folder, be careful to add only the files, not their parent folders. Otherwise, the path to these resources will no longer be directly under the root /workspace: the folder name will have to be specified before the file name, which means you will need to adapt the application code wherever the resource paths are mentioned (variables.py and the dataset_unzip() function in utils.py).
You can upload your data to the cloud either by using the OVHcloud Control Panel (UI) or with the OVHcloud AI CLI.
Upload data via UI (Control Panel)
If you do not feel comfortable with commands, this method will be more intuitive.
First, log in to the OVHcloud Control Panel and access the Public Cloud section.
Then, select the Object Storage section (in the Storage category) and create a new object container by clicking Storage > Object Storage > Create an object container.
Here you can create the object container that will store the dataset and the model's weights. Several types and regions are available, choose the best parameters for you.
We advise you to separate your data by creating a container dedicated to the model weights and a second container for the dataset.
We will name our object containers BraTS2020_dataset_zip and BraTS2020_model_weights.
Once your object containers are created, you will see them in the Object Storage list. By clicking on them, you will be able to click the Add Objects button, which will allow you to upload your data to the cloud.
Upload data via CLI
To follow this part, make sure you have installed the ovhai CLI on your computer or on an instance.
As in the Control Panel, you will have to specify the region, the name of your container and the path where your data will be located. The creation of an object container can be done with the following command:
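As a sketch, assuming the ovhai data upload subcommand (which also creates the target container on first upload if it does not already exist; check ovhai --help for your CLI version), the general form is:

```console
ovhai data upload <region> <container-name> <path/to/your/data>
```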
Here are the commands that we will enter:
- Upload the dataset (.zip):
- Upload the model's weights:
Make sure to upload the two files obtained during the training of your model.
Your weight file names may differ. Make sure that the best_weights_path variable (in the variables.py file) points to your model's weights, and change it if necessary.
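Under the same assumptions (ovhai data upload subcommand, GRA region, the container names chosen earlier; the file names are placeholders to adapt to your own files), the two upload steps above could look like:

```console
ovhai data upload GRA BraTS2020_dataset_zip <your_dataset>.zip
ovhai data upload GRA BraTS2020_model_weights <your_weights_files>
```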
Once your data is uploaded, it can be accessed from all OVHcloud AI tools, either with read-only (RO) or read-write (RW) permissions.
Before continuing, make sure to modify the samples_test list in the variables.py file.
Indeed, this list must contain the patients of the test set, which were not used for the training of the model. This list will not be the same for you, since the patients were randomly shuffled. It is therefore important that you replace it with your own test samples list!
Build the Docker image from the Dockerfile
From the directory containing your Dockerfile, run one of the following commands to build your application image:
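Using the image identifier chosen for this example, the two options are:

```console
docker build . -t tumor_seg_streamlit_app:latest
docker buildx build --platform linux/amd64 . -t tumor_seg_streamlit_app:latest
```

Depending on your buildx driver, you may also need the --load flag so the built image is available in your local image store.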
- The first command builds the image using your system's default architecture. This may work if your machine already uses the linux/amd64 architecture, which is required to run containers with our AI products. However, on systems with a different architecture (e.g. ARM64 on Apple Silicon), the resulting image will not be compatible and cannot be deployed.
- The second command explicitly targets the linux/amd64 architecture to ensure compatibility with our AI services. This requires buildx, which is not installed by default. If you haven't used buildx before, you can install it by running: docker buildx install
The dot . argument indicates that your build context (the location of the Dockerfile and other needed files) is the current directory.
The -t argument allows you to choose the identifier to give to your image. Usually image identifiers are composed of a name and a version tag <name>:<version>. For this example we chose tumor_seg_streamlit_app:latest.
Test it locally (optional)
Launch the following Docker command to launch the application locally on your computer:
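For example, using the image built earlier:

```console
docker run --rm -it -p 8501:8501 --user=42420:42420 tumor_seg_streamlit_app:latest
```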
Notes
- The -p 8501:8501 argument indicates that you want to redirect port 8501 of your local machine to port 8501 of the Docker container. Port 8501 is the default port used by Streamlit applications.
- Don't forget the --user=42420:42420 argument if you want to simulate the exact behaviour that will occur on AI Deploy apps. It executes the Docker container as the specific OVHcloud user (42420:42420).
Once started, your application should be available on http://localhost:8501.
Push the image into the shared registry
Warning The shared registry should only be used for testing purposes. Please consider creating and attaching your own registry. More information about this can be found here. The images pushed to this registry are for AI Tools workloads only, and will not be accessible for external uses.
Find the address of your shared registry by launching this command:
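A sketch of the command (check ovhai --help if the subcommand differs in your CLI version):

```console
ovhai registry list
```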
Log in to your shared registry with your usual AI Platform user credentials:
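For example, replacing the placeholders with your own user, password and registry address:

```console
docker login -u <user> -p <password> <shared-registry-address>
```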
Tag the compiled image and push it into your shared registry:
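Replacing the placeholder with the registry address found above:

```console
docker tag tumor_seg_streamlit_app:latest <shared-registry-address>/tumor_seg_streamlit_app:latest
docker push <shared-registry-address>/tumor_seg_streamlit_app:latest
```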
Launch the AI Deploy app
The following command starts a new app running your Streamlit application:
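A sketch of this command, assuming the GRA region, the container names created earlier, and illustrative mount paths (adapt the mount paths to the resource paths used in variables.py):

```console
ovhai app run --default-http-port 8501 \
    --gpu 1 \
    --volume BraTS2020_dataset_zip@GRA/:/workspace/brats_dataset:RO:cache \
    --volume BraTS2020_model_weights@GRA/:/workspace/brats_weights:RO:cache \
    <shared-registry-address>/tumor_seg_streamlit_app:latest
```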
Notes
- --default-http-port 8501 indicates that the port to reach on the app URL is 8501.
- --gpu 1 indicates that we request 1 GPU for this app.
- --volume allows us to specify which volumes we want to attach to our app. As mentioned, we add the BraTS2020 dataset and weights volumes, which we mount in RO:cache mode. RO stands for read-only, meaning we can only read the data from these volumes, not modify it. Since all our volumes are in RO mode, caching the data is a good idea, as it allows faster initialization of other apps attached to these same volumes.
- Consider adding the --unsecure-http attribute if you want your application to be reachable without any authentication.
Go further
- Do you want to use Streamlit to create an audio classification app? Learn how to do it.
- Learn how to create & deploy a Speech-to-Text app here.
If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.
Feedback
Please send us your questions, feedback and suggestions to improve the service:
- On the OVHcloud Discord server