AI Deploy - Tutorial - Deploy a brain tumor segmentation app using Streamlit

27.06.2025 AI Deploy

AI Deploy is covered by OVHcloud Public Cloud Special Conditions.

Objective

The purpose of this tutorial is to show how to deploy an image segmentation application that uses the U-Net model we trained in this notebook. This will allow us to visualize and segment brain tumors.

In order to do this, we will use Streamlit, a Python framework that turns scripts into a shareable web application. You will also learn how to build and use a custom Docker image for a Streamlit application.

For more information on how to train U-Net on the BraTS2020 Dataset, refer to the following documentation.

Overview of our image segmentation app:


Requirements

To deploy your app, you need:

Instructions

We are going to follow different steps to deploy our brain tumor segmentation application:

  • Write the requirements.txt file listing the libraries that need to be installed for our application to work.
  • Write the Dockerfile that contains all the commands to launch our image segmentation app.
  • Upload the dataset and the model's weights to Object Storage (volumes) so this data can be used within the app's environment.
  • Build the Docker image from the Dockerfile.
  • Push the image into a registry.
  • Deploy your app.

If you have cloned the app's repository, you will not need to rewrite the files (requirements.txt and Dockerfile) since you already have them. In this case, you can go directly to the "Build the Docker image" step, though it is still worth understanding the overall process.

Write the requirements.txt file for the application

The requirements.txt file lists all the Python modules needed by our application. It will be used by the Dockerfile. Put this file (and the following ones) in the same directory as your Python scripts.

keras==2.10.0
matplotlib==3.5.1
nibabel==5.1.0
numpy==1.22.0
opencv-python-headless==4.7.0.68
scikit-image==0.20.0
scikit-learn==1.0.2
streamlit==1.20.0
tensorflow==2.10.0

Write the Dockerfile for the application

A Dockerfile is a text document that contains all the commands a user could call on the command line to build an image.

This file should start with the FROM instruction, indicating the parent image to use. In our case, we choose to start from the official tensorflow/tensorflow image:

FROM tensorflow/tensorflow

Then, define the home directory and add all your files (Python scripts, requirements.txt and the Dockerfile) to it with the following instructions:

WORKDIR /workspace
ADD . /workspace

With AI Deploy, /workspace will be your home directory.

Now, install the Python modules listed in requirements.txt with a pip install command:

RUN pip install -r requirements.txt

Once our environment is set up, define the default command that starts the application:

CMD ["streamlit", "run", "/workspace/main.py", "--server.address=0.0.0.0"]

Finally, give correct access rights to the OVHcloud user (42420:42420):

RUN chown -R 42420:42420 /workspace
ENV HOME=/workspace
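
Putting all these instructions together (with CMD placed last, as is conventional; its position in the Dockerfile does not change the resulting image), the complete Dockerfile is:

```dockerfile
FROM tensorflow/tensorflow

WORKDIR /workspace
ADD . /workspace

RUN pip install -r requirements.txt

RUN chown -R 42420:42420 /workspace
ENV HOME=/workspace

CMD ["streamlit", "run", "/workspace/main.py", "--server.address=0.0.0.0"]
```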

Upload the dataset and the model's weights in an Object Storage

In order to use our trained model on the dataset images, we need to create two object containers: one for the model's weights and one for the BraTS2020 dataset. We can either:

  • upload the dataset as a .zip file (4GB), which will be unzipped by the app's Python code when the Streamlit application starts.
  • upload the whole uncompressed dataset. Though it is much heavier to upload (40GB), it will not require an unzip step.

In this tutorial, we will use the .zip format.

If your model's weights and/or your .zip file are inside a folder, be careful to upload only the files, not their parent folders. Otherwise, the paths to these resources will no longer sit directly under the root /workspace: the folder name will precede the file name, and you will need to adapt the application code wherever resource paths are mentioned (variables.py and the dataset_unzip() function in utils.py).
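
As an illustration, here is a minimal sketch of what such an unzip helper could look like. The dataset_unzip() name comes from utils.py in the app's repository, but the paths and implementation shown here are assumptions, not the repository's actual code:

```python
import os
import zipfile

# Paths assumed to match the volumes mounted on the app; check variables.py
# in the repository for the actual values used by the application.
ZIP_PATH = "/workspace/BraTS2020_dataset_zip/brats20-dataset-training-validation.zip"
EXTRACT_DIR = "/workspace/BraTS2020_dataset"

def dataset_unzip(zip_path: str = ZIP_PATH, extract_dir: str = EXTRACT_DIR) -> str:
    """Extract the dataset archive once, at application startup."""
    # Skip extraction if it has already been done on a previous start
    if not os.path.isdir(extract_dir):
        os.makedirs(extract_dir, exist_ok=True)
        with zipfile.ZipFile(zip_path) as archive:
            archive.extractall(extract_dir)
    return extract_dir
```

If your .zip was uploaded inside a folder, ZIP_PATH is exactly the kind of value you would need to adjust.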

You can upload your data to the cloud either by using the OVHcloud Control Panel (UI) or with the OVHcloud AI CLI.

Upload data via UI (Control Panel)

If you do not feel comfortable with commands, this method will be more intuitive.

First, log in to the OVHcloud Control Panel and access the Public Cloud section.

Then, select the Object Storage section (in the Storage category) and create a new object container by clicking Storage > Object Storage > Create an object container.

Here you can create the object container that will store the dataset and the model's weights. Several types and regions are available; choose the parameters that best suit your needs.

We advise you to separate your data by creating a container dedicated to the model weights and a second container for the dataset.

We will name our object containers BraTS2020_dataset_zip and BraTS2020_model_weights.

Once your object containers are created, you will see them in the Object Storage list. By clicking on them, you will be able to click the Add Objects button, which will allow you to upload your data to the cloud.

Upload data via CLI

To follow this part, make sure you have installed the ovhai CLI on your computer or on an instance.

As in the Control Panel, you will have to specify the region, the name of your container and the path of the data to upload. Uploading objects to a container can be done with the following command:

ovhai bucket object upload <container>@<region> <paths>

Here are the commands that we will enter:

  • Upload the dataset (.zip):
ovhai bucket object upload BraTS2020_dataset_zip@GRA brats20-dataset-training-validation.zip
  • Upload the model's weights:
ovhai bucket object upload BraTS2020_model_weights@GRA model_.26-0.025329.m5.index model_.26-0.025329.m5.data-00000-of-00001

Make sure to upload the two files obtained during the training of your model. The weight file names may differ for you. Check that the best_weights_path variable (in the variables.py file) points to your model's weights, and change it if necessary.

Once your data is uploaded, it can be accessed from all OVHcloud AI tools, either with read-only (RO) or read-write (RW) permissions.

Before continuing, make sure to modify the samples_test list in the variables.py file. This list must contain the patients of the test set, which were not used to train the model. It will not be the same for you, since the patients were randomly shuffled, so it is important that you replace it with your own test samples list!
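
For reference, the relevant part of variables.py could look like the following. The weight path reuses the checkpoint prefix from the upload step above, and the patient identifiers are placeholder examples that you must replace with your own values:

```python
# Path of the trained weights inside the app, mounted from the
# BraTS2020_model_weights container (adapt the file name to your own training).
best_weights_path = "/workspace/weights/model_.26-0.025329.m5"

# Patients held out from training. These identifiers are examples only:
# replace them with YOUR OWN test split, since patients were randomly shuffled.
samples_test = [
    "BraTS20_Training_004",
    "BraTS20_Training_017",
    "BraTS20_Training_042",
]
```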

Build the Docker image from the Dockerfile

From the directory containing your Dockerfile, run one of the following commands to build your application image:

# Build the image using your machine's default architecture
docker build . -t tumor_seg_streamlit_app:latest

# Build image targeting the linux/amd64 architecture
docker buildx build --platform linux/amd64 -t tumor_seg_streamlit_app:latest .
  • The first command builds the image using your system’s default architecture. This may work if your machine already uses the linux/amd64 architecture, which is required to run containers with our AI products. However, on systems with a different architecture (e.g. ARM64 on Apple Silicon), the resulting image will not be compatible and cannot be deployed.

  • The second command explicitly targets the linux/amd64 architecture to ensure compatibility with our AI services. This requires buildx, which is not installed by default. If you haven’t used buildx before, you can install it by running: docker buildx install

The dot . argument indicates that your build context (the location of the Dockerfile and other needed files) is the current directory.

The -t argument allows you to choose the identifier to give to your image. Usually image identifiers are composed of a name and a version tag <name>:<version>. For this example we chose tumor_seg_streamlit_app:latest.

Test it locally (optional)

Run the following Docker command to launch the application locally on your computer:

docker run --rm -it -p 8501:8501 --user=42420:42420 tumor_seg_streamlit_app:latest

Notes

  • The -p 8501:8501 argument indicates that you want to execute a port redirection from the port 8501 of your local machine into the port 8501 of the Docker container. The port 8501 is the default port used by Streamlit applications.

  • Don't forget the --user=42420:42420 argument if you want to simulate the exact same behaviour that will occur on AI Deploy apps. It executes the Docker container as the specific OVHcloud user (user 42420:42420).

Once started, your application should be available on http://localhost:8501.

Push the image into the shared registry

Warning The shared registry should only be used for testing purposes. Please consider creating and attaching your own registry. More information about this can be found here. The images pushed to this registry are for AI Tools workloads only, and will not be accessible for external uses.

Find the address of your shared registry by launching this command:

ovhai registry list

Log in on your shared registry with your usual AI Platform user credentials:

docker login -u <user> -p <password> <shared-registry-address>

Tag the built image and push it to your shared registry:

docker tag tumor_seg_streamlit_app:latest <shared-registry-address>/tumor_seg_streamlit_app:latest
docker push <shared-registry-address>/tumor_seg_streamlit_app:latest

Launch the AI Deploy app

The following command starts a new app running your Streamlit application:

ovhai app run <shared-registry-address>/tumor_seg_streamlit_app:latest \
      --gpu 1 \
      --default-http-port 8501 \
      --volume BraTS2020_model_weights@GRA/:/workspace/weights:RO:cache \
      --volume BraTS2020_dataset_zip@GRA/:/workspace/BraTS2020_dataset_zip:RO:cache

Notes

  • --default-http-port 8501 indicates that the port to reach on the app URL is 8501.

  • --gpu 1 indicates that we request 1 GPU for that app.

  • --volume allows us to specify what volume we want to add to our job. As mentioned, we add the BraTS2020 dataset and weights volumes, which we put in RO:cache mode. RO stands for Read-Only, which means that we will only be able to read the data from these volumes and not modify them. Since all our volumes are in RO mode, a good idea is to cache our data, which will allow faster initialization of apps connected to these same volumes.

  • Consider adding the --unsecure-http attribute if you want your application to be reachable without any authentication.

Go further

  • Do you want to use Streamlit to create an audio classification app? Learn how to do it.
  • Learn how to create & deploy a Speech-to-Text app here.

If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.
