AI Notebooks - Tutorial - Using Speech-to-Text on audio and video files (EN)
Objective
The purpose of this tutorial is to show you how to convert speech into text and generate transcripts using AI Notebooks.

In Natural Language Processing (NLP), speech-to-text is a Deep Learning task that enables machines to transcribe spoken language into written text. It has many applications: transcription, summarization, diarization, subtitle generation, and more.
This documentation shows you how to launch three AI Notebooks that let you get to grips with and use various speech-to-text features.
- The first one teaches you the basics of audio transcription. You will be able to transcribe long local or YouTube audio files, measure the quality of a transcription, restore punctuation, and summarize the results.
- The second tutorial covers more advanced steps, such as detecting speaker changes (diarization) and generating video subtitles.
- The last tutorial compares different speech-to-text models to help you find the best one among those available.
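Transcription quality, as measured in the first tutorial, is usually expressed as the word error rate (WER). The sketch below is illustrative only (the notebooks may rely on a dedicated library instead): WER is a word-level edit distance between a reference transcript and a model's hypothesis, divided by the reference length.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, computed with dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One missing word out of a six-word reference yields a WER of 1/6.
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

A WER of 0.0 means a perfect transcription; values above 1.0 are possible when the hypothesis inserts many extra words.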
The following instructions correspond to each of these 3 tutorials.
Requirements
- Access to the OVHcloud Control Panel
- An AI Notebooks project created inside a Public Cloud project in your OVHcloud account
- A user for AI Notebooks
Instructions
You can launch your notebook from the OVHcloud Control Panel or via the ovhai CLI.
Direct link to the full code can be found here.
Launching a Jupyter notebook with "Miniconda" via UI
To launch your notebook from the OVHcloud Control Panel, refer to the following steps.
Code editor
Choose the Jupyterlab code editor.
Framework
In this tutorial, the Miniconda framework is used.
With Miniconda, you will be able to set up your environment by installing the Python libraries you need.
You can choose the conda version.
The default version of conda is functional for this tutorial: conda-py39-cuda11.2-v22-4.
Resources
GPU is recommended since audio transcription is resource intensive.
Here, using 1 GPU is sufficient.
Launching a Jupyter notebook with "Miniconda" via CLI
If you want to launch it with the CLI, choose the jupyterlab editor and the conda framework.
To access the different versions of conda available, run the following command.
ovhai capabilities framework list -o yaml
This tutorial has been launched with the conda-py39-cuda11.2-v22-4 version.
If you do not specify a version, your notebook starts with the default version of conda.
Choose the number of CPUs or GPUs (<nb-cpus> or <nb-gpus>) to use in your notebook and run the following command. Here we recommend using 1 GPU.
ovhai notebook run conda jupyterlab \
--name <notebook-name> \
--framework-version <conda-version> \
--gpu <nb-gpus>
You can then reach your notebook’s URL once the notebook is running.
Accessing the notebooks
Once the repository has been cloned, find your notebook by following this path: ai-training-examples > notebooks > natural-language-processing > speech-to-text.
- You can find the first tutorial in the basics folder. A preview of this notebook can be found on GitHub here.
- The second tutorial corresponds to the advanced folder. A preview of this notebook can be found on GitHub here.
- The last folder, named compare-models, contains the third tutorial. A preview of this notebook can be found on GitHub here.
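The advanced tutorial's subtitle generation targets the plain-text SRT format. The helper below is a hypothetical sketch, not the notebook's actual code: the `(start, end, text)` segment tuples are an assumption about how timed transcript segments might be represented, used here only to show how they map onto SRT entries.

```python
def srt_timestamp(seconds: float) -> str:
    """Format a duration in seconds as the HH:MM:SS,mmm timestamp SRT requires."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments) -> str:
    """Turn an iterable of (start_s, end_s, text) tuples into an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        # Each SRT entry: a 1-based index, a timing line, then the subtitle text.
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Hello world."), (2.5, 5.0, "This is a subtitle.")]))
```

The resulting string can be written to a `.srt` file and loaded by most video players alongside the video.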
Go further
- With NLP, you can do sentiment analysis. For more information, click here.
If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.
Feedback
Please send us your questions, feedback and suggestions to improve the service:
- On the OVHcloud Discord server