Getting started with Terraform for Analytics



04.03.2026

Objective

Public Cloud Managed Analytics services allow you to focus on building and deploying cloud applications while OVHcloud takes care of the analytics infrastructure and maintenance.

This guide explains how to order an OpenSearch cluster, one of the Analytics services, using Terraform.

Requirements

  • a Public Cloud project in your OVHcloud account
  • Terraform installed on your workstation

Instructions

Step 1: Gather the OVHcloud required parameters

Getting your cluster/API tokens information

The OVH Terraform provider needs to be configured with a set of credentials:

  • an application_key
  • an application_secret
  • a consumer_key

Why?

Because, behind the scenes, the OVH Terraform provider makes requests to the OVHcloud APIs on your behalf.

To retrieve this information, follow our First steps with the OVHcloud APIs tutorial.

Specifically, you have to generate these credentials via the OVHcloud token generation page with the following rights:

  • GET /cloud/project/*/database/*
  • POST /cloud/project/*/database/*
  • PUT /cloud/project/*/database/*
  • DELETE /cloud/project/*/database/*

Once you have generated your tokens, save them — you will need them shortly.

The last piece of information needed is the service_name: the ID of your Public Cloud project.

How to get it?

In the Public Cloud section of the OVHcloud Control Panel, you can copy your project ID using the Copy to clipboard button.

You will also use this value in your Terraform resource definition files.

Step 2: Gather the required cluster parameters

To create a new OpenSearch cluster, specify at least:

  • the engine (e.g. "opensearch")
  • the version (e.g. "3.3")
  • the region (e.g. "EU-WEST-PAR")
  • the plan (e.g. "production")
  • the flavor of the cluster (e.g. "b3-8")

Step 3: Create Terraform files

First, create a main.tf file defining the resources that will be created.

terraform {
  required_providers {
    ovh = {
      source  = "ovh/ovh"
      version = ">= 2.11.0"
    }
  }
}

provider "ovh" {
  endpoint           = var.ovh.endpoint
  application_key    = var.ovh.application_key
  application_secret = var.ovh.application_secret
  consumer_key       = var.ovh.consumer_key
}

resource "ovh_cloud_project_database" "service" {
  service_name = var.product.project_id
  description  = var.product.name
  engine       = var.product.engine
  version      = var.product.version
  plan         = var.product.plan

  # One "nodes" block per cluster node: three blocks = a 3-node cluster
  nodes {
    region = var.product.region
  }
  nodes {
    region = var.product.region
  }
  nodes {
    region = var.product.region
  }

  flavor = var.product.flavor

  # Restrict network access to the cluster to the given IP range
  ip_restrictions {
    ip = var.product.ip
  }
}

resource "ovh_cloud_project_database_opensearch_user" "analyticsuser" {
  service_name = ovh_cloud_project_database.service.service_name
  cluster_id   = ovh_cloud_project_database.service.id
  name         = var.access.name
}

Then, create a variables.tf file defining the variables used in main.tf:

variable "ovh" {
  type = map(string)
  default = {
    endpoint           = "ovh-eu"
    application_key    = ""
    application_secret = ""
    consumer_key       = ""
  }
}

variable "product" {
  type = map(string)
  default = {
    project_id = ""
    name       = ""
    engine     = ""
    region     = "EU-WEST-PAR"
    plan       = "production"
    flavor     = "b3-8"
    version    = ""
    ip         = "0.0.0.0/32"
  }
}

variable "access" {
  type = map(string)
  default = {
    name = "johndoe"
  }
}

Here, we defined the ovh-eu endpoint because we want to call the OVHcloud Europe API. Other endpoints exist, depending on your needs:

  • ovh-eu for OVHcloud Europe API
  • ovh-ca for OVHcloud North-America API

Then, create a secrets.tfvars file containing the required variables values:

ovh = {
  endpoint           = "ovh-eu"
  application_key    = "<application_key>"
  application_secret = "<application_secret>"
  consumer_key       = "<consumer_key>"
}

product = {
  project_id = "<service_name>"
  name       = "opensearch-terraform"
  engine     = "opensearch"
  region     = "EU-WEST-PAR"
  plan       = "production"
  flavor     = "b3-8"
  version    = "3.3"
  ip         = "<ip_range>"
}

access = {
  name = "johndoe"
}

Don't forget to replace <service_name>, <application_key>, <application_secret>, <consumer_key> and <ip_range> with your real values.

Finally, create an outputs.tf file defining the resources that will be exported:

output "cluster_uri" {
  value = ovh_cloud_project_database.service.endpoints[0].uri
}

output "user_name" {
  value = ovh_cloud_project_database_opensearch_user.analyticsuser.name
}

output "user_password" {
  value     = ovh_cloud_project_database_opensearch_user.analyticsuser.password
  sensitive = true
}

Step 4: Run

Now we need to initialize Terraform, generate a plan, and apply it.

terraform init

The init command initializes the working directory containing your .tf configuration files.

Run it first for any new configuration, or after checking out an existing configuration from a Git repository.

The init command will:

  • download and install the Terraform providers/plugins
  • initialize the backend (if defined)
  • download and install the modules (if defined)
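After init succeeds, you can also check the configuration for syntax errors and internal consistency at any time, without contacting any API:

```shell
# Validates the .tf files in the current directory only;
# no credentials and no remote calls are needed.
terraform validate
```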

Now, we can generate our plan:

terraform plan -var-file=secrets.tfvars

The plan command lets us review what Terraform will create, modify, or remove before any change is made.

If the plan looks correct, apply it:

terraform apply -var-file=secrets.tfvars -auto-approve

Finally, export the user credentials and the cluster URI:

export PASSWORD=$(terraform output -raw user_password)
export USER=$(terraform output -raw user_name)
export URI=$(terraform output -raw cluster_uri)
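You can now verify that the cluster is reachable. This is a minimal sketch assuming curl is installed; /_cluster/health is a standard OpenSearch REST endpoint, and the placeholder defaults below are hypothetical values used only when the variables exported above are not set.

```shell
# Use the terraform outputs exported above; fall back to hypothetical
# placeholder values so the snippet can run standalone.
USER="${USER:-johndoe}"
PASSWORD="${PASSWORD:-<password>}"
URI="${URI:-https://<cluster-host>:<port>}"

# Query the standard OpenSearch health endpoint with HTTP basic auth.
# A healthy cluster answers with JSON whose "status" is green or yellow.
curl -s -u "$USER:$PASSWORD" "$URI/_cluster/health" || echo "cluster not reachable"
```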

And that's it, the OpenSearch cluster is created.
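When you no longer need the cluster, the same variables file lets Terraform delete everything it created. Be careful: this permanently removes the service and its data.

```shell
terraform destroy -var-file=secrets.tfvars
```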

How to deploy with another engine

This guide covered deploying an OpenSearch service. You can find a Kafka example here:

https://github.com/ovh/public-cloud-databases-examples/tree/main/databases/kafka/terraform/hello-world

Go further

OpenSearch capabilities

Starting with OpenSearch analytics service

Configuring vRack for Public Cloud

Visit our dedicated Discord channel: https://discord.gg/ovhcloud. Ask questions, provide feedback and interact directly with the team that builds our Analytics services.

If you need training or technical assistance to implement our solutions, contact your sales representative or get a quote so our Professional Services experts can carry out a custom analysis of your project.

Join our community of users.
