Using the RESTful API of NSX-T (and others) Part 1

In many of my projects, customers have required access to the NSX-T API. While it's perfectly possible to use tools like Postman, using a Linux CLI that permits you to run commands like

GET /policy/api/v1/infra/segments

might be much easier and will even permit you to write scripts that automate common tasks.

Many scripts that I’ve been writing are based on customer requirements that can’t be satisfied by the NSX-T UI. 

The core of my scripts is a curl wrapper that implements shell functions like GET, PUT, PATCH, and so on. Since the scripts have some dependencies, I decided to package the curl wrapper, all scripts and all dependencies in a docker image. So if you want to test my scripts, you need to have a docker host available.
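To give an idea of how such a wrapper works, here is a minimal sketch, for illustration only; the real wrapper in the image does more (credentials files, read-only mode, error handling). REST_ENDPOINT and API_CREDS are hypothetical variable names standing in for the real configuration.

```shell
# Minimal sketch of a curl wrapper -- illustration only, not the
# wrapper shipped in the image. REST_ENDPOINT and API_CREDS are
# placeholder names.
: "${REST_ENDPOINT:=nsx.example.local}"   # manager address (placeholder)
: "${API_CREDS:=admin:secret}"            # user:password (placeholder)

_rest() {
  # _rest METHOD PATH [BODY] -- perform one REST call against the manager
  local method=$1 path=$2 body=${3:-}
  curl -sk -u "$API_CREDS" -X "$method" \
       -H 'Content-Type: application/json' \
       ${body:+-d "$body"} \
       "https://${REST_ENDPOINT}${path}"
}

GET()    { _rest GET    "$1"; }
PUT()    { _rest PUT    "$1" "$2"; }
PATCH()  { _rest PATCH  "$1" "$2"; }
DELETE() { _rest DELETE "$1"; }
```

With functions like these on your PATH (or sourced into your shell), an API call really is as short as GET /policy/api/v1/infra/segments.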

In part 2 of this post, you can learn how to back up and restore individual DFW policies. You can find it here:

Using the RESTful API of NSX-T (and others) Part 2

Downloading and running the image

Since customers like to see what a container can do before starting it, all required files can be downloaded from GitHub:

git clone

This will download the project to the folder api in your current directory.

You can now explore the content of the project, and if you think you’re ready to give it a try, you need to build the image. Change to the project directory and run a command like:

docker build -t blog-api:1.0 .

This will take a few minutes, and at the end of the build process you should get a message like:

Successfully built 4a96c3ecd52b
Successfully tagged blog-api:1.0

You can now run the container. Since the application is a CLI, the container starts an OpenSSH server for CLI access. You need to map a port of your docker host to port 22 of the container. Start the container with a command similar to:

docker run --name blog-api -p 3333:22 -d blog-api:1.0

Getting started: connect to NSX-T manager

Now that the container is running, you can ssh into it:

ssh -p 3333 root@localhost
root@localhost's password:

The password is geheim, which is German for "secret". It comes from the file shadow in the project's root directory; feel free to change it.

You now need to set the IP address or FQDN of your NSX-T manager like this:


You also need to create a file for authenticating to NSX-T: in /root/creds/, create a file named after your REST_ENDPOINT, in the following format:



The second line tells the curl wrapper to use basic authentication. This has been tested with NSX-T, but it also works with NSX for vSphere.
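For context, basic authentication is nothing more than a base64-encoded "user:password" string sent in an Authorization header; curl builds that header for you with -u. A quick demonstration (the credentials below are placeholders):

```shell
# Basic authentication is just base64("user:password") in an
# Authorization header. The credentials here are placeholders.
creds='admin:secret'
token=$(printf '%s' "$creds" | base64)
echo "Authorization: Basic $token"

# curl builds exactly this header for you with -u, e.g.:
#   curl -sk -u "$creds" https://<your-manager>/policy/api/v1/infra/segments
```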

You can now test the system with a simple API call like:

GET /policy/api/v1/infra/segments

which should return a JSON document with all your segments and their configuration.
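To give a feel for what comes back, here is an abbreviated, illustrative shape of such a response and a quick way to pull out just the segment names with standard tools (a real reply contains many more fields per segment):

```shell
# Illustrative, heavily abbreviated shape of a segments response;
# the real document contains many more fields per segment.
response='{
  "results": [
    { "id": "web",  "display_name": "web-segment" },
    { "id": "db",   "display_name": "db-segment" }
  ],
  "result_count": 2
}'

# Extract the segment names with grep/sed (no jq required):
printf '%s\n' "$response" \
  | grep -o '"display_name": *"[^"]*"' \
  | sed 's/.*: *"\(.*\)"/\1/'
```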

Volume Mounts

To make things a bit easier, it is advisable to mount some of your host directories into the container. I typically mount three volumes:

/data – for persistently storing files

/root/creds – this is where your authentication files are located

/root/.ssh – can be used for SSH public key authentication

Mounting /data

This is a general-purpose volume that can be used to persistently store data like backups of DFW policies, backups of VM tags and more.

Whenever you ssh into the container, the file /data/.RESTCALLrc is sourced from .zshrc. This file can be used to perform initializations, typically including something like


Another important file on this volume is /data/.RESTCALL.readonly. If this file exists, the curl wrapper will refuse to use any method that would modify your NSX configuration. This is useful when you want to explore the API and make sure you don't modify anything by mistake.
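A guard like that can be implemented in just a few lines. The sketch below shows the idea; the real wrapper's implementation may differ:

```shell
# Sketch of a read-only guard as described above (the wrapper in the
# image may implement it differently). Modifying HTTP methods are
# refused as long as the marker file exists.
READONLY_MARKER=/data/.RESTCALL.readonly

check_readonly() {
  # $1 is the HTTP method about to be used
  case $1 in
    PUT|PATCH|POST|DELETE)
      if [ -e "$READONLY_MARKER" ]; then
        echo "refusing $1: $READONLY_MARKER exists (read-only mode)" >&2
        return 1
      fi ;;
  esac
  return 0
}
```

GET is always allowed; simply touching the marker file flips the wrapper into a safe, look-but-don't-touch mode.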

Mounting /root/creds

In this directory you have to create your authentication files. Instead of recreating these files every time you restart the container, you keep them on your host file system. If your docker host is a multi-user system, make sure to set permissions that prevent other users from reading your passwords.
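Restricting a file to its owner looks like this (demonstrated here on a temporary file; in practice you would run chmod 600 on your real files under the mounted creds directory):

```shell
# Demonstration on a temporary file; in practice, run chmod 600 on
# your real authentication files on the docker host.
credfile=$(mktemp)
echo 'credentials go here' > "$credfile"   # placeholder contents
chmod 600 "$credfile"                      # owner read/write only
ls -l "$credfile"                          # permissions column: -rw-------
```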

Mounting /root/.ssh

This can be useful if you want to use SSH public key authentication. Create the file /root/.ssh/authorized_keys and copy your SSH public key into it. Then you can ssh into the container without a password. Some of my customers use this to regularly ssh into the container via a cron job and back up things like DFW policies, VM tags and more.
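A crontab entry for such a job could look like the following sketch; the host name, the mapped port, and the backup script path are all placeholders:

```shell
# Hypothetical crontab entry on a client machine: every night at 02:00,
# ssh into the container (public key auth, so no password prompt) and
# run a backup script stored on the persistent /data volume.
#
#   0 2 * * * ssh -p 3333 root@dockerhost /data/backup-dfw.sh
```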

Getting started again

Putting all this together, you now start the container using a command like this:

docker run --name blog-api -p 3333:22 -d -v ~/creds:/root/creds -v ~/data:/data -v /root/.ssh:/root/.ssh blog-api:1.0

This must be a single line.



