Manual setup

This page covers procedures for advanced setups, such as using the ML Workbench with a version of JupyterLab older than 3.0, or integrating GDPS with your own IDE.

If the database is running on a cluster, you only need to deploy GDPS on the m1 machine.

Install GDPS

Before you install GDPS, make sure that port 8000 is open on your machine or Docker container. For Docker containers, make sure that port 8000 is mapped to an appropriate port that the Workbench can connect to.
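If TigerGraph runs in Docker, the port mapping can be added when the container is started. A minimal sketch, assuming the standard TigerGraph image name and mapping container port 8000 to the same port on the host; adjust both for your deployment:

```shell
# Sketch only: publish port 8000 when starting the TigerGraph container
# so the Workbench can reach GDPS. Image name and host port are assumptions.
docker run -d --name tigergraph \
  -p 8000:8000 \
  tigergraph/tigergraph:latest
```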

To install GDPS, follow these steps:

  1. On the machine that hosts the TigerGraph server (or the Docker container running the TigerGraph Server image), create a directory for GDPS and navigate to it:

mkdir -p tg_gdps && cd tg_gdps

  2. Download GDPS for your operating system to the folder you just created.

  3. Run chmod +x on the file you downloaded to make it executable.

  4. Run the executable to start GDPS with the default configurations.

Install process on a Docker container
:~$ mkdir -p tg_gdps && cd tg_gdps

:~/tg_gdps$ wget --no-check-certificate

--2022-03-29 22:08:02--
Resolving... connected.
WARNING: cannot verify the server's certificate, issued by ‘CN=Amazon,OU=Server CA 1B,O=Amazon,C=US’:
  Unable to locally verify the issuer's authority.
HTTP request sent, awaiting response... 200 OK
Length: 49905264 (48M) [binary/octet-stream]
Saving to: ‘start_gdps_linux’

start_gdps_linux    100%[===================>]  47.59M  7.40MB/s    in 6.7s

2022-03-29 22:08:09 (7.12 MB/s) - ‘start_gdps_linux’ saved [49905264/49905264]

:~/tg_gdps$ chmod +x start_gdps_linux

:~/tg_gdps$ ./start_gdps_linux

INFO:     Started server process [36]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:uvicorn.error:Application startup complete.
INFO:     Uvicorn running on (Press CTRL+C to quit)
INFO:uvicorn.error:Uvicorn running on (Press CTRL+C to quit)

Test that the install is working by opening a new terminal instance and running curl against the GDPS address. On a working install, the response is {"message":"GDPS is running"}.
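As a concrete sketch of this check, assuming GDPS is listening locally on port 8000 (the port named earlier on this page; substitute your own host and port):

```shell
# Sketch: health-check a local GDPS instance.
# Host and port are assumptions based on the defaults described above.
curl http://localhost:8000
# A working install responds with: {"message":"GDPS is running"}
```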

GDPS configurations

Here is a list of configurable options for the GDPS service:

  • tg_host — The IP address of the database host. Normally this is http://localhost, since GDPS should run on the same machine as the database. If the database is running on an HTTPS server, use the full address of the server; https://localhost will not work in this case.

  • The port for TigerGraph’s RESTPP server.

  • The port for the GSQL server.

  • Whether the TigerGraph database is running on a cluster.

  • local_output_path — Where GDPS can read the output files from the database.

  • Where to generate temporary output files from the database. Normally this is the same as local_output_path, which is also the default. However, if the database is running in a container, this should be the path inside the container that is mounted to local_output_path.

  • Whether to keep temporary files. Temporary files are generated in the temporary output folder while the service is running. We do not recommend keeping these files unless you are debugging.

  • Whether to use the default TigerGraph username and password (tigergraph:tigergraph) to authenticate with the database. If true, any authentication header is ignored.

To configure GDPS, set the corresponding environment variables before starting GDPS. For example, if you run the following line:

tg_host= local_output_path=/home/tigergraph/tmp ./start_gdps_linux

you set tg_host and local_output_path for that run of GDPS.
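The same configuration can also be exported up front, which is easier to read when several options change. A minimal sketch; the values are placeholders, and the launch line is commented out so it can be adapted to your install:

```shell
# Sketch: export GDPS options before launching the service.
# The values below are placeholders; substitute your own host and path.
export tg_host="http://localhost"
export local_output_path="/home/tigergraph/tmp"
# ./start_gdps_linux   # GDPS picks up the exported variables at startup
echo "$tg_host $local_output_path"
```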

Install kernel for JupyterLab

This Python kernel includes all packages required for machine learning with TigerGraph.

  1. From JupyterLab, click File > New > Terminal to open a new terminal.

  2. From the terminal, run the following command to install the tigergraph-torch Python kernel. Choose the appropriate command depending on whether you are using a CPU or GPU for training:

    • For CPU

    • For GPU

    $ conda env create -f && conda activate tigergraph-torch-cpu && python -m ipykernel install --user --name tigergraph-torch-cpu --display-name "TigerGraph Pytorch (cpu)"
    $ conda env create -f && conda activate tigergraph-torch-gpu && python -m ipykernel install --user --name tigergraph-torch-gpu --display-name "TigerGraph Pytorch (gpu)"

    You should see a new Python kernel named "TigerGraph Pytorch" on the launch page.

Install tgml

After installing tgml, you can communicate from any environment with a TigerGraph server that has GDPS installed.

  1. Install PyTorch.

  2. Run pip install "tgml[pyg]" or pip install "tgml[dgl]" to install tgml with PyTorch Geometric or Deep Graph Library (DGL) support, depending on which framework you want to use for your Graph Neural Network model.

After installing tgml, create a connection to your TigerGraph server and start training your ML models in the IDE of your choice.