ML Workbench for an Existing Jupyter Server

If you have a running JupyterLab server, or you are using JupyterLab on Microsoft Azure, Amazon SageMaker, or Google Vertex AI, you can still take advantage of our ML Workbench by installing the Python package.

You can install the package either in the base kernel or in a dedicated Conda environment.

Base kernel installation

From JupyterLab, open a new terminal and run the following command to install pyTigerGraph with its Graph Data Science (gds) dependencies. The quotes keep some shells, such as zsh, from interpreting the square brackets:

pip install 'pyTigerGraph[gds]'
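After the install finishes, you can sanity-check the kernel. The sketch below is an illustrative helper (not part of pyTigerGraph) that uses only the standard library to report whether pyTigerGraph and torch, one of the packages pulled in by the gds extra, can be found:

```python
# Illustrative post-install check: confirm that pyTigerGraph and its GDS
# dependencies are discoverable in the current kernel's environment.
import importlib.util

def has_module(name: str) -> bool:
    """Return True if the module can be located without importing it."""
    return importlib.util.find_spec(name) is not None

for mod in ["pyTigerGraph", "torch"]:
    status = "found" if has_module(mod) else "missing"
    print(f"{mod}: {status}")
```

Run this in a notebook cell; if either package reports "missing", the pip install likely targeted a different environment than the one backing your kernel.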

Conda environment installation

  1. Run one of the following commands to create the Conda environment, depending on whether you want the CPU or GPU build:

    CPU:

    conda env create -f https://raw.githubusercontent.com/TigerGraph-DevLabs/mlworkbench-docs/main/conda_envs/tigergraph-torch-cpu.yml

    GPU:

    conda env create -f https://raw.githubusercontent.com/TigerGraph-DevLabs/mlworkbench-docs/main/conda_envs/tigergraph-torch-gpu.yml
  2. Run conda activate tigergraph-torch-cpu or conda activate tigergraph-torch-gpu, depending on whether you installed the CPU or GPU environment. (On older conda installations, use source activate instead of conda activate.)

  3. Run one of the following commands to register the environment as a Jupyter kernel:

    CPU:

    python -m ipykernel install --user --name tigergraph-torch-cpu --display-name "TigerGraph Pytorch (cpu)"

    GPU:

    python -m ipykernel install --user --name tigergraph-torch-gpu --display-name "TigerGraph Pytorch (gpu)"

Next steps

The next step after installation is to activate ML Workbench. Once it is activated, go to our Tutorials and Sample Data section.