Integrate ML Workbench with Azure ML

This guide walks you through integrating the ML Workbench with a compute instance on Azure Machine Learning (AML).

1. Prerequisites

  • An Azure Machine Learning Workspace with a running compute instance.

2. Procedure

  1. From Azure ML Studio, find your compute instance and open the JupyterLab instance running on it.

  2. Open a terminal in the JupyterLab instance. From the terminal, run pip install tigergraph_mlworkbench to install the ML Workbench JupyterLab extension.

  3. From the terminal, run the following command to create the tigergraph-torch conda environment. Choose the command that matches whether you are using a CPU or a GPU for training:

    • For CPU:

    $ conda env create -f https://raw.githubusercontent.com/TigerGraph-DevLabs/mlworkbench-docs/main/conda_envs/tigergraph-torch-cpu.yml

    • For GPU:

    $ conda env create -f https://raw.githubusercontent.com/TigerGraph-DevLabs/mlworkbench-docs/main/conda_envs/tigergraph-torch-gpu.yml
  4. Install the TigerGraph PyTorch kernel in JupyterLab by running the following command in the terminal (shown for the GPU environment created in the previous step):

    $ conda activate tigergraph-torch-gpu && pip install ipykernel && python -m ipykernel install --user --name tigergraph-torch-gpu --display-name "TigerGraph Pytorch (gpu)"
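If you created the CPU environment instead, the equivalent command would look like the following sketch, which simply mirrors the GPU command above. The kernel name and display name here are suggestions, not fixed values:

```shell
# Activate the CPU conda environment, then register it as a JupyterLab kernel.
conda activate tigergraph-torch-cpu && \
  pip install ipykernel && \
  python -m ipykernel install --user \
    --name tigergraph-torch-cpu \
    --display-name "TigerGraph Pytorch (cpu)"
```

The --name value is how Jupyter identifies the kernel internally; the --display-name value is what appears on the launcher page.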

Once the installation finishes, refresh your browser. You should see a small TigerGraph logo in the left navigation bar and a new Python kernel named TigerGraph Pytorch on the launcher page. You have now successfully installed the TigerGraph ML Workbench on an Azure Machine Learning compute instance.
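As a quick sanity check, you can open a notebook with the new kernel and confirm that the environment can reach a TigerGraph server using pyTigerGraph. This is a minimal sketch: the host, graph name, and credentials below are placeholders that you would replace with your own deployment's values.

```python
# Minimal connectivity check with pyTigerGraph.
# All connection parameters below are placeholders for your own deployment.
from pyTigerGraph import TigerGraphConnection

conn = TigerGraphConnection(
    host="https://your-tigergraph-host",  # placeholder server URL
    graphname="YourGraph",                # placeholder graph name
    username="tigergraph",
    password="tigergraph",
)

# echo() round-trips a simple request to the server's REST endpoint,
# so a normal response confirms the kernel can reach TigerGraph.
print(conn.echo())
```

If the call raises a connection error, verify that the compute instance can reach your TigerGraph server's address and that the credentials are correct.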

3. Next steps