Integrate ML Workbench with Azure ML
This guide walks you through integrating the ML Workbench with a compute instance on Azure Machine Learning (AML).
From Azure ML Studio, find your compute instance and open the JupyterLab instance running on it.
Open a terminal in the JupyterLab instance. From the terminal, run
pip install tigergraph_mlworkbench
This installs the ML Workbench JupyterLab extension.
Next, install the tigergraph-torch Python kernel in JupyterLab. Choose the command that matches your training hardware; the GPU version is shown below (for CPU-only training, replace gpu with cpu in the environment name and display name):
conda activate tigergraph-torch-gpu && pip install ipykernel && python -m ipykernel install --user --name tigergraph-torch-gpu --display-name "TigerGraph Pytorch (gpu)"
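Since the right command depends on your hardware, the choice can also be scripted. The sketch below is illustrative only: it assumes the tigergraph-torch-cpu and tigergraph-torch-gpu conda environments from this guide exist on the compute instance, and uses the presence of nvidia-smi as one heuristic for GPU detection.

```shell
#!/usr/bin/env bash
# Sketch: select the kernel environment by detecting an NVIDIA GPU.
# Assumes the tigergraph-torch-cpu / tigergraph-torch-gpu conda
# environments exist, as described in this guide.
if command -v nvidia-smi >/dev/null 2>&1; then
    DEVICE=gpu
else
    DEVICE=cpu
fi
ENV_NAME="tigergraph-torch-${DEVICE}"
echo "Registering kernel from environment: ${ENV_NAME}"

# The actual registration mirrors the command in this guide:
# conda activate "${ENV_NAME}" && pip install ipykernel && \
#   python -m ipykernel install --user --name "${ENV_NAME}" \
#     --display-name "TigerGraph Pytorch (${DEVICE})"
```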
Once the installation finishes, refresh your browser. You should see a small TigerGraph logo in the left navigation bar and a new Python kernel named TigerGraph Pytorch on the launcher page. The TigerGraph ML Workbench is now installed on your Azure Machine Learning compute instance.
With the ML Workbench JupyterLab extension and the tigergraph-torch kernel installed, the next step is to deploy GDPS on your TigerGraph instance so the Workbench can communicate with your TigerGraph database.
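Before deploying, it helps to gather the connection details your notebooks will need to reach the database. The sketch below is not a required format, just a hedged illustration: the host, graph name, and credentials are placeholders, and the ports are TigerGraph defaults (REST++ on 9000, GSQL/GraphStudio on 14240), which may differ in your deployment.

```shell
#!/usr/bin/env bash
# Illustrative connection settings for reaching a TigerGraph database
# from the compute instance. All values are placeholders except the
# ports, which are TigerGraph defaults and may differ in your setup.
export TG_HOST="https://your-tigergraph-host"
export TG_GRAPH="YourGraph"
export TG_RESTPP_PORT=9000
export TG_GS_PORT=14240
echo "Workbench notebooks would connect to ${TG_HOST}:${TG_RESTPP_PORT} (graph: ${TG_GRAPH})"
```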
If you have SSH access to the compute instance, you can also connect to it from VS Code. See Connect to an Azure Machine Learning compute instance in Visual Studio Code (preview).