ML Workbench On-Prem

The following steps guide you through setting up TigerGraph ML Workbench on a local server or private cloud, as opposed to the TigerGraph Cloud integration. There are several ways to get started with ML Workbench; the options are presented below.

At the end of this Getting Started sequence, you will have a working installation and a solid starting point for the more detailed tutorials and examples that follow.

Set Up Your Workbench

Select one of the following installation approaches based on your situation:

  1. If you are new to TigerGraph, we recommend running TigerGraph Server and the Workbench together in a sandbox container. The sandbox image comes with a free version of TigerGraph preloaded with a sample data set and ML Workbench preconfigured.

  2. If you already have a running TigerGraph instance, you can install ML Workbench on your Linux or macOS machine as a standalone application.

  3. If you already have your own TigerGraph instance and a JupyterLab server, either self-hosted or running in a cloud machine learning environment, you can download our Python library and Jupyter extension to integrate the Workbench with your JupyterLab.

  4. If you are not using JupyterLab at all, but still want to perform machine learning tasks with the graph data in your TigerGraph database, you can use pyTigerGraph directly, as shown in the sketch after this list.
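
For example, the following is a minimal sketch of connecting to an existing TigerGraph instance with pyTigerGraph and streaming sampled vertex neighborhoods for training. The host, credentials, graph name (Cora), and attribute names (x, y) are placeholders for your own deployment, and the loader parameters are illustrative rather than recommended values.

```python
import pyTigerGraph as tg

# Connect to an existing TigerGraph instance (placeholder host and credentials).
conn = tg.TigerGraphConnection(
    host="http://localhost",
    graphname="Cora",
    username="tigergraph",
    password="tigergraph",
)
conn.getToken(conn.createSecret())  # only needed if REST++ authentication is enabled

# Stream sampled vertex neighborhoods as PyTorch Geometric data objects.
train_loader = conn.gds.neighborLoader(
    v_in_feats=["x"],        # vertex attributes used as input features (assumed names)
    v_out_labels=["y"],      # vertex attribute used as the label (assumed name)
    batch_size=64,
    num_neighbors=10,
    num_hops=2,
    shuffle=True,
    output_format="PyG",
)

for batch in train_loader:
    print(batch)  # one torch_geometric.data.Data object per batch
```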

Set Up Kafka Streaming

If you are using ML Workbench Enterprise Edition, you have the option of configuring Kafka data streaming between the database and the Workbench, as illustrated below.
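
As a hedged illustration, the snippet below shows how data loaders might be pointed at an external Kafka cluster through pyTigerGraph. The broker address, graph name, and credentials are assumptions about your environment, and any security settings your cluster requires are omitted.

```python
import pyTigerGraph as tg

conn = tg.TigerGraphConnection(
    host="http://localhost",   # placeholder host and credentials
    graphname="Cora",
    username="tigergraph",
    password="tigergraph",
)

# Route loader traffic through your Kafka cluster instead of plain HTTP responses.
# "kafka:9092" is a placeholder broker address.
conn.gds.configureKafka(kafka_address="kafka:9092")

# Loaders created from this connection now stream their batches via Kafka.
vertex_loader = conn.gds.vertexLoader(attributes=["x", "y"], batch_size=128)
```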

Activate Your Workbench

After installation, activate your product with the appropriate license (see Editions).

Follow the tutorials and examples in our provided notebooks to train your first model.
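
As a rough sketch of what such a notebook does, the example below trains a small GCN with PyTorch Geometric on batches produced by a neighbor loader like the one shown earlier. The layer sizes, feature dimensions, and attribute names are illustrative assumptions, not part of the Workbench API.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """Two-layer graph convolutional network for node classification."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

# Dimensions below are assumed (Cora-sized); match them to your own features.
model = GCN(in_dim=1433, hidden_dim=64, num_classes=7)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(10):
    for batch in train_loader:          # train_loader from the earlier sketch
        optimizer.zero_grad()
        out = model(batch.x.float(), batch.edge_index)
        loss = F.cross_entropy(out, batch.y.long())
        loss.backward()
        optimizer.step()
```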