This guide describes how to install the TigerGraph platform either as a single node or as a multi-node cluster. Please use the Table of Contents to go to the appropriate section of this guide.
Before you can install the TigerGraph system, you need the following:
One or more servers that meet the minimum Hardware and Software Requirements with regard to operating system, memory, and hard disk space, with enough additional memory and storage for your graph data.
sudo or root privilege.
A license key provided by TigerGraph (not applicable to Developer Edition)
A TigerGraph system package.
If your package is a *tar.gz file, you may need to install some software prerequisites.
If you do not yet have a TigerGraph system package, you can request one at www.tigergraph.com/download/ .
If your package is a *tar.gz file, you also need to ensure your machine has the following software prerequisites.
Pre-install these basic Linux utilities on your server, if necessary:
If you are installing a cluster, you also need the following:
If you will use the password login method (P method) instead of the SSH key login method (K method) to install the TigerGraph platform, you will also need the following:
The name of your package may vary, depending on the product edition (e.g., developer or enterprise) and the version (e.g., 2.0.1). For the examples here, we will assume the name is tigergraph-x.y.z.tar.gz. Substitute the name of your actual package file.
1. Extract the package.
2. A folder named tigergraph-<version>-offline (or tigergraph-<version>-developer) will be created. Change into this folder. To install with default settings, run the install.sh script:
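For example, assuming the package file is named tigergraph-x.y.z.tar.gz (substitute your actual file name and version), the two steps look like this sketch:

```shell
# Step 1: extract the package (the file name varies by edition and version)
tar xzf tigergraph-x.y.z.tar.gz

# Step 2: change into the extracted folder and run the installer
# with default settings (requires sudo or root privilege)
cd tigergraph-x.y.z-offline
sudo ./install.sh
```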
The installer will ask you a few questions:
Do you agree to the License Terms and Conditions?
What is your license key? (not applicable to Developer Edition)
Do you want to use the default TigerGraph user name or select/create your own?
Do you want to use the default TigerGraph user password or create your own?
Do you want to use the default installation folder or select/create your own?
To see the default settings, and to learn how to customize the installation, read the Installation Options section below.
3. The installer concludes by using 'su' to switch to the tigergraph user account. To confirm correct operation:
1. Try the command gadmin status.
If the system installed correctly, the command should report that zk, kafka, dict, ts3, nginx, gsql, and Visualization are up and ready. Since there is no graph data loaded yet, gse, gpe, and restpp are not initialized.
2. Try the command
4. Basic installation is now finished! Please see Post-Installation Notes below.
The following default settings will be applied if no parameters are specified:
The installer will create a user called tigergraph, with password tigergraph.
The root directory for the installation (referred to as <TigerGraph.Root.Dir>) is a folder called tigergraph located in the tigergraph user's home directory, i.e., /home/tigergraph/tigergraph.
The installation can be customized by passing command-line options to the install.sh script:
TigerGraph cluster configuration enables the graph database to be partitioned and distributed across multiple server nodes in a local network (not available in the Developer Edition). The cluster can either be a physical cluster or a network virtual cluster from a cloud service such as Amazon EC2 or Microsoft Azure.
During cluster configuration, the user provides the following information:
The IP address for each server node, e.g., 172.30.3.2
The login credentials for the nodes.
Cluster installation begins by the user downloading the TigerGraph software package to any Linux machine in the cluster, or to one with access to the cluster nodes (see notes above). When the user runs the installation script with the cluster option, it will either prompt the user for the cluster configuration information described above, or, if the user requests non-interactive installation, it will read the configuration information from a file named cluster_config.json located in the same folder as the platform package. The installer then proceeds to install the product on each of the cluster nodes and to configure the cluster.
The two installation methods, interactive and non-interactive, are described below.
In interactive mode, the installer will first ask the same basic questions it asks for single-node installation. It will then ask how many machines are in your cluster. Then it will prompt for the IP addresses of the machines, assigning each machine an alias m1, m2, m3, etc. Next it will ask for sudo user name and credentials information. Last, it will ask the user if they accept some changes to the system. (See non-interactive mode installation below for details about user credentials.) A screenshot of interactive installation is shown below.
For non-interactive mode installation, the user must put all the settings into the file cluster_config.json before running the installer. This file is in the same folder as your install.sh file and other TigerGraph package files.
The two key parameters to set are the following:
nodes.ip : Each machine in the cluster is defined as a key:value pair, where the key is a machine alias (m1, m2, m3, etc.) and the value is that machine's public IP address. NOTE: If you choose names other than m1, m2, etc., be sure to list them in alphanumeric order in the config file; the first machine ("m1") has a special role in some cases. Use as many key:value pairs as you need. The installer will auto-detect the local IP addresses and use them to configure the system. If the installer detects more than one local IP address, it will ask the user to select one for configuration.
nodes.login Two login methods are supported:
SSH with password
SSH with key file
For SSH with password, you must input the sudo/root user and its password. For SSH with key file, you may specify the AWS EC2 key file or another key file. If no key file is provided, the installer will use the default SSH key file.
HA.option : If enable.HA is set to "true", then the system will be configured for a replication factor of 2. For example, if your cluster has 6 machines, 3 will be used for one copy of the data, and 3 will be used for a replica copy of the data. More advanced configuration is possible after initial setup; see Configuring a High Availability Cluster v2.1.
Custom SSH Key : To use a custom SSH key/public key as the default and to override the key in ~/.ssh/tigergraph_rsa, use the following optional parameters: tigergraph.rsa for the private key, and tigergraph.rsa.pub for the public key, if it is not already in ~/.ssh/authorized_keys.
Below is a sample cluster_config.json file:
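The following hypothetical sketch shows the overall shape of such a file. The parameter names under nodes.login (sudoUser, sudoPassword, sshKey) are assumptions for illustration; treat the cluster_config.json template shipped inside your package as the authoritative reference. Fill in either the sudo password (P method) or the SSH key file path (K method), leaving the other blank.

```json
{
  "nodes.ip": {
    "m1": "172.30.3.2",
    "m2": "172.30.3.3",
    "m3": "172.30.3.4",
    "m4": "172.30.3.5"
  },
  "nodes.login": {
    "sudoUser": "ec2-user",
    "sudoPassword": "",
    "sshKey": "/home/ec2-user/my-key.pem"
  },
  "HA.option": {
    "enable.HA": "false"
  }
}
```

With four machines and enable.HA set to "true", m1 and m2 would hold one copy of the data and m3 and m4 the replica copy.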
Sometimes you may want further control over configuration details, such as the replication factor of individual components, security settings, and others. You may also want to install a new TigerGraph system to match your existing TigerGraph system's setup. TigerGraph supports advanced configuration with the -a option. This option can be used in either interactive mode or non-interactive mode.
First, create a configuration file named adv_config.cfg. You can manually create this file, or if you have an existing TigerGraph system, you can generate a file representing its configuration with the following command:
gadmin --dump-config | grep replicas >> adv_config.cfg
If you manually create it, make sure it's a valid YAML file.
For example, the adv_config.cfg file below sets up TigerGraph as a 3-node cluster with an HA replication factor of 3.
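A hedged sketch of such a file, assuming the replica-count keys that gadmin --dump-config reports (the key set in your own installation, as produced by the dump command above, is authoritative):

```yaml
# Illustrative only: keys taken from the gpe.replicas / gse.replicas
# settings named in this guide; verify against `gadmin --dump-config`.
gpe.replicas: 3
gse.replicas: 3
```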
Second, in the installation command, add the -a option. Once the installation is done, verify that the system has the configuration as specified.
After you have planned out your cluster configuration, you are ready to run the installer.
1. Extract the package.
2. A folder named tigergraph-<version>-offline will be created. Change into this folder. To run cluster installation in interactive mode, use the -c option:
To run cluster installation in non-interactive mode, using the settings in the cluster_config.json file, use the -c and -n options (or combined as -cn):
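Concretely, the two cluster invocations look like this sketch (run from the extracted package folder):

```shell
# Interactive cluster installation: the installer prompts for
# machine IP addresses and login credentials
sudo ./install.sh -c

# Non-interactive cluster installation: settings are read from
# cluster_config.json in the same folder
sudo ./install.sh -cn
```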
3. The installer concludes by prompting the user to login to node m1 of the cluster and use 'su' to switch to the tigergraph user account. To confirm correct operation:
1. Try the command gadmin status from any machine in the cluster.
If the system installed correctly, the command should report that zk, kafka, dict, nginx, gsql, and Visualization are up and ready. Since there is no graph data loaded yet, gse, gpe, and restpp are not initialized.
2. Try the gsql command. It must be run on node m1 of the cluster because the gsql server is installed on m1 only.
4. Basic installation is now finished! Please see Post-Installation Notes below.
If you installed with the default password, we recommend that you change it now.
To perform additional customization, run gadmin --configure (must be run on node m1 if it is a cluster), followed by gadmin config-apply. The gadmin config-apply command must be run on node m1 if it is a cluster, since only node m1 contains pkg_pool resources. If you configured one or more items of gpe.servers, gse.servers, restpp.servers, kafka.servers, zk.servers, dictserver.servers, gpe.replicas, or gse.replicas, you must reinstall the package by running the command gadmin pkg-install reset on node m1.
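Taken together, a typical customization session on node m1 might look like the following sketch (run as the tigergraph user; the pkg-install step applies only if *.servers or *.replicas items were changed):

```shell
# Interactive reconfiguration (node m1 on a cluster)
gadmin --configure

# Push the new configuration out (node m1 only,
# since only m1 holds the pkg_pool resources)
gadmin config-apply

# Only if any *.servers or *.replicas settings were changed:
gadmin pkg-install reset
```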
For further information, see the appropriate sections of the TigerGraph System Administrators Guide v2.1.
If you are a first-time user:
See our GSQL language tutorial for first-time users: GSQL 101
Start designing with our visual interface; see the TigerGraph GraphStudio UI Guide.
To see more GSQL examples, see GSQL Demo Examples .
To get answers to common questions, see TigerGraph Knowledge Base and FAQs .
If your specific versions are not listed below, please upgrade as follows:
1. Download the latest version of TigerGraph to your system.
2. Extract the tarball.
3. Run the TigerGraph.bin file that was extracted from the tarball.
These steps assume that v2.1.7 is installed. To upgrade to v2.2 from a version older than v2.1.7, please upgrade to v2.1.7 first. If the tigergraph username and password have been changed, please have them ready, as you will need them to update the system.
1. Download tigergraph-2.2.x-offline.tar.gz with user "tigergraph" and extract the tarball file.
2. Download the post_upgrade.sh script that is attached here.
3. Run tigergraph.bin under the same folder to upgrade to 2.2.x.
4. Run the post-upgrade script that was downloaded in step 2: post_upgrade.sh -u <sudoUser> [-P <sudoPass> | -K <sshKey>] -p <tigergraphUserPass>
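Assuming v2.1.7 is currently installed and both downloads sit in the same folder, the steps above can be sketched as the following session (password login shown; substitute -K <sshKey> for -P <sudoPass> for key-based login):

```shell
# Step 1: extract the new package as the tigergraph user
tar xzf tigergraph-2.2.x-offline.tar.gz

# Step 3: run the upgrade binary from the extracted folder
cd tigergraph-2.2.x-offline
./tigergraph.bin

# Step 4: run the post-upgrade script downloaded in step 2
./post_upgrade.sh -u <sudoUser> -P <sudoPass> -p <tigergraphUserPass>
```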
v2.0 can be upgraded to v2.1 Enterprise Edition. The data store format and GSQL language scripts in v2.0 are forward compatible to v2.1.
The data store format between 1.x and 2.x for single servers is forward compatible but not backward compatible. For a single server platform, users can upgrade from 1.x to 2.x without reloading data or recreating the graph schema. Some details of the GSQL language have changed, so some loading jobs and queries will need to be revised and reinstalled.
For a cluster configuration, direct upgrade from 1.x to 2.x is not supported at this time. Users interested in migrating from 1.x to 2.x need to export their data and metadata, install v2.x, and then reload data and metadata, with some small modifications. Please contact firstname.lastname@example.org for assistance.
Please consult the Release Notes for all the versions between your current version and your target version (e.g., v2.1) to see a summary of specification changes. Contact email@example.com for assistance.
Verify that your data store is compatible and is eligible for direct update / upgrade.
Review the specification changes and how they may affect your applications (loading jobs and queries).
Stop issuing new commands to your TigerGraph system and allow any operations to complete.
(Recommended) Backup your data, as a precaution.
Follow the procedure at the beginning of this document for installing a new system. The installer will automatically shut down your system and start it again.
Pay attention to output messages during the installation process which may alert you to additional tasks or checks you should perform.
Run the command gsql to start the GSQL shell. The first time after an update, gsql performs two important operations:
Copies your catalog from your old installation to the new installation.
Compares the files in the backup /dev_<datetime>/gdk/gsql/src folder to the new /dev/gdk/gsql/src folder. Pay attention to any files residing in the old folder but not in the new folder. Review them and copy them to the new folder if appropriate. See the example below.
Revise and reinstall loading jobs, user-defined functions, and queries as needed.