Managing TigerGraph Servers with gadmin
TigerGraph Graph Administrator (gadmin) is a tool for managing TigerGraph servers. It has a self-contained help function and a man page, whose output is shown below for reference. If you are unfamiliar with the TigerGraph servers, please see GET STARTED with TigerGraph.
To see a listing of all the options or commands available for gadmin, run any of the following commands:
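For example (the -h and --help flags are the usual gadmin help options; the man page is the one mentioned above):

```
gadmin -h
gadmin --help
man gadmin
```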
After changing a configuration setting, it is generally necessary to run gadmin config apply. Some commands invoke config apply automatically. If you are not certain, just run gadmin config apply.
Below is the man page for gadmin. Most of the commands are self-explanatory. Common examples are provided with each command.
NOTE: Some commands have changed in v3.0. In particular, gadmin set <config | license> has changed to gadmin <config | license> set.
Gadmin autocomplete is an auto-complete feature rather than a command: it lets you see all possible entries for a specific configuration. Press Tab while typing a command to either print all possible entries or auto-complete the entry you are currently typing.
The example below shows autocomplete for the command gadmin status.
gadmin config commands are used to manage the configuration of the TigerGraph system. For a complete list of available configuration parameters, see Configuration Parameters. gadmin config has many sub-commands as well; they are listed below.
Example: Change the retention size of the kafka queue to 10GB:
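A minimal sketch of that change, assuming the retention size is exposed through the Kafka.RetentionSizeGB parameter (verify the exact parameter name with gadmin config list):

```
gadmin config set Kafka.RetentionSizeGB 10
gadmin config apply
```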
Show what configuration changes were made.
Discard the configuration changes without applying them.
Display all configuration entries.
Change a configuration entry.
Get the value of a specific configuration entry.
Configure entries for a specific service group. e.g. KAFKA, GPE, ZK
Initialize your configuration.
List all configurable entries or entry groups.
Options for configuring your license.
To generate a license seed, use the following command:
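The general form is shown below; the host signature types (aws, azure, gcp, hardware, node-id) are described in the following paragraphs:

```
gadmin license seed <host_signature_type>

# for example, on your own hardware:
gadmin license seed hardware
```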
Depending on your host machine, you need to choose the appropriate host signature type. If you are generating the seed from a cloud instance, choose the corresponding cloud provider as your signature type. If you are generating the seed from your own machine, choose either node-id or hardware.

The hardware option tells gadmin to collect information from your machine's hardware as the host signature to generate the license seed. A signature produced by using this parameter will not be altered by software changes on the machine, including OS reinstalls. This is the usual choice.

node-id refers to the machine ID in the machine-id file located at /etc/machine-id and is a unique signature for the OS that identifies your machine. A reinstall of the OS may change the machine ID.
Example flow for applying a new license (which may be replacing an existing license key):
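A sketch of that flow, based on the commands described in this section (replace the placeholder with your actual key):

```
gadmin license set <license_key_string>
gadmin config apply
gadmin license status
```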
Once the license has been set and the config has been applied, you can run gadmin license status to view the details of your license, including the expiration date and time.
The gadmin log command will reveal the location of all commonly checked log files for the TigerGraph system.
The gadmin restart command is used to restart one, many, or all TigerGraph services. You will need to confirm the restarting of services by entering y (yes) or n (no). To bypass this prompt, you can use the -y flag to force confirmation.
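For example, to restart everything, or just a couple of services, without the confirmation prompt (the service names here are illustrative):

```
gadmin restart all -y
gadmin restart gpe gse -y
```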
The gadmin start command can be used to start one, many, or all services.
Check the status of TigerGraph component servers:
Use gadmin status to report whether each of the main component servers is running (up) or stopped (off). The example below shows the normal status when the graph store is empty and a graph schema has not been defined:
You can also check the status of each instance using the verbose flag: gadmin status -v or gadmin status --verbose. This will show each machine's status. See the example below.
Here are the most common service and process status states you might see from running the gadmin status command:
Online - The service is online and ready.
Warmup - The service is processing the graph information and will be online soon.
Stopping - The service has received a stop command and will be down soon.
Offline - The service is not available.
Down - The service has been stopped or crashed.
StatusUnknown - The valid status of the service is not tracked.
Init - Process is initializing and will be in the running state soon.
Running - The process is running and available.
Zombie - There is a leftover process from a previous instance.
Stopped - The process has been stopped or crashed.
StatusUnknown - The valid status of the process is not tracked.
The gadmin stop command can be used to stop one, many, or all TigerGraph services. You will need to confirm the stopping of services by entering y (yes) or n (no). To bypass this prompt, you can use the -y flag to force confirmation.
TigerGraph offers two levels of memory thresholds using the following configuration settings:
SysAlertFreePct and SysMinFreePct
The SysAlertFreePct setting indicates that memory usage has crossed a threshold at which the system will start throttling queries, to allow long-running queries to finish and release memory.
The SysMinFreePct setting indicates that memory usage has crossed a critical threshold, at which point queries will start aborting automatically to prevent a GPE crash and preserve system stability.
By default, SysMinFreePct is set at 10%, at which point Queries will be aborted.
Example:
SysAlertFreePct=30 means that when system memory consumption exceeds 70%, the system will enter an alert state and graph updates will start to slow down.
SysMinFreePct=20 means that 20% of the memory is required to be free. When memory consumption enters a critical state (over 80% memory consumption), queries will be aborted automatically.
Follow the steps documented in this support article to update the Nginx configurations of your TigerGraph instance.
This page describes the steps to upgrade an existing installation of TigerGraph to TigerGraph 3.1.x.
This page lists all the versions you can upgrade from through self-service. If you are trying to upgrade a production system to 3.1.x from a different version than those listed on this page, please first follow the corresponding guide to upgrade to one of the versions listed on this page, and then upgrade to 3.1.x from that version, or contact TigerGraph Support.
Upgrading to v3.1.0 - 3.1.4 is not recommended. Please upgrade to 3.1.5 or 3.1.6 instead.
Follow the steps described in Advisory: Update to TigerGraph 3.1.5+ to ensure your existing installation passes the schema validation
Any 3.x version of TigerGraph can be upgraded to another 3.x version by running the installation script with the upgrade (-U) flag.
Download the latest version of TigerGraph to your system.
Extract the tarball.
Run the install script that was extracted from the tarball, as the Linux user created during installation, with the upgrade flag (-U):
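For example, from inside the extracted folder:

```
./install.sh -U
```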
If you are upgrading from 3.0.x, follow the procedures in Enable GUI and GSQL HA after upgrading to 3.1 to enable High Availability for the GSQL server as well as the GUI server.
If you are upgrading from 3.1.x, no further actions are necessary.
If you are running a production system, please contact TigerGraph support for upgrading from 2.x.
Follow the steps described in 2.6.x to 3.x upgrade flow to upgrade from 2.6 to 3.1.5/3.1.6.
The TigerGraph architecture is built with no Single Point of Failure (SPOF), providing fault tolerance at each component level. Any component or server failure is handled seamlessly by TigerGraph's Continuous Availability.
However, when a failure spans the entire cluster, due to the loss of a data center or another catastrophic event, Continuous Availability is not sufficient. Such an event is typically defined as a disaster, and customers need a disaster recovery (DR) plan to get services back up when one occurs.
Cross-Region Replication (CRR) is a new feature that will allow users to keep two or more TigerGraph clusters in different data centers or regions in sync.
For customers, cross-region replication will help deliver on the following business goals:
Disaster Recovery: Support Disaster Recovery functionality with the use of a dedicated remote cluster
Enhanced Availability: Enhance Inter-cluster data availability by synchronizing data using Read Replicas across two clusters
Enhanced Performance: If the customer application is spread over different regions, CRR can take advantage of data locality to avoid network latency.
Improved System Load-balancing: CRR allows you to distribute computation load evenly across two clusters if the same data sets are accessed in both clusters.
Data Residency Compliance: Cross-Region replication allows you to replicate data between different data centers or Regions to satisfy compliance requirements. Additionally, this feature can be used to set up clusters in the same region to satisfy more stringent Data sovereignty or localization business requirements.
Besides Disaster recovery and enhanced business continuity, this will enable forward-thinking customers to set up the clusters as part of Blue/Green deployment purposes for agile upgrades.
Disaster Recovery support will include complete native support for all Data and Metadata replication including Automated schema changes, User management, and Query management.
Cross-region replication will be delivered in two phases:
Phase1: Cross-region replication support for data from Primary to DR cluster. Metadata operations will not be supported. Phase 1 will be delivered in TigerGraph 3.1.
Phase2: Complete native support for all Data and Metadata replication including Automated schema changes, User and Query management. Phase 2 will be delivered in TigerGraph 3.2.
To support cross-region replication, primary and standby clusters need to have the same number of partitions. However, the clusters can have different numbers of replicas. Also, the clusters can be in the same region or data center.
The following setup is needed in order to perform a failover in the event of a disaster:
There are no configuration changes required for the primary cluster. This feature is designed not to impact the primary cluster operations in any way. However, the primary cluster should be running on TigerGraph Version 3.1.
The remote cluster needs to be set up to be used as a Disaster Recovery cluster. The following configurations should be set up by the operations team to enable the synchronization of data between primary and remote clusters.
All the data loaded into the primary cluster will be copied and loaded into the DR cluster automatically. In TigerGraph 3.1, users must still perform all the metadata operations manually. Metadata operations include schema changes, installation of new queries, and user management operations.
With respect to schema changes, users must perform all schema change operations on the DR cluster, in the same order, after successfully applying each schema change in the primary cluster. If the corresponding schema change is not applied in the DR cluster, data updates will pause there. If a wrong schema change (or the wrong order) is applied in the DR cluster, data inconsistency issues will result in a loss of cluster services.
In the event of catastrophic failure that has impacted the full cluster due to Data Center or Region failure, the customer can initiate the failover to the DR cluster. This is a manual process. Users will have to make the following configuration changes to upgrade the DR cluster to become the primary cluster.
If we want to set up a new DR cluster over the upgraded primary cluster:
There is no limit on the number of times a cluster can fail over to another cluster. When designating a new DR cluster, make sure that you set the System.CrossRegionReplication.GpeTopicPrefix parameter correctly by adding an additional .Primary. For example, if your original cluster fails over once, and the current cluster's GpeTopicPrefix is Primary, then the new DR cluster needs to have its GpeTopicPrefix be Primary.Primary. If it needs to fail over again, the new DR cluster needs to have its GpeTopicPrefix set to Primary.Primary.Primary.
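A sketch of setting this on a new DR cluster after one failover, using the parameter name above (apply and restart afterwards, as with any configuration change):

```
gadmin config set System.CrossRegionReplication.GpeTopicPrefix Primary.Primary
gadmin config apply -y
gadmin restart all -y
```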
A TigerGraph system with High Availability (HA) is a cluster of server machines which uses replication to provide continuous service when one or more servers are not available or when some service components fail. TigerGraph HA service provides load balancing when all components are operational, as well as automatic failover in the event of a service disruption. The replication factor is the number of copies of the data. In contrast, the partitioning factor is the number of machines across which one copy of the database is distributed.
If the replication factor is 2, a fully-functioning system maintains two copies of the data, stored on separate machines. Users can choose a higher replication factor for greater query throughput and greater system resiliency.
The total cluster size should be (partitioning factor) X (replication factor).
The smallest possible distributed database with HA is 2 x 2 = 4 machines.
The smallest possible non-distributed database with HA is 1 x 3 = 3 machines.
There is no upper limit for either partitioning factor or replication factor.
The same version of the TigerGraph software package is installed on each machine.
Starting from version 3.0, configuring a HA cluster is part of platform installation. See the Installation Guide page for details.
HA configuration can only be done at the time of system installation and before deploying the system for database use. HA configuration change after installation is not supported. Converting a non-HA system to an HA cluster would require reinstalling all the TigerGraph components and rebuilding the database from the start.
During TigerGraph platform installation, specify the replication factor. The default value for the replication factor is 1, which means there is no HA setup for the cluster. The user does not explicitly set the partitioning factor. Instead, the TigerGraph system will set
partitioning factor = (number of machines) / (replication factor)
If the division does not produce an integer, some machines will be left unused.
Example: If you install a 7-node cluster with replication factor = 2, the resulting configuration will be 2-way HA for a database with a partitioning factor of 3. One machine will be unused.
This section provides an overview of the system requirements for running TigerGraph in a production or development environment.
TigerGraph can be used for different scopes and the system requirements largely depend on the use of the software. This page provides a good reference, but actual hardware requirements will vary based on your data size and workload.
*Actual needs (CPU, memory, storage) depend on data size and application requirements. Consult our solution architects for an estimate of memory and storage needs.
Comments:
The TigerGraph system is optimized to take advantage of multiple cores.
Performance is optimal when the memory is large enough to store the full graph and to perform computations.
The TigerGraph Software Suite is built on 64-bit Linux. It can run on a variety of Linux 64-bit distributions. The software has been tested on the operating systems listed below:
When a range of versions is given, it means that the software has been tested on the oldest and newest versions. We continually evaluate the operating systems on the market and work to update our set of supported operating systems as needed. The TigerGraph installer will install its own copies of Java JDK and GCC, accessible only to the TigerGraph user account, to avoid interfering with any other applications on the same server.
Please use a bash shell for the installation process.
Before offline installation, the TigerGraph system needs a few basic software packages to be present:
tar
curl
crontab
ip
ssh/sshd (no longer required for single-server installation of TigerGraph 3.1.6 and later)
more
netstat
sshpass, if you intend to use the password login method (P method) instead of the SSH key login method (K method) to install the TigerGraph platform
If they are not present, contact your system administrator to have them installed on your target system. For example, they can be installed with one of the following commands.
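For example, on RPM-based and Debian-based systems respectively (the package names below are the usual providers of these utilities and may differ on your distribution):

```
sudo yum install tar curl cronie iproute openssh-clients openssh-server util-linux net-tools sshpass
sudo apt-get install tar curl cron iproute2 openssh-client openssh-server util-linux net-tools sshpass
```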
If you are running TigerGraph on a multi-node cluster, you must install, configure and run the NTP (Network Time Protocol) daemon service. This service will synchronize system time among all cluster nodes.
If you are running TigerGraph on a multi-node cluster, you must configure the iptables/firewall rules to make all TCP ports open among all cluster nodes.
In an on-premises installation, the system is fully functional without a web browser. To run the optional browser-based TigerGraph GraphStudio User Interface or Admin Portal, you need an appropriate browser:
Component | Minimum | Recommended |
CPU* | 4 cores for <500MB data, 8 cores for >500MB data (64-bit processor) | 16+ cores (64-bit processor) |
Memory* | 8 GB | ≥ 64GB |
Storage* | 20 GB | ≥ 1TB, RAID10 volumes for better I/O throughput. SSD storage is recommended. |
Network | 1 Gigabit Ethernet adapter | 10Gigabit Ethernet adapter for inter-node communication |
Operating system | On-Premises hosting | Java JDK version | GCC version (C/C++) |
RedHat 6.5 to 6.9 (x64) | Yes | 1.8.0_141 | 4.8.2 |
RedHat 7.0 to 7.8 (x64) | Yes | 1.8.0_141 | 4.8.2 |
RedHat 8.0 to 8.2 (x64) | Yes | 1.8.0_141 | 4.8.2 |
Centos 6.5 to 6.9 (x64) | Yes | 1.8.0_141 | 4.8.2 |
Centos 7.0 to 7.4 (x64) | Yes | 1.8.0_141 | 4.8.2 |
Centos 8.0 to 8.2 (x64) | Yes | 1.8.0_141 | 4.8.2 |
Ubuntu 14.04 LTS Ubuntu 16.04 LTS Ubuntu 18.04 LTS (x64) | Yes | 1.8.0_141 | 4.8.4 |
Debian 8 (jessie) | Yes | 1.8.0_141 | 4.8.4 |
Browser | Chrome | Safari | Firefox | Opera | Edge | Internet Explorer |
Supported version | 54.0+ | 11.1+ | 59.0+ | 52.0+ | 80.0+ | 10+ |
Installing Single-machine and Multi-machine systems
This guide describes how to install the TigerGraph platform either as a single node or as a multi-node cluster, interactively or non-interactively.
If you signed up for the Enterprise Free license, you also have access to the TigerGraph platform as a Docker image or a virtual machine (VirtualBox) image. Follow the instructions in Getting started to start up TigerGraph in a Docker container or with VirtualBox.
Before you can install the TigerGraph system, you need the following:
One or more machines that meet the minimum Hardware and Software Requirements.
A sudo user with the same username and login credential on every machine.
If sudo privilege is not available, please contact TigerGraph support for workarounds.
A license key provided by TigerGraph (not applicable to Enterprise Free)
A TigerGraph system package
If you do not yet have a TigerGraph system package, you can request one at the following address: https://www.tigergraph.com/get-tigergraph
If you are installing a cluster, ensure that every machine has the same SSH port and the port stays open during installation
TigerGraph's installation script supports both single-node and cluster installation, and the user can choose to install either interactively or non-interactively.
The following describes the procedure to install TigerGraph on Linux interactively. The filename of your package may vary, depending on the product edition and version. For the examples here, we use the filename tigergraph-<version>.tar.gz, which should be replaced by the actual filename of your package.
Extract the package by running the following command. A folder named tigergraph-<version>-offline will be created.
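For example (substitute the actual filename of your package):

```
tar -xzf tigergraph-<version>.tar.gz
```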
Navigate to the tigergraph-<version>-offline folder and run the script install.sh with the following commands:
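A minimal sketch of those commands; the installer is typically run with sudo privileges, as noted in the prerequisites:

```
cd tigergraph-<version>-offline
sudo ./install.sh
```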
The installer will ask for the following information, for which you may choose to hit Enter to skip and use the system default or enter a new value:
Your agreement to the License Terms and Conditions
Your license key (not applicable for Enterprise Free)
Username for the Linux user who will own and manage the TigerGraph platform
The installer creates a Linux user with this username, who will be the only user authorized to run gadmin commands to manage the TigerGraph platform
Password for the Linux user who will own and manage the TigerGraph platform
Path to where the installation folder will be
Path to where the data folder will be
Path to where the log folder will be
Path to where the temp folder will be
The SSH port for your machine
To see what the default settings are, read the Installation options section below.
License keys are long (over 100 characters). If you copy and paste the license key, be careful not to accidentally include an end-of-line character.
TigerGraph cluster configuration enables the graph database to be partitioned and distributed across multiple server nodes in a local network. After you have answered the questions described in the previous step, the installation script will ask for the following to complete cluster configuration:
The number of nodes in your cluster. Each node will be given an alias following the input (m1, m2, m3, etc.)
If this is a single-node installation, enter 1
The IP address of each node
Username and credentials information of the sudo user
Every machine in the cluster must have a sudo user with the same username and password or SSH key.
Permission to set up NTP time synchronization
Permission to set firewall rules among the cluster nodes
In TigerGraph 3.x, the installation machine can be within or outside the cluster. If outside the cluster, the installation machine still needs to be a Linux machine.
After all the questions are answered, the script will proceed to installation. A screenshot of the interactive installation is shown below:
After installation is complete, you can switch to the Linux user who owns the platform (created in Step 2) with the following command:
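For example, assuming the default username tigergraph:

```
su - tigergraph
```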
At the prompt, enter the password that was set in Step 2.
After switching to the correct user, you now have access to gadmin commands. Confirm successful installation by running gadmin status. If the system is installed correctly and the license is activated, the command should report that all services are up and ready. Since there is no graph data loaded yet, GSE and GPE will show "Warmup".
The following describes the procedure to install TigerGraph on Linux non-interactively.
Extract the package by running the following command. A folder named tigergraph-<version>-offline will be created.
Navigate to the tigergraph-<version>-offline folder. Inside the folder, there is a file named install_conf.json. For non-interactive mode installation, the user must review and modify all the settings in install_conf.json before running the installer.
Below is an example of the install_conf.json file:
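A sketch of that file, assembled from the field descriptions that follow; all values are placeholders, and the exact key names should be checked against the file shipped in your package:

```
{
  "BasicConfig": {
    "TigerGraph": {
      "Username": "tigergraph",
      "Password": "tigergraph",
      "SSHPort": 22,
      "PrivateKeyFile": "",
      "PublicKeyFile": ""
    },
    "RootDir": {
      "AppRoot": "/home/tigergraph/tigergraph/app",
      "DataRoot": "/home/tigergraph/tigergraph/data",
      "LogRoot": "/home/tigergraph/tigergraph/log",
      "TempRoot": "/home/tigergraph/tigergraph/tmp"
    },
    "License": "<your_license_string>",
    "NodeList": [
      "m1: 192.168.1.1",
      "m2: 192.168.1.2"
    ]
  },
  "AdvancedConfig": {
    "ClusterConfig": {
      "LoginConfig": {
        "SudoUser": "sudouser",
        "Method": "P",
        "P": "<sudo_user_password>",
        "K": "/path/to/ssh_key"
      },
      "ReplicationFactor": 1
    }
  }
}
```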
Here is a description of all the fields in the config file:
"BasicConfig"
"TigerGraph": Information about the Linux user that will be created by the installer, who owns and manages the TigerGraph platform.
"Username": Username of the Linux user.
"Password": Password of the Linux user.
"SSHPort": Port number used to establish SSH connections.
"PrivateKeyFile" (optional): Absolute path to a valid private key file. If left empty, TigerGraph will generate one named tigergraph.rsa automatically.
"PublicKeyFile" (optional): Absolute path to a valid public key file. If left empty, TigerGraph will generate one named tigergraph.pub automatically.
"RootDir"
"AppRoot": Absolute path to where the application folder will be.
"DataRoot": Absolute path to where the data folder will be.
"LogRoot": Absolute path to where the log folder will be.
"TempRoot": Absolute path to where the temp folder will be.
"License": Your TigerGraph license string.
"NodeList": A JSON array of the nodes in the cluster. Each machine in the cluster is defined as a key-value pair, where the key is a machine alias (m1, m2, m3, etc.) and the value is the IP address of the node.
"AdvancedConfig"
"ClusterConfig": Cluster configurations
"LoginConfig": Login configurations
"SudoUser": Username of the sudo user who will be used to execute the installation on all nodes.
"Method": Authentication method for SSH. Enter "P" to use password authentication and "K" to use key-based authentication.
"P": Password of the sudo user.
"K": Absolute path to the SSH key to be used to authenticate the sudo user.
"ReplicationFactor": Replication factor of the cluster.
If you would like to enable the High Availability (HA) feature, please make sure you have at least 3 nodes in the cluster and set the replication factor to be greater than 1. For example, if your cluster has 6 nodes, you could set the replication factor to be 2 or 3. If you set the replication factor to be 2, then the partitioning factor will be 6 / 2 = 3. Therefore, 3 nodes will be used for one copy of the data, and the other 3 nodes will be used as a replica copy of the data.
Ensure that the total number of nodes is evenly divisible by the replication factor. Otherwise, some nodes may not be utilized as part of the HA cluster.
Start the non-interactive installation process by running the install.sh script with the -n option:
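For example (run with sudo, as in the interactive case):

```
sudo ./install.sh -n
```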
The following default settings will be applied if no parameters are specified:
The installer will create a Linux user with username tigergraph and password tigergraph. This user will be the only user authorized to run gadmin commands to manage the TigerGraph platform and services.
If there is already a user named tigergraph, this user will be designated as the platform owner and no other user will be created.
The default root directory for the installation is /home/tigergraph/tigergraph, with the App/Data/Log/Temp folders within it:
App Path: /home/tigergraph/tigergraph/app
Data Path: /home/tigergraph/tigergraph/data
Log Path: /home/tigergraph/tigergraph/log
Temp Path: /home/tigergraph/tigergraph/tmp
The root directory for the installation (referred to as <tigerGraph_root_dir>) is a folder called tigergraph located in the tigergraph user's home directory, i.e., /home/tigergraph/tigergraph.
The installation can be customized by running command-line options with the install.sh script:
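To list the available options, the script's own help output can be used; the -h flag below is an assumption and may differ in your package:

```
./install.sh -h
```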
Then as the TigerGraph user, run the following Linux command:
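A sketch of that command (guninstall is the uninstaller described below):

```
guninstall
```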
If you have the TigerGraph platform installed on a multi-node cluster, when running the guninstall command on a single node in the cluster, you will have the option to uninstall TigerGraph from all of the nodes in the cluster or just a single node.
This guide covers two advanced license issues:
Activating a System-Specific License
Usage limits enforced by certain license keys
This section provides step-by-step instructions for activating or renewing a TigerGraph license, by generating and installing a license key unique to that TigerGraph system. This document applies to both non-distributed and distributed systems. In this document, a cluster acting cooperatively as one TigerGraph database is considered one system.
A valid license key activates the TigerGraph system for normal operation. A license key has a built-in expiration date and is valid on only one system. Some license keys may apply other restrictions, depending on your contract. Without a valid license key, a TigerGraph system can perform certain administrative functions, but database operations will not work.

To activate a new license, a user first configures their TigerGraph system. The user then collects the fingerprint of the TigerGraph system (license seed) using a TigerGraph-provided utility program. The collected materials are sent to TigerGraph or an authorized agent via email or web form. TigerGraph certifies the license based on the collected materials and sends a license key back to the user. The user then installs the license key on their system using another TigerGraph command. A new license key (e.g., one with a later expiration) can be installed on a live system that already has a valid license; the installation process does not disrupt database operations.
Before beginning the license activation process, the TigerGraph package must be installed on each server, and the TigerGraph system must be configured with gadmin.
Company/Organization name
A new license key file will be certified and sent back to you.
Copy the license key file to a directory on the TigerGraph system where the TigerGraph Linux user has read permission.
Run the following three commands to install the license key:
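A sketch of those three commands, following the gadmin license flow described earlier (replace the placeholder with the key from the file you received):

```
gadmin license set <license_key_string>
gadmin config apply
gadmin restart all -y
```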
If the installation completes successfully, the message "install license successfully" will be displayed in the console.
After a license key has been installed successfully on a TigerGraph system, the information of the installed license is available via either the CLI command gadmin license status or via the following REST API:
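A sketch of that REST call; the /showlicenseinfo endpoint on the RESTPP port (9000 by default) is an assumption and should be verified against your version's endpoint list:

```
curl -X GET "http://localhost:9000/showlicenseinfo"
```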
Some license keys include a limit on the graph size, or on the number and size of machines which may be used, or restrict the use of certain optional features. In the case of a memory usage or graph size limit, when a TigerGraph system reaches its license's limit, additional data will not be loaded into the graph. You may still query the graph and delete data. To check whether or not you have exceeded your license limits, use the command gstatusgraph and collect the VertexCount, EdgeCount, and Partition Size. Compare this information to the limits established for your license.
The output may include a warning message such as the following:
By design, TigerGraph has had built-in HA for all the internal critical components from the beginning. This includes the GPE, GSE, REST API servers, etc. However, the user-facing applications (GSQL and GraphStudio) were designed to be set up by customers based on their High Availability (HA) needs, which included building solutions using non-TigerGraph components. With the 3.1 release, TigerGraph supports native HA functionality for user-facing applications as well. This simplifies and streamlines HA deployment for users. For operations personnel, it reduces operational overhead while enhancing availability for end users.
Before we elaborate on the design, we need to understand the topology of how TigerGraph services are deployed in a cluster. TigerGraph nodes in a cluster are organized as 'm1', 'm2', and so on. Although all nodes in the cluster serve the same function, storing data and participating in query execution, m1 is a special node. The GSQL server runs on this node to provide critical services such as storing client metadata and managing connections between client and server. With this feature, m1 will no longer be the only node serving GSQL server connections. In the new design, other nodes will run standby GSQL servers to provide high availability for client connections.
In the 3.1 release, the primary GSQL server will continue to perform all the tasks handled by the GSQL server prior to the 3.1 release. This includes:
Process client connections
Query requests from GSQL clients
User management requests including token management
In addition to these, when Primary fails, a standby server will switch to become the Primary server, and when the old Primary server is back to normal function, it will become a GSQL Standby server.
Standby GSQL servers will:
Redirect requests to the Primary server
Help the Primary server check for source data file existence and parse file headers (if ANY is chosen)
There is no change in how GSQL Client works.
Users store the following data on the m1 node that is needed for query execution:
GSQL loader's Token functions
ExprFunctions
ExprUtil
This is part of the user source code that the TigerGraph system uses for compilation. Prior to the 3.1 release, this information was available to the GSQL server only on the m1 node. Typically, users can modify these files directly on the machine. But with HA, the Primary GSQL server may not be on m1 and can be switched to any other machine at any time. Users have to make sure all the machines have the same content whenever these files are updated. This is a new requirement for users.
GSQL server will retrieve the User source code files in the following priority order when it needs them:
Via github/github enterprise (if configuration is set),
Files uploaded via PUT,
Default files that are shipped with the product
This requires public network access, or access to a GitHub Enterprise server. Users need to provide the following gadmin configuration:
Example:
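A sketch of that configuration; the parameter names below (GSQL.GithubUrl, GSQL.GithubRepository, GSQL.GithubBranch, GSQL.GithubPath, GSQL.GithubUserAcessToken) are assumptions and should be verified with gadmin config list GSQL:

```
gadmin config set GSQL.GithubUrl https://api.github.com
gadmin config set GSQL.GithubRepository <org>/<repo>
gadmin config set GSQL.GithubBranch main
gadmin config set GSQL.GithubPath src/TokenBank
gadmin config set GSQL.GithubUserAcessToken <token>
gadmin config apply
```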
When the GSQL server needs to compile the files, it will retrieve them from GitHub if GitHub access is configured as above. It will retry 3 times, with a timeout of 5 seconds each time. If the connection fails, it will fall back to the next priority level, i.e. files uploaded via PUT.
We are introducing new GSQL commands to address this need. These commands will allow users to upload and download the user source files.
Upload source code
Example:
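A sketch of the upload command, using the GSQL PUT syntax described below (the local path is hypothetical):

```
PUT ExprFunctions FROM "/home/tigergraph/ExprFunctions.hpp"
```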
Download source code
Example:
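A corresponding sketch of the download command (the local path is hypothetical):

```
GET ExprFunctions TO "/home/tigergraph/tmp/ExprFunctions.hpp"
```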
The uploaded files will be saved to all nodes. Users need either the 'superuser' or 'globaldesigner' role to have sufficient privileges to run the PUT/GET commands.
When calling the GET command, the user downloads the corresponding file from the Primary node to a local directory on the current cluster node.
When calling the PUT command, the local file is copied to all of the cluster nodes, including the current one.
An example usage scenario for updating the files is as follows:
For each cluster node, TokenBank.cpp is stored at:
ExprFunctions.hpp and ExprUtil.hpp files are stored at:
Full path should be provided including the file name for PUT/GET, eg:
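For example (the first command uses an absolute path, the second a relative path; both paths are hypothetical):

```
PUT ExprFunctions FROM "/home/tigergraph/tmp/ExprFunctions.hpp"
GET TokenBank TO "tmp/TokenBank.cpp"
```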
Notice that the first command uses an absolute path, while the second uses a relative path. Both are supported, but "~" is not supported (e.g., "~/tmp/x.hpp").
Additionally, users can use the commands in the following manner:
Use a folder name, and the default file name will be added automatically. For example:
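A sketch of that usage (the folder path matches the description below):

```
PUT ExprFunctions FROM "/home/path/tmp"
GET TokenBank TO "/home/path/tmp"
```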
The PUT command will use ExprFunctions.hpp under the directory "/home/path/tmp".
The GET command will create/overwrite the file "/home/path/tmp/TokenBank.cpp".
If the file name is given in the path, its file extension must be consistent with the corresponding file. For example, running PUT ExprFunctions with a file path ending in ".cpp" is not allowed, since PUT/GET ExprFunctions must use ".hpp" as the file extension.
If the corresponding file is not found, the GSQL Primary server will use the default file in the package. These default files are at:
In the pre-3.1 release design, the file path used in loading jobs refers to the file on m1, unless the user specifies a machine name before the path (ALL, ANY, m1, m2, ...). In the new HA design, the Primary server can be running on any machine and can be switched. This means the GSQL server may or may not find the file. To be backward-compatible, we prefix a machine name if the client is in the TigerGraph cluster.
Users can specify the node ID before the path using ALL, ANY, m1, m2, and so forth. Declaring ALL or ANY as the host ID will load files from every cluster node.
Users can use a form like "m1|m3|m4" to declare a combination of several nodes.
If the hosts are not specified, the loading job will look for the host ID of the current node that is running it (by searching the nodes in $(gadmin config get GSQL.BasicConfig.Nodes)). If not found, it will use node "m1" by default.
Data source can be created and used with a file path or a JSON string, same as above.
GSQL client can connect to GSQL server in the different ways with the following priority order:
Users can specify the IP and port when calling the GSQL client using "gsql -i" or "gsql -ip". For example:
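A sketch of such a call (the addresses are placeholders; the port defaults to 14240 as noted below):

```
gsql -ip 192.168.1.1:14240,192.168.1.2
```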
GSQL clients will try these IPs and ports one by one. Notice the port is optional; it will use 14240 by default, which is the default port for the GSQL server.
If "gsql -i" or "gsql -ip" is not used, the GSQL client will search for the file gsql_server_ip_config in the directory where the GSQL client is run. The file gsql_server_ip_config should be a one-line file such as shown below. The GSQL client will traverse the IPs and ports in the file in order.
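A sketch of that one-line file (addresses are placeholders):

```
192.168.1.1:14240,192.168.1.2,192.168.1.3:14240
```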
Similarly, the port number is also optional, using 14240 by default.
If "gsql -i" or "gsql -ip" is not used, and the file gsql_server_ip_config does not exist where "gsql" is called, the GSQL client will try to connect to the local server (127.0.0.1:8123).
Use gadmin config to get/set the following configurations related to GSQL High Availability.
The first is the heartbeat interval in milliseconds. The second ("max misses") is the total timeout for switching the Primary server, measured in heartbeat intervals. It must be at least 2 to allow 1 heartbeat miss.
For example, if we use “IntervalMS = 2000” and “max misses = 4” as shown above, then the total timeout is 2s×4 = 8 seconds. So the current Primary server will be switched if its heartbeat has stopped for more than 8 seconds.
TigerGraph supports native HA functionality for its application server, which serves the APIs for TigerGraph's GUI - GraphStudio and Admin Portal. The application server follows the active-active architecture, in which the server is always on m1 and all replicas of m1. If one server falls offline, you can use the other servers without any loss of functionality.
When you deploy TigerGraph in a cluster with multiple replicas, it is ideal to set up load balancing to distribute network traffic evenly across the different servers. This page discusses what to do when a server dies when you haven't set up load balancing, and the steps needed to set up load balancing for the application server.
When a server dies, users can proceed to the next available server within the cluster to resume the operations. For example, assuming the TigerGraph cluster has Application Server on m1 and m2. If the server on m1 dies, users can access m2 to use GraphStudio and Admin Portal.
Keep in mind that any long-running operation that is currently in process when the server dies will be lost.
When you deploy TigerGraph in a cluster with multiple replicas, it is ideal to set up load balancing to distribute network traffic evenly across the different servers.
One possible choice for setting up load balancing is through the use of Nginx.
Here is an example Nginx configuration for the upstream and server directives:
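A minimal sketch of such a configuration, assuming two application servers on port 14240 and the ip_hash method for session persistence (addresses and the upstream name are placeholders):

```
upstream tigergraph_gui {
    ip_hash;                      # session persistence by client IP
    server 192.168.1.1:14240;
    server 192.168.1.2:14240;
}

server {
    listen 80;
    location / {
        proxy_pass http://tigergraph_gui;
    }
}
```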
An active health check can be set on the following endpoint if using Nginx Plus: /api/ping
When creating or using an existing security group in Step 3, make sure it allows requests from the load balancer to port 14240 of the instances in the target group.
In Step 4, set the health check URL to /api/ping
In Step 5, enter 14240 for the port of your instances.
After successfully creating your load balancer, you should now be able to access GraphStudio through the load balancer's DNS name. The DNS name can be found under the "Description" tab of your load balancer in the Amazon EC2 console.
Some TigerGraph-specific settings are required during Application Gateway setup:
Under the section “Configuration Tab”
For step 5, where it states to use port 80 for the backend port, use port 14240 instead.
In the same window, enable “Cookie-based affinity”.
Pick port from backend HTTP settings: yes
Path: /api/ping
HTTP Settings: The HTTP settings associated with the backend pool created during the Application Gateway setup
After successfully creating the Application Gateway, you should now be able to access GraphStudio from the frontend IP associated with the Application Gateway.
Click “Specify port name mapping”, and use 14240 for the port
For the port, use 14240.
For the path, use /api/ping.
After successfully creating the load balancer, you should now be able to access GraphStudio from the frontend IP associated with the load balancer.
The Lightweight Directory Access Protocol (LDAP) is an industry-standard protocol for accessing and maintaining directory information services across a network. Typically, LDAP servers are used to provide a centralized user authentication service. The TigerGraph system supports LDAP authentication by allowing a TigerGraph user to log in using an LDAP username and credentials. During the authentication process, the GSQL server connects to the LDAP server and requests the LDAP server to authenticate the user.
GSQL LDAP authentication supports any LDAP server that follows LDAPv3 protocol. StartTLS/SSL connection is also supported.
SASL authentication is not yet supported. Some LDAP servers are configured to require a client certificate upon connection. Client certificates are not yet supported in GSQL LDAP authentication.
In order to manage the user roles and privileges, the TigerGraph GSQL server employs two concepts—proxy user and proxy group.
A proxy user is a GSQL user created to correspond to an external LDAP user. When operating within GSQL, the external LDAP user's roles and privileges are determined by the proxy user.
A proxy group is a GSQL user group that is used to manage a group of proxy users who share similar properties/attributes in LDAP.
An existing LDAP user can log in to GSQL only when the user matches at least one of the existing proxy groups' criteria. Once the criteria are satisfied, a proxy user will be created for the LDAP user. The roles and privileges of the proxy user are at least as permissive as those of the proxy group(s) they belong to. It is also possible to change the roles of a specific proxy user independently. When the roles and privileges of a proxy group change, the roles and privileges of all the proxy users belonging to this proxy group change accordingly.
To configure a TigerGraph system to use LDAP, there are two main configuration steps:
Configure the LDAP Connection.
Configure GSQL Proxy Groups and Users.
To enable and configure LDAP, run three commands.
1. Configure LDAP:
The gadmin program will then prompt the user for the settings for several LDAP configuration parameters.
2. Apply the configuration:
3. Restart the GSQL server:
An example configuration is shown below.
Below is an explanation of each configuration parameter.
Set to "true" to enable LDAP; "false" to disable LDAP.
Hostname of LDAP server.
Port of LDAP server.
Base DN (Distinguished Name), in order for GSQL to perform the LDAP search.
This specifies the LDAP attribute to search when the GSQL server looks up the usernames in the LDAP server upon login. For example, in the configuration shown above, when a user logs in with the "-u john" option, the GSQL server will search the "uid" attribute in LDAP to find "john" and check the credentials only after "john" is found.
These options are needed when the LDAP server is not publicly readable. In this case, the admin DN and corresponding password need to be specified in order for the GSQL server to connect to the LDAP server.
When set to "none", TigerGraph uses insecure LDAP connection. This can be changed to a secure connection protocol: "starttls" or "ssl".
When starttls or ssl is used, a truststore path as well as its password needs to be configured.
Currently, the TigerGraph system supports two truststore formats: pkcs12 and jks.
When specified, the GSQL server will blindly trust any LDAP server.
This section explains how to configure a GSQL proxy group in order to allow LDAP user authentication.
A GSQL proxy group is created by the CREATE GROUP command with a given proxy rule. For example, assume there is an attribute called "role" in the LDAP directory, and "engineering" is one of the "role" attribute values. We can create a proxy group with the proxy rule "role=engineering". Different roles can then be assigned to the proxy group. An example is shown below. When a user logs in, the GSQL server searches for the user's entry in the LDAP directory. If the user's LDAP entry matches the proxy rule of an existing proxy group, a proxy user is created, under which the user logs in.
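A sketch of creating such a group and assigning it a role; the group and graph names are hypothetical, and the exact PROXY and GRANT syntax should be checked against your GSQL version:

```
CREATE GROUP developers PROXY "role=engineering"
GRANT ROLE querywriter ON GRAPH social TO developers
```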
The SHOW GROUP command will display information about a group. The DROP GROUP command deletes the definition of a group.
Only users with the admin and superuser role can create, show, or drop a group.
Nothing needs to be configured for a proxy user. As long as the proxy rule matches, the proxy user will be automatically created upon login. A proxy user is very similar to a normal user. The minor differences are that a proxy user cannot change their password in GSQL and that a proxy user comes with default roles inherited from the proxy group that they belong to.
Admin_dn is the "distinguished name" of an LDAP entry. In LDAP, "distinguished name" is often abbreviated as dn. When configuring this field, a dn entry with read permission on the LDAP directory is expected. Configuring a dn with no read permission will result in an error. Not configuring this field will likely result in an error as well, since the LDAP server is typically not publicly readable. Please note that only the dn field will be accepted for this entry; all other entries will result in an authentication error. The corresponding password for the configured dn should also be set correctly in the configured entry "security.ldap.admin_password".
It depends on what type of protocol your LDAP server uses. SSL/TLS is very common in enterprise use today. When SSL is used, the port is typically 636 instead of default port 389.
You need to configure the truststore when SSL/TLS is used by the LDAP server. The truststore's path, password, and format need to be configured accordingly. We support two formats: JKS and PKCS12. JKS is the Java KeyStore. The corresponding certificates for the LDAP server need to be imported into the JKS for successful authentication. Different truststore formats are typically interchangeable.
This might be the case if SSL/TLS is enabled from the LDAP server side but you don't have a certificate. You can set "security.ldap.secure.trust_all" to true to bypass the SSL/TLS certificate checking.
"Parameter error" means some of the LDAP configurations are not set properly. Most often it is because admin_dn, admin_password, or the login username and password are not set correctly. Unfortunately, we cannot know exactly what field is wrong because the LDAP server side does not respond back with such detail.
Creation and management of multiple users and roles is available in the Enterprise Edition only.
The TigerGraph platform provides a complete and robust feature set to manage and control user privilege and authentication of GSS operations:
Creation and management of multiple TigerGraph users
Granting to each user a role on a particular graph, each role entailing a set of privileges
OAuth 2.0-style user authentication
Extensible framework, so that additional security- and user-related capabilities can be added in future releases
The TigerGraph system offers two options for credentials.
username-password pair
a token: a unique 32-character string which can be used for REST++ requests, with an expiration date.
When the TigerGraph platform is first installed, user authentication is disabled. The installation process creates a gsql superuser who has the name tigergraph and password tigergraph. As long as user tigergraph's password is tigergraph, gsql authentication remains disabled. This is designed for user convenience in single-user configurations or installations which do not require security, such as demo and training installations. The behavior is compatible with early TigerGraph versions which did not support multiple roles or multiple graphs.
Because there are two ways to access the TigerGraph system, either through the GSQL shell or through REST++ requests, there are two steps needed to set up a secure system with user authentication for both points of entry:
To enable user authentication for GSQL: change the password of the tigergraph user to something other than tigergraph.
To enable OAuth 2.0-style authentication for REST++, use the gadmin program to configure the RESTPP.Authentication parameter. See details below.
More details about each of these two steps are below.
To enable user authentication for GSQL: change the password of the tigergraph user to something other than tigergraph. See ALTER PASSWORD below.
To run a single GSQL command or command file, the user must provide their username and password. The graph also needs to be specified. To specify the username in the command line, use the -u option. The user can also provide their password with the -p option. If the password is not provided on the command line, the system will prompt the user for their password, so this method is only appropriate for interactive use. If -u is not used, then the system will assume that the request is coming from the default tigergraph user. It will then prompt for tigergraph's password (assuming GSQL authentication is enabled). Note that if -u is not used and authentication is disabled, then the system simply responds to all requests, as it did in earlier versions (unprotected administrative mode).
Use the -g parameter to specify which graph to operate on.
To enter the GSQL interactive shell, simply omit the <command> from the command line. The user does not need to provide credentials again inside the shell. The example below shows two users entering the shell with their passwords. The user does not need to specify a graph to enter the interactive shell.
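A sketch of what those invocations might look like (the usernames are hypothetical; each user is prompted for their password):

```
gsql -u john
gsql -u pearl
```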
Authorization for gadmin
Currently, authorization for gadmin commands comes from Linux, and is not related to GSQL authorization. In short, only the Linux TigerGraph user can run gadmin.
Details: During installation, the user selects a name and password for the TigerGraph Linux user. The default user and password are tigergraph and tigergraph, respectively. This user is a Linux user; the installer will create a Linux account if needed. Only the TigerGraph Linux user can run gadmin. This Linux user is unrelated to the TigerGraph default user mentioned in the GSQL Authentication section.
The REST++ server implements OAuth 2.0-style authorization as follows: Each user can create one or more secrets (unique pseudorandom strings). Each secret is associated with a particular user and the user's privileges for a particular graph. Anyone who has this secret can invoke a special REST endpoint to generate authorization tokens (other pseudorandom strings). An authorization token can then be used to perform TigerGraph database operations via other REST endpoints. According to OAuth 2.0 protocol, each token will expire after a certain period of time. The TigerGraph default lifetime for a token is 1 month.
Each REST++ request should contain an authorization token in the HTTP header. The REST++ server reads the header. If the token is not valid, REST++ will refuse to run the query and instead will return an authentication error.
The token authentication of REST++ can be turned on by using the following commands:
secret (required): the user's secret
lifetime (optional): the lifetime for the token, in seconds. The default is one month, approximately 2.6 million seconds.
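A sketch of requesting a token with those parameters; the /requesttoken endpoint on the RESTPP port (9000 by default) is an assumption to be verified against your version:

```
curl -X GET "http://localhost:9000/requesttoken?secret=<your_secret>&lifetime=1000000"
```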
Once REST++ authentication is enabled, a token should always be included in the HTTP header. If you are using curl to format and submit your REST++ requests, then use the following syntax:
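A sketch of that syntax, passing the token as a Bearer credential in the HTTP header (the endpoint, graph, and query names are placeholders):

```
curl -H "Authorization: Bearer <token>" "http://localhost:9000/query/<graph_name>/<query_name>"
```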
When you use the RUN QUERY command in the GSQL language, this triggers a curl command within the GSQL system. GSQL will automatically use (and generate, if necessary) a token in the curl request for an authorized user.
The TigerGraph system includes seven predefined roles: superuser, admin, globaldesigner, designer, querywriter, queryreader, and observer. Each role has a fixed and logical set of privileges to perform operations. These roles form a hierarchy, with superuser at the top. Broadly speaking,
An observer (formerly "public") can log on, view the schema and other catalog details for its designated graph, and change their own password.
A queryreader has all observer privileges, and can also run existing loading jobs and queries for its designated graph.
A querywriter has all queryreader privileges, and can also create queries and run data-manipulation commands on its designated graph.
A designer (formerly "architect") has all querywriter privileges, and can also modify the schema and create loading jobs for its designated graph.
A globaldesigner has all designer privileges, and can also create global schema and global objects. Additionally, this role can delete graphs created by the same user, but cannot run the 'clear graph store' command.
An admin has all designer privileges, and can also create or drop users and grant or revoke roles for its designated graph. That is, an admin can control the existence and privileges of other users on its graph.
A superuser automatically has admin privileges on all graphs, and can also create global vertex and edge types, create multiple graphs, and clear the database.
The detailed permissions for each role are listed in the following table. Except for the superuser and globaldesigner, the scope of privilege is always limited to one's own graph. In some cases, the behavior of the operation depends on one's privilege level. More detailed descriptions of the User Management commands are given later in this document. For details about the Graph Definition, Loading, Querying, and Modification commands, see the GSQL Language Reference documents.
Commands not listed above are by default accessible with at least the observer role.
The TigerGraph installation process creates one user called tigergraph who has the superuser role. The superuser role has full privilege to perform any action, including creating or removing other users, and assigning roles to the other users. A superuser can create other superusers, who would also have full privilege.
The user tigergraph is permanent. It cannot be dropped by another admin user.
Most of the commands in this section can be run only by a superuser or an admin user. The exception is SHOW USER. Any user can display their own profile.
If a username contains characters other than ASCII alphanumeric characters, it is recommended that the name be enclosed in backquote characters, to ensure that the name is treated as a literal string. This applies to the CREATE/DROP USER and GRANT/REVOKE ROLE commands.
Required privilege: superuser, admin Create a new user. GSQL will prompt for the user name and password.
The maximum size of all user information, including users, secrets, and tokens, is set by the configuration GSQL.UserInfoLimit.UserCatalogFileMaxSizeByte. It is set to 2097152 by default and has a hard limit of 2097152. In reality, this allows for around 18000 users if every user has a token and a secret.
Required privilege: superuser, admin Delete the listed users.
The command takes effect with no warning and cannot be undone.
Required privilege: any Display user's name, role, secret, and token. Non-admin/superuser users see only their own information. Admin/superuser users see information for all users.
Required privilege: superuser, admin Grant a role (or revoke a role) for a user, which adds (or removes) privileges.
The example below grants the queryreader role to two users, revokes it from one of them (jk), and then grants the querywriter role to both users.
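A sketch of those commands; the GRANT/REVOKE ROLE syntax shown, the graph name (social), and the second username (john) are assumptions:

```
GRANT ROLE queryreader ON GRAPH social TO john, jk
REVOKE ROLE queryreader ON GRAPH social FROM jk
GRANT ROLE querywriter ON GRAPH social TO john, jk
```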
Even if a user is granted the superuser role, all previously granted roles for that user are still displayed.
When user authentication is enabled, the TigerGraph system will execute a requested operation only if the requester provides credentials for a user who has the privilege to perform the requested operation.
The TigerGraph system offers two options for credentials.
user name and password pair.
a token: a unique 32-character string that can be used for REST++ requests. A token expires 1 month from the date of creation by default
The following set of commands are used to create and manage passwords, authentication secrets, and authentication tokens.
Like any other GSQL commands, the user must supply credentials to run these commands. In order to create a secret, the user must supply their password.
Use the ALTER PASSWORD command above to change a user's password. Any user can change their own password, but only a superuser or admin can change other users' passwords.
If a username is not provided, the command changes the password of the current user. As an admin/superuser, to change the password of another user, specify the username of the user whose password you wish to change:
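For example (the username is hypothetical; GSQL prompts for the new password):

```
ALTER PASSWORD john
```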
Secrets are unique strings that serve as a user's credentials in certain circumstances. A user can have multiple secret strings. Each secret is associated with one user and their role for one graph. If the role is revoked, the secret also becomes invalid.
Use the CREATE SECRET
command to generate a secret for the current user and graph. It is optional to provide an alias for the secret.
Beginning with TigerGraph 3.1.4, the system will generate a random alias for the secret if the user does not provide an alias for that secret. Randomly generated aliases begin with AUTO_GENERATED_ALIAS_
and include a random 7-character string.
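A sketch of creating a secret with an alias via the gsql client (MyGraph and my_alias are placeholders):

```
gsql -g MyGraph 'CREATE SECRET my_alias'
```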
Known issue:
Prior to TigerGraph 3.1.4, secrets created without aliases and whose unmasked values are lost cannot be dropped. If you are running a version of TigerGraph prior to v3.1.4, make sure you always create a secret with an alias.
Use SHOW SECRET
to list all secrets of the current user. The secrets will be masked and only the first and last three characters of the secrets will be shown. The alias of the secret and the graph that the secret is associated with will also be listed:
Use the DROP SECRET
command to drop a secret. Since a user can have multiple secrets, the secret to drop must be specified in the command. You can specify a secret either by the secret string itself or by its alias.
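For example, dropping a secret by its alias (MyGraph and my_alias are placeholders; the secret string itself could be used instead of the alias):

```
gsql -g MyGraph 'DROP SECRET my_alias'
```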
To uninstall TigerGraph, open the command line of the Linux server and switch to the TigerGraph user, which is created during installation:
If your system is currently using an older string-based license key that does not use a license seed, please contact TigerGraph support for the procedure to upgrade to the new system-specific license type.
Collect the fingerprint of the whole TigerGraph system using the command gadmin license seed <host_signature_type>
, which can be executed on any machine in the system. The command packs all the collected data to generate the license seed and writes it into a file. When the command has completed successfully, it outputs the path of the file to the console.
Depending on the host machine, the user needs to choose the appropriate type of host signature for gadmin to collect. The options are: aws, azure, gcp, hardware, and node-id. If you are generating the seed on a cloud instance, choose the corresponding cloud provider for the host signature type. If you are generating the seed on your own machines, choose either hardware or node-id. Signatures generated with the hardware parameter use unique hardware information that persists through software changes, while signatures generated with node-id use a unique machine ID that may change during an OS reinstall. Most users installing their own instances should use the hardware option.
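For example, on a machine you manage yourself, the seed would typically be generated with:

```
gadmin license seed hardware
```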
Send the license seed file to TigerGraph, either through our license activation web portal (preferred) or by email. If using email, please include the following information:
Contract number. If you do not know your contract number, please contact your sales representative or TigerGraph support.
To find out which node hosts the application server, run gssh without arguments in the bash terminal of any active node in the cluster. The output shows which nodes are hosting a GUI server.
The server directives should specify the addresses of the nodes you want to load balance. Since TigerGraph requires session persistence, the load balancing methods are limited to ip_hash or hash, unless you have access to Nginx Plus, in which case any load balancing method may be used with session persistence configured:
Otherwise, only a passive health check is available. See Nginx documentation for more information:
If your applications are provisioned on AWS, another choice for load balancing is to use an Application Load Balancer (ALB).
To create an application load balancer, follow AWS's guide to creating an Application Load Balancer. The following configurations apply as you follow the guide:
After following the steps and creating your load balancer, register the nodes that host the application server in your target group.
If your instances are provisioned on Azure, you can set up an Application Gateway. Follow the steps for setting up an Application Gateway outlined here:
After the Application Gateway is created, we need to create a custom health probe to check the health/status of our application servers. You can follow the steps outlined here. When filling out the health probe information, the fields below should have the following values:
If your instances are provisioned on Google Cloud, you can set up an external HTTP(S) load balancer. You can follow Google's steps in their documentation for setup here:
When :
When :
Lastly, we need to set up session affinity for our load balancer. This is outlined in GCP documentation here:
In order to choose and specify your LDAP configuration settings, you must understand some basic LDAP concepts. Introductory references on LDAP concepts are widely available online.
A search filter is optional. When configured, the search is performed only for the LDAP entries that satisfy the filter. The filter must strictly follow LDAP filter format, i.e., the condition must be wrapped in parentheses, etc. For example, a simple filter such as (objectClass=person) restricts the search to person entries. Descriptions of the different types of filters are available in most LDAP references; the official specification for LDAP filter syntax is RFC 4515.
Congratulations! This means LDAP is working. However, TigerGraph cannot find a matching rule for the login user. Please create a proxy group for the user; see the documentation on creating a proxy group.
TigerGraph's role-based access control system naturally extends to a multiple graph system: A user is granted a role on a particular graph. The superuser role (new in TigerGraph 1.2) is defined for administration of the entire unified supergraph.
TigerGraph users exist only within the TigerGraph platform; they are different from operating system users. When the system is first installed, an initial user is automatically created. The default name for this initial user is tigergraph, with password tigergraph. This user has full administrative privilege, can create additional users, and can set their privileges (see the User Management commands in this document). For simplicity, we will refer to this initial superuser as the tigergraph user.
If user authentication is enabled (see the section on enabling user authentication), the TigerGraph system will execute a requested operation only if the requester provides credentials for a user who has the privilege to perform the requested operation.
A user must have a secret before they can create a token. Secrets are generated in GSQL (see CREATE SECRET below). A RESTPP endpoint is used to create a token; the endpoint has two parameters:
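A sketch of requesting a token, assuming the RESTPP /requesttoken endpoint with its secret and lifetime parameters and the default RESTPP port 9000 (check the REST API reference for your version):

```
# Request a token valid for 30 days (2,592,000 seconds), using a secret created in GSQL
curl -X GET "http://localhost:9000/requesttoken?secret=<your_secret>&lifetime=2592000"
```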
The maximum number of tokens that can be created is set by a configuration named GSQL.UserInfoLimit.TokenSizeLimit
. Its default value is 20000 and can be changed with gadmin config commands.
GSQL deletes expired tokens at regular intervals. The length of this interval in seconds is set by a system configuration named GSQL.TokenCleaner.IntervalTimeSec
and is set to 10800
by default, which means TigerGraph deletes all expired tokens every 10800 seconds (3 hours). To change this interval, use gadmin config set:
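For example, to clean up expired tokens every 6 hours (21600 is an illustrative value):

```
gadmin config set GSQL.TokenCleaner.IntervalTimeSec 21600
gadmin config apply -y
```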
You can also give expired tokens a grace period with the GSQL.TokenCleaner.GraceTimeSec
configuration. This configuration indicates the number of seconds after a token expires during which it will not be deleted automatically by GSQL and is set to 0
by default. To change the grace period, use gadmin config set:
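For example, to give expired tokens a one-hour grace period (3600 is an illustrative value):

```
gadmin config set GSQL.TokenCleaner.GraceTimeSec 3600
gadmin config apply -y
```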
A superuser can create and manage users globally, including creating admin users for local graphs. An admin can create and manage users within their local graph.
The maximum number of users on a database is set by a configuration named GSQL.UserInfoLimit.UserSizeLimit
. Its default value is 12000 and can be changed with gadmin config commands.
The ON GRAPH clause is required unless the role being granted/revoked is superuser.
A user can have more than one role. For example, jk can be a queryreader on the Hogwarts graph and a querywriter on the London graph.
| Command Type | Operations | superuser | admin | globaldesigner | designer | querywriter | queryreader | observer |
|---|---|---|---|---|---|---|---|---|
| Status | Ls | x | x | x | x | x | x | x |
| User Management | Create/Drop User | x | x | - | - | - | - | - |
| User Management | Show User | x | x | x | x | x | x | x |
| User Management | Alter (Change) Password | x | x | x | x | x | x | x |
| User Management | Grant/Revoke Role | x | x | - | - | - | - | - |
| User Management | Create/Drop/Show Secret | x | x | x | x | x | x | - |
| Schema Design | Create/Drop Vertex/Edge/Graph | x | - | x | - | - | - | - |
| Schema Design | Clear Graph Store | x | - | - | - | - | - | - |
| Schema Design | Drop All | x | - | - | - | - | - | - |
| Schema Design | Use Graph | x | x | x | x | x | x | x |
| Schema Design | Use Global | x | x | x | x | x | x | x |
| Schema Design | Create/Run Global Schema_Change Job | x | - | x | - | - | - | - |
| Schema Design | Create/Run Schema_Change Job | x | x | x | x | - | - | - |
| Loading and Querying | Create/Drop Loading Job | x | x | x | x | - | - | - |
| Loading and Querying | Create/Interpret/Install/Drop Query | x | x | x | x | x | - | - |
| Loading and Querying | Typedef | x | x | x | x | x | - | - |
| Loading and Querying | Offline to Online Job Translation | x | x | x | x | x | - | - |
| Loading and Querying | Run Query | x | x | x | x | x | x | - |
| Loading and Querying | Run Loading Job | x | x | x | x | x | x | - |
| Data Modification | Upsert/Delete/Select Commands | x | x | x | x | x | - | - |
TigerGraph supports secure data-in-flight communication, using SSL/TLS encryption protocol. This applies to any outward-facing channel, including GSQL clients, RESTPP endpoints, and the GraphStudio web interface. When SSL/TLS is enabled, HTTPS takes the place of HTTP for RESTPP and GraphStudio connections.
You should have basic knowledge about how SSL works:
What the SSL certificate and key are used for
That an SSL certificate is bound to a domain
How an SSL certificate chain works
A good primer on SSL is available at https://httpd.apache.org/docs/2.4/ssl/ssl_intro.html
TigerGraph uses the Nginx web server, so SSL configuration makes use of some built-in support in Nginx.
http://nginx.org/en/docs/http/configuring_https_servers.html
The two main options for obtaining an SSL Certificate are to generate your own self-signed certificate or to purchase a certificate from a trusted Certificate Authority. Regardless of which method you choose, your certificate should be chained to a trusted root certificate embedded in your browser. The options and details for producing a trusted SSL certificate are beyond the scope of this document. The focus of this document is how to configure your TigerGraph system to use the certificate to enable SSL.
First, obtain an SSL certificate from a trusted agent of your choice. Certificate vendors will provide clear instructions for ordering a certificate and then for installing it on your system.
Then you can configure the certificate with gadmin config entry ssl
There are multiple ways to create a self-signed certificate. One example is shown below.
For simplicity, the method below will use the root certificate directly as the HTTPS server certificate. This method is satisfactory for testing but should not be used for a production system.
In the example below, the Common Name value should be your server hostname, since HTTPS certificates are bound to domain names.
For security reasons, the certificates can only be used with permission 600 or less.
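One possible way to generate a self-signed certificate with OpenSSL (file names and the hostname are placeholders):

```
# Generate a self-signed certificate and private key, valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ./server.key -out ./server.crt \
  -subj "/CN=your.server.hostname"

# Restrict permissions so only the owner can read the files
chmod 600 ./server.key ./server.crt
```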
With the self-signed certificate successfully generated, you can configure it with gadmin, so that all HTTP traffic will be protected with SSL.
TigerGraph's SSL only accepts PEM-encoded certificates. If you have a certificate encoded in other formats (e.g. DER), you need to convert it to a PEM-encoded certificate first.
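For example, a DER-encoded certificate can be converted to PEM with OpenSSL (file names are placeholders):

```
openssl x509 -inform der -in server.der -out server.pem
```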
After saving the settings, apply the configuration settings. Then restart the following services: gsql, nginx, ts3, and gui.
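For example:

```
gadmin config apply -y
gadmin restart gsql nginx ts3 gui -y
```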
Now you may test the connection.
A direct curl request to the server will fail due to certificate verification failure:
In v1.2, the default TCP/IP port for Nginx has changed from 44240 to 14240, to avoid possible port conflicts with Zookeeper.
You may use the -k option to turn off the verification, but it is unsafe and not recommended.
To successfully make requests with curl, you will need to specify the certificate by using the --cacert
parameter:
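A sketch of such a request (hostname, port, certificate path, and the /api/ping endpoint are assumptions; substitute your own values):

```
curl --cacert ./server.crt "https://your.server.hostname:14240/api/ping"
```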
Export/Import is a complement to Backup/Restore, not a substitute.
The GSQL EXPORT and IMPORT commands perform a logical backup and restore. A database export contains the database's data, and optionally some types of metadata, which can be subsequently imported in order to recreate the same database, in the original or in a different TigerGraph platform instance.
To import an exported database, ensure that the export files are from a database that was running the exact same version of TigerGraph as the database that you are importing into.
Known Issues (Updated Feb 16th):
User-defined loading jobs containing DELETE
statements are not exported correctly.
If a graph contains vertex or edge types with a composite key, the graph data is exported in a nonstandard format which cannot be reimported.
Available to users with the superuser
role only.
The EXPORT GRAPH
command reads the data and metadata for one or more graphs and writes the information to a zip file in the designated folder. If no options are specified, then a full backup is performed, including schema, data, template information, and user profiles.
NOTE: The export directory should be empty before running EXPORT GRAPH because all contents are zipped and compressed.
The EXPORT GRAPH
command exports all graphs in the database.
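A sketch of a full export (the target directory is a placeholder and should be empty):

```
gsql 'EXPORT GRAPH ALL TO "/home/tigergraph/export_dir/"'
```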
The export contains four categories of files:
Data files in csv format, one file for each type of vertex and each type of edge.
GSQL DDL command files created by the export command. The import command uses these files to recreate the graph schema(s) and reload the data.
Copies of the database's queries, loading jobs, and user-defined functions.
GSQL command files used to recreate the users and their privileges.
The following files are created in the specified directory when exporting and are then zipped into a single file called ExportedGraphs.zip.
If the file is password-protected, it can only be unzipped using GSQL IMPORT. The security feature prevents users from directly unzipping it.
A DBImportExport_<graphName>.gsql
(DBImportExporttag_<graphName>
for tag-based graphs) for each graph called <graphName> in a multigraph database that contains a series of GSQL DDL statements that do the following:
Create the exported graph, along with its local vertex, edge, and tuple types,
Create the loading jobs from the exported graphs
Create data source file objects
Create queries
A graph_<graphName>/
folder for each graph in a multigraph database containing data for local vertex/edge types in <graphName>. For each vertex or edge type called <type>, there is one of the following two data files:
vertex_<type>.csv
edge_<type>.csv
global.gsql
- DDL job to create all global vertex and edge types, and data sources.
tuple.gsql
- DDL job to create all User Defined Tuples.
Exported data and jobs used to restore the data:
GlobalTypes/
- folder containing data for global vertex/edge types
vertex_name.csv
edge_name.csv
run_loading_jobs.gsql
- DDL created by the export command which will be used during import:
Temporary global schema change job to add user-defined indexes. This schema job is dropped after it has run.
Loading jobs to load data for global and local vertex/edges.
Database's saved queries, loading jobs, and schema change jobs
SchemaChangeJob/
- folder containing DDL for schema change jobs. See section "Schema Change Jobs" for more information
Global_Schema_Change_Jobs.gsql contains all global schema change jobs
graphName_Schema_Change_Jobs.gsql contains schema change jobs for each graph "graphName"
Tokenbank.cpp
- copy of <tigergraph.root.dir>/app/<VERSION_NUM>/dev/gdk/gsql/src/TokenBank/TokenBank.cpp
ExprFunctions.hpp
- copy of <tigergraph.root.dir>/app/<VERSION_NUM>/dev/gdk/gsql/src/QueryUdf/ExprFunctions.hpp
ExprUtil.hpp
- copy of <tigergraph.root.dir>/app/<VERSION_NUM>/dev/gdk/gsql/src/QueryUdf/ExprUtil.hpp
Users:
users.gsql
- DDL to create all exported users and import Secrets and Tokens, and grant permissions.
If not enough disk space is available for the data to be exported, the system returns an error message indicating not all data has been exported. Some data may have already been written to disk. If an insufficient disk error occurs, the files will not be zipped, due to the possibility of corrupted data which would then corrupt the zip file. The user should clear enough disk space, including deleting the partially exported data, before reattempting the export.
It is possible for all the files to be written to disk and then to run out of disk space during the zip operation. If that is the case, the system will report this error. The unzipped files will be present in the specified export directory.
If the timeout limit is reached during export, the system returns an error message indicating not all data has been exported. Some data may have already been written to disk. If a timeout error occurs, the files will not be zipped. The user should increase the timeout limit and then rerun the export.
The timeout limit is controlled by the session parameter export_timeout
. The default timeout is ~138 hours. To change the timeout limit, use the following command from the GSQL shell:
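A sketch, assuming export_timeout takes a value in milliseconds; because it is a session parameter, run it in the same GSQL session as the export, for example via a command file:

```
cat > /tmp/export_with_timeout.gsql <<'EOF'
SET export_timeout = 720000000
EXPORT GRAPH ALL TO "/home/tigergraph/export_dir/"
EOF
gsql /tmp/export_with_timeout.gsql
```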
Available to users with the superuser
role only.
The IMPORT GRAPH
command unzips the file ExportedGraph.zip
located in the designated folder, and then runs the GSQL command files within.
WARNING: IMPORT GRAPH
looks for specific filenames. If either the zip file or any of its contents are renamed by the user, IMPORT GRAPH may fail.
WARNING: IMPORT GRAPH
erases the current database (equivalent to running DROP ALL
). The current version does not support incremental or supplemental changes to an existing database (except for the --keep-users option)
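A sketch of a full import (the directory containing ExportedGraphs.zip is a placeholder):

```
gsql 'IMPORT GRAPH ALL FROM "/home/tigergraph/export_dir/"'
```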
There are two sets of loading jobs:
Those that were in the catalog of the database which was exported. These are embedded in the file DBImportExport_graphName.gsql
Those that are created by EXPORT GRAPH and are used to assist with the import process. These are embedded in the file run_loading_jobs.gsql.
The catalog loading jobs are not needed to restore the data. They are included for archival purposes.
Some special rules apply to importing loading jobs. Some catalog loading jobs will not be imported.
If a catalog loading job contains DEFINE FILENAME F = "/path/to/file/"
, the path will be removed and the imported loading job will only contain DEFINE FILENAME F
.
This is to allow a loading job to still be imported even though the file may no longer exist or the path may be different due to moving to another TigerGraph instance.
If a specific file path is used directly in the LOAD statement, and the file cannot be found, the loading job cannot be created and will be skipped.
For example, LOAD "/path/to/file" to vertex v1
cannot be created if /path/to/file
does not exist.
Any file path using $sys.data_root
will be skipped.
This is because the value of $sys.data_root
is not retained from export. During import, $sys.data_root
is set to the root folder of the import location.
There are two sets of schema change jobs:
Those that were in the catalog of the database which was exported. These are stored in the folder /SchemaChangeJobs.
Those that were created by EXPORT GRAPH and are used to assist with the import process. These are in the run_loading_jobs.gsql command file. The jobs are dropped after the import command is finished with them.
The database's schema change jobs are not executed during the import process. This is because if a schema change job had been run before the export, then the exported schema already reflects the result of the schema change job. The directory /SchemaChangeJobs contains these files:
Global_Schema_Change_Jobs.gsql contains all global schema change jobs
<graphName>_Schema_Change_Jobs.gsql contains schema change jobs for each graph <graphName>.
In v3.0, importing and exporting clusters is not fully automated. The database can be exported and imported by following some additional steps.
Rather than creating a single export zip file, export will create a file for each machine. Before exporting, specific folders must be created on each server using the following commands:
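For example, the folders could be created on every node with the grun utility described later on this page (the directory path is a placeholder):

```
grun all 'mkdir -p /home/tigergraph/export_dir'
```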
Then run the export command on one server. The EXPORT command does not bundle all the files to one server, and it does not compress each server's files to one zip. Some files, including the data files, will be exported to each server, to the folders created above. Some files will be only on the local server where EXPORT GRAPH was run.
You may only import to a cluster that has the same number and configuration of servers as the data from which the export originated. Transfer the files from one export server to a corresponding import server. That is, copy the files from
export_server_n:/path/to/export_directory
to
import_server_n:/path/to/import/directory
2. Manually modify the loading jobs
On the main server, edit the run_loading_jobs.gsql files as follows.
Find the line(s) of the form:
LOAD "sys.data_root/.../<vertex_or_edge_type>.csv"
Close to it should be a similar line that is commented out, which has the "all:" data source directive:
#LOAD "all:sys.data_root/.../<vertex_or_edge_type>.csv"
See the example below:
Comment out the LOAD line and uncomment the LOAD all: line. Be sure that you do this for all data source files.
3. Run the IMPORT GRAPH command from the main server (e.g., the one that corresponds to the server where EXPORT GRAPH was run).
This page documents a list of advanced Linux commands that simplify platform operations that are performed often during debugging, especially on high availability (HA) clusters. Only the TigerGraph platform owner (the Linux user created during installation) has access to the commands on this page.
Users are advised to use these commands only at the guidance and recommendation of TigerGraph support.
This command allows you to connect to another node in your cluster via SSH.
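For example, assuming the node aliases assigned at installation (m1, m2, ...):

```
# Open an SSH session to node m2
gssh m2
```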
With huge data volumes, data loading can be time-consuming. If you find yourself often loading huge volumes of data into an empty graph, and your data volume is so large that your loading jobs are taking hours to complete, you might consider using offline loading to speed up data loading.
In order to use offline loading, all the filename variables in the loading job must take an initial path value. After creating the loading job and ensuring that all the data files are referenced correctly in the loading job, use the options -g
and -j
to specify the graph and loading job to run. During offline loading, your database is focused on loading data and will not be able to handle requests and queries.
Offline loading deletes all existing graph data before it starts. Back up your data before using offline loading.
-g <graph_name>
: Name of the graph whose loading job to run
-j <loading_job_name>
: Name of the loading job to run
The following command runs the loading job load_ldbc_snb
on the graph ldbc_snb
:
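```
gautoloading.sh -g ldbc_snb -j load_ldbc_snb
```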
You can also provide the graph name and the loading job name with a config file written in Bash:
Once you have the config file, you can run gautoloading.sh
with the config file instead of the -g
and -j
options:
This command allows you to copy files from the current node to target folders on multiple nodes at the same time. The file or directory on the current node specified by the source path will be copied into the target folder on every node. If the target folder does not exist at the path given, the target folder will be created automatically. You can also specify multiple source files or directories, in which case, the source paths need to be absolute paths, put in quotes, and separated by space.
You can specify the nodes where you want the copy operation to occur in the following ways:
gscp all <source_path> <target_dir>
will execute the command on all nodes
gscp <component_name> <source_path> <target_dir>
will execute the command on nodes where the component you specified is running
gscp <node_list> <source_path> <target_dir>
will execute the command on the nodes you specify in the node list
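For example (paths are placeholders):

```
# Copy a local file into /tmp/staging on every node in the cluster
gscp all /home/tigergraph/mydata.csv /tmp/staging
```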
This command downloads a file or directory from every specified node to the target directory on the current node.
This command allows you to run commands on a specified list of nodes in your cluster one by one, and the output from every node will be visible to the terminal. grun
will wait for the command to finish running on one node before executing the command on the next node.
You can specify which nodes to run commands on in the following ways:
grun all '<command>'
will execute the command on all nodes
grun <component_name> '<command>'
will execute the command on nodes where the component you specified is running
grun <node_list> '<command>'
will execute the commands on the nodes you specify in the node list
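For example:

```
# Check free disk space on every node, one node at a time
grun all 'df -lh'
```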
This command allows you to run commands on a specified list of nodes in your cluster in parallel, and the output will be visible to the terminal where the grun_p
command was run. You can specify the nodes to run commands on in the following ways:
grun_p all '<command>'
will execute the command on all nodes
grun_p <component_name> '<command>'
will execute the command on nodes where the component you specified is running
grun_p <node_list> '<command>'
will execute the commands on the nodes you specify in the node list. The list of nodes should be separated by a comma, e.g.: m1,m2
This command returns the private IP address of your current node.
This command returns your current node number as well as all servers that are running on the current node.
In this example, m1
is the current node number, and ADMIN#1
, admin#1
etc. are all servers that are running on m1
.
The gssh
command, when used without arguments, outputs information about server deployments in your cluster. The output contains the names and IP addresses of every node. For each node, the output shows the full list of servers that are running on the node. For each server, the output shows the full list of the nodes that the server is running on.
This command returns the size of your data, the number of existing vertices and edges, as well as the number of deleted and skipped vertices on every node in your cluster. If you are running TigerGraph on a single node, it returns the same information for that one node.
GBAR - Graph Backup and Restore
Graph Backup And Restore (GBAR) is an integrated tool for backing up and restoring the data and data dictionary (schema, loading jobs, and queries) of a single TigerGraph node.
The backup feature packs TigerGraph data and configuration information into a directory on the local disk or a remote AWS S3 bucket. Multiple backup files can be archived. Later, you can use the restore feature to roll back the system to any backup point. This tool can also be integrated easily with Linux cron to perform periodic backup jobs.
The current version of GBAR is intended for restoring the same machine that was backed up. For help with cloning a database (i.e., backing up machine A and restoring the database to machine B), please contact support@tigergraph.com.
The -y
option forces GBAR to skip interactive prompt questions by selecting the default answer. There is currently one interactive question:
At the start of restore, GBAR will always ask if it is okay to stop and reset the TigerGraph services: (y/N)? The default answer is yes.
Before using the backup or the restore feature, GBAR must be configured.
Run gadmin config entry system.backup
. At each prompt, enter the appropriate values for each config parameter.
After entering the configuration values, run the following command to apply the new configurations
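```
gadmin config apply -y
```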
Note:
For S3 configuration, if the AWS access key and secret are not provided, GBAR will use the attached IAM role.
You can specify the number of parallel processes for backup and restore.
You must provide username and password using GSQL_USERNAME and GSQL_PASSWORD environment variables.
To perform a backup, run the following command as the TigerGraph Linux user:
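A sketch (daily is a placeholder backup tag; substitute your own credentials):

```
export GSQL_USERNAME=tigergraph
export GSQL_PASSWORD='<your_password>'
gbar backup -t daily
```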
Depending on your configuration settings, your backup archive will be output to your local backup path and/or your AWS S3 bucket.
A backup archive is stored as several files in a folder, rather than as a single file. The backup tag acts like a filename prefix for the archive filename. The full name of the backup archive will be <backup_tag>-<timestamp>
, which is a subfolder of the backup repository.
If System.Backup.Local.Enable
is set to true
, the folder is a local folder on every node in a cluster, to avoid massive data moving across nodes in a cluster.
If System.Backup.S3.Enable
is set to true
, every node will upload data located on the node to the s3 repository. Therefore, every node in a cluster needs access to Amazon S3. If IAM policy is used for authentication, every node in the cluster needs to be given access under the IAM policy.
GBAR Backup performs a live backup, meaning that normal operations may continue while the backup is in progress. When GBAR backup starts, GBAR will check if there are running loading jobs. If there are, it will pause loading for 1 minute to generate a snapshot and then continue the backup process. You can specify the loading pausing interval by the environment variable PAUSE_LOADING
.
And then, it sends a request to the admin server, which then requests the GPE and GSE to create snapshots of their data. Per the request, the GPE and GSE store their data under GBAR’s own working directory. GBAR also directly contacts the Dictionary and obtains a dump of its system configuration information. In addition, GBAR gathers the TigerGraph system version and customized information including user-defined functions, token functions, schema layouts and user-uploaded icons. Then, GBAR compresses each of these data and configuration information files in tgz format and stores them in the <backup_tag>-<timestamp>
subfolder on each node. As the last step, GBAR copies that file to local storage or AWS S3, according to the Config settings, and removes all temporary files generated during backup.
The current version of GBAR Backup takes snapshots quickly to make it very likely that all the components (GPE, GSE, and Dictionary) are in a consistent state, but it does not fully guarantee consistency.
Backup does not save input message queues for REST++ or Kafka.
This command lists all generated backup files in the storage place configured by the user. For each file, it shows the file’s full tag, its size in human-readable format, and its creation time.
Before restoring a backup, you should ensure that the backup you are restoring from is in the same exact version as your current version of TigerGraph.
To restore a backup, run the following command:
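A sketch, using the archive name format described below (the tag and timestamp are placeholders):

```
export GSQL_USERNAME=tigergraph
export GSQL_PASSWORD='<your_password>'
gbar restore daily-20180607232159
```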
If GBAR can verify that the backup archive exists and that the backup's system version is compatible with the current system version, GBAR will shut down the TigerGraph servers temporarily as it restores the backup. After completing the restore, GBAR will restart the TigerGraph servers.
Restore is an offline operation, requiring the data services to be temporarily shut down. The user must specify the full archive name ( <backup_tag>-<timestamp>
) to be restored. When GBAR restore begins, it first searches for a backup archive exactly matching the archive name supplied in the command line. Then it decompresses the backup files to a working directory. Next, GBAR will compare the TigerGraph system version in the backup archive with the current system's version, to make sure that the backup archive is compatible with that current system. It will then shut down the TigerGraph servers (GSE, RESTPP, etc.) temporarily. Then, GBAR makes a copy of the current graph data, as a precaution. Next, GBAR copies the backup graph data into the GPE and GSE and notifies the Dictionary to load the configuration data. Also, GBAR will notify the GST to load backup user data and copy the backup user-defined token/functions to the right location. When these actions are all done, GBAR will restart the TigerGraph servers.
Note: GBAR restore does not estimate the uncompressed data size and check whether there is sufficient disk space.
The primary purpose of GBAR is to save snapshots of the data configuration of a TigerGraph system, so that in the future the same system can be rolled back (restored) to one of the saved states. A key assumption is that Backup and Restore are performed on the same machine, and that the file structure of the TigerGraph software has not changed.
Restore needs enough free space to accommodate both the old gstore and the gstore to be restored.
To remove a backup, run the gbar remove
command:
The command removes a backup from the backup storage path. To retrieve the tag of a backup, you can use the gbar list
command.
Run gbar cleanup
to delete the temporary files created during backup or restore operations:
The following example describes a real example, to show the actual commands, the expected output, and the amount of time and disk space used, for a given set of graph data. For this example, an Amazon EC2 instance was used, with the following specifications:
Single instance with 32 vCPU + 244GB memory + 2TB HDD.
Naturally, backup and restore time will vary depending on the hardware used.
To run a daily backup, we tell GBAR to backup with the tag name daily.
The total backup process took about 31 minutes, and the generated archive is about 49 GB. Dumping the GPE + GSE data to disk took 12 minutes. Compressing the files took another 20 minutes.
To restore from a backup archive, a full archive name needs to be provided, such as daily-20180607232159. By default, restore will ask the user to approve to continue. If you want to pre-approve these actions, use the "-y" option. GBAR will make the default choice for you.
For our test, GBAR restore took about 23 minutes. Most of the time (20 minutes) was spent decompressing the backup archive.
Note that after the restore is done, GBAR informs you where the pre-restore graph data (gstore) has been saved. After you have verified that the restore was successful, you may want to delete the old gstore files to free up disk space.
The Single Sign-On (SSO) feature in TigerGraph enables you to use your organization's identity provider (IDP) to authenticate users to access TigerGraph GraphStudio and Admin Portal UI.
Currently we have verified the following identity providers, which support the SAML 2.0 protocol:
To support additional IDPs, please contact sales@tigergraph.com and submit a feature request.
In order to use Single Sign-On, you need to perform four steps:
Configure your identity provider to create a TigerGraph application.
Provide information from your identity provider to enable TigerGraph Single Sign-On .
Create user groups with proxy rules to authorize Single Sign-On users.
Change the password of the tigergraph user to be other than the default, if you haven't done so already.
We assume you already have TigerGraph up and running , and you can access GraphStudio UI through a web browser using the URL:
http://tigergraph-machine-hostname:14240
If you enabled SSL connection, change http to https. If you changed the nginx port of the TigerGraph system, replace 14240 with the port you have set.
Here we provide detailed instructions for identity providers that we have verified. Please consult your IT or security department for how to configure the identity provider for your organization if it is not listed here.
After you finish configuring your identity provider, you will get an Identity Provider Single Sign-On URL , Identity Provider Entity Id , and an X.509 certificate file idp.cert . You need these 3 things to configure TigerGraph next.
After logging into Okta as the admin user, click Admin button at the top-right corner.
Click Add Applications in the right menu.
Click Create New App button in the left toolbar.
In the pop up window, choose SAML 2.0 and click Create .
Input TigerGraph (or whatever application name you want to use) in App Name , and click Next . Upload a logo if you like.
Enter the Assertion Consumer Service URL / Single sign on URL , and SP Entity ID .
Both are URLs in our case. You need to know the hostname of the TigerGraph machine. If you can visit GraphStudio UI through a browser, the URL contains the hostname. It can be either an IP or a domain name.
The Assertion Consumer Service URL , or Single sign on URL, is
http://tigergraph-machine-hostname:14240/api/auth/saml/acs
The SP entity id URL is:
http://tigergraph-machine-hostname:14240/gsqlserver/gsql/saml/meta
Scroll to the bottom for Group Attribute Statements. Usually you want to grant roles to users based on their user group. You can give a name to your attribute statement; here we use group . For filter, we want to return all group attribute values of all users, so we use Regex .* as the filter. Click Next after you set up everything.
In the final step, choose whether you want to integrate your app with Okta or not. Then click Finish .
Now your Okta identity provider settings are finished. Click View Setup Instructions button to gather information you will need to setup TigerGraph Single Sign-On.
Here you want to save Identity Provider Single Sign-On URL and Identity Provider Issuer (usually known as Identity Provider Entity Id ). Download the certificate file as okta.cert, rename it as idp.cert , and put it somewhere on the TigerGraph machine. Let's assume you put it under your home folder: /home/tigergraph/idp.cert. If you installed TigerGraph in a cluster, you should put it on the machine where the GSQL server is installed (usually it's the machine whose alias is m1).
Finally, return to previous page, go to the Assignments tab, click the Assign button, and assign people or groups in your organization to access this application.
After logging into Auth0, click Clients in the left navigation bar, and then click CREATE CLIENT button.
In the pop-up window, enter TigerGraph (or whatever application name you want to use) in the Name input box. Choose Single Page Web Application , and then click the CREATE button.
Click Clients again. In the Shown Clients list, click the settings icon of your newly created TigerGraph client.
Scroll down to the bottom of the settings section, and click Show Advanced Settings .
Click the Certificates tab and then click DOWNLOAD CERTIFICATE. In the chooser list, choose CER. Rename the downloaded file as idp.cert , and put it somewhere on the TigerGraph machine. Let's assume you put it under your home folder: /home/tigergraph/idp.cert. If you installed TigerGraph in a cluster, you should put it on the machine where the GSQL server is installed ( usually it's the machine whose alias is m1 ).
Click the Endpoints tab, and copy the text in the SAML Protocol URL text box. This is the Identity Provider Single Sign-On URL that will be used to configure TigerGraph in an upcoming step.
Scroll up to the top of the page, click the Addons tab, and switch on the toggle at the right side of the SAML2 card.
In the pop-up window, enter the Assertion Consumer Service URL in the Application Callback URL input box:
http://tigergraph-machine-hostname:14240/api/auth/saml/acs
Scroll down to the end of the settings JSON code, click the DEBUG button, and log in as any existing user in your organization in the pop-up login page.
If the login succeeds, the SAML response will be shown in decoded XML format. Scroll down to the attributes section. Here you will see some attribute names, which you will use to set proxy rules when creating groups in an upcoming configuration step.
Return to the previous pop-up window and click the Usage tab. Copy the Issuer value. This is the Identity Provider Entity Id that will be used to configure TigerGraph in an upcoming step.
Click the Settings tab, scroll to the bottom of the pop-up window, and click the SAVE button. Close the pop-up window.
According to the SAML standard trust model, a self-signed certificate is considered fine. This is different from configuring an SSL connection, where a CA-authorized certificate is considered mandatory if the system goes to production.
There are multiple ways to create a self-signed certificate. One example is shown below.
First, use the following command to generate a private key in PKCS#1 format and a X.509 certificate file. In the example below, the Common Name value should be your server hostname (IP or domain name).
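One possible way to do this with OpenSSL (file names and hostname are placeholders):

```
# Generate a 2048-bit RSA private key (PKCS#1) and a self-signed X.509 certificate
openssl genrsa -out saml.key 2048
openssl req -new -x509 -key saml.key -out saml.crt -days 365 -subj "/CN=your.server.hostname"
```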
Second, convert your private key from PKCS#1 format to PKCS#8 format:
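For example (file names are placeholders):

```
# Convert the PKCS#1 key to an unencrypted PKCS#8 key
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in saml.key -out saml-pkcs8.key
```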
From a TigerGraph machine, run the following command: gadmin config entry Security.SSO.SAML
Answering the questions is straightforward; an example is shown below.
Since v2.3, the requirements for the Security.SSO.SAML.SP.Hostname parameter have changed. The URL must be a full URL, starting with the protocol (such as http) and ending with the port number.
The reason we set Security.SSO.SAML.ResponseSigned to false is that some identity providers (e.g., Auth0) don't support signing the assertion and the response at the same time. If your identity provider supports signing both, we strongly suggest you leave it as true.
After making the configuration settings, apply the config changes, and restart gsql.
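For example:

```
gadmin config apply -y
gadmin restart gsql -y
```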
In order to authorize Single Sign-On users, you need to create user groups in GSQL with proxy rules and grant roles on graphs to those user groups.
In TigerGraph Single Sign-On, we support two types of proxy rules: nameid equations and attribute equations. Attribute equations are more commonly used, because user group information is usually transferred as attributes in your identity provider's SAML assertions. In the Okta identity provider configuration example, it is transferred by the attribute statement named group. By granting roles to a user group, all users matching the proxy rule are granted all the privileges of that role. If you want to grant one specific Single Sign-On user a privilege, you can use a nameid equation to do so.
For example, if you want to create a user group SuperUserGroup that contains the user with nameid admin@your.company.com only, and grant superuser role to that user, you can do so with the following command:
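A sketch of what these GSQL commands might look like (verify the exact proxy-rule and GRANT syntax in the GSQL user-management reference for your version):

```
gsql 'CREATE GROUP SuperUserGroup PROXY "nameid=admin@your.company.com"'
gsql 'GRANT ROLE superuser TO SuperUserGroup'
```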
Suppose you want to create a user group HrDepartment which corresponds to the identity provider Single Sign-On users having the group attribute value "hr-department", and want to grant the queryreader role to that group on the graph HrGraph:
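A sketch under the same assumptions (group name, attribute value, and graph name come from this example):

```
gsql 'CREATE GROUP HrDepartment PROXY "group=hr-department"'
gsql 'GRANT ROLE queryreader ON GRAPH HrGraph TO HrDepartment'
```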
Don't forget to enable User Authorization in TigerGraph by changing the password of the default superuser tigergraph to other than its default value. If you do not change the password, then every time you visit the GraphStudio UI, you will automatically log in as the superuser tigergraph.
Now you have finished all configurations for Single Sign-On. Let's test it.
Visit the GraphStudio UI in your browser. You should see a Login with SSO button appear on top of the login panel:
If after redirecting back to GraphStudio, you return to the login page with the error message shown below, that means the Single Sign-On user doesn't have access to any graph. Please double check your user group proxy rules, and roles you have granted to the groups.
If your Single Sign-On fails with the error message shown below, that means either some configuration is inconsistent between TigerGraph and your identity provider, or something unexpected happened.
You can check your GSQL log to investigate. First, find your GSQL log file with the following:
Then, grep the SAML authentication-related logs:
Focus on the latest errors. Usually the text is self-descriptive. Follow the error message and try to fix TigerGraph or your identity provider's configuration. If you encounter any errors that are not clear, please contact support@tigergraph.com .
The TigerGraph graph data store uses a proprietary encoding scheme which both compresses the data and obscures the data unless the user knows the encoding/decoding scheme. In addition, the TigerGraph system supports integration with industry-standard methods for encrypting data when stored in disk ("data at rest").
Data at rest encryption can be applied at many different levels. A user can choose to use one or more levels.
File system encryption employs advanced encryption algorithms. Some tools allow the user to select from a menu of encryption algorithms. It can be done either in kernel mode or user mode. To run in kernel mode, superuser permission is required.
Since Linux 2.6, the device-mapper infrastructure has provided a generic way to create virtual layers of block devices, with transparent block encryption using the kernel crypto API.
In Ubuntu, full-disk encryption is an option during the OS installation process. For other Linux distributions, the disk can be encrypted with dm-crypt.
A commonly used utility is eCryptfs , which is licensed under GPL, and it is built into some kernels, such as Ubuntu.
If root privilege is not available, a workaround is to use FUSE (Filesystem in User Space) to create a user-level filesystem running on top of the host operating system. While the performance may not be as good as running in kernel mode, there are more options available for customization and tuning.
In this example, we use dm-crypt to provide kernel-mode file system encryption. The dm-crypt utility is widely available and offers a choice of encryption algorithms. It also can be set to encrypt various units of storage – full disk, partitions, logical volumes, or files.
The basic idea of this solution is to create a file, map an encrypted file system to it, and mount it as a storage directory for TigerGraph with R/W permission only to authorized users.
Before you start, you will need a Linux machine on which
you have root permission,
the TigerGraph system has not yet been installed,
and you have sufficient disk space for the TigerGraph data you wish to encrypt. This may be on your local disk or on a separate disk you have mounted.
Install cryptsetup (cryptsetup is included with Ubuntu, but other OS users may need to install it with yum).
Install the TigerGraph system.
Grant sudo privilege to the TigerGraph OS user.
Stop all TigerGraph services with the following commands:
gadmin stop all -y
gadmin stop admin -y
Acting as the tigergraph OS user, run the following export commands to set variables. Replace the placeholders enclosed in angle brackets <...> with the values of your choice:
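A sketch of the kind of variables the later steps refer to; the names mirror those used below, but the values (and the backing-file variable name) are illustrative assumptions:

```
export db_user=tigergraph                                      # OS user that owns TigerGraph
export tigergraph_data_root=/home/tigergraph/tigergraph/data   # TigerGraph data root
export secretfs_file=/data/secretfs.img                        # backing file for the encrypted volume (name assumed)
export encrypted_file_path=/dev/mapper/secretfs                # device-mapper path created by cryptsetup
export encryption_password='<your-strong-password>'            # encryption passphrase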
Create a file for TigerGraph data storage.
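One way to create the backing file (the 100 GB size is illustrative; $secretfs_file is the assumed variable from the sketch above):

```
# Create a large zero-filled file to back the encrypted volume
dd if=/dev/zero of="$secretfs_file" bs=1G count=100
```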
Change the permission of the file so that only the owner of the file (that is, only the tigergraph user who created the file in the previous step) will be able to access it:
Associate a loopback device with the file:
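For example:

```
# Attach the backing file to the first free loop device and record its name
loop_dev=$(sudo losetup -f --show "$secretfs_file")
```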
Encrypt storage in the device. cryptsetup will use the Linux device mapper to create, in this case, $encrypted_file_path . Initialize the volume and set a password interactively with the password you set to $encryption_password :
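A sketch of the interactive variant (cryptsetup prompts for and verifies the passphrase):

```
# Initialize LUKS encryption on the loop device
sudo cryptsetup -y luksFormat "$loop_dev"
```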
If you are trying to automate the process with a script running with root TTY session , you may use the following command:
Open the partition, and create a mapping to $encrypted_file_path :
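A sketch of the interactive variant, mapping the volume to /dev/mapper/secretfs (i.e., $encrypted_file_path):

```
sudo cryptsetup luksOpen "$loop_dev" secretfs
```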
If you are trying to automate the process with a script running with root TTY session , you may use the following command:
Clear the password from bash variables and bash history.
The following commands may clear your previous bash histories as well. Instead, you may edit ~/.bash_history to selectively delete the related entries.
Create a file system and verify its status:
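For example, using ext4 (other filesystems could be used instead):

```
sudo mkfs.ext4 "$encrypted_file_path"
sudo cryptsetup status secretfs
```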
Mount the new file system to /mnt/secretfs:
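For example:

```
sudo mkdir -p /mnt/secretfs
sudo mount "$encrypted_file_path" /mnt/secretfs
```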
Change the permission to 700 so that only $db_user has access to the file system:
Move the original TigerGraph files to the encrypted filesystem and make a symbolic link. If you wish to encrypt only the TigerGraph data store (called gstore), use the following commands:
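A sketch of moving the graph store and linking it back (paths follow the assumed variables above):

```
# Move the graph store into the encrypted filesystem and symlink it back to the original location
mv "$tigergraph_data_root/gstore" /mnt/secretfs/gstore
ln -s /mnt/secretfs/gstore "$tigergraph_data_root/gstore"
```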
There are other TigerGraph files which you might also consider sensitive and wish to encrypt. These include the dictionary, Kafka data files, and log files. You could selectively identify files to protect, or you could encrypt the entire TigerGraph folder (App/Data/Log/TempRoot). In that case, simply move $tigergraph_data_root instead of $tigergraph_data_root/gstore.
TigerGraph's data is now stored in an encrypted filesystem. It is automatically decrypted when the tigergraph user (and only this user) accesses it.
To automatically deploy this encryption solution, you may
Chain all the steps as a bash script
Remove all "sudo" since the script will be running as root.
Run the script as root user after TigerGraph Installation.
The setup scripts contain your encryption password. To follow good security procedures, do not leave your password in plaintext format in any files on your disk. Either remove the setup scripts or edit out the password.
Encryption is usually CPU-bound rather than I/O-bound. If CPU usage remains below 100%, encryption should not cause much performance slowdown. A performance test using both small and large queries supports this prediction: for small (~1 sec) and large (~100 sec) queries, there is a ~5% slowdown due to filesystem encryption.
We used the TPC-H dataset with scale factor 10 ( http://www.tpc.org/tpch/ ). The data size is 23GB after loading into TigerGraph. The write test (data loading) was done by running a loading job and then killing the GPE with SIGTERM (to exit gracefully) to ensure that all Kafka data was consumed. The read test (GSE cold start) measures the time from "gadmin start gse" until "online" appears in "gadmin status gse".
Major cloud service providers often provide their own methodologies for encrypting data at rest. For Amazon EC2, we recommend users start by reading the AWS Security Blog: How to Protect Data at Rest with Amazon EC2 Instance Store Encryption .
In this section, we provide a simple example for configuring file system encryption for a TigerGraph running on Amazon EC2. The steps are based on those given in How to Protect Data at Rest with Amazon EC2 Instance Store Encryption , with some additions and modifications.
The basic idea of this solution is to create a file, map an encrypted file system to it, and mount it as a storage directory for TigerGraph with permission only to authorized users.
Angle brackets <...> are used to mark placeholders which you should replace with your own values (without the angle brackets).
Make sure you have installed and configured AWS CLI with keys locally.
If you don't have a KMS key, you can create it first:
From the IAM console , choose Encryption keys from the navigation pane.
Select Create Key , and type in <your-key-alias>
For Step 2 and Step 3 , see https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html for advice.
In Step 4 : Define Key Usage Permissions , select <your-role-name>
The role now has permission to use the key.
In this section, you launch a new EC2 instance with the new IAM role and a bootstrap script that executes the steps to encrypt the file system.
The script in this section requires root permission, and it cannot be run manually through an ssh tunnel or by an unprivileged user.
In the EC2 console, launch a new instance (see this tutorial for more details) using the Amazon Linux AMI 2017.09.1 (HVM), SSD Volume Type. (If NOT using the Amazon Linux AMI, a script that installs python, pip, and the AWS CLI needs to be added at the beginning.)
In Step 3: Configure Instance Details
In IAM role , choose <your-role-name>
In User Data , paste the following code block after replacing the placeholders with your values and appending TigerGraph installation script
It may take a few minutes for the script to complete after system launch.
Then, you should be able to launch one or more EC2 machines with an encrypted folder under /mnt/secretfs that only OS user tigergraph can access.
Encryption is usually CPU-bound rather than I/O-bound. If CPU usage is below 100%, TigerGraph tests show no significant performance degradation.
If you have a version 1.0 string-type license key, then during initial platform installation, you can either specify your license key as an argument, for example:
Or you may input it when prompted.
To apply a new license key string, use the following command:
If you have a version 2.0 file-type license key which is linked to a specific machine or cluster:
If you have a version 1.0 string-type license key, the following command will tell you your key's expiration date:
If you have a version 2.0 file-type license key which is linked to a specific machine or cluster, then run the following command:
If you are running TigerGraph v3.0+, run the following command:
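```
gadmin license status
```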
The following command tells you the basic summary of each component:
If you want to know more, including process information, memory/cpu usage information of each component, use the -v option for verbose output.
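For example:

```
gadmin status
gadmin status -v
```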
To find out the port of a service, use the gadmin config get <port_name>
command:
To list and edit all ports, use the following command:
To change the port number of one service, use the following command:
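A sketch, using Nginx.Port as an example parameter name (other services have their own port parameters; use gadmin config list to confirm the names):

```
gadmin config get Nginx.Port
gadmin config set Nginx.Port 14240
gadmin config apply -y
```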
To backup the current system:
Please be advised that GBAR only backs up data and configuration. No logs or binaries will be backed up.
To restore an existing backup:
Please be advised that running restore will STOP the service and ERASE existing data.
You can get statistics on the graph data of a TigerGraph database instance using the gstatusgraph utility:
Due to a known bug, the gstatusgraph command counts each undirected edge as two edges. To get an accurate count of undirected edges, users should use the built-in queries instead. The message below is sent as a warning to users when gstatusgraph is used.
"[WARN ] Above vertex and edge counts are for internal use which show approximate topology size of the local graph partition. Use DML to get the correct graph topology information"
TigerGraph provides a RESTful API to tell request statistics. Assuming REST port is 9000, use command below:
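A sketch, assuming the RESTPP /statistics endpoint with a seconds parameter (adjust host and port to your deployment):

```
# Report request statistics collected over the last 60 seconds
curl "http://localhost:9000/statistics?seconds=60"
```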
If you need to restart everything, use the following:
If you know which component(s) you want to restart, you can list them:
Multiple component names are separated by spaces.
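For example:

```
gadmin restart all -y
gadmin restart gpe restpp -y
```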
Normally it is not necessary to manually turn off any services. However if you wish to, use the stop command.
There are a few typical causes for a service being down:
Use the following command to find out where the log files for each component are located:
To look at the log file for a particular component:
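For example (gpe is a placeholder component name):

```
gadmin log
gadmin log gpe
```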
A timeout is applied to any request coming into the TigerGraph system. If a request runs longer than the timeout value, it is killed. The default timeout value is 16 seconds.
If you know that your query will run longer than this value, configure all related timeouts to a larger value. To do this:
Enter the value you expect, in seconds. Then apply the config to the system and restart the service.
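A sketch, assuming the RESTPP.Factory.DefaultQueryTimeoutSec parameter (verify the exact parameter name for your version with gadmin config list):

```
gadmin config set RESTPP.Factory.DefaultQueryTimeoutSec 60
gadmin config apply -y
gadmin restart restpp -y
```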
The timeout can also be changed for each query, but only when calling the REST endpoint. You would need to use a timeout value each time you run a query, otherwise the default timeout value will be assumed.
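A sketch, assuming the GSQL-TIMEOUT request header (in milliseconds); MyGraph and myQuery are placeholders:

```
# Allow this single query request up to 60 seconds
curl -H "GSQL-TIMEOUT: 60000" "http://localhost:9000/query/MyGraph/myQuery"
```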
A core dump file is produced by the OS when a certain signal causes a process to terminate. The core dump is a disk file containing an image of the process's memory at the time of termination. This image can be used in a debugger (e.g., gdb) to inspect the state of the program at the time that it terminated.
The TigerGraph installation process configures the operating system to place core dump files in the TigerGraph root directory, with the name core-%e-%s-%p.%t, where
%e: executable filename (without path prefix)
%s: signal number which caused the dump
%p: PID of dumped process
%t: time of dump, expressed as seconds since the epoch
The coredump configuration was set by the following command:
If you want to alter the location or file name template, you can edit the contents of /proc/sys/kernel/core_pattern
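For example (the new pattern is illustrative; changing it requires root):

```
cat /proc/sys/kernel/core_pattern
echo '/tmp/core-%e-%s-%p.%t' | sudo tee /proc/sys/kernel/core_pattern
```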
This page lists the configuration parameters that are available for gadmin config
.
To change a parameter, use the following command:
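```
gadmin config set <parameter_name> <new_value>
```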
After updating a parameter, run gadmin config apply
to apply the change and restart the corresponding services to make the change take effect.
Clicking the button will navigate to your identity provider's login portal. If you have already logged in there, you will be redirected back to GraphStudio immediately. After about 10 seconds, the verification should finish, and you are authorized to use GraphStudio. If you haven't logged in at your identity provider yet, you will need to log in there. After logging in successfully, you will see your Single Sign-On username when you click the User icon at the upper right of the GraphStudio UI.
If this is the initial installation or you are updating a previous key file, please see the license activation documentation.
If you are updating from a version 1.0 key string to a version 2.0 key file, please contact TigerGraph support for the correct procedure.
A description of each component is given in the Glossary section of the document.
GBAR is the utility for backing up and restoring a TigerGraph system. Before a backup, GBAR needs to be configured. Please see the GBAR section of this document for details.
Expired license key. Double-check your license key expiration date, and contact if it is expired. After applying a new license key, your service will come back online. Usually, TigerGraph will reach out before your license key expires. Please act accordingly when that happens.
Not enough memory. TigerGraph is a memory-intensive system. When there is not much free memory, Linux may kill a process based on memory usage. Please check your memory usage after TigerGraph starts. We suggest keeping at least 30% of memory free after TigerGraph starts up. To confirm whether one of TigerGraph's processes was a victim, use to check.
Not enough free disk space. TigerGraph writes data, logs, and some temporary files to disk. It requires enough free space to function properly. If the TigerGraph service or one of its components is down, please check whether there is enough free space on the disk using .
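As a rough illustration, these conditions can be checked with standard Linux tools rather than TigerGraph-specific commands:

$ free -g                                 # free and used memory, in GB
$ df -h                                   # free disk space per mounted filesystem
$ dmesg -T | grep -i "killed process"     # evidence that the OOM killer terminated a process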
GStore size | Backup file size | Backup time | Restore time |
219GB | 49GB | 31 mins | 23 mins |
Encryption Level | Description | TigerGraph Support |
Hardware | Use specialized hard disks which perform automatic encryption on write and decryption on read (by authorized OS users) | Invisible to TigerGraph |
Kernel-level file system | Use Linux built-in utilities to encrypt data. Root privilege required. | Invisible to TigerGraph |
User-level file system | Use Linux built-in utilities and customized libraries to encrypt data. Root privilege is not required. | Invisible to TigerGraph |
| GSE Cold Start (read) | Load Data (write) |
original | 45s | 809s |
encrypted | 47s | 854s |
% slowdown | 4.4% | 5.8% |
Name | Description | Example |
Admin.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
Admin.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
Admin.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
Admin.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
Admin.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for Admin |
|
Admin.BasicConfig.Nodes | The node list for Admin |
|
Admin.Port | The port for Admin |
|
Name | Description | Example |
Controller.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
Controller.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
Controller.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
Controller.BasicConfig.LogConfig.LogLevel | The log level("DEBUG","INFO","WARN","ERROR","PANIC","FATAL"), default is INFO |
|
Controller.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
Controller.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for Controller |
|
Controller.BasicConfig.Nodes | The nodes to deploy Controller |
|
Controller.ConfigRepoRelativePath | The relative path (to the System.DataRoot) of config repo where the service config files are stored |
|
Controller.FileRepoRelativePath | The relative path (to the System.DataRoot) of the file repo for file management |
|
Controller.FileRepoVersionNum | The maximum version of files to keep in the file repo |
|
Controller.LeaderElectionHeartBeatIntervalMS | The maximum interval(milliseconds) at which each service should call controller leader election service to be considered alive. |
|
Controller.LeaderElectionHeartBeatMaxMiss | The maximum number of heartbeats that can be missed before a service is considered dead by the controller |
|
Controller.Port | The serving grpc port for Controller |
|
Name | Description | Example |
Dict.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
Dict.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
Dict.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
Dict.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
Dict.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for Dict |
|
Dict.BasicConfig.Nodes | The node list for Dict |
|
Dict.Port | The port for Dict |
|
Name | Description | Example |
ETCD.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
ETCD.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
ETCD.BasicConfig.LogConfig.LogLevel | The log level("DEBUG","INFO","WARN","ERROR","PANIC","FATAL"), default is INFO |
|
ETCD.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
ETCD.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for ETCD |
|
ETCD.BasicConfig.Nodes | The node list for ETCD |
|
ETCD.ClientPort | The port of ETCD to listen for client traffic |
|
ETCD.DataRelativePath | The data dir of etcd under $DataRoot |
|
ETCD.ElectionTimeoutMS | Time (in milliseconds) for an election to timeout |
|
ETCD.HeartbeatIntervalMS | Time (in milliseconds) of a heartbeat interval |
|
ETCD.MaxRequestBytes | Maximum client request size in bytes the server will accept |
|
ETCD.MaxSnapshots | Maximum number of snapshot files to retain (0 is unlimited) |
|
ETCD.MaxTxnOps | Maximum number of operations permitted in a transaction |
|
ETCD.MaxWals | Maximum number of wal files to retain (0 is unlimited) |
|
ETCD.PeerPort | The port of ETCD to listen for peer traffic |
|
ETCD.SnapshotCount | Number of committed transactions to trigger a snapshot to disk |
|
Name | Description | Example |
Executor.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
Executor.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
Executor.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
Executor.BasicConfig.LogConfig.LogLevel | The log level("DEBUG","INFO","WARN","ERROR","PANIC","FATAL"), default is INFO |
|
Executor.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
Executor.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for Executor |
|
Executor.BasicConfig.Nodes | The nodes to deploy Executors |
|
Executor.DataRelativePath | The data dir of executor under $DataRoot |
|
Executor.FileTransferPort | The port for Executor to do file transfer |
|
Executor.FileVersionNum | The maximum version of files to keep |
|
Executor.Port | The serving port for Executor |
|
Executor.WatchDogIntervalMS | The process status check interval (ms) |
|
Name | Description | Example |
FileLoader.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
FileLoader.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
FileLoader.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
FileLoader.BasicConfig.LogConfig.LogLevel | The log level("OFF", "BRIEF", "DEBUG", "VERBOSE"), default is BRIEF |
|
FileLoader.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
FileLoader.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for FileLoader |
|
FileLoader.Factory.DefaultLoadingTimeoutSec | The default per request loading timeout (s) for FileLoader |
|
FileLoader.Factory.DefaultQueryTimeoutSec | The default query timeout (s) for FileLoader |
|
FileLoader.Factory.DynamicEndpointRelativePath | FileLoader's relative (to data root) path to store the dynamic endpoint |
|
FileLoader.Factory.DynamicSchedulerRelativePath | FileLoader's relative (to data root) path to store the dynamic scheduler |
|
FileLoader.Factory.EnableAuth | Enable authentication of FileLoader |
|
FileLoader.Factory.HandlerCount | FileLoader's handler count |
|
FileLoader.Factory.StatsIntervalSec | FileLoader's time interval to collect stats (e.g. QPS) |
|
FileLoader.GPEResponseBasePort | The port of FileLoader to accept GPE response |
|
FileLoader.GSEResponseBasePort | The port of FileLoader to accept GSE response |
|
FileLoader.ReplicaNumber | The number of replicas of FileLoader per node |
|
Name | Description | Example |
GPE.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
GPE.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
GPE.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
GPE.BasicConfig.LogConfig.LogLevel | The log level("OFF", "BRIEF", "DEBUG", "VERBOSE"), default is BRIEF |
|
GPE.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
GPE.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for GPE |
|
GPE.BasicConfig.Nodes | The node list for GPE |
|
GPE.Disk.CompressMethod | The compress method of GPE disk data |
|
GPE.Disk.DiskStoreRelativePath | The path(relative to temp root) to store GPE temporary disk data |
|
GPE.Disk.LoadThreadNumber | The number of threads to load from disk |
|
GPE.Disk.SaveThreadNumber | The number of threads to save to disk |
|
GPE.EdgeDataMemoryLimit | The memory limit of edge data |
|
GPE.GPE2GPEResponsePort | The GPE port for receiving response back from other GPEs |
|
GPE.GPERequestPort | The GPE port for receiving requests |
|
GPE.IdResponsePort | The GPE port for receiving id response from GSE |
|
GPE.Kafka.BatchMsgNumber | The number of messages to send in one batch when using async mode. The producer will wait until either this number of messages are ready to send or queue buffer max ms is reached. |
|
GPE.Kafka.CompressCodec | This parameter allows you to specify the compression codec for all data generated by this producer. Valid values are none, gzip and snappy. |
|
GPE.Kafka.FetchErrorBackoffTimeMS | How long to postpone the next fetch request for a topic+partition in case of a fetch error. |
|
GPE.Kafka.FetchWaitMaxTimeMS | The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch min bytes. |
|
GPE.Kafka.MsgMaxBytes | Maximum transmit message size. |
|
GPE.Kafka.QueueBufferMaxMsgNumber | The maximum number of unsent messages that can be queued up by the producer when using async mode before either the producer must be blocked or data must be dropped. |
|
GPE.Kafka.QueueBufferMaxTimeMS | Maximum time to buffer data when using async mode. |
|
GPE.Kafka.QueueMinMsgNumber | Minimum number of messages per topic+partition in the local consumer queue. |
|
GPE.Kafka.RequestRequiredAcks | This field indicates how many acknowledgements the leader broker must receive from ISR brokers before responding to the request. |
|
GPE.MemoryLimitMB | The total topology memory limit for GPE |
|
GPE.RebuildThreadNumber | The number of rebuild threads for GPE |
|
GPE.VertexDataMemoryLimit | The memory limit of vertex data |
|
Name | Description | Example |
GSE.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
GSE.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
GSE.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
GSE.BasicConfig.LogConfig.LogLevel | The log level("OFF", "BRIEF", "DEBUG", "VERBOSE"), default is BRIEF |
|
GSE.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
GSE.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for GSE |
|
GSE.BasicConfig.Nodes | The node list for GSE |
|
GSE.IdRequestPort | The id request serving port of GSE |
|
GSE.RLSPort | The serving port of GSE RLS |
|
Name | Description | Example |
GUI.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
GUI.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
GUI.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
GUI.BasicConfig.LogConfig.LogLevel | The log level("DEBUG","INFO","WARN","ERROR","PANIC","FATAL"), default is INFO |
|
GUI.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
GUI.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for GUI |
|
GUI.BasicConfig.Nodes | The node list for GraphStudio |
|
GUI.ClientIdleTimeSec | The maximum idle time of client-side GraphStudio and AdminPortal before inactivity logout |
|
GUI.Cookie.DurationSec | GUI Cookie duration time in seconds |
|
GUI.Cookie.SameSite | Default mode: 1; Lax mode: 2; Strict mode: 3; None mode: 4 |
|
GUI.DataDirRelativePath | The relative path of gui data folder (to the System.DataRoot) |
|
GUI.HTTPRequest.RetryMax | GUI http request max retry times |
|
GUI.HTTPRequest.RetryWaitMaxSec | GUI HTTP request max retry waiting time in seconds |
|
GUI.HTTPRequest.RetryWaitMinSec | GUI HTTP request minimum retry waiting time in seconds |
|
GUI.HTTPRequest.TimeoutSec | GUI HTTP request timeout in seconds |
|
GUI.Port | The serving port for GraphStudio Websocket communication |
|
GUI.RESTPPResponseMaxSizeBytes | The RESTPP response size limit in bytes. |
|
GUI.TempDirRelativePath | The relative path of gui temp folder (to the System.TempRoot) |
|
GUI.TempFileMaxDurationDay | GUI temp file max duration time in days |
|
Name | Description | Example |
Gadmin.StartStopRequestTimeoutMS | The start/stop service default request timeout in milliseconds |
|
Name | Description | Example |
Informant.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
Informant.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
Informant.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
Informant.BasicConfig.LogConfig.LogLevel | The log level("DEBUG","INFO","WARN","ERROR","PANIC","FATAL"), default is INFO |
|
Informant.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
Informant.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for Informant |
|
Informant.BasicConfig.Nodes | The nodes to deploy Informant |
|
Informant.DBRelativePath | The relative path (to the System.DataRoot) of informant database source folder |
|
Informant.GrpcPort | The grpc server port for Informant |
|
Informant.RestPort | The restful server port for Informant |
|
Informant.RetentionPeriodDay | The period in days for local database records to keep, set -1 for keeping forever |
|
Name | Description | Example |
Kafka.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
Kafka.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
Kafka.BasicConfig.LogConfig.LogLevel | The log level for kafka ("TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL" "OFF") |
|
Kafka.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
Kafka.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for Kafka |
|
Kafka.BasicConfig.Nodes | The node list for Kafka |
|
Kafka.DataRelativePath | The data dir of kafka under $DataRoot |
|
Kafka.IOThreads | The number of threads for Kafka IO |
|
Kafka.LogFlushIntervalMS | The threshold of time for flushing log (ms) |
|
Kafka.LogFlushIntervalMessage | The threshold of message for flushing log |
|
Kafka.MessageMaxSizeMB | The maximum size of a message of Kafka to be produced (megabytes) |
|
Kafka.MinInsyncReplicas | The minimum number of in-sync replicas that must acknowledge a write when the producer sets acks to 'all' |
|
Kafka.NetworkThreads | The number of threads for Kafka Network |
|
Kafka.Port | The serving port for Kafka |
|
Kafka.RetentionHours | The minimum age of a log file of Kafka to be eligible for deletion (hours) |
|
Kafka.RetentionSizeGB | The minimum size of a log file of Kafka to be eligible for deletion (gigabytes) |
|
Kafka.TopicReplicaFactor | The default replica number for each topic |
|
Name | Description | Example |
KafkaConnect.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
KafkaConnect.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
KafkaConnect.BasicConfig.LogConfig.LogLevel | The log level for kafka connect ("TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL" "OFF") |
|
KafkaConnect.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
KafkaConnect.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for Kafka connect |
|
KafkaConnect.BasicConfig.Nodes | The node list for Kafka connect |
|
KafkaConnect.OffsetFlushIntervalMS | The interval at which Kafka connect tasks' offsets are committed |
|
KafkaConnect.Port | The port used for kafka connect |
|
KafkaConnect.ReconnectBackoffMS | The amount of time to wait before attempting to reconnect to a given host |
|
KafkaConnect.RetryBackoffMS | The amount of time to wait before attempting to retry a failed fetch request to a given topic partition |
|
Name | Description | Example |
KafkaLoader.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
KafkaLoader.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
KafkaLoader.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
KafkaLoader.BasicConfig.LogConfig.LogLevel | The log level("OFF", "BRIEF", "DEBUG", "VERBOSE"), default is BRIEF |
|
KafkaLoader.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
KafkaLoader.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for KafkaLoader |
|
KafkaLoader.Factory.DefaultLoadingTimeoutSec | The default per request loading timeout (s) for KafkaLoader |
|
KafkaLoader.Factory.DefaultQueryTimeoutSec | The default query timeout (s) for KafkaLoader |
|
KafkaLoader.Factory.DynamicEndpointRelativePath | KafkaLoader's relative (to data root) path to store the dynamic endpoint |
|
KafkaLoader.Factory.DynamicSchedulerRelativePath | KafkaLoader's relative (to data root) path to store the dynamic scheduler |
|
KafkaLoader.Factory.EnableAuth | Enable authentication of KafkaLoader |
|
KafkaLoader.Factory.HandlerCount | KafkaLoader's handler count |
|
KafkaLoader.Factory.StatsIntervalSec | KafkaLoader's time interval to collect stats (e.g. QPS) |
|
KafkaLoader.GPEResponseBasePort | The port of KafkaLoader to accept GPE response |
|
KafkaLoader.GSEResponseBasePort | The port of KafkaLoader to accept GSE response |
|
KafkaLoader.ReplicaNumber | The number of replicas of KafkaLoader per node |
|
Name | Description | Example |
KafkaStreamLL.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
KafkaStreamLL.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
KafkaStreamLL.BasicConfig.LogConfig.LogLevel | The log level for Kafka stream LoadingLog ("TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL" "OFF") |
|
KafkaStreamLL.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
KafkaStreamLL.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for Kafka stream LoadingLog |
|
KafkaStreamLL.BasicConfig.Nodes | The node list for Kafka stream LoadingLog |
|
KafkaStreamLL.Port | The port used for Kafka stream LoadingLog |
|
KafkaStreamLL.StateDirRelativePath | The relative folder path for Kafka stream LoadingLog state |
|
Name | Description | Example |
RESTPP.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
RESTPP.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
RESTPP.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
RESTPP.BasicConfig.LogConfig.LogLevel | The log level("OFF", "BRIEF", "DEBUG", "VERBOSE"), default is BRIEF |
|
RESTPP.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
RESTPP.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for RESTPP |
|
RESTPP.BasicConfig.Nodes | The node list for RESTPP |
|
RESTPP.FCGISocketBackLogMaxCnt | RESTPP fcgi socket backlog max length which is the listen queue depth used in the listen() call. |
|
RESTPP.FCGISocketFileRelativePath | The relative path of FCGI socket for RESTPP-Nginx communication under $TempRoot |
|
RESTPP.Factory.DefaultLoadingTimeoutSec | The default per request loading timeout (s) for RESTPP |
|
RESTPP.Factory.DefaultQueryTimeoutSec | The default query timeout (s) for RESTPP |
|
RESTPP.Factory.DynamicEndpointRelativePath | RESTPP's relative (to data root) path to store the dynamic endpoint |
|
RESTPP.Factory.DynamicSchedulerRelativePath | RESTPP's relative (to data root) path to store the dynamic scheduler |
|
RESTPP.Factory.EnableAuth | Enable authentication of RESTPP |
|
RESTPP.Factory.HandlerCount | RESTPP's handler count |
|
RESTPP.Factory.StatsIntervalSec | RESTPP's time interval to collect stats (e.g. QPS) |
|
RESTPP.GPEResponsePort | The port of RESTPP to accept GPE response |
|
RESTPP.GSEResponsePort | The port of RESTPP to accept GSE response |
|
RESTPP.HttpServer.Enable | Enable RESTPP's http server |
|
RESTPP.HttpServer.Port | RESTPP's http server port |
|
RESTPP.HttpServer.WorkerNum | RESTPP's http server worker number |
|
RESTPP.NginxPort | The port of RESTPP to accept upstream Nginx requests |
|
Name | Description | Example |
System.AppRoot | The root directory for TigerGraph applications |
|
System.AuthToken | The authorization token for TigerGraph services |
|
System.Backup.CompressProcessNumber | The number of concurrent processes for compression during backup |
|
System.Backup.Local.Enable | Backup data to local path |
|
System.Backup.Local.Path | The path to store the backup files |
|
System.Backup.S3.AWSAccessKeyID | The AWS access key ID for s3 bucket of backup |
|
System.Backup.S3.AWSSecretAccessKey | The secret access key for s3 bucket |
|
System.Backup.S3.BucketName | The S3 bucket name |
|
System.Backup.S3.Enable | Backup data to S3 path |
|
System.Backup.TimeoutSec | The backup timeout in seconds |
|
System.CrossRegionReplication.Enabled | Enable Kafka Mirrormaker |
|
System.CrossRegionReplication.GpeTopicPrefix | The prefix of GPE Kafka Topic, by default is empty. |
|
System.CrossRegionReplication.PrimaryKafkaIPs | Kafka mirrormaker primary cluster's IPs, separated by ',' |
|
System.CrossRegionReplication.PrimaryKafkaPort | Kafka mirrormaker primary cluster's KafkaPort |
|
System.DataRoot | The root directory for data |
|
System.Event.EventInputTopic | Kafka topic name of event input queue |
|
System.Event.EventOffsetFolderRelativePath | The relative path (to the System.DataRoot) of the folder to keep track of Kafka offsets for event input/output queue |
|
System.Event.EventOutputTopic | Kafka topic name of event output queue |
|
System.Expansion.TimeoutSec | The cluster expansion timeout in seconds |
|
System.HostList | The aliases and hostnames/IPs for nodes |
|
System.License | The license key for TigerGraph system |
|
System.LogRoot | The root directory for TigerGraph logs |
|
System.SSH.ConfigFileRelativePath | The relative path (to the System.DataRoot) of SSH config file |
|
System.SSH.Port | SSH port |
|
System.SSH.User.Password | OS User password (optional if using privatekey) |
|
System.SSH.User.Privatekey | OS user private key path |
|
System.SSH.User.Username | OS Username for TigerGraph database |
|
System.TempRoot | The temporary directory for TigerGraph applications |
|
Name | Description | Example |
TS3.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
TS3.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
TS3.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
TS3.BasicConfig.LogConfig.LogLevel | The log level("DEBUG","INFO","WARN","ERROR","PANIC","FATAL"), default is INFO |
|
TS3.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
TS3.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for TS3 |
|
TS3.BasicConfig.Nodes | The node list for TS3 |
|
TS3.BufferSize | The buffer size of TS3 |
|
TS3.DBRelativePath | The relative path (to the System.DataRoot) of TS3 database source folder |
|
TS3.DbTrace | Enable tracing for db operations |
|
TS3.Metrics | The metrics TS3 will be collecting |
|
TS3.RetentionPeriodDay | The period in days for local database records to keep, set -1 for keeping forever |
|
Name | Description | Example |
TS3Server.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
TS3Server.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
TS3Server.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
TS3Server.BasicConfig.LogConfig.LogLevel | The log level("DEBUG","INFO","WARN","ERROR","PANIC","FATAL"), default is INFO |
|
TS3Server.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
TS3Server.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for TS3Server |
|
TS3Server.BasicConfig.Nodes | The node list for TS3Server (currently only one node is supported) |
|
TS3Server.GrpcPort | The grpc api port for TS3Server |
|
TS3Server.RestPort | The restful api port for TS3Server |
|
Name | Description | Example |
ZK.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
ZK.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
ZK.BasicConfig.LogConfig.LogLevel | The log level for zk ("TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL" "OFF") |
|
ZK.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
ZK.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for ZK |
|
ZK.BasicConfig.Nodes | The node list for Zookeeper |
|
ZK.DataRelativePath | The data dir of zookeeper under $DataRoot |
|
ZK.ElectionPort | The port for Zookeeper to do leader election |
|
ZK.ForceSync | The force synchronize property of ZooKeeper |
|
ZK.InitLimit | The amount of time, in ticks (by default 2s per tick), to allow followers to connect and sync to a leader. Increase this value as needed if the amount of data managed by ZooKeeper is large |
|
ZK.Port | The serving port for Zookeeper |
|
ZK.QuorumPort | The port for Zookeeper to do peer communication |
|
Name | Description | Example |
GSQL.BasicConfig.Env | The runtime environment variables, separated by ';' |
|
GSQL.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
GSQL.BasicConfig.LogConfig.LogLevel | GSQL log level: ERROR, INFO, DEBUG |
|
GSQL.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
GSQL.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for GSQL |
|
GSQL.BasicConfig.Nodes | The node list for GSQL |
|
GSQL.CatalogBackupFileMaxDurationDay | The maximum number of days to retain catalog backup files |
|
GSQL.CatalogBackupFileMaxNumber | The maximum number of catalog backup files to retain |
|
GSQL.DataRelativePath | The data dir of gsql under $DataRoot |
|
GSQL.EnableStringCompress | Enable string compress |
|
GSQL.GithubBranch | The working branch in provided repository. Will use 'master' as the default branch |
|
GSQL.GithubPath | The path to the directory in the github that has TokenBank.cpp, ExprFunctions.hpp, ExprUtil.hpp, e.g. sample_code/src |
|
GSQL.GithubRepository | The repository name, e.g. tigergraph/ecosys |
|
GSQL.GithubUrl | The url that is used for github enterprise, e.g. |
|
GSQL.GithubUserAcessToken | The credential for github. Set it to 'anonymous' for public access, or empty to not use github |
|
GSQL.GrpcMessageMaxSizeMB | The maximum size of grpc message request of gsql |
|
GSQL.MaxAuthTokenLifeTimeSec | The maximum lifetime of auth token in seconds, 0 means unlimited |
|
GSQL.OutputTokenBufferSize | The buffer size for output token from GSQL |
|
GSQL.Port | The server port for GSQL |
|
GSQL.QueryResponseMaxSizeByte | Maximum response size in bytes |
|
GSQL.RESTPPRefreshTimeoutSec | Refresh time in seconds for RESTPP |
|
GSQL.SchemaIndexFileNumber | File number |
|
GSQL.TokenCleaner.GraceTimeSec | The grace time (in seconds) for expired tokens to exist without being cleaned |
|
GSQL.TokenCleaner.IntervalTimeSec | The running interval of TokenCleaner in seconds |
|
GSQL.UserInfoLimit.TokenSizeLimit | The max number of tokens allowed |
|
GSQL.UserInfoLimit.UserCatalogFileMaxSizeByte | The file size limit for user metadata in bytes |
|
GSQL.UserInfoLimit.UserSizeLimit | The max number of users allowed |
|
GSQL.WaitServiceOnlineTimeoutSec | Timeout to wait for all services online |
|
Name | Description | Example |
Nginx.AllowedCIDRList | The whitelist of IPv4/IPv6 CIDR blocks to restrict application access, separated by commas. |
|
Nginx.BasicConfig.LogConfig.LogFileMaxDurationDay | The maximum number of days to retain old log files based on the timestamp encoded in their filename |
|
Nginx.BasicConfig.LogConfig.LogFileMaxSizeMB | The maximum size in megabytes of the log file before it gets rotated |
|
Nginx.BasicConfig.LogConfig.LogRotationFileNumber | The maximum number of old log files to retain |
|
Nginx.BasicConfig.LogDirRelativePath | The relative path (to the System.LogRoot) of log directory for Nginx |
|
Nginx.BasicConfig.Nodes | The node list for Nginx |
|
Nginx.ClientMaxBodySize | The maximum request size for Nginx in MB |
|
Nginx.ConfigTemplate | The template to generate nginx config. Please use @filepath to parse the template from file. Check the default template first at : Don't modify the reserved keywords (strings like UPPER_CASE) in the template. |
|
Nginx.Port | The serving port for Nginx |
|
Nginx.ResponseHeaders | The customized headers in HTTP Response |
|
Nginx.SSL.Cert | Public certificate for SSL. (Could use @cert_file_path to parse the certificate from file) |
|
Nginx.SSL.Enable | Enable SSL connection for all HTTP requests |
|
Nginx.SSL.Key | Private key for SSL. (Could use @key_file_path to parse the key from file) |
|
Nginx.WorkerProcessNumber | The number of worker processes for Nginx |
|
Name | Description | Example |
Security.LDAP.AdminDN | Configure the DN of LDAP user who has read access to the base DN specified above. Empty if everyone has read access to LDAP data: default empty |
|
Security.LDAP.AdminPassword | Configure the password of the admin DN specified above. Needed only when admin_dn is specified: default empty |
|
Security.LDAP.BaseDN | Configure LDAP search base DN, the root node to start the LDAP search for user authentication: must specify |
|
Security.LDAP.Enable | Enable LDAP authentication: default false |
|
Security.LDAP.Hostname | Configure LDAP server hostname: default localhost |
|
Security.LDAP.Port | Configure LDAP server port: default 389 |
|
Security.LDAP.SearchFilter | Configure the LDAP search filter used to match users during authentication. |
|
Security.LDAP.Secure.Protocol | Enable SSL/StartTLS for LDAP connection [none/ssl/starttls]: default none |
|
Security.LDAP.Secure.TrustAll | Configure to trust all LDAP servers (unsafe): default false |
|
Security.LDAP.Secure.TruststoreFormat | Configure the truststore format [JKS/PKCS12]: default JKS |
|
Security.LDAP.Secure.TruststorePassword | Configure the truststore password: default changeit |
|
Security.LDAP.Secure.TruststorePath | Configure the truststore absolute path for the certificates used in SSL: default empty |
|
Security.LDAP.UsernameAttribute | Configure the username attribute name in LDAP server: default uid |
|
Security.SSO.SAML.AssertionSigned | Require Identity Provider to sign assertions: default true |
|
Security.SSO.SAML.AuthnRequestSigned | Sign AuthnRequests before sending to Identity Provider: default true |
|
Security.SSO.SAML.BuiltinUser | The builtin user for SAML |
|
Security.SSO.SAML.Enable | Enable SAML2-based SSO: default false |
|
Security.SSO.SAML.IDP.EntityId | Identity Provider Entity ID: default |
|
Security.SSO.SAML.IDP.SSOUrl | Single Sign-On URL: default |
|
Security.SSO.SAML.IDP.X509Cert | Identity Provider's x509 Certificate filepath: default empty. You can use @/cert/file/path to pass the certificate from a file. |
|
Security.SSO.SAML.MetadataSigned | Sign Metadata: default true |
|
Security.SSO.SAML.RequestedAuthnContext | Authentication context (comma separate multiple values) |
|
Security.SSO.SAML.ResponseSigned | Require Identity Provider to sign SAML responses: default true |
|
Security.SSO.SAML.SP.Hostname | TigerGraph Service Provider URL: default |
|
Security.SSO.SAML.SP.PrivateKey | Content of the host machine's private key. Require PKCS#8 format (start with "BEGIN PRIVATE KEY"). You can use @/privatekey/file/path to pass the certificate from a file. |
|
Security.SSO.SAML.SP.X509Cert | Content of the x509 Certificate: default empty. You can use @/cert/file/path to pass the certificate from a file. |
|
Security.SSO.SAML.SignatureAlgorithm | Signature algorithm [rsa-sha1/rsa-sha256/rsa-sha384/rsa-sha512]: default rsa-sha256 |
|