Release Date: June 30, 2020.
For v2.1 and older, contact TigerGraph Support
For the running log of bug fixes, see the Change Log.
Major revisions (e.g., from TigerGraph 2 to TigerGraph 3) are an opportunity to deliver significant improvements. While we make every effort to maintain backward compatibility, in selected cases APIs have changed or deprecated features have been dropped in order to advance the overall product.
Data migration: A tool is available to migrate the data in TigerGraph 2.6 to TigerGraph 3.0. Please contact TigerGraph Support for assistance.
Query and API compatibility:
Some gadmin syntax has changed. Notably, gadmin set config is now gadmin config set. Please see Managing with gadmin.
Some features which were previously deprecated have been dropped. Please see V3.0 Removal of Previously Deprecated Features for a detailed list.
TigerGraph 3.0 is a major revision, with several major new features, many smaller feature additions or improvements, and important performance improvements. Users who are currently running TigerGraph 2.x should review the migration notes above before upgrading.
Perform ACCUM computation and graph updates on any vertex or edge match in the pattern.
(BETA) Multiple (Conjunctive) Path Patterns: Easily search for complex graph patterns by combining multiple linear paths.
Queries run in Distributed mode support nearly the full scope of the GSQL language.
Run queries in Interpreted Mode
Design and use a MultiGraph schema
Internationalized UI: Chinese now available. Other languages planned
Improved Explore Graph display panel with more options
Export/Import options for just schema, just one graph, with or without user profiles
Significantly faster installation time, especially for clusters
Allows users to maintain more than one set of installation binaries
Admin - Better standardization of gadmin command format
Admin - License key is not displayed in gadmin output
GSQL - Complex Edge types: one edge type can be used for multiple pairs of vertex types
GSQL - accumulate += vertex and edge attributes in ACCUM & POST-ACCUM
GSQL - can use Global SetAccum accumulators in a FROM clause
GSQL - TYPEDEF accumulator type, for returning results in nested queries
GSQL - FROM clause: Source vertex set has same flexibility as Target vertex set
GSQL - backquote escape to encapsulate username strings
GSQL - string comparison in lexicographical order
GSQL - User's OAuth secret is masked in the output of SHOW USER / SHOW SECRET
GSQL - Reserved Word lists have been greatly reduced
GSQL - Change behavior of CREATE OR REPLACE QUERY - do not install the new query
GSQL/GraphStudio - session timeout with user-configurable time limit
GSQL/GraphStudio - installing a query does not block other users from running queries
REST - query endpoints can use GET (parameters in URL) or POST (parameters in payload)
REST /requesttoken endpoint can use GET or POST
REST - Correction: Use port 14240 to access GSQL server, not 8123
GraphStudio - Users can search for a vertex type on the Design Schema page, to aid the design of large schemas
GraphStudio - Support for Microsoft Internet Explorer and Edge browsers
GraphStudio - Remove 500MB limit on file upload size
GraphStudio - Set query timeout limit
GraphStudio - Refresh the Graph Exploration client with the latest data on the database server
GraphStudio - Improved graph display panel, with persistent menu for zoom in/out, refresh display (re-fetch from the database server), and change layout style.
GraphStudio - Write Queries button to download the query to a file
GraphStudio - accumulator values persist after adjusting the display
GraphStudio - other users can run other queries while one user is installing a query
Clarified that outdegree() might not report the latest values under heavy workload.
Release Date: 2020-09-05
Remove SSH requirement for platform services, except for one-time operations like Install, Upgrade, and Start of the executor daemon process.
Robustness improvement to allow HA service reliability for cluster configuration management
Log the directory tree of backup directories
Remove ssh dependency
Support JSON output for gbar list
Record GSQL password and add password check for restore
Zookeeper load reduction by caching static/runtime nodes in dictionary
JSON File Upload support
Load only if the vertex does not yet exist (true insert semantics instead of upsert semantics)
GraphStudio
The No-Code Data Migration feature is in Alpha release. Your feedback would be appreciated.
The No-Code Visual Query Builder is in Beta release. Your feedback would be appreciated.
AdminPortal
Automated Cluster Scale Out
In TigerGraph 3.0, automated cluster scale-out is not supported. There is a workaround available using the Backup/Restore utility. Customers are requested to reach out to TigerGraph Support for cluster scale-out operations.
HA/Node Failure Scenarios: Core data service operations like Querying and Updates will work as usual.
Impact of Failure of a node in the cluster:
gadmin operations will not work if one of the nodes is offline [NOTE: fixed in v3.0.5]; a workaround is available. Metadata operations will not work if one of the nodes is offline. Metadata operations include Install Query / GBAR / GSQL commands such as user and token management.
The m1 node is required to run user-facing applications such as GSQL and GraphStudio, because GSQL/GraphStudio metadata is stored on the m1 node only.
Multiple (Conjunctive) Path Patterns:
There are no known functional problems, but the performance has not been optimized. Your feedback would be appreciated.
DML type check error in V2 Syntax:
GSQL may report an incorrect type check error for a query block that has multiple POST-ACCUM clauses and a delete/update attribute operation.
gstatusgraph (Graph Statistics)
Graph statistics (gstatusgraph) may give inaccurate numbers when the cluster has node names that are too similar, i.e., one node name is a substring of another node name. This is usually seen in clusters with 10+ nodes.
A quick fix is available at: https://tigergraph.freshdesk.com/support/solutions/articles/5000859550-how-to-patch-gstatusgraph-in-3-0-0
(NEW) User-Defined Indexes for Vertex Types: Users can decide where to add indexes to speed up query performance. Also available in GraphStudio.
(NEW) Dynamic Querying: Queries can now be installed in a generic format, so that the schema specifics (graph, vertex, edge, and attribute names and types) are given as run-time parameters.
More powerful Pattern Matching
More powerful Distributed Mode queries
(BETA) No-Code Visual Query Builder for No-Code querying
(ALPHA) No-Code Data Migration from Relational Database to Graph
Enhancements
(NEW) User Management Page with Role/Privileges assignment capabilities
(NEW) License Overview Page with Resource usage Information
(NEW) Export/Import Utility
(NEW) Parallel Installer
GSQL - STRING COMPRESS should be used with caution. See Data Types
Support for SSH removal from Cluster operations
Support gadmin commands with failed nodes in cluster
Backup/Restore Enhancements
System reliability Improvements
Data Loading Improvement
LOAD statements now have a NEW_VERTEX_ONLY option
DROP USER is disallowed for Admin users; it is only allowed for Superusers.
If an error occurs in one of the TigerGraph system's components, it may issue an error code and/or an error message. If the system was handling a user request, the error code and message may be in the JSON response (see GSQL Query Output Format ). The error information may be in a log file.
When the GSQL Server completes a request, it returns an Exit Code
When the GSQL Client completes a request, it returns an Exit Code
TigerGraph 2.x contained some features which were labeled as deprecated. These features are no longer necessary because they have been superseded already by improved approaches for using the TigerGraph platform. The new approaches were developed because they use more consistent grammar, are more extensible, or offer higher performance. Therefore, TigerGraph 3.0 has streamlined the product by removing support for some of these deprecated features, listed below:
See Data Types in GSQL Language Reference
See Control Flow Statements in GSQL Language Reference
See Vertex Set Variable Declaration and Assignment
If a vertex type is specified, the vertex type must be within parentheses.
These are documented in several places throughout the GSQL Language Reference:
See PRINT Statement in 'Output Statements and File Objects'
See Run Built-in Queries in 'GSQL 101'
This troubleshooting guide is only up to date for v2.6 and below. Changes will be coming to this page for v3.0 and future releases.
The Troubleshooting Guide teaches you how to check on the status of your TigerGraph system and, when needed, how to find the log files in order to get a better understanding of why certain errors are occurring. This section covers log file debugging for data loading and querying.
Before any deeper investigation, always run these general system checks:
The following command reveals the location of the log files:
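That command is gadmin log, as noted elsewhere in this guide:

$ gadmin log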
You will be presented with a list of log files. The left side of the resulting file paths is the component for which the respective log file is logging information. The majority of the time, these files will contain what you are looking for. You may notice that there are multiple files for each TigerGraph component.
The .out file extension is for errors. The .INFO file extension is for normal behaviors.
In order to diagnose an issue for a given component, you'll want to check the .out log file extension for that component.
For TigerGraph 3.x, gcollect is supported starting with v3.0.
To aid in the effort of system debugging, there is a tool you can use to collect all relevant log files from around the time of a system malfunction or error. Collection of these files greatly improves the efficiency of the support process, as this minimizes the need to access a customer environment to diagnose issues remotely. This will also avoid delayed restart of services.
Here is the relevant information from the TigerGraph servers that will be collected when running the gcollect command.
All the log files will be written to the output directory specified when running the gcollect command, and each node has a subdirectory. Each component will have one or two log files.
The installation will quit if there are any missing dependency packages and output a message. Please run bash install_tools.sh to install all missing packages. You will need an internet connection to install the missing dependencies.
The /home directory requires at least 200MB of space, or the installation will fail with an out-of-disk message. This space is needed only temporarily during installation; the files will be moved to the root directory once installation is complete.
The /tmp directory requires at least 1GB of space, or the installation will fail with an out-of-disk message.
The directory in which you choose to install TigerGraph requires at least 20GB of space; otherwise, the installation will report an error and exit.
If your firewall blocks all ports not defined for use, we recommend opening up internal ports 1000-50000.
If you are using a cloud instance, you will need to configure the firewall rules through the respective console, e.g., Amazon AWS or Microsoft Azure.
If you are managing a local machine, you can manage your open ports using the iptables command. Please refer to the example below to help with your firewall configuration.
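For example, a rule like the following (a sketch; the port range matches the 1000-50000 recommendation above, and the source subnet is a placeholder for your cluster's internal network) accepts internal TCP traffic among cluster nodes:

$ sudo iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 1000:50000 -j ACCEPT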
As of v3.0, you can run the installer with the -F flag, which will open TCP ports among the cluster nodes. This resolves firewall issues that may block the installation from completing.
To better help you understand the flow of a query within the TigerGraph system, we've provided the diagram below with arrows showing the direction of information flow. We'll walk through the execution of a typical query to show you how to observe the information flow as recorded in the log files.
From calling a query to returning the result, here is how the information flows:
1. Nginx receives the request.
2. Nginx sends the request to Restpp.
3. Restpp sends an ID translation task to GSE and a query request to GPE.
4. GSE sends the translated ID to GPE, and GPE starts to process the query.
5. GPE sends the query result to Restpp, and sends a translation task to GSE, which then sends the translation result to Restpp.
6. Restpp sends the result back to Nginx.
7. Nginx sends the response.
Multiple situations can lead to slower than expected query performance:
Insufficient Memory
When a query begins to use too much memory, the engine will start to put data onto the disk, and memory swapping will also kick in. Use the Linux command free -g to check available memory and swap status. To combat this, you can either optimize the data structure used within the query or increase the physical memory size on the machine.
GSQL Logic: Usually, a single server machine can process up to 20 million edges per second. If the actual rate of vertices or edges processed is much lower, most of the time it is due to inefficient query logic: the query logic is not following the natural execution flow of GSQL. You will need to optimize your query to tune the performance.
Disk IO: When the query writes the result to the local disk, the disk IO may be the bottleneck for the query's performance. Disk performance can be checked with the Linux command sar 1 10.
If you are writing (PRINT) one line at a time and there are many lines, storing the data in one data structure before printing may improve the query performance.
Huge JSON Response: If the JSON response size of a query is too massive, it may take longer to compose and transfer the JSON result than to actually traverse the graph. To see if this is the cause, check the GPE log.INFO file. If the query execution has already completed in GPE but the result has not been returned, and CPU usage is at about 200%, this is the most probable cause. If possible, please reduce the size of the JSON being printed.
Memory Leak: This is a very rare issue. The query will progressively become slower and slower, while GPE's memory usage increases over time. If you experience these symptoms on your system, please report this to the TigerGraph team.
Network Issues: When there are network issues during communication between servers, the query can be slowed down drastically. To identify this issue, check the CPU usage of your system along with the GPE log.INFO file. If the CPU usage stays at a very low level and GPE keeps printing ???, this means network IO is very high.
Frequent Data Ingestion in Small Batches: Small batches of data can increase the data loading overhead and query processing workload. Please increase the batch size to prevent this issue.
When a query hangs, or seems to run forever, it can be attributed to these possibilities:
Services are down
Please check that TigerGraph services are online and running. Run gadmin status and check the logs for any issues that you find from the status check.
Query infinite loop
To verify this is the issue, check the GPE log.INFO file to see if graph iteration log lines are continuing to be produced. If they are, and the edgeMaps log the same number of edges every few iterations, you have an infinite loop in your query.
If this is the case, please restart GPE to stop the query: gadmin restart gpe -y. Then refine your query and make sure the loops within the query are able to exit.
Query is still running, it is just slow: If you have a very large graph, please be patient. Ensure that there is no infinite loop in your query, and refer to the slow query performance section for possible causes.
GraphStudio Error: If you are running the query from GraphStudio, the loading bar may continue spinning as if the query has not finished running. You can right-click the page and select Inspect -> Console (in the Google Chrome browser) and try to find any suspicious errors there.
If a query runs and does not return a result, it could be due to two reasons: 1. Data is not loaded. From the Load Data page in GraphStudio, you can check the number of loaded vertices and edges, as well as the number of each vertex or edge type. Please ensure that all the vertices and edges needed for the query are loaded.
2. Properties are not loaded. The number of vertices and edges traversed can be observed in the GPE log.INFO file. If for one of the iterations you see "activated 0 vertices", this means no target vertex satisfied your search condition. For example, the query may fail to pass a WHERE clause or a HAVING clause. If you see 0 vertex reduces while the edge map number is not 0, all edges have been filtered out by the WHERE clause, and no vertices have entered the POST-ACCUM phase. If you see more than 0 vertex reduces but "activated 0 vertices", all the vertices were filtered out by the HAVING clause.
To confirm the reasoning within the log file, use GraphStudio to pick a few vertices or edges that should have satisfied the conditions and check their attributes for any unexpected errors.
Query installation may fail for a handful of reasons. If a query fails to install, please check the GSQL log file. The default location for the GSQL log is here:
Go to the last error entry; it will point you to the cause. This will show any query errors that could be causing the failed installation. If you have created a user-defined function, you could potentially have a C++ compilation error.
If you have a C++ user-defined function error, your query will fail to install, even if it does not utilize the UDF.
The following example shows that the system's free memory is 69%:
Using GraphStudio, you can see, at a high level, a number of errors that may have occurred during loading. This is accessible from the Load Data page. Click on one of your data sources, then click on the second tab of the graph statistics chart. There, you will be able to see the status of the data source loading, the number of loaded lines, the number of missing lines, and the lines that may have an incorrect number of columns. (Refer to the picture below.)
If you see there are a number of issues from the GraphStudio Load Data page, you can dive deeper to find the cause of the issue by examining the log files. Check the loading log located here:
Open up the latest .log file and you will be able to see details about each data source. The picture below is an example of a correctly loaded data file.
Here is an example of a loading job with errors:
From this log entry, you can see the errors marked as lines with invalid attributes. The log will provide the line number in the data source that contains the loading error, along with the attribute it was attempting to load.
Normally, a single server running TigerGraph can load from 100K to 1M lines per second, or 100GB to 200GB of data per hour. This can be impacted by any of the following factors:
Loading Logic: How many vertices/edges are generated from each line loaded?
Data Format: Is the data formatted as JSON or CSV? Are multi-level delimiters in use? Does the loading job intensively use temp tables?
Hardware Configuration: Is the machine set up with HDD or SSD? How many CPU cores are available on this machine?
Network Issues: Is this machine doing local loading or remote POST loading? Are there any network connectivity issues?
Size of Files: How large are the files being loaded? Many small files may decrease the performance of the loading job.
High-Cardinality Values Loaded to a STRING COMPRESS Attribute: How diverse is the set of data being loaded to the STRING COMPRESS attribute?
To combat the issue of slow loading, there are also multiple methods:
If the computer has many cores, consider increasing the number of Restpp load handlers.
Separate ~/tigergraph/kafka from ~/tigergraph/gstore and store them on separate disks.
Do distributed loading.
Do offline batch loading.
Combine many small files into one larger file.
When a loading job seems to be stuck, here are things to check for:
GPE is DOWN
You can check the status of GPE with this command: gadmin status gpe
If GPE is down, you can find the relevant logs with this command: gadmin log -v gpe
Memory is full
Run this command to check memory usage on the system: free -g
Disk is full
Check disk usage on the system: df -lh
Kafka is DOWN
You can check the status of Kafka with this command: gadmin status kafka
If it is down, take a look at the log with this command: vim ~/tigergraph/log/kafka/KAFKA#1.out
Multiple Loading Jobs: By default, the Kafka loader is configured to allow a single loading job. If you execute multiple loading jobs at once, they will run sequentially.
If the loading job completes, but data is not loaded, there may be issues with the data source or your loading job. Here are things to check for:
Any invalid lines in the data source file. Check the log file for any errors. If an input value does not match the vertex or edge type, the corresponding vertex or edge will not be created.
Using quotes in the data file may cause interference with the tokenization of elements in the data file. Please check the GSQL Language Reference section under Other Optional LOAD Clauses. Look for the QUOTE parameter to see how you should set up your loading job.
Your loading job loads edges in the incorrect order. When you defined the graph schema, the FROM and TO vertex order affects the way you write the loading job. If you wrote the loading job in reverse order, the edges will not be created, possibly also affecting the population of vertices.
If you know what data you expect to see (number of vertices and edges, and attribute values), but the loaded data does not meet your expectations, there are a number of possible causes to investigate:
First, check the logs for important clues.
Are you reaching and reading all the data sources (paths and permissions)?
Is the data mapping correct?
Are your data fields correct? In particular, check data types. For strings, check for unwanted extra characters. Leading spaces are not removed unless you apply an optional token function to trim them.
Do you have duplicate IDs, resulting in the same vertex or edge being loaded more than once? Is this intended or unintended? TigerGraph's default loading semantics is UPSERT. Check the loading documentation to make sure you understand the semantics in detail:
https://docs.tigergraph.com/dev/gsql-ref/ddl-and-loading/creating-a-loading-job#cumulative-loading
Possible causes of a loading job failure are:
Loading job timed out: If a loading job hangs for 600 seconds, it will automatically time out.
Port Occupied: Loading jobs require port 8500. Please ensure that this port is open.
This section only covers debugging schema change jobs; for more information about schema changes, please read the Modifying a Graph Schema page.
Understanding what happens behind the scenes during a schema change:
DSC (Dynamic Schema Change) Drain - Stops the flow of traffic to RESTPP and GPE. If GPE receives a DRAIN command, it will wait 1 minute for existing running queries to finish. If the queries do not finish within this time, the DRAIN step will fail, causing the schema change to fail.
DSC Validation - Verification that no queries are still running.
DSC Apply - Actual step where the schema is being changed.
DSC Resume - Traffic resumes after schema change is completed. Resume will automatically happen if a schema change fails. RESTPP comes back online. All buffered query requests will go through after RESTPP resumes, and will use the new updated schema.
Schema changes are not recommended for production environments. Even if attributes are deleted, TigerGraph's engine will still scan all previous attributes. We recommend limiting schema changes to dev environments.
Schema changes are all or nothing. If a schema change fails in the middle, changes will not be made to the schema.
Failure when creating a graph
Global Schema Change Failure
Local Schema Change Failure
Dropping a graph fails
If GPE or RESTPP fail to start due to YAML error, please report this to TigerGraph.
If you encounter a failure, please take a look at the GSQL log file: gadmin log gsql. Please look for these error codes:
Error code 8 - The engine is not ready for the snapshot. Either the pre-check failed or the snapshot was stopped. The system is in a critical, non-auto-recoverable error state. Manual resolution is required. Please contact TigerGraph Support.
Error code 310 - The schema change job failed and the proposed change has not taken effect. This is the normal failure error code. Please see the next section for failure reasons.
Another schema change or a loading job is running. This will cause the schema change to fail right away.
GPE is busy. Potential reasons include:
Long running query.
Loading job is running.
Rebuild process is taking a long time.
Service is down. (RESTPP/GPE/GSE)
Cluster system clocks are not in sync. The schema change job will think the request is stale, causing that partition's schema change to fail.
Config error: If the system is shrunk manually, the schema change will fail.
You will need to check the logs in this order: GSQL log, admin_server log, service log.
Admin_server log files can be found here: ~/tigergraph/log/admin/. You will want to take a look at the INFO file.
Each service has its own log. Running gadmin log <service_name> will show you the location of these log files.
In this case, we see that RESTPP failed at the DRAIN stage. We need to first look at whether RESTPP services are all up. Then, verify that the time of each machine is the same. If all these are fine, we need to look at RESTPP log to see why it fails. Again, use the "DSC" keyword to navigate the log.
To check the status of GSE and all other processes, run gadmin status to show the status of key TigerGraph processes. As with all other processes, you can find the log file locations for GSE with the gadmin log command. Refer to the Location of Log Files section for more information about which files to check.
If the GSE process fails to start, it is usually attributed to a license issue. Please check these factors:
License Expiration: Run gadmin status license to see the expiration date of your license.
Single-Node License on a Cluster: If you are on a TigerGraph cluster but using a license key intended for a single machine, this will cause issues. Please check with your point of contact to see which license type you have.
Graph Size Exceeds License Limit: Two cases may apply here. The first is that you have multiple graphs but your license only allows a single graph. The second is that your graph size exceeds the memory size that was agreed upon for the license. Please check with your point of contact to verify this information.
Usually in this state, GSE is warming up. This process can take quite some time depending on the size of your graph.
Very rarely, this will be a ZeroMQ issue. Restarting TigerGraph should resolve it: gadmin restart -y
GSE crashes are likely due to an Out Of Memory (OOM) issue. Use the dmesg -T command to check for any errors.
If GSE crashes, and there are no reports of OOM, please reach out to TigerGraph support.
If your system has unexpectedly high memory usage, here are possible causes:
Length of ID strings is too long: GSE will automatically deny IDs longer than 16K. Memory issues could also arise if an ID string is too long (> 500 characters). One proposed solution is to hash the string.
Too Many Vertex Types: Check the number of unique vertex types in your graph schema. If your graph schema requires more than 200 unique vertex types, please contact TigerGraph support.
If your browser crashes or freezes (shown below), please refresh your browser.
If you suspect GraphStudio has crashed, first check gadmin status to verify all the components are in good shape. Two known causes of GraphStudio crashes are:
Huge JSON Response: User-written queries often return very large JSON responses. There is a JSON size limiter, but this could still potentially cause an issue. This issue can be mitigated by editing the maximum response size in this file:
Very Dense Graph Visualization: On the Explore Graph page, the "Show All Paths" query on a very dense graph is known to cause a crash.
To find the location of GraphStudio log files, use this command: gadmin log vis
Enabling GraphStudio DEBUG mode will print more information to the log files. To enable DEBUG mode, edit the following configuration entry:
After editing the file, run gadmin restart vis -y to restart the GraphStudio service. Follow the log file to see what is happening: tail -f /home/tigergraph/tigergraph/log/gui/GUI#1.out
Repeat the error inducing operations in GraphStudio and view the logs.
There is a list of known GraphStudio issues here.
If after taking these actions you cannot solve the issue, please reach out to support@tigergraph.com to request assistance.
Use the following command:
$ gsql --version
To see the version numbers of individual components of the platform:
$ gadmin version
Different servers are needed for different purposes, but the TigerGraph platform should automatically turn services on and off as needed. Please be sure that the Dictionary (dict) server is on when using the TigerGraph system:
To check the status of servers:
$ gadmin status
Yes. For the GSQL shell and language, first enter the shell (type gsql from an operating system prompt). Then type the help command, e.g.,
HELP
This gives you a short list of commands. Note that "help" itself is one of the listed commands; there are help options to get more details about BASIC and QUERY commands. For example,
HELP QUERY
For help with gadmin commands, type:
$ gadmin help
User-defined identifiers are case-sensitive. For example, the names User and user are different. The GSQL language keywords (e.g., CREATE, LOAD, VERTEX) are not case-sensitive, but in our documentation examples, we generally show keywords in ALL CAPS to make them easy to distinguish.
An identifier consists of letters, digits, and the underscore. Identifiers may not begin with a digit. Identifiers are case sensitive. Special naming rules apply to accumulators (see the Query section).
Yes. You can create a text file containing a sequence of GSQL commands and then execute that file. To execute from outside the shell:
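For example, assuming a command file named setup.gsql:

gsql setup.gsql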
To execute the command file from within the shell:
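Within the shell, the @ prefix runs a command file (same hypothetical file name):

GSQL> @setup.gsql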
Yes. Normally, an end-of-line character triggers execution of a line. You can use the BEGIN and END keywords to mark off a multi-line block of text that should not be executed until END is encountered.
This is an example of a loading statement split into multiple lines using BEGIN and END:
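A minimal sketch (the file, vertex, and edge names here are hypothetical):

BEGIN
LOAD "books.csv"
   TO VERTEX Book VALUES ($0, $1),
   TO EDGE book_genre VALUES ($0, $2)
   USING SEPARATOR=",", QUOTE="double"
END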
When a license limit has been reached, your system will be placed in a read-only mode, incapable of loading any more data. You will still be able to delete data and view the graph.
Alternatively, a generic CREATE GRAPH statement can be used:
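For example, the following creates a graph (with a hypothetical name g1) that includes all defined vertex and edge types:

GSQL> CREATE GRAPH g1(*)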
Property graphs can model data fields ("properties") as either a property of a vertex or edge or as a vertex linked to other vertices. If your property relates to an edge, it should be an attribute of that edge (for example, a Date attribute of a CustomerBoughtProduct edge). If your property relates to a vertex, you have a choice. The optimal choice depends on how you will typically use this attribute in your application. If you will frequently search or filter based on that data, we suggest you treat it as a separate vertex type. Otherwise, we recommend modeling this data as an attribute of the principal vertex.
Discontinued Feature
The UINT_SET and STRING_SET COMPRESS types have been discontinued, since there is now equivalent functionality from the more general SET<UINT> and SET<STRING> types.
The TigerGraph MultiGraph service, an add-on option, supports logical partitions of one unified global graph. Each partition is treated as an independent local graph, with its own set of user privileges. Local graphs can overlap, to create a shared data space.
For performance reasons, we recommend keeping the number of different vertex and edge types under 5,000. The upper limit for the number of different vertex and edge types is approximately 10,000, depending on the complexity of the types.
From within the GSQL Shell, the ls command lists the catalog: the vertex type, edge type, and graph type definitions, job definitions, query definitions, and some system configuration settings. If you have not set your active graph, then ls will show only items which have global scope. To see graph-specific items (including loading jobs and queries), you must define an active graph.
To delete your entire catalog, containing not just your vertex, edge, and graph type definitions, but also your loading job and query definitions, use the following command:
GSQL> DROP ALL
To delete just your graph schema, use the DROP GRAPH command:
GSQL> DROP GRAPH g1
Note: Deleting the graph schema also erases the contents of the graph store. To erase the graph store without deleting the graph schema, use the following command:
GSQL> CLEAR GRAPH STORE
In v2.0, TigerGraph introduced a more powerful and comprehensive syntax which has several advantages:
The TigerGraph platform can handle concurrent loading jobs, which can greatly increase throughput.
The data file locations can be specified at compile time or at run time. Run-time settings override compile-time settings.
A loading job definition can include several input files. When running the job, the user can choose to run only part of the job by specifying only some of the input files.
Loading jobs can be monitored, aborted, and restarted.
The GSQL data loader reads text files organized in tabular or JSON format. Each field may represent numeric, boolean, string, or binary data. Each data field may contain a single value or a list of values (see "How do I split a data field containing a list of values into separate vertices and edges?").
The loader does not filter out extra white space (spaces or tabs). The user should filter out extra white space from the files before loading into the TigerGraph system.
The data field (or token ) separator can be any single ASCII character, including one of the non-printing characters. The separator is specified with the SEPARATOR phrase in the USING clause. For example, to specify the semicolon as the separator:
USING SEPARATOR=";"
To specify the tab character, use \t. To specify any ASCII character, use \nn, where nn is the character's ASCII code in decimal. For example, to specify ASCII 30, the Record Separator (RS):
USING SEPARATOR="\30"
TigerGraph does not require fields to be enclosed in quotation marks, but it is recommended for string fields. If the QUOTE option is enabled and the loader finds a pair of quotation marks, the loader treats the text within the quotation marks as one value, regardless of any separator characters that may occur within the value. The user must specify whether strings are marked by single or double quotation marks.
USING QUOTE="single"
or
USING QUOTE="double"
For example, if SEPARATOR="," and QUOTE="double" are set, then "Lee, Tom" will be read as a single field; the comma between Lee and Tom will not separate the field.
No. You must specify either QUOTE="single" or QUOTE="double".
The following three parameters should be considered for every loading job from a tabular input file:
The next two parameters, FILENAME and EOL, are required if the job is an ONLINE_POST job:
All five of these parameters are combined into one USING clause with a list of parameter/value pairs. The parameters may appear in any order.
The location of the USING clause depends on whether the job is an offline loading job or an online loading job. For offline loading, the USING clause appears at the end of the LOAD statement. For example:
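A sketch of a LOAD statement with a trailing USING clause (the file and edge type names are hypothetical):

LOAD "ratings.csv" TO EDGE user_book_rating VALUES ($0, $1, $2, $3)
   USING SEPARATOR=",", HEADER="true", QUOTE="double";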
For online loading, the USING clause appears at the end of the RUN statement.
You can define a header line (a sequence of column names) within a loading job using a DEFINE HEADER statement, such as the following:
This statement must appear before the LOAD statement that wishes to use the header definition. Then, the LOAD statement must set the USER_DEFINED_HEADER parameter in the USING clause. A brief example is shown below:
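A sketch, using the book data from the flattening example later on this page (the header and file names are assumptions):

DEFINE HEADER h1 = "bookcode", "title", "genres";
LOAD "books.csv" TO VERTEX Book VALUES ($"bookcode", $"title")
   USING USER_DEFINED_HEADER="h1", SEPARATOR=",";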
Input data fields can always be referenced by position. They can also be referenced by name, if a header has been defined.
Position-based reference: The leftmost field is $0, the next one is $1, and so on.
Name-based reference: $"name", where name is one of the header column names.
For example, if the header is abc,def,ghi then the third field can be referred to as either $2 or $"ghi".
First, to clarify the task, consider a graph schema with two vertex types, Book and Genre, and one edge type, book_genre:
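A minimal schema sketch consistent with that description (the attribute names are assumptions):

CREATE VERTEX Book (PRIMARY_ID bookcode STRING, title STRING)
CREATE VERTEX Genre (PRIMARY_ID genre_name STRING)
CREATE UNDIRECTED EDGE book_genre (FROM Book, TO Genre)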
Further, each row of the input data file contains three fields: bookcode, title, and genres, where genres is a list of strings associated with the book. For example, the first few lines of the data file could be the following:
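For illustration, a hypothetical sample consistent with the description below (7 genre entries in total, with "fiction" appearing twice):

101,"Harry Potter and the Philosopher's Stone","fiction,adventure,fantasy,young adult"
102,"The Da Vinci Code","fiction,mystery,thriller"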
The data line for bookcode 101 should generate one Book instance ("Harry Potter and the Philosopher's Stone"), four Genre instances ("fiction", "adventure", "fantasy", "young adult"), and four book_genre instances, connecting the Book instance to each of the Genre instances. This process of creating multiple instances from a list field (e.g., the genres field) is called flattening.
To flatten the data, we use a two-step load. The first LOAD statement uses the flatten() function to split the multi-value field and stores the results in a TEMP_TABLE. The second LOAD statement takes the TEMP_TABLE contents and writes them to the final edge type.
The flatten function has three arguments: (field_to_split, separator, number_of_parts_in_one_field). In this example, we want to split $2 (genres), the separator is the comma, and each field has only 1 part. So, the flatten function is called with the following arguments: flatten($2, ",", 1).
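A sketch of the two-step load (the file name and temp-table column labels are assumptions):

LOAD "books.csv"
   TO VERTEX Book VALUES ($0, $1),
   TO TEMP_TABLE t1 (bookcode, genre) VALUES ($0, flatten($2, ",", 1))
   USING SEPARATOR=",", QUOTE="double";
LOAD TEMP_TABLE t1
   TO VERTEX Genre VALUES ($"genre"),
   TO EDGE book_genre VALUES ($"bookcode", $"genre");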
Using the example data file, TEMP_TABLE t1 will then contain the following:
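Using the hypothetical sample data above:

bookcode | genre
101      | fiction
101      | adventure
101      | fantasy
101      | young adult
102      | fiction
102      | mystery
102      | thriller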
The second LOAD statement uses TEMP_TABLE t1 to generate Genre vertex instances and book_genre edge instances. While there are 7 rows in the sample TEMP_TABLE, only 6 Genre vertices will be generated, because there are only 6 unique values; "fiction" appears twice. Seven book_genre edges will be generated, one for each row in the TEMP_TABLE.
There is another version of the flatten function which has four arguments and which supports a two-level grouping. That is, the field contains a list of groups, each group composed of N subfields. The arguments are (field_to_split, group_separator, sub_field_separator, number_of_parts_in_one_group). For example, suppose the data line were organized this way instead:
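For example (a hypothetical line where the group separator is "|" and the subfield separator is ":", each group holding a genre and a rank):

101,"Harry Potter and the Philosopher's Stone","fiction:1|adventure:2|fantasy:3"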
Then the following loading statements would be appropriate:
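A sketch, continuing the hypothetical format above (each group splits into 2 parts, so the temp table gains two columns):

LOAD "books.csv"
   TO TEMP_TABLE t2 (bookcode, genre, rank) VALUES ($0, flatten($2, "|", ":", 2))
   USING SEPARATOR=",", QUOTE="double";
LOAD TEMP_TABLE t2
   TO VERTEX Genre VALUES ($"genre"),
   TO EDGE book_genre VALUES ($"bookcode", $"genre");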
No. One of the advantages of the TigerGraph loading system is the flexible relationship between input files and resulting vertex and edge instances. In general, there is a many-to-many relationship: one input file can generate many vertex and edge types.
From the LOAD statement perspective for an online loading job:
Each LOAD statement refers to one input file.
Each LOAD statement can have one or more resulting vertex types and one or more resulting edge types.
Hence, one LOAD statement can potentially describe the one-to-many mapping from one input file to many resulting vertex and edge types.
It is not necessary for every input line to always generate the same set of vertex types and edge types. The WHERE clause in each TO VERTEX | TO EDGE clause can be used to selectively choose and filter which input lines generate which resulting types.
This is not an error. There can be only one instance of a given edge type between any given pair of vertices, so the most recently loaded edge data will be the edge that you see in the graph.
You can modify the schema in several ways:
Add new vertex or edge types
Drop existing vertex or edge types
Add or drop attributes from an existing vertex or edge type
Any schema change can invalidate existing loading jobs and queries.
-HARD must be in all capital letters.
The GSQL Query Language supports powerful graph querying, but it is also designed to perform powerful computations. GSQL is Turing-complete, so it can be considered a programming language. It can be used for simple SQL-like queries, but it also features control flow (IF, WHILE, FOREACH), procedural calls, local and global variables, complex data types, and accumulators to enable much more sophisticated use.
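A small sketch showing an accumulator and a SELECT block together (the graph, vertex, and edge type names are hypothetical):

CREATE QUERY friend_count(VERTEX<Person> seed) FOR GRAPH Social {
  SumAccum<INT> @@total;       // global accumulator
  Start = {seed};
  Result = SELECT t
           FROM Start:s -(Friend:e)-> Person:t
           ACCUM @@total += 1; // count each matched edge
  PRINT @@total;
}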
In the following table, baseType means any of the following: INT, UINT, FLOAT, DOUBLE, STRING, BOOL, VERTEX, EDGE, JSONARRAY, JSONOBJECT, DATETIME
Vertex and edge IDs (i.e., the unique identifier for each vertex or edge) are treated differently than user-defined attributes. Special keywords must be used to refer to the PRIMARY_ID, FROM, or TO id fields.
Vertices:
In a CREATE VERTEX statement, the PRIMARY_ID is required and is always listed first. User-defined attributes are optional and come after the required ID fields.
In a built-in query, if you wish to select vertices by specifying an attribute value, you use the attribute name (e.g., title):
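A hedged sketch of such a built-in REST call (the graph name, vertex type, and attribute value are hypothetical, and the exact URL encoding of string filters may vary by version):

curl -X GET "http://localhost:9000/graph/BookGraph/vertices/Book?filter=title=%22GSQL+Primer%22"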
In contrast, if you wish to reference vertices by the ID value, the lowercase keyword primary_id must be used. Note that the query does not use the ID name pid.
Edges:
In a CREATE EDGE statement, the FROM and TO vertex identifiers are required and are always listed first. The FROM and TO values should match the PRIMARY_ID values of a source vertex and a target vertex. In the example below, rating and date_time are user-defined optional attributes.
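A sketch of such a CREATE EDGE statement (the edge and vertex type names are assumptions; rating and date_time come from the text above):

CREATE DIRECTED EDGE user_book_rating (FROM User, TO Book, rating UINT, date_time DATETIME)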
In a query, if you wish to select edges by specifying their FROM or TO vertex values, you must use the lowercase keywords from_id or to_id.
Yes. The maximum output size for a query is 2GB. If the result of a query would be larger than 2GB, the system may return no data. No error message is returned.
Also, for built-in queries (using the Standard Data Manipulation REST API), queries return at most 10240 vertices or edges.
INSTALL QUERY query_name is required for each GSQL query, after its initial CREATE QUERY query_name statement and before using RUN QUERY query_name. After INSTALL QUERY has been executed, RUN QUERY can be used.
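A typical sequence, assuming a query named my_query has already been created:

GSQL> INSTALL QUERY my_query
GSQL> RUN QUERY my_query("param_value")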
Anytime after INSTALL QUERY, the statement INSTALL QUERY -OPTIMIZE can be executed once. This operation optimizes all previously installed queries, reducing their run times by about 20%.
Optimize a query if query run time is more important to you than query installation time. The initial INSTALL QUERY operation runs quickly. This is good for the development phase.
The optional additional operation INSTALL QUERY -OPTIMIZE will take more time, but it will speed up query run time. This makes sense for production systems.
Legal:
Illegal:
In short, yes. They will not be executed at the same time, but the installations will be queued by the order in which they were received.
ListAccum can contain ListAccum.
MapAccum and GroupByAccum can contain any container accumulator except HeapAccum.
ArrayAccum is always nested.
Here is an example:
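A sketch consistent with the nesting rules above (the graph name is hypothetical):

CREATE QUERY nested_accum_demo() FOR GRAPH Social {
  ListAccum<ListAccum<INT>> @@list_of_lists;     // ListAccum can contain ListAccum
  MapAccum<STRING, SetAccum<INT>> @@map_of_sets; // MapAccum can contain container accumulators
  @@list_of_lists += [1, 2, 3];                  // append one inner list
  @@map_of_sets += ("evens" -> 2);
  @@map_of_sets += ("evens" -> 4);
  PRINT @@list_of_lists, @@map_of_sets;
}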
To write a loading job, you must know the format of the input data files, so that you can describe to GSQL how to parse each data line and convert it into vertex and edge attributes. To validate a loading job, that is, to check that the actual input data meet your expectations, and that they produce the expected vertices and edges, you can use two features of the RUN JOB command: the -DRYRUN option and loading a specified range of data lines.
The full syntax for an (offline) loading job is the following:
RUN JOB [-DRYRUN] [-n [first_line_num,] last_line_num] job_name
The -DRYRUN option will read input files and process data as instructed by the job, but it does not store data in the graph store.
The -n option limits the loading job to processing only a range of lines of each input data file. The selected data will be stored in the graph store, so the user can check the results. The -n flag accepts one or two arguments. For example,
-n 50 means read lines 1 to 50.
-n 10,50 means read lines 10 to 50.
The special symbol $ is interpreted as "last line", so -n 10,$ means read from line 10 to the end.
The following command lists the locations of the log files:
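That command is gadmin log:

$ gadmin log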
If the platform has been installed with default file locations, so that <TigerGraph_root_dir> = /home/tigergraph/tigergraph, then the output would be the following:
As of v2.4, the GSQL log files have been moved in order to keep all logs in a standard directory.
GPE: general system performance logs. GSE: graph services logs. RESTPP: REST API call logs. GSQL: general GSQL logs.
Each loading run creates a log file, stored in the folder <TigerGraph_root_dir>/dev/gdk/gsql/output. The filename load_output.log is a link to the most recent log file. This file contains summary statistics on the number of lines read, the vertices created, and the various types of errors encountered. Alternatively, you can run the shell command gadmin log to find log paths.
The log files record detailed internal operations and state information in response to user actions. They provide vital information for diagnosing and debugging your system. All log files can be found in the /home/tigergraph/tigergraph/logs directory. Through typing the command gadmin log, you will be given all the file paths of the most commonly used log files.
GPE Logs - Graph Processing Engine logs
GSE Logs - Graph Storage Engine logs
GSQL Logs - System & query logs
RESTPP Logs - API call logs
NGINX Logs - HTTP request logs
VIS Logs - GraphStudio logs
One possible explanation is that you have reached a capacity limit controlled by your product license. To check if this is the case, run the command gadmin status. If the limit has been reached, there will be a warning message, such as the following:
In Limited Capacity mode, additional data may not be inserted. Data may be queried and deleted.
This page documents all the changes to the TigerGraph product, including new features and bug fixes.
Distributed Graph support and certain other enterprise-level features are available in the Enterprise Edition only. They do not pertain to the Developer Edition.
Release Date: 2020-11-11
Database Server
Audit Logging Enhancements
User information for all requests.
Request Status (request succeeded or failed) for all requests irrespective of access mode
Remove Hard timeout limit for Backup/Restore operations
Database Server
Platform: Resolve the issue where Kafka start-up hangs in certain OS and shell environments.
Platform: Backup/Restore hangs if there are too many files
Platform: Backup/Restore list error when backup files on S3 are corrupted
Engine: Built-in query running in the background blocks schema change
GSQL: Fix for SSL certificate exception
Release Date: 2021-01-15
Database Server
GSE/GPE segment consistency check utility
Integration with GSE/GPE consistency check utility with Backup/Restore
Increase refresh timeout for RESTPP from 20 to 60 seconds
Database Server
GSE replica synchronization for Zookeeper errors
Explicitly check replica follower status before automatic promotion to leader is allowed
RESTPP fix - memory leaks caused by timed out queries
Backup/Restore: Ensure GPE and GSE snapshots are done in correct order
Release Date: 2020-11-02
Database Server
Allow RESTPP to manage log files based on timestamp
Upgrade NGINX to 1.18 version
Correct status code to indicate GSQL operation result
Remove Hard timeout limit for Backup/Restore operations
Token Management Improvements:
Improve GSQL stability by setting a limit on number of tokens allowed
Logging improvement to indicate new and refreshed tokens separately
Database Server
Core: GSE follower replicas lag leader replica on the data updates
Core: Shuffle abort causing GPE crash
Core: Handle un-released lock gracefully during json print command failure
Core: Incremental Snapshot triggers creation of all segments causing delays
Core: Kafka loading fails when the number of loaders exceeds 10
GSQL: Query Install fails for batch installs
Backup/Restore hangs if there are too many files
Release Date: 2020-09-05
Database Server
Longer timeout for retrieving enum maps when using STRING COMPRESS
Socket timeout adjustment to improve RESTPP stability
Implement SetAccum<vertex> as bitset
Semantic check for println of File object for compiled query
Installer improvements
Enhancement to change the user and group separately.
Check permission of parent dir of App/Temp/Data/Log Roots
TigerGraph 2.x to 3.x Migration tool enhancements
Support for copying UDFs and other functions during migration
Enhanced license support for Cloud deployments
Enhanced upgrade version checking
Zookeeper client connection retry mechanism to avoid Zookeeper operation failures
Installer Configuration JSON format
Install Configuration is separated into basic configuration and advanced configuration sections
Support for allowing replication factor to be set during installation as opposed to limited HA on/off setting previously
Database Server
Core: GPE down during Backup for large number of files
Core: GPE will crash if the data comes from a machine without relevant metadata.
Core: Query failure due to string overflow
Core: Query with large UDF job didn't stop for configured time out setting
Platform: Kafka loading bug when number of loaders exceeds 10
Platform: Backup hangs when there is a very large number of files in the Graph Store
Platform: Backup reports successful operation even if it's actually incomplete
Platform: gadmin reset does not reset all files
GSQL: V2 syntax removes edge type that is excluded by Accum clause.
GSQL: Force query install should regenerate the endpoints
GSQL: Loading Job failed with SSL enabled
GSQL: Query installation performance issue for V2 syntax
GSQL: ArrayAccum value is not accessible in the ACCUM block when query is installed in distributed mode.
GSQL: Dictionary fails when there are too many tokens
GSQL: Query installation fails due to schema change
GSQL: gsql_client strips out newlines when writing gsql queries by pasting into gsql shell
GraphStudio
Apply previous visualization result should handle empty saved schema
Displaying attribute for raw type in visualization should not use JSON stringify
Remove clear text user password in error log for migration from RDBMS to Graph
Release Date: 2020-06-30
Support for reload libudf command
Schema validation before apply settings
Relax Developer Edition restrictions
YAML parsing support for edge pairs
Support SPLIT for UDT loading, Load From/To Type from File
Data generator 2.0
Change log level via SIGUSR1, avoid unnecessary error logs
Restpp self-report status
Allow users to remove data for reinstallation
Upgrade kafka to 2.3.0
Path pattern optimization with pattern flipping and PER clause
Combine service status and processState into one log event
Support validation of entry value during gadmin config set command
Add strong check for symlinks
Support to_datetime builtin function in expressions
Support string set filter for edge and target vertex
Support local vertex and edge with same name in multiple graphs
Index hint for interpret mode
Support string compress attributes in built-in Query filters
Enable jemalloc profiling
Utility function to get disk free percentage
Allow concurrent user query access during Query Installation
Support multiple-pair edge type
Schema change job for add/drop attribute index
Improved clear graph warning
New layout for logo and multiple graphs
Allow user edit header for sample data
Support multiple files upload
Cancel autofit for adding vertex and double click actions
Cancel auto login if user has logged out
Save JSON format of query result to local storage
Create Edge Type from Multiple Vertex Types to Multiple Vertex Types
Add on-demand heap profiling for jemalloc
Delete legacy ids data
Periodically force Jemalloc release memory to OS / on demand profiling
Change debug log in convertids into verbose
Print warning but no assert in ZMQ
Wrong JSON format for tempTables
Fix wrong check for loading job completion
Allow interpret query to recognize html encoded string constant
Handle logical type in json converter
Corrected URL decode for whitespace character
Add time before delete edges command to ensure rebuild has enough time to complete
Fix remove session bug for the aborted handler after 'ctrl + c'
Synchronize concurrent install queries
Change logic to check service status for cluster mode
Support the '=' operator for SumAccum
Drop vertex/edge/graph when a local and a global vertex/edge have the same name
Support removing a SetAccum from another SetAccum
Remove the reversed edge too when removing an edge
Cannot create query due to overflow of the HeapAccum size
Query referred to as a subquery from an interpreted-mode query cannot be dropped
Index out of bounds when ignoring parameter checking for an interpreted query
Output error message for invalid job id
Fix codegen to insert a vertex/edge without attributes
Support file regexp in checking header of filename
Support case-insensitive TRUE values for the keywords header and transaction in loading jobs
Dedupe proxy user's own roles from groups
Make schema change metadata modification a transaction
Fix builtin k_step expansion query bug
Check disk space before exporting each vertex/edge type
Allowed non-English string constants in interpreted queries
Edge variable prints attribute by default
Print developer information only in gadmin status
Restrict symlinks and check their existence
Fix error message for new secret creation
Refactor keywords
Do not emit explorer config if saved exploration doesn't have it
Check for Valid date time
Extend wait time for progress bar finish
Add right border for side navigation
Upgrade color-picker
Fix check accumulator format
Fix percentage of performing schema change
Run interpreted query through websocket
Release Date: 2020-08-21
Improved handling of query time outs for distributed queries.
Longer timeout for retrieving large memory map for attributes of STRING COMPRESS data type with large number of distinct values.
Backup jobs report incorrect successful runs
Incorrect type check logic for trim function;
Release Date: 2020-08-14
Improvements to GSE Upsert performance
Add User Id information to RESTPP logs for all user initiated calls
Improvements to Query Installation performance time
Provide warning message when revoking a role from proxy user if needed
Core: GPE crash on unknown vertex / segment
Core: PostWriter needs to skip vertices if the internal vertex ID is invalid.
Core: Handle exception in ResponseThread of RemoteTopology
Core: Query re-installation issue caused by non-deterministic transformation
Core: Address Data Loading speed for hub loading
Core: Inconsistent result with and without using local accumulators
Core: RestPP payload scale issue due to 3rd party FCGI library
GSQL: GSQL pattern match - translation error when vertex type is the keyword "ANY"
GSQL: Issue with reduce function with Bitwise OR operator in the LOAD functions
GSQL: gsql_client strips out newlines when writing gsql queries by pasting into gsql shell
GSQL: Secrets and token associated with a graph and not removed during graph delete
GraphStudio: Displaying attribute for raw type in visualization should not use JSON stringify method
Release Date: 2020-06-12
Allow concurrent user query access during Query Installation
GPE & GSE Data Sync Check Utility
Use of POST for /requesttoken API so that user password is not exposed
Write Performance improvements
Error handling and reporting improvements for Query Timeout and Failures
UX improvement for ‘Clear Graph’ command in GraphStudio
Ensure cleanup and compaction of delta records in a large transaction even in the event of TigerGraph service restart
Performance improvement to make Graph Updates faster by parallelizing and sharing transaction
Fix for the leftover Shuffle threads after Query Abort/Timeout
Change in the error message of AbortQuery request inside the Shuffle Operator
Bug fixes for GSE compaction feature to address exporting with mixed segments of data and load data from the database in worker mode
Fix for GSE crash triggered by schema change
Enable background thread on JEMALLOC for memory cleanup even when system is idle
/showprocesslist and /abortquery APIs do not list the running queries of old worker if RESTPP is refreshed
S3 loader header check doesn't apply file filter regex
GSQL V2 syntax does not handle ACCUM operator correctly
Fix for RESTPP timeout error
Release Date: 2020-04-24
Remove SSH connection dependency for the GSQL Install Query command
New 'force' parameter for RebuildNow so that the engine starts the rebuild immediately.
Core: GSE crash in HA setup when CPU usage is extremely high
Core: Out-of-memory handling improvements to prevent GPE crash due to a bad memory allocation call
GLE: fix builtin query crash in worker due to graph id missing
Core: Skewed CPU usage for high-query throughput scenarios
Fixes in Rebuild to address broken edge count
Fix for 2.5.2 bug - Inconsistent query results when running non-distributed query on a cluster
Unable to find local vertex and edge with same name in multiple graphs
RESTPP memory leak due to yaml file
Reverse edge id is wrong when two local edges with reverse edge are created with same name
Release Date: 2020-04-24
New 'force' parameter for RebuildNow so that the engine starts the rebuild immediately.
Improved version of /abortquery so that query can be aborted more quickly
Fixes in Rebuild to address broken edge count
RESTPP memory leak due to yaml file
Builtin query crashed due to missing Graph Id
RESTPP crash for same vertex name in the global graph
Resolved the distributed query hanging issue which could block rebuild and schema change
Core: Skewed CPU usage for high-query throughput scenarios
Release Date: 2020-02-26
Ensure catalog data backed up before schema change
Support creation of two local edges with same name with one being a reverse edge
Support local vertex and edge types with the same name in multiple graphs
Support for multilingual string constants in Interpreted query mode
Upgrade to Release 2.5.2 leads to inconsistent query results
Compute resource usage spikes on particular node in cluster
GCleanUp failed to cleanup all pointers when adjusting thread
Release Date: 2020-01-27
TigerGraph 2.5.2 is not compatible with versions prior to 2.5.1. Customers who are using a pre-2.5.1 version and intend to migrate to 2.5.2 are advised to back up their existing version before upgrading to 2.5.2. This will enable them to downgrade back to the original pre-2.5.1 version if needed.
GPE: Increase MemoryCheck frequency based on Resource Usage
GPE: Abort Query if Memory usage crosses critical threshold
GSE: Support Log compaction as part of startup for GSE
GraphStudio: Support Multi-edge pair in design schema.
Core: Support OS RHEL 8.0 in Installer
REST: Increase the RESTPP reload timeout
GSQL: Change error message to specify user when default tigergraph user is dropped
GSQL: Make user tigergraph droppable
GraphStudio: Do not change layout when adding/updating/deleting vertex and edge
Core: GPE crashed running distributed LDBC query
GST: Incorrect vertex count in TigerGraph GraphStudio
Core: Shuffle deadlock causing full system memory use
Core: Replace GASSERT with GWARN in GDataBox
Core: BATCH_SIZE of Kafka loader set from GSQL console doesn't work
GPE: Schema Change failed due to Query Install OOM
GSQL: Quote in string key is not escaped
GraphStudio: Reverse edge filter doesn't work
Core: Don't display LDAP password in IUM
Release Date: 2019-11-25
Core: Distributed delete affects data consistency after GPE restart
Core: Shuffle hangs when sendingQueue is full
Core: Longevity test failing due to change in memory allocator (TCMalloc)
GPE: Crash after upgrade from 2.4.1 to 2.5
GPE: Serialization error when reading from input stream
GPE: Query state can result in race condition inside ReadOneDelta
GPE: GPE crashes when a query calls a sub-query with a write operation
GSE: Script to resolve delete inconsistency between GSE and GPE
GSE: Multiple Kafka loading jobs fail
GSQL: Built-in function names in GSQL are case sensitive
GSQL: Interpret query doesn't work when authentication is on
GSQL: Deadlock when graph store is being cleared and authentication is on
GSQL: Token authentication returning null during Global schema change
GSQL: SSO login failure due to missing org.apache.santuario:xmlsec library
GraphStudio: Vertex to edge expansion settings are not retained
GBAR Backup: Backup failure if loading jobs are in progress
Release Date: 2019-09-18
Improvements to fix possible crash, deadlock, overflow, and memory leak situations
Improve query performance stability
Fix some query string passing and parsing issues
Correct some inconsistencies between the documented specification and actual behavior
Improve robustness of Kafka and S3 Loaders
Clean up files and graph properly after certain failed operations
Fix some installation issues
Release Date: 2019-07-23
To select pattern matching support in a query, the syntax is now CREATE QUERY ... SYNTAX v2 instead of CREATE QUERY ... SYNTAX("v2").
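For example, a v2 pattern-match query now begins like this (a minimal sketch; the graph, vertex, and edge type names are illustrative):

    CREATE QUERY friends_of(VERTEX<Person> p) FOR GRAPH Social SYNTAX v2 {
      Start = {p};
      Result = SELECT t
               FROM Start:s -(Knows)- Person:t;   // v2 pattern syntax
      PRINT Result;
    }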
GPE: Fix uint32 overflow
Loader: Allow temp_table to be used without flatten function
IDS: Disable empty UID
ZMQ: Fix crash on ill-formed message
Util: Fix Unix domain socket file not generated correctly in cron job
Util: Extend data size for GoutputStreamBuffer beyond 4GB
Connector: Fix first line is not ignored with has_header enabled
Connector: Fix failures on retrieving connector status
GSQL: Fix syntax version setting inconsistency issues
GSQL: Fix schema change with USING primary_id_as_attribute
GSQL: Fix JSON output format of requesttoken API
Admin Portal: Display correct counts of physical vertices and edges on each machine
Release Date: 2019-06-25
GSQL: The built-in count() function gives the correct value in all cases.
GPE: startup hang
GSQL server start/stop command not working
LDAP config truncated by space
GSE: boolean values are not displayed correctly
Security issue CVE-2013-7459 caused by unused python crypto library
IUM status is displayed incorrectly in some cases
Release Date: 2019-04-01
GSQL: The built-in count() function may give the incorrect value for clustered systems after some vertices have been deleted.
GraphStudio: Send query pre-install dependency analysis result through WebSocket
GraphStudio: filter out improper attributes when building filter expressions
GPE: fix wrong enumerator id issue
GPE: avoid using /tmp
GPE: handle exceptions for LIKE <expr>
GPE: Fix crash due to writing wrong size of STRING_LIST
GPE: Fix global schema change error which added local vertex twice
GSE (Developer Edition): Keep one copy of segment
Release Date: 2019-02-19
GSQL: The built-in count() function may give the incorrect value for clustered systems after some vertices have been deleted.
Install: The IP list fetched by the installer could be incomplete.
Loading: Speed up batch-delta loading.
GraphStudio: Disable Install Query button for queryreader users.
GraphStudio: Re-initialize the database after import.
GraphStudio: Could not drop query with non-default username/password.
AdminPortal: Queries-Per-Second display didn't work if RESTPP authorization was enabled.
Schema change: Improve schema change stability by reducing schema change history and increasing gRPC max message limit.
GPE: Improve query HA stability.
GPE: Fix crash under certain conditions.
Core: Memory leak due to yamlcpp.
Core: compatibility issue between libc and ssh utility.
IUM: Fix exceptions due to legacy config entries.
Release Date: 2018-12-13
Distributed System: Fix possible deadlock and race conditions
GSE Storage Engine: Fix disk seek overflow
RESTPP: Optimize the memory consumption when system is idle
RESTPP: Optimize config reload time
GSQL: Fix query installation error with option -optimize
GSQL: Fix a code generation bug related to static variable
GSQL: Fix a compilation error when a statement is in nested if statement
GraphStudio: Security update for npm-run-all
GraphStudio: Change Help button to point to new docs.tigergraph.com site
Gadmin: Fix gadmin/ts3 restart and status error after changing port of TS3
Release Date: 2018-11-30
GraphStudio: Fix schema change bug (Note: In 2.2, GraphStudio now does not drop all data when making a schema change.)
GraphStudio: Fix display issue in Graph Explore when switch to a new graph
GraphStudio: Improve password security
GraphStudio: Modify URL to AdminPortal for better universal support
IUM: Fix kafka-loader configuration after cluster expansion
IUM: Resolve python module name conflict
IUM: Fix ssh_port is always 1 under bash interactive mode
GSE Storage Engine: Reduce memory consumption
RESTPP: Improve logging messages
Release Date: 2018-11-05
GraphStudio: When both a query draft and an installed query exist, Export Solution will keep the installed query code instead of the query draft
Admin Portal: Number of nodes in the cluster is reported as 0 when no graph yet exists
Release Date: 2018-11-05
GBAR Backup fails if HA is enabled
GSE status shows unknown with HA enabled
TS3 fails to collect QPS when RESTPP Authentication is enabled (Admin Portal QPS monitor will be unavailable in this case).
GraphStudio: When both a query draft and an installed query exist, Export Solution will keep the installed query code instead of the query draft.
Admin Portal: Number of cluster nodes is reported as 0 when no graph exists.
GSQL server error if schema is too large
In a cluster, not all servers may be aware of deleted vertices.
PAM limit set-up issue in installer
In MultiGraph, a local (FROM *, TO *) edge has global side effects.
RESTPP's default API version is not set after installation
An engine bug which occasionally causes crash
SSH port configuration in installer.
Installation script checks that the machine meets the minimum RAM (8GB) and CPU (2-core) requirements.
For Ubuntu 16.04/18.04, support logon with systemd service.
Release Date: 2018-08-20
GBAR backup fails if HA is enabled.
TS3 fails to collect QPS when RESTPP Authentication is enabled (Admin Portal QPS monitor will be unavailable in this case).
GraphStudio: When both a query draft and an installed query exist, Export Solution will keep the installed query code instead of the query draft.
Admin Portal: Number of cluster nodes is reported as 0 when no graph exists.
Cluster configuration with HA enabled is wrong if the number of nodes is odd (3, 5, 7, 9...).
GraphStudio and GSQL inconsistent checking for some keywords
GBAR backup and restore fail if special character is in tag name
Release Date: 2018-08-15
Cluster configuration with HA enabled is wrong if the number of nodes is odd (3, 5, 7, 9...).
GraphStudio: When both a query draft and an installed query exist, Export Solution will keep the installed query code instead of the query draft.
TS3 fails to collect QPS when RESTPP Authentication is enabled (Admin Portal QPS monitor will be unavailable in this case).
Admin Portal: Number of cluster nodes is reported as 0 when no graph exists.
GSQL null pointer exception during schema change if a directed edge is dropped but its partner reverse edge is kept.
Some complex attribute types cannot be correctly posted via /graph endpoint.
In some cases, tuple on reverse edge crashes GPE.
GraphStudio throws an authentication error if RESTPP authentication is enabled.
License level control of MultiGraph functionality.
Release Date: 2018-07-24
GSQL null pointer exception during schema change if a directed edge is dropped but its partner reverse edge is kept.
Some complex attribute types cannot be correctly posted via /graph endpoint.
In some cases, tuple on reverse edge crashes GPE.
GraphStudio Export package is occasionally incomplete.
GSE status is always "not ready" if schema is too large.
Cannot modify RESTPP port configuration.
IUM error in a cluster when not running on node m1
If you have a problem with the procedure described in the documentation, please contact TigerGraph Support and summarize your issue in the email subject.
Each release comes with documentation addressing how to perform an upgrade. Upgrade instructions are documented in the Installation Guide. Please contact TigerGraph Support for help in your specific situation.
If you correctly installed the system and are now logged in as the TigerGraph system user, you should be able to enter the GSQL shell by typing the gsql command at an operating system prompt. If this command has never worked, then probably the installation was not successful. If it works but you are not sure what to do next, please see the guide.
If you believe you have installed the system correctly (e.g., you followed the installation instructions and received no errors, and the gsql and gadmin commands are now recognized), then please contact TigerGraph Support and summarize your issue in the email subject.
The "System Basics" section of the GSQL Language Reference lists the command syntax for queries. The gadmin administration tool also has a help menu and a manual page:
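For example (a minimal sketch; this assumes gadmin is on the PATH, and exact subcommands vary by version):

    gadmin --help    # display the help menu
    man gadmin       # open the manual page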
The general rule is that string literals within the GSQL language are enclosed in double quotation marks. For data that is to be imported (not yet in the GSQL data store), the GSQL loading language lets the user specify how data fields are delimited within your input files. The loading language has an option to specify whether single quotes or double quotes are used to mark strings. For more help on loading, see the "Loading Data" section of this document or of the GSQL Language Reference.
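For example, a loading job can declare its separator and quote style in its USING clause (a minimal sketch; the job name, file variable, and vertex type are illustrative):

    CREATE LOADING JOB load_people FOR GRAPH My_Graph {
      DEFINE FILENAME f;
      LOAD f TO VERTEX Person VALUES ($0, $1)
        USING SEPARATOR=",", HEADER="true", QUOTE="double";
    }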
See also the "Language Basics" and "System Basics" sections of the GSQL Language Reference.
A TigerGraph graph schema consists of (A) one or more vertex types, (B) one or more edge types, and (C) a graph type. Each edge type is defined to be either DIRECTED or UNDIRECTED. The graph type is simply the list of vertex types and edge types which may exist in the graph. For more, see the section "Defining a Graph Schema" in the GSQL Language Reference. Below is an example of a graph schema containing two vertex types, one edge type, and one graph type:
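A minimal sketch of such a schema (all names are illustrative):

    CREATE VERTEX Person (PRIMARY_ID id STRING, name STRING)
    CREATE VERTEX Company (PRIMARY_ID id STRING, name STRING)
    CREATE UNDIRECTED EDGE Works_For (FROM Person, TO Company)
    CREATE GRAPH My_Graph (Person, Company, Works_For)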
Each attribute of a vertex or edge has an assigned data type. v0.8 of the TigerGraph platform added support for many more attribute types: DATETIME, UDT, and the container types LIST, SET, and MAP. The following is an abbreviated list. For a complete list and description, see the section "Attribute Data Types" of the GSQL Language Reference.
The GSQL language includes ADD, ALTER, and DROP commands. See the section "Update Your Data" in this document or the section "Modifying a Graph Schema" in the GSQL Language Reference for details. Note that altering the graph schema will invalidate your old data loading and query jobs. You should create and install new loading and query jobs.
See also " "
To load structured data stored in files, you write a loading job and then execute it. See this document and the GSQL Language Reference for introductory examples. Loading jobs can include instructions for parsing and processing the data, in order to perform many ETL tasks. See the GSQL Language Reference for the complete specifications. To load streaming data or data coming from other data stores, see the documentation on data loaders.
Additional data formats are continually being added. See the TigerGraph Ecosystem GitHub Repository's etl folder.
Each tabular input data file should be structured as a table, in which each line represents a row, and each row is a sequence of data fields, or columns. A data field can contain string or numeric data. To represent boolean values, 0 or 1 is expected. A header line may be included, to associate a name with each column. A designated character separates columns. For example, if the designated separator character is the comma, this format is commonly called CSV, for Comma-Separated Values. Below is an example of a CSV file with a header. The uid column is int type, name is string type, avg_score is float type, and is_member is boolean type. See simple examples in Real-Life Data Loading and Querying Examples and a complete specification in the section "Creating a Loading Job" in the GSQL Language Reference.
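A small illustrative file (the values are made up):

    uid,name,avg_score,is_member
    100,Alice,3.9,1
    101,Bob,2.5,0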
Yes. Two approaches are to use one of our streaming data loaders or to periodically read from one or more files. A loading job lets you define a general loading process without naming the data source. Every time you call an online loading job, you name the source file. It can be a different file each time, or it can be the same file, if the contents of the file are changing over time. Also, if the loader re-reads a data line that it has encountered before, it will just reload the data (except for container attributes, e.g., a LIST attribute loaded with a reduce() loading function; in that case, re-reading a data line has a cumulative effect).
The GSQL loading language includes some built-in token functions (a token is one column or field of a data input line). A user can also define custom token functions. Please see the section "Built-In Loader Token Functions" in the GSQL Language Reference.
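For example, a built-in token function can clean a field as it is loaded (a sketch; gsql_trim is a real built-in token function, while the file variable and vertex type are illustrative):

    LOAD f TO VERTEX Person VALUES ($0, gsql_trim($1))
      USING SEPARATOR=",", HEADER="true";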
If there is already data in the graph store and you wish to insert more data, you have a few options. First, if you have bulk data stored in a file (local disk, remote, or distributed storage), you can use a loading job.
Second, if you have a few specific insertions, you can use the upsert data command in the REST API. For upsert, the data must be formatted in JSON.
Third, you can write a query containing INSERT statements. The syntax is similar to SQL INSERT. The advantage of query-based INSERT is that the details (id values and attribute values) can be determined at run time and can even be based on an exploration and analysis of the existing graph. The disadvantage is that the query-insert job must be compiled first, and data values must either be hardcoded or supplied as input parameters.
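A minimal sketch of a query-based insert (the graph, vertex, and edge type names are illustrative):

    CREATE QUERY add_friendship(VERTEX<Person> a, VERTEX<Person> b) FOR GRAPH My_Graph {
      // id values arrive as run-time parameters rather than being hardcoded
      INSERT INTO Friend_Of VALUES (a, b);
    }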
See the section "Modifying a Graph Schema" in
To make a known modification of a known vertex or edge: Option 1) Make a RESTPP endpoint request, to the POST /graph or DELETE /graph endpoint. See the REST API documentation.
Option 2) The loading language includes an upsert command. The UPSERT statement performs a combined modify-or-add operation, depending on whether the indicated vertex or edge already exists. Examples of UPSERT are described in this document; the GSQL Language Reference provides a full specification.
Option 3) The query language now includes an UPDATE statement, which enables sophisticated selection of which vertices and edges to update and how to update them. Likewise, there is an INSERT statement in the query language. See the GSQL Language Reference.
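A minimal sketch of an UPDATE statement (all names are illustrative):

    CREATE QUERY rename_person(STRING old_name, STRING new_name) FOR GRAPH My_Graph {
      Start = {Person.*};
      UPDATE p FROM Start:p
        SET p.name = new_name
        WHERE p.name == old_name;
    }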
You can write a query which selects vertices or edges to be deleted. See the DELETE subsections of the "Data Modification Statements" section in the GSQL Language Reference.
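For instance (a sketch with illustrative names):

    CREATE QUERY remove_non_members() FOR GRAPH My_Graph {
      Start = {Person.*};
      DELETE p FROM Start:p WHERE p.is_member == FALSE;
    }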
If you wish to completely clear all the data in the graph store, use the CLEAR GRAPH STORE -HARD command. Be very careful using this command; deleted data cannot be restored (except from a backup). Note that clearing the data does not erase the catalog definitions of vertex, edge, and graph types.
Yes. The GSQL Query Language is a full-featured graph query-and-data-computation language. In addition, there is a small lightweight set of built-in query commands that can inspect the set of stored vertices and edges, but these built-in commands do not support graph traversal (moving from one vertex to another via edges). We refer to this as the Standard Data Manipulation API or the Built-in Query Language.
For a first-time user, see the introductory documents first. For users with some experience, a reference card is now available: GSQL Query Language Reference Card.
Three new accumulator types were introduced in v0.8: GroupByAccum, BitwiseAndAccum, and BitwiseOrAccum. Version 0.8.1 added ArrayAccum. This is a quick summary; for a more detailed explanation, see the "Accumulator Types" section of the GSQL Language Reference.
See the section "Accumulators" in the document.
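A minimal sketch of accumulator use (all names are illustrative):

    CREATE QUERY count_members() FOR GRAPH My_Graph {
      SumAccum<INT> @@member_count;
      Start = {Person.*};
      S = SELECT p FROM Start:p
          WHERE p.is_member == TRUE
          ACCUM @@member_count += 1;
      PRINT @@member_count;
    }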
The data are in JSON format. See the section "Output Statements" in the GSQL Language Reference.
Yes. A ListAccum is like a 1-dimensional array. If you nest ListAccums as the elements within an outer ListAccum, you have effectively made a 2-dimensional array. Please read the section "Nested Accumulators" in the GSQL Language Reference for more details.
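Here is an example (a sketch; the query name and values are illustrative):

    CREATE QUERY nested_list_demo() FOR GRAPH My_Graph {
      ListAccum<ListAccum<INT>> @@matrix;   // behaves like a 2-dimensional array
      @@matrix += [1, 2, 3];                // append the first "row"
      @@matrix += [4, 5, 6];                // append the second "row"
      PRINT @@matrix;                       // [[1,2,3],[4,5,6]]
    }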
Yes, please read the section "Nested Accumulators" in the GSQL Language Reference for more details. There are seven types of container accumulators: ListAccum, SetAccum, BagAccum, MapAccum, ArrayAccum, HeapAccum, and GroupByAccum. Here are the allowed combinations:
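For instance, these nested declarations follow the allowed combinations (names are illustrative):

    MapAccum<INT, ListAccum<STRING>> @@groups;                   // map value may itself be a container accumulator
    ListAccum<SetAccum<VERTEX>> @@partitions;                    // a list of sets
    GroupByAccum<INT age, ListAccum<VERTEX> members> @@by_age;   // group key plus an accumulator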
07/02/20 - Corrected the URL for the GET Schema REST endpoint
07/02/20 - Clarified the behavior of outdegree()
New features are described in the release notes.
New and modified features are described in the release notes.
New and modified features are described in the release notes.
New and modified features are described in the release notes.
Exit Code | Description |
0 | No Error |
211 | Syntax Error |
212 | Runtime Error |
213 | No Graph |
255 | Unknown Error |
Exit Code | Description |
0 | No Error |
41 | Login or Authentication Error |
201 | Wrong Argument Error |
202 | Connection Error |
203 | Compatibility Error |
204 | Session Timeout |
212 | Runtime Error |
255 | Unknown Error |
Deprecated Type | Alternate Approach |
REAL | Use FLOAT or DOUBLE |
INT_SET | Use SET<INT> |
INT_LIST | Use LIST<INT> |
STRING_SET_COMPRESS | Use SET<STRING COMPRESS> |
STRING_LIST_COMPRESS | Use LIST<STRING COMPRESS> |
UINT_SET | Use SET<UINT> |
UINT32_UINT32_KV_LIST | Use MAP<UINT, UINT> |
INT32_INT32_KV_LIST | Use MAP<INT, INT> |
UINT32_UDT_KV_LIST | Use MAP<UINT, UDT_type>, where UDT_type is a user-defined tuple type |
INT32_UDT_KV_LIST | Use MAP<INT, UDT_type>, where UDT_type is a user-defined tuple type |
Deprecated Statement | Alternate Statement |
FOREACH ... DO ... DONE | FOREACH ... DO ... END |
FOREACH (condition) { body } | FOREACH condition DO body END |
IF (condition) { body1 } else { body2 } | IF condition THEN body1 ELSE body2 END |
WHILE (condition) { body } | WHILE condition DO body END |
Deprecated Statement | Alternate Statement |
MySet Person = ... | MySet (Person) = ... |
Deprecated Operation | Alternate Operation |
CREATE JOB [loading job definition] | CREATE LOADING JOB [loading job definition] |
RUN JOB [for loading and schema change jobs] | Specify the job type: RUN LOADING JOB, RUN SCHEMA_CHANGE JOB, or RUN GLOBAL SCHEMA_CHANGE JOB |
CREATE / SHOW / REFRESH TOKEN | To create a token, use the REST endpoint GET /requesttoken |
offline2online | The offline loading job mode was discontinued in v2.0. Do not write loading jobs using this syntax. |
Deprecated Syntax | Alternate Syntax |
JSON API v1 | v2 has been the default JSON format since TigerGraph 1.1. No alternate JSON version will be available. |
PRINT ... TO_CSV [filepath] | Define a file object, then PRINT ... TO_CSV [file_object] |
Deprecated Statement | Alternate Statement |
SELECT count() FROM ... (count may be out of date) | SELECT approx_count(*) FROM ... (same behavior as count(); may not include all the latest data updates), or SELECT count(*) FROM ... (exact, but slower than approx_count(*)) |
Primitive Types | Advanced Types | Complex Types |
INT, UINT, FLOAT, DOUBLE, BOOL, STRING | STRING COMPRESS, DATETIME | User-Defined Tuple (UDT), LIST, SET, MAP |
Parameter | Meaning of value | Allowed values | Comments |
SEPARATOR | specifies the special character that separates tokens (columns) in the data file | any single ASCII character | Required. |
HEADER | whether the data file's first line is a header line which assigns names to the columns. In offline loading, the Loader reads the header line to obtain mnemonic names for the columns. In online loading, the Loader just skips the header line. | "true", "false" | Default = "false" |
QUOTE | specifies whether strings are enclosed in single or double quotation marks | "single", "double" | Optional; no default value. |
Parameter | Meaning of value | Allowed values | Comments |
FILENAME | name of input data file | any valid path to a data file | Required for online loading. Not allowed for offline loading. |
EOL | the end-of-line character | any ASCII sequence | Default = "\n" |
bookcode | genre |
101 | fiction |
101 | adventure |
101 | fantasy |
101 | young adult |
102 | fiction |
102 | science fiction |
102 | Chinese |
Accumulators | data types |
SumAccum | INT, UINT, FLOAT, DOUBLE, STRING |
MaxAccum, MinAccum | INT, UINT, FLOAT, DOUBLE, VERTEX |
AvgAccum | INT, UINT, FLOAT, DOUBLE (output is DOUBLE) |
AndAccum, OrAccum | BOOL |
BitwiseAndAccum, BitwiseOrAccum | INT (acting as a sequence of bits) |
ListAccum, SetAccum, BagAccum | baseType, TUPLE, STRING COMPRESS |
ArrayAccum | accumulator, other than MapAccum, HeapAccum, or GroupByAccum |
MapAccum | key: baseType, TUPLE, STRING COMPRESS value: baseType, TUPLE, STRING COMPRESS, ListAccum, SetAccum, BagAccum, MapAccum, HeapAccum |
HeapAccum< tuple_type >(heapSize, sortKey [, sortKey_i]*) | TUPLE |
GroupByAccum | key: baseType, TUPLE, STRING COMPRESS accumulator: ListAccum, SetAccum, BagAccum, MapAccum |