Release Notes
TigerGraph Server 3.6.0 LTS was released on June 17th, 2022.
LTS versions are supported for 24 months from their initial release and should be the choice for production deployments.
- TigerGraph Server 3.6.4 (October 10, 2023) fixes several issues and adds some mitigations.
- TigerGraph Server 3.6.3 (February 9, 2023) fixes several issues.
- TigerGraph Server 3.6.2 (August 16, 2022) fixes several issues.
- TigerGraph Server 3.6.1 (July 7, 2022) fixes a few issues.

The latest maintenance release of TigerGraph 3.6 is available on TigerGraph Cloud.
New Features
Below is a list of new features and improvements:
Elasticity
- Added support for provisioning Elastic Read-only (ER) clusters (Preview)[1] with different partitions than the primary cluster.
- Note: This feature is currently in the preview stage and is available only to enterprise TigerGraph Cloud customers. If you are a paid TigerGraph Cloud enterprise customer and want to set up an ER cluster for your environment, please open a support ticket.
-
Manageability
- Introduced the TigerGraph Kubernetes Operator (Preview)[1], which allows you to automate operations such as the creation, status checking, and deletion of TigerGraph clusters.
Security
- Sensitive data such as user credentials in log and configuration files are now encrypted.
- [3.6.4] Added additional security settings to the default headers for Nginx. (TP-3293)
- [3.6.4] Tightened the file permissions for various files. (TP-3131, TP-3396)
Ecosystem integration
- Added the ability to stream data from external Kafka clusters to the Data Streaming Connector.
- Added the ability to stream AVRO data from all supported sources with the Data Streaming Connector.
Performance
- Improved data loading speed and reliability.
- Improved the speed and reliability of database catalog operations such as vertex and edge definition, schema changes, and query installation.
- Optimized query compilation during schema changes to greatly enhance schema change performance.
Observability
- Improved GSQL log debuggability by ensuring all metadata and data update operations share a common prefix.
- Improved engine log debuggability by including the ID of the original request from RESTPP.
- Added more detail to several error messages.
- [3.6.4] Added reports of vertices with empty string IDs to the data loading logs.
Data API service
- Added two parameters, `target_vertex_must_exist` and `source_vertex_must_exist`, to the REST endpoint to upsert data.
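The new parameters can be supplied as query-string options on the upsert request. A minimal sketch of building such a request follows; the host, port, graph name, and payload shape are assumptions for illustration, while the two parameter names come from the release note above.

```python
# Sketch: building an upsert request URL that uses the two new parameters.
# Host, port, graph name, and payload are hypothetical; the parameter names
# (source_vertex_must_exist, target_vertex_must_exist) are from the release note.
import json
from urllib.parse import urlencode

host = "http://localhost:9000"   # assumed RESTPP address
graph = "MyGraph"                # hypothetical graph name

params = urlencode({
    "source_vertex_must_exist": "true",  # reject edges whose source vertex is missing
    "target_vertex_must_exist": "true",  # reject edges whose target vertex is missing
})
url = f"{host}/graph/{graph}?{params}"

# Hypothetical payload upserting one edge between two existing vertices.
payload = json.dumps(
    {"edges": {"Person": {"p1": {"Knows": {"Person": {"p2": {}}}}}}}
)
print(url)
```

With both options set to `true`, an upsert that references a missing endpoint vertex is rejected instead of silently creating the vertex.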
Query Language Enhancement
- The reducer function `add()` now accumulates values when loading to a `MAP` attribute.
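The accumulate behavior can be pictured with a small Python sketch (not GSQL): when several loaded rows target the same map key, `add()` sums the values rather than letting the last row overwrite earlier ones. The input rows are hypothetical.

```python
# Illustrative Python sketch (not GSQL) of the add() reducer's behavior when
# loading into a MAP attribute: duplicate keys accumulate instead of overwrite.
rows = [("v1", "a", 1), ("v1", "a", 2), ("v1", "b", 5)]  # hypothetical (id, key, value) rows

vertex_map = {}
for _vid, key, value in rows:
    # add() accumulates: the existing value for the key is incremented
    vertex_map[key] = vertex_map.get(key, 0) + value

print(vertex_map)  # {'a': 3, 'b': 5}
```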
GraphStudio and Admin Portal Enhancement
- Added the ability to install built-in Graph Data Science Library algorithms with one click.
- Added the ability to load data from Google Cloud Storage through the GraphStudio UI.
- Added Japanese language support for GraphStudio and Admin Portal.
Fixed Issues
Fixed in 3.6.4
TigerGraph Server 3.6.4 was released on October 10, 2023.
Upgrading
- Fixed an issue in which the verifydict process could cause a schema error when upgrading from 3.6.0. (GLE-5137)
- Fixed an issue with importing data that has user-defined tuples with the older default width for INT/UINT fields (4 bytes) instead of the current default width (8 bytes). (GLE-5620)
- Improved output to be more specific when GSE is offline and `/deleted_vertex_check` reported all deleted vertices on the GPE side as ghost IDs, assuming GSE has no deletions. (CORE-3018)
- Fixed a `/deleted_vertex_check` false alert when GSE doesn’t have a segment and there are no active vertices on the GPE. (CORE-2823)
Backup
- Fixed an error in the application server that was conflating the schedule for local backup with cloud backup. (TP-3436)
Exporting
- Fixed an issue in which a loading job’s VERTEX_MUST_EXIST option was not being included in the output of SHOW, LS, or EXPORT, if the loading job used a TEMP_TABLE with the VERTEX_MUST_EXIST option. (GLE-5593)
In Replicated Clusters
- Fixed an issue in which lastSeqId could get out of sync across a CRR multicluster system when a query installation was triggered by a schema change. (GLE-5403)
- Fixed an issue in which a user registered on the primary cluster of a CRR system was not able to run a query on the secondary cluster, because the secondary cluster was inappropriately creating a new secret for the user. (GLE-5419)
- Fixed an issue in which the Kafka offset of a GSQL server in a secondary cluster could become lost if the secondary cluster is shut down while active. (GLE-5411)
- Fixed an issue in which deleting or refreshing an authentication token could cause cross-region replication to become out of sync. (GLE-5433)
- Refined the self-protection tests that trigger an intentional GSE crash/panic so that receiving duplicate messages only triggers a crash if the receiver is the primary cluster in a replicated system. (CORE-2610)
- Fixed an issue in which the GSE Kafka Syncer could get stuck if a secondary replica is promoted to be the primary replica (PR) and the old PR’s term ID was larger than the new PR’s term ID. (CORE-2602)
- Fixed a rare GSE crash caused by the lease thread not selecting a new GSE leader when the current GSE leader takes too long to respond to a heartbeat check due to high CPU load. Also added log messages to make diagnosis easier. (CORE-2871)
- Fixed a null pointer exception in the TokenCleaner routine, which is called when a follower transitions to leader. (GLE-5203)
Across a Single Cluster
- Refined the self-tests for an error state among the GSE Kafka offsets in a cluster so that a GSE leader that doesn’t write any messages to Kafka during its term is not treated as an error. (CORE-2632, CORE-2485)
- Fixed an issue in which a GPE leader might not be aware of vertices in a remote segment of a cluster needing to be deleted, if the segment was created before a rebuild and a restart occurred before the rebuild. (CORE-2480)
Loading
- Fixed an issue in which data loading could become stalled if free disk space became low. (CORE-2608)
- Added kafka_reader to the product bundle so it is available if needed for debugging. (QA-5240)
GSQL
- Fixed an issue with DROP GRAPH not succeeding if there is a local data source for the graph (CREATE DATA_SOURCE -LOCAL). (GLE-5460)
- Fixed an issue with the DATETIME type being declared internally in two different places, causing possible confusion. (CORE-2541)
- Fixed a GPE crash during UDF compilation caused by a race condition. (CORE-2618)
- Fixed a GPE crash caused by threads continuing to use a query’s resources after the query has been aborted. (CORE-2828)
- Fixed a GPE crash occurring when a query references a vertex type that is not in the local tag-based graph schema. (CORE-2831)
- Fixed an issue where INSERT edge did not work for queries in DISTRIBUTED mode. (GLE-5135)
- Fixed a deadlock issue during certain GSQL operations for users using OIDC or SAML authentication. (GLE-4819)
- Fixed an issue where certain queries with nested accumulations would run in Interpreted mode but would not install (compile). (GLE-5278)
- Fixed a bug where string parameters containing "&" were not being handled correctly. (GLE-5610)
- Removed unnecessary locks when running GSQL in interpreted mode. (GLE-5613)
- Fixed an issue on a multi-node cluster where DROP QUERY of a query with a codegen error did not fully remove the bad query’s generated code. (GLE-5614)
REST Endpoints
- Changed the deleted_vertex_check and data_consistency_check endpoints so that they can be run without authorization on a specific graph. (GLE-5245)
GraphStudio
- Fixed a null pointer exception when importing a GraphStudio solution package. (GLE-5612)
Mitigations
- If a secondary replica (SR) is performing work on many vertices so that several messages are required, and it sees a gap in the vertex IDs from one message to the next, the SR’s GSE will intentionally crash to prevent entering a possibly inconsistent state. (CORE-2503)
- If data is being loaded to a replicated system, and the process notices an inconsistency in the number of edges reported on one replica vs. another, the GPE will intentionally crash to prevent additional errors. (CORE-2646)
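The fail-fast pattern behind both mitigations can be sketched in a few lines of Python: if consecutive messages leave a gap in the vertex IDs, stop immediately rather than continue in a possibly inconsistent state. The function and message shapes here are hypothetical illustrations, not TigerGraph internals.

```python
# Illustrative fail-fast check, loosely modeled on the mitigation above:
# a gap between the IDs carried by consecutive messages aborts processing.
# Names and message shapes are hypothetical.
def check_contiguous(last_id_seen: int, next_batch_first_id: int) -> None:
    if next_batch_first_id != last_id_seen + 1:
        # In the real system the GSE intentionally crashes; here we raise.
        raise RuntimeError(
            f"gap between vertex IDs {last_id_seen} and {next_batch_first_id}"
        )

check_contiguous(100, 101)       # contiguous: no error
try:
    check_contiguous(100, 105)   # gap: triggers the fail-fast path
    gap_detected = False
except RuntimeError:
    gap_detected = True
print(gap_detected)
```

Crashing on the first detected gap trades availability for consistency: a restart and resync is preferable to silently diverging replicas.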
Fixed in 3.6.3
TigerGraph Server 3.6.3 was released on February 9, 2023.
Security
- Fixed security vulnerabilities in Moment, jQuery UI, Eclipse Jersey, and Apache HttpClient.
- Fixed security vulnerabilities in the Docker client.
- Redacted sensitive data by default in the gadmin connector list command output. Introduced the `-v` or `--verbose` flag to show the full output.
- Improved the clarity of the `write_role` and `read_role` privileges to prevent misunderstandings and improve security.
- Strongly enhanced the security of UDF file handling. UDF files are disabled by default and need to be manually enabled for each use, preventing malicious uses.
- Fixed an issue where two conflicting authorization tokens could be created on the same graph, causing an error.
- Fixed an issue where an empty user ID could cause an error in rare cases.
- Fixed an issue where an uncommonly short authorization token could cause an error.
- Improved the security of the base tigergraph folder on a server installation, preventing unauthorized modification.
Graph Engine
- Fixed an issue where, in rare cases, a data filter would cause previously entered data mappings to be deleted.
- Fixed an issue that sometimes caused a crash when exporting a graph that included a recursive query.
- Improved handling of data sources that included the `\` character.
- Fixed an issue where the edge count between different replicas could in rare cases be inconsistent.
- Fixed an issue where in rare cases data would be written incorrectly at the time of an engine restart.
- Fixed an issue where in rare cases an accumulator could return unexpected results.
- Fixed an issue that could cause a data loading failure or invalid vertex IDs after a GSE restart when some vertices were deleted.
- Fixed an issue with edge parallel processing where an unusually high number of edge transactions could cause a crash or missing data resulting from invalid edge data access.
Backup and Restore
- Improved the query install stage during backup and restore, increasing stability and improving error handling.
Kafka
- Rebalanced Kafka replication to avoid placing too many replicas on the same node, which could lead to memory errors.
- Fixed an issue with excessive memory usage during large file loading with Kafka.
Fixed in 3.6.2
TigerGraph Server 3.6.2 was released on August 16th, 2022.
- Improved query installation error handling during GBAR restore.
- Fixed an issue that could cause timeout errors when running queries in rare cases.
- Fixed an issue that caused cross-region replication to stop syncing when the primary cluster drops a loading job.
- Fixed an issue where installation of a query that inserts a vertex or edge with a `DATETIME` attribute would fail.
- Fixed an issue where S3 loaders created in GraphStudio did not self-delete after loading sample data.
- Fixed an issue where `/requesttoken` generates invalid tokens if the RESTPP service is down on a node in a cluster.
- Fixed an issue where GBAR restore would cause GPE config file permissions to change in rare cases.
- Fixed an issue where built-in queries such as `searchvertex` could cause out-of-memory issues.
- Fixed an issue in GraphStudio where adding a data mapping with a filter could delete all previously established mappings.
- Increased the restrictiveness of file permissions for snapshot files.
- Fixed an issue where requests to GSQL endpoints would return an error if the request also included setting cookies.
Fixed in 3.6.1
TigerGraph Server 3.6.1 was released on July 7th, 2022.
- Fixed an issue where high-frequency schema changes could cause GPE dysfunction in rare cases.
- Fixed an issue in GraphStudio that caused the data mapping arrow between the file icon and an edge type to disappear in some cases.
- Fixed an issue where certain internal API endpoints did not properly authenticate incoming requests.
Fixed in 3.6.0
- Fixed a bug that caused issues reading the attributes of a vertex after cluster expansion if the vertex type had deleted attributes.
- Fixed an issue that caused Graph Processing Engine (GPE) dysfunction in rare cases during concurrent read and write operations on the same vertex and its connected edges during data loading.
- Fixed an issue that delayed the display of loading status on the Load Data page in GraphStudio.
- Fixed an issue where the `v.getAttr()` function could cause GPE dysfunction if it was provided the wrong type.
- Fixed an issue where a Kafka topic with only one message could not be consumed.
- Fixed an issue with Graph Storage Engine (GSE) leader election that caused GSE dysfunction.
- Fixed an issue that prevented GraphStudio from accepting a `SET<VERTEX>` type parameter if the size of the parameter was greater than 10.
- Fixed an issue that could cause a query written in GSQL syntax V2 that uses a `POST-ACCUM` clause without an `ACCUM` clause to produce wrong results in rare cases.
- Fixed an issue that could cause a query written in GSQL syntax V2 that uses multiple `POST-ACCUM` clauses referring to the source vertex alias to produce wrong results in rare cases.
- Fixed an issue that resulted in the `SHOW LOADING STATUS ALL` command showing inaccurate loading job status when an instance has multiple graphs.
- Fixed an issue that caused a KAFKACONN out-of-memory (OOM) issue when loading large datasets through the data streaming connector.
- Fixed an issue that affected the high availability (HA) of the application server on Elastic Kubernetes Service (EKS).
- Removed inaccurate loading metrics from the Load Data page in GraphStudio.
- When a user tries to edit an inferred vertex from returned edges in GraphStudio’s visualized query results, they now see a warning telling them to modify their query to return the vertices so that the vertices can be edited.
Deprecation and Compatibility Warnings
- The `-OPTIMIZE` flag for `INSTALL QUERY` is deprecated and is planned to be dropped in the 3.7 release.
- Beginning with 3.6.3, a vertex having an empty string as its primary ID cannot be loaded. Previously, such a vertex was permitted.
Known Issues
- When available memory becomes very low in a cluster and there are a large number of vertex deletions to process, some remote servers might have difficulty receiving the metadata needed to be aware of all the deletions across the full cluster. The mismatched metadata will cause the GPE to go down.
- If `System.Backup.Local.Enable` is set to `true`, this also enables a daily full backup at 12:00 AM UTC.
- Vertex primary IDs are treated as STRING, regardless of whether they are declared as STRING, INT, or UINT. If a vertex is defined to have INT or UINT type IDs, but the user tries to load string values, the string values will be accepted, because the type is not checked. If the `primary_id_as_attribute` option is selected, then attempting to perform numeric operations on a primary ID may encounter a data type conflict error.
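The primary-ID type behavior can be pictured in miniature with Python's own string typing; this is an analogy for the data type conflict, not TigerGraph code.

```python
# Analogy for the known issue above: an ID declared INT/UINT is stored as a
# string, so a numeric operation on the loaded value hits a type error.
primary_id = "123"          # the loaded ID value is kept as a string

try:
    _ = primary_id + 1      # numeric operation on a string-typed ID
    conflict = False
except TypeError:           # the analogue of the data type conflict error
    conflict = True
print(conflict)
```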
- For a list of known issues for GraphStudio and Admin Portal, please see Known issues for GraphStudio and Known issues for Admin Portal.
Compatibility with TigerGraph 3.4
- A single `POST-ACCUM` clause can no longer reference more than one vertex alias in Syntax V1.
- The `GET /requesttoken` endpoint is dropped. Please use `POST /requesttoken` to request authentication tokens instead.
  - Using the request body to store credentials is more secure than using the query string. If you have a token request that puts the credentials in the query string, all you need to do is use the `POST` endpoint and move your credentials to the request body.
- The `GET /gsqlserver/gsql/queryinfo` endpoint on port 14240 now returns the query input parameters in the same order as they appear in the query instead of an unordered list.
- Deploying TigerGraph in Kubernetes now requires more service account permissions than previous versions. For details, see the Prerequisites section in Quickstart with GKE, Quickstart with AKS, and Quickstart with EKS.
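The token-request migration described above amounts to moving the credentials out of the URL and into a POST body. A minimal sketch, assuming a local RESTPP address and a `secret` credential field (both assumptions for illustration):

```python
# Sketch: requesting a token with credentials in the POST body rather than
# the query string. Host and credential field names are assumptions.
import json
import urllib.request

host = "http://localhost:9000"               # assumed RESTPP address
body = json.dumps({"secret": "my_secret"})   # credentials go in the body

req = urllib.request.Request(
    f"{host}/requesttoken",
    data=body.encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), "my_secret" in req.full_url)
```

Because the secret never appears in the URL, it is not captured in access logs or browser history the way query-string credentials can be.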
Compatibility with TigerGraph 3.1
The following changes were made to the built-in roles in TigerGraph’s role-based access control:
- The built-in role `queryreader` can no longer run queries that include updates to the database.
  - To emulate the old `queryreader` role, create a role with all `queryreader` privileges, and also grant the `WRITE_DATA` privilege to the new role.
- The built-in role `admin` can no longer create users.
  - To emulate the old `admin` role, create a global role with all `admin` privileges, and also grant the `WRITE_USER` privilege to the new role.
- To learn more about role management and the privileges of built-in roles, see:
Compatibility with TigerGraph 2
Major revisions (e.g., from TigerGraph 2 to TigerGraph 3) are an opportunity to deliver significant improvements. While we make every effort to maintain backward compatibility, in selected cases APIs have changed or deprecated features have been dropped in order to advance the overall product.
Data migration: A tool is available to migrate the data in TigerGraph 2.6 to TigerGraph 3.0. Please contact TigerGraph Support for assistance.
Query and API compatibility:
- Some gadmin syntax has changed. Notably, `gadmin set config` is now `gadmin config set`. Please see Managing with gadmin.
- Some features which were previously deprecated have been dropped. Please see V3.0 Removal of Previously Deprecated Features for a detailed list.
V3.0 Removal of Previously Deprecated Features
TigerGraph 2.x contained some features that were labeled as deprecated. These features are no longer necessary because they have already been superseded by improved approaches to using the TigerGraph platform.
The new approaches were developed because they use more consistent grammar, are more extensible, or offer higher performance. Therefore, TigerGraph 3.0 and above streamlines the product by removing support for some of these deprecated features, listed below:
Data Types
Deprecated type | Alternate approach
---|---
Syntax for Control Flow Statements
Deprecated statement | Alternate statement
---|---
FOREACH (condition) { body } | FOREACH condition DO body END
IF (condition) { body1 } else { body2 } | IF condition THEN body1 ELSE body2 END
WHILE (condition) { body } | WHILE condition DO body END
Vertex set variable declaration
If a vertex type is specified, the vertex type must be within parentheses.
Deprecated Statement | Alternate Statement
---|---
Query, Job, and Token Management
Deprecated operation | Alternate approach
---|---
 | Job types need to be specified.
 | Job types need to be specified.
 | To create a token, use the REST endpoint GET /requesttoken.
 | The offline loading job mode was discontinued in v2.0. Do not write loading jobs using this syntax.
Output
See PRINT Statement
Deprecated Syntax | Alternate Syntax
---|---
JSON API v1 | v2 has been the default JSON format since TigerGraph 1.1. No alternate JSON version will be available.
 | Define a file object, then