Release Notes

TigerGraph Server 3.10.0 LTS was released on March 13th, 2024.

LTS versions are supported for 24 months from their initial release (X.X.0) and are the recommended choice for production deployments.

Key New Features

3.10.0

  • Change Data Capture (CDC) - Automatically captures data changes and streams them to external, user-maintained Kafka systems. CDC can also be configured in Admin Portal.

  • Workload Queue - Configure workload queues so that queries are routed to the appropriate queue at runtime.

  • Spark Connector - A new dedicated connector for Apache Spark that reads data from a Spark DataFrame and writes it to TigerGraph.

  • Online Backup - Minimizes blocking time during backups, allowing POST requests from TigerGraph users to continue to succeed.

  • Differential Backup - Backs up only the data files that have changed since the most recently completed full backup, with no data loss.

  • Refined Upgrade Process - Changes to the upgrade process allow upgrading TigerGraph without also upgrading the Kubernetes Operator.

  • Non-Interactive Upgrade Support - Users can pass the -n option to skip the (y/n) confirmation prompt when switching to the new version.

  • Global Schema Restricted to Global Scope - Users must have global scope to interact with global schema change jobs (create, delete, run).
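As an illustration of the global-scope restriction above, a user with global scope creates and runs a global schema change job in GSQL (the job body here is a hypothetical example, not from the release):

```gsql
USE GLOBAL
// Hypothetical change: add an attribute to a global vertex type
CREATE GLOBAL SCHEMA_CHANGE JOB add_email_attr {
    ALTER VERTEX Person ADD ATTRIBUTE (email STRING);
}
RUN GLOBAL SCHEMA_CHANGE JOB add_email_attr
```

Users without global scope can no longer create, delete, or run such jobs.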

Detailed List of New and Modified Features

TigerGraph Server

GSQL Command and Querying Language

  • Context Functions - Context functions are a new set of built-in functions that provide information about the current user and work inside INSTALLED queries, INTERPRET queries, and GSQL Functions.

  • Command Updates - Added a new --local flag to the gadmin start and gadmin stop commands, reducing the time required to start and stop local services.

  • GSQL Data Streaming Improvements - Improvements to GSQL data streaming that reduce CPU usage, improve performance, optimize disk usage, and increase stability and cohesion.
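The new --local flag can be sketched as follows (the service name gpe is an illustrative choice; any local service applies):

```shell
# Stop and restart only this node's GPE service,
# skipping cluster-wide coordination to save time
gadmin stop gpe --local
gadmin start gpe --local
```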

Loading

Schema

Querying and Query Management

Kubernetes Operator

  • Added new configuration fields - .spec.tigergraphConfig in the TigerGraph CR and a new --tigergraph-config option in the kubectl-tg plugin.

  • Updates to cluster creation using YAML - Improved the configuration options to align with the database.

  • Support for mounting multiple PVs and PVCs for pods - Added two optional fields, additionalStorages and spec.storage, to customize the PVs for TigerGraph pods.

  • Support for customizing pods - Customize pods or containers; for example, users can add custom labels and annotations or change the security context of containers.

  • Pause a running cluster - Added a new field .spec.pause in the TigerGraph CR and a new subcommand kubectl tg pause in the kubectl-tg plugin. Users can set .spec.pause=true to pause a running cluster and resume it by setting .spec.pause=false.
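For example, pausing a running cluster via the CR might look like this (a minimal sketch; the apiVersion and metadata values are assumptions, not from the release):

```yaml
apiVersion: graphdb.tigergraph.com/v1alpha1  # assumed group/version
kind: TigerGraph
metadata:
  name: test-cluster
spec:
  pause: true   # set back to false to resume the cluster
```

The kubectl tg pause subcommand provides the equivalent operation from the command line.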

Security

TigerGraph Suite Updates

Admin Portal

GraphStudio

TigerGraph Insights

Fixed Issues

Fixed and Improved in 3.10.0

Functionality

  • Fixed an issue where access to GraphStudio was interrupted when the primary node was offline; access now resumes once the primary node is back online (APPS-258).

  • Fixed an issue where some GPR and Interpret queries that used the built-in filter() function failed installation because of a row policy or tag filter (GLE-6448).

  • Fixed an issue where restarting RESTPP resulted in a task count greater than the actual number of tasks (TP-4498).

  • Fixed an issue in versions 3.9.3 and 3.10.0 where a GSQL query could not run while a single node was down in a High Availability cluster. See the workaround for versions 3.9.2 and below for more details.

  • Fixed an issue where changes would not save when switching to fullscreen and back in Insights (APPS-2197).

  • Fixed an issue where a vertex would not move after being expanded in Explore Graph (APPS-2540).

  • Fixed an issue where an EXCEPTION statement placed before any query-body statements caused both branches of an IF-ELSE statement to be executed (GLE-3998).

  • Fixed an error in how the ACCUM clause is transformed that resulted in a transformed query with a semantic error. See accumulator types for more details on valid types (GLE-5695).

  • Fixed an issue where passing a negative float parameter to the GSQL CLI in {key:value} format caused an argument error (GLE-5875).

Crashes and Deadlocks

  • Fixed a GPE crash during query execution when accumulator values are not valid. See accumulator types for more details (GLE-4411).

Improvements

  • Significantly reduced CPU usage when a large number of loading jobs start at the same time (TP-4159).

  • Improved the write speed of loading jobs (TP-4159).

  • Improved disk usage by ensuring a loading job in waiting status only consumes disk resources when it actually writes data (TP-4474).

  • Improved the stability and cohesion of the connector and loader, which creates better synchronization and reduces inconsistencies in their statuses (TP-4158).

  • Significantly reduced the pause time during backups, from a few minutes to a couple of seconds, regardless of data size (CORE-3000).

  • Improved data consistency during the backup and restore process (CORE-3000).

  • Improved availability when one KSL server is in an error state (TP-4378, TP-4593).

  • Lowered the privilege required for /rebuildnow and /deleted_vertex_check to the graph-level READ DATA privilege; both can now also run on the DR cluster when the CRR feature is used (CORE-3291).

  • Improved exception statements by adding a default exception format for cases where the exception is not defined in the query (GLE-5854).

  • Long-running RESTPP requests now use less memory (CORE-3027).

  • Renamed log files from log.AUDIT to log.AUDIT-GSQL (GLE-6496).

  • Extended the audit log timestamp format from 2023-12-20 14:42:50.25 to 2023-12-20T14:42:50.243-07:00 (GLE-6395).

  • Improved the clarity of the userAgent field in audit logs when authentication fails; the audit log now records the correct user agent (GLE-6404).

  • Added the operating system's username to audit log records (GLE-6394).

  • Improved the SearchFile experience by increasing GRPC_CLIENT_TIMEOUT (APPS-2711).

  • Improved ExprFunction file handling to automatically remove leftover "to_string" functions from the ExprFunction file (GLE-5834).

  • Improved the retention strategy for EventQueue, enabling more timely monitoring of disk space utilization (TP-4920).

  • Improved service log accuracy to show SSO users' usernames in logs (APPS-2496).

Known Issues and Limitations

Each known issue below lists its description, the version it was found in, a workaround (if any), and the version it was fixed in (TBD if not yet fixed).

  • Description: In a file INPUT and OUTPUT policy, if two paths (path1 and path2) exist in the configured policy list and path1 is a parent path of path2, then path1 may not be effective.
    Found In: 3.2 and 3.10.0
    Workaround: Avoid nested paths. For example, avoid the scenario path2 = "/tmp/more" and path1 = "/tmp".
    Fixed In: TBD

  • Description: RESTPP sends requests to all GPEs; if one GPE is down, any request sent to it times out, including consistency_check requests.
    Found In: 3.10.0
    Workaround:
      1. Run /rebuildnow to rebuild all the segments. Running /rebuildnow while one GPE is down results in a request timeout. This does not mean the request failed; only the currently running GPEs perform the rebuild, and any rebuild requests sent to the down GPEs time out.
      2. Run /data_consistency_check?realtime=false to check consistency.
    Fixed In: TBD

  • Description: While running export graph, if disk space is insufficient or the data has not been detected, the export gets stuck loading.
    Found In: 3.10.0
    Workaround: Restart all services in Admin Portal or from the backend.
    Fixed In: TBD

  • Description: [tg_]ExprFunction.hpp is automatically merged while importing single graphs. In some cases, query compilation may fail.
    Found In: 3.10.0
    Workaround: See Known Issues and Workarounds.
    Fixed In: TBD

  • Description: Upgrading from a previous version of TigerGraph has known issues.
    Found In: 3.10.0
    Workaround: See the section Known Issues and Workarounds for more details.
    Fixed In: TBD

  • Description: The Input Policy feature has known limitations.
    Found In: 3.10.0
    Workaround: See the section Input Policy Limitations for more details.
    Fixed In: TBD

  • Description: The Change Data Capture (CDC) feature has known limitations.
    Found In: 3.10.0
    Workaround: See the section CDC Limitations for more details.
    Fixed In: TBD

  • Description: If the FROM clause pattern is a multi-hop and the ACCUM clause reads both primitive and container type attributes or accumulators of a vertex, the internal query rewriting logic may generate an invalid rewritten output. This results in the error message: It is not allowed to mix primitive types and accumulator types in GroupByAccum.
    Found In: 3.9.3
    Fixed In: TBD

  • Description: Users may see high CPU usage caused by Kafka prefetching when there is no query or posting request.
    Found In: 3.9.3
    Workaround: TBD
    Fixed In: TBD

  • Description: The GSQL query compiler may report a false error for a valid query that uses a vertex set variable (e.g. Ent in reverse_traversal_syntax_err) to specify the midpoint or target vertex of a path in a FROM clause pattern.
    Found In: TBD
    Workaround: TBD
    Fixed In: TBD

  • Description: If a loading job is expected to load from a large batch of files or Kafka queues (e.g. more than 500), the job's status may not be updated for an extended period of time.
    Found In: 3.9.3
    Workaround: Check the loader log file as an additional reference for loading status.
    Fixed In: TBD

  • Description: When a GPE/GSE is turned off right after initiating a loading job, the loading job is terminated internally. However, users may still observe the loading job as running on their end.
    Found In: 3.9.3
    Workaround: See Troubleshooting Loading Job Delays for additional details.
    Fixed In: TBD

  • Description: In v3.9.1 and v3.9.2, when inserting a new edge in GPR and INTERPRET mode, the GPE prints a warning message because a discriminator string is not set for newly inserted edges, creating an inconsistency in the delta message for GPR and INTERPRET mode.
    Found In: 3.9.2
    Workaround: See Troubleshooting Loading Job Delays for additional details.
    Fixed In: 3.9.3

  • Description: GSQL EXPORT GRAPH may fail and cause a GPE to crash when a UDT type has a fixed STRING size.
    Found In: TBD
    Workaround: TBD
    Fixed In: TBD

  • Description: After a global loading job has been running for a while, getting the loading status can fail because KAFKASTRM-LL is reported as not online when it actually is. The global loading process then exits and fails the local job after timing out while waiting for the global loading job to finish.
    Found In: TBD
    Workaround: TBD
    Fixed In: TBD

  • Description: When memory usage approaches 100%, the system may stall because the process to elect a new GSE leader did not complete correctly.
    Found In: TBD
    Workaround: This lockup can be cleared by restarting the GSE.
    Fixed In: TBD

  • Description: If CPU and memory utilization remain high for an extended period during a schema change on a cluster, a GSE follower could crash if it is asked to insert data belonging to the new schema before it has finished handling the schema update.
    Found In: TBD
    Workaround: TBD
    Fixed In: TBD

  • Description: When available memory becomes very low in a cluster and there are a large number of vertex deletions to process, some remote servers might have difficulty receiving the metadata needed to be aware of all the deletions across the full cluster. The mismatched metadata will cause the GPE to go down.
    Found In: TBD
    Workaround: TBD
    Fixed In: TBD

  • Description: Subqueries with SET<VERTEX> parameters cannot be run in Distributed or Interpreted mode.
    Found In: TBD
    Workaround: Limited Distributed mode support was added in 3.9.2.
    Fixed In: TBD

  • Description: Upgrading a cluster with 10 or more nodes to v3.9.0 requires a patch.
    Found In: 3.9
    Workaround: Please contact TigerGraph Support if you have a cluster this large. Clusters with nine or fewer nodes do not require the patch.
    Fixed In: 3.9.1

  • Description: Downsizing a cluster to have fewer nodes requires a patch.
    Found In: 3.9.0
    Workaround: Please contact TigerGraph Support.
    Fixed In: TBD

  • Description: During peak system load, loading jobs may sometimes display an inaccurate loading status.
    Found In: 3.9.0
    Workaround: Continue to run SHOW LOADING STATUS periodically to display the up-to-date status.
    Fixed In: TBD

  • Description: When managing many loading jobs, pausing a data loading job may result in a longer-than-usual response time.
    Found In: TBD
    Workaround: TBD
    Fixed In: TBD

  • Description: Schema change jobs may fail if the server is experiencing a heavy workload.
    Found In: TBD
    Workaround: Avoid applying schema changes during peak load times.
    Fixed In: TBD

  • Description: User-defined Types (UDTs) do not work if they exceed the string size limit.
    Found In: TBD
    Workaround: Avoid using UDTs for variable-length strings that cannot be limited in size.
    Fixed In: TBD

  • Description: The tab character \t is not handled properly in AVRO or Parquet file loading; it is loaded as \\t.
    Found In: TBD
    Workaround: TBD
    Fixed In: TBD

  • Description: If System.Backup.Local.Enable is set to true, this also enables a daily full backup at 12:00am UTC.
    Found In: 3.9.0
    Workaround: TBD
    Fixed In: 3.9.1

  • Description: The data streaming connector does not handle NULL values; the connector may not operate properly if a NULL value is submitted.
    Found In: TBD
    Workaround: Replace NULL with an alternate value, such as the empty string "" for STRING data or 0 for INT data. (NULL is not a valid value for the TigerGraph graph data store.)
    Fixed In: TBD

  • Description: Automatic message removal is an Alpha feature of the Kafka connector and has several known issues.
    Found In: TBD
    Workaround: TBD
    Fixed In: TBD

  • Description: The DATETIME data type is not supported by the PRINT … TO CSV statement.
    Found In: 3.9.0
    Workaround: TBD
    Fixed In: 3.9.1

  • Description: The LDAP keyword memberOf for declaring group hierarchy is case-sensitive.
    Found In: TBD
    Workaround: TBD
    Fixed In: TBD

Compatibility Issues

Each compatibility issue below lists the version in which it was introduced.

  • Users could encounter file input/output policy violations when upgrading a TigerGraph version. See Input policy backward compatibility. (Introduced in v3.10.0)

  • When a PRINT argument is an expression, the output uses the expression as the key (label) for that output value. To better support Antlr processing, PRINT now removes any spaces from that key. For example, count(DISTINCT @@ids) becomes count(DISTINCT@@ids). (Introduced in v3.9.3+)

  • Betweenness Centrality algorithm: the reverse_edge_type (STRING) parameter changed to reverse_edge_type_set (SET<STRING>), to be consistent with edge_type_set and similar algorithms. (Introduced in v3.9.2+)

  • For vertices with string-type primary IDs, vertices whose ID is an empty string are now rejected. (Introduced in v3.9.2+)

  • The default mode for the Kafka Connector changed from EOF="false" to EOF="true". (Introduced in v3.9.2+)

  • The default retention time for the two monitoring services, Informant.RetentionPeriodDays and TS3.RetentionPeriodDays, was reduced from 30 to 7 days. (Introduced in v3.9.2+)

  • The filter for /informant/metrics/get/cpu-memory now accepts a list of ServiceDescriptors instead of a single ServiceDescriptor. (Introduced in v3.9.2+)

  • Some user-defined functions (UDFs) may no longer be accepted due to increased security screening. (Introduced in v3.9+)

    • UDFs may no longer be named to_string(). This is now a built-in GSQL function.

    • UDF names may no longer use the tg_ prefix. Any user-defined function whose name began with tg_ must be renamed or removed in ExprFunctions.hpp.
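As a sketch of the required UDF rename (the function name and body here are illustrative, not taken from any shipped ExprFunctions.hpp):

```cpp
#include <cctype>
#include <string>

// Before the v3.9 screening, this UDF might have been named tg_to_upper.
// The tg_ prefix is now reserved, so the function is renamed without it;
// GSQL queries calling it must be updated to use the new name.
inline std::string str_to_upper(std::string s) {
  for (char &c : s) {
    c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
  }
  return s;
}
```

Renaming keeps the function's behavior identical; only its identifier changes to pass the stricter screening.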

Deprecations

Each deprecated feature below lists the version in which it was deprecated and the version in which it was (or will be) removed.

  • The use of plaintext tokens in authentication is deprecated. Use OIDC JWT Authentication instead. (Deprecated: 3.10.0; Removed: TBD)

  • The command gbar is removed and no longer available. However, versions of TigerGraph before 3.10.0 can still use gbar to create a backup of the primary cluster. See Backup and Restore with gbar for how to create a backup. (Deprecated: 3.7; Removed: 3.10.0)

  • Vertex-level Access Control (VLAC) and VLAC methods are deprecated and will no longer be supported. (Deprecated: 3.10.0; Removed: 4.0)

  • Spark connection via the JDBC Driver is deprecated and will no longer be supported. (Deprecated: 3.10.0; Removed: TBD)

  • Build Graph Patterns is deprecated and will no longer be updated or supported; Insights is now the tool of choice for building visual queries. (Deprecated: v3.9.3; Removed: TBD)

  • Kubernetes classic mode (non-operator) is deprecated. (Deprecated: v3.9; Removed: TBD)

  • The WRITE_DATA RBAC privilege is deprecated. (Deprecated: v3.7; Removed: TBD)