TigerGraph DB Release Notes
TigerGraph Server 3.10.2 LTS was released on Oct 18, 2024.
TigerGraph Server 3.10.1 LTS was released on May 7, 2024.
TigerGraph Server 3.10.0 preview version was released on March 13, 2024.
LTS versions are supported for 24 months from their initial release (X.X.0) and are the recommended choice for production deployments.
Key New Features
- Change Data Capture (CDC) - Equips TigerGraph users with the capability to automatically capture and stream data changes to external Kafka systems maintained by the user. CDC can also be configured in Admin Portal.
- Workload Queue - Configure workload queues so that queries are routed to the appropriate queues at runtime.
- Spark Connector - A new dedicated connector for Apache Spark that reads data from a Spark DataFrame and writes it to TigerGraph.
- Online Backup - Minimizes blocking time during backups, allowing POST requests from TigerGraph users to execute successfully while a backup is in progress.
- Differential Backup - Backs up only the data files that have changed since the most recently completed full backup, without any data loss.
- Refined Upgrade Process - Changes to the upgrade process allow a TigerGraph upgrade without having to upgrade the Kubernetes operator.
- Support for Non-Interactive Upgrade - Users can pass the -n option to skip the interactive (y/n) prompt when switching to the new version.
- Global Schema Restricted to Global Scope - Users must have the global scope to interact with global schema change jobs (create, delete, run).
Detailed List of New and Modified Features
TigerGraph Server
- [3.10.1] Two new configurations to help tune rebuilder scheduling logic - Under GPE.BasicConfig.Env there are two new configurations, SegmentMetaFlushAlways and SegmentMetaForceFlushIntervalSec, to help fine-tune rebuilder scheduling.
- [3.10.1] Ubuntu 22.04 is now certified - Ubuntu 22.04 is now one of TigerGraph's certified operating systems.
- [3.10.1] Audit log support for direct RESTPP API calls - Audit logs now record direct REST++ API calls.
- [3.10.1] Added new return codes:
  - REST-4000 - Response time exceeds the timeout limit.
  - REST-10020 - License has expired.
  - REST-10021 - Access to the file has failed.
  - REST-30001 - The parameter is invalid (general error).
- [3.10.1] New configuration field System.Metrics.IncludeHostName - Users now have the option to include the hostname/IP in the metrics output in OpenMetrics format (see the example after this list).
- [3.10.1] Nodes can now be paired with hostnames during installation - Users can now identify a node by IP address or hostname during installation. This is supported in both interactive and non-interactive installation.
- [3.10.1] Make requests with GSQL-Secret - Users can now pass the authorization secret in a request header as a GSQL-Secret (see the example after this list).
- [3.10.1] Four new reserved keywords - FUNCTION, OPENCYPHER, POLICY, and ROW have all been added to the GSQL DDL reserved words and keywords list.
- Audit Logs - Audit logs maintain a historical record of activity events, noting the time, responsible user or service, and affected entity.
- Import/Export Individual Graphs - Users can now import or export individually selected graphs.
- Updates to the CRR Setup - Updates to the Cross-Region Replication (CRR) setup enable this feature to flow into the gadmin backup restore process, preventing a writable gap.
- Externalizing Kafka Configs using a Config Provider - TigerGraph's Kafka connector config can use a reference to retrieve the config value from an external source.
- Two new flags for gadmin backup create - Added two parameters that enable data consistency checks during the backup.
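For reference, a minimal sketch of enabling the new metrics hostname field described above. The config key comes from the item itself; the boolean value and the blanket restart are assumptions, not confirmed steps:

```
# Assumption: the field takes a boolean; adjust the value if your build expects another format.
gadmin config set System.Metrics.IncludeHostName true
gadmin config apply -y
# Assumed: restart so the OpenMetrics output picks up the change (a narrower restart may suffice).
gadmin restart all -y
```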
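Similarly, a sketch of the GSQL-Secret request header mentioned above. The header name comes from the item; the endpoint, port, and placeholder secret are illustrative assumptions:

```
# Hypothetical call: pass the secret as a request header instead of a token.
# /echo on the default RESTPP port 9000 is used only as a placeholder endpoint.
curl -s -H "GSQL-Secret: <your_secret>" "http://localhost:9000/echo"
```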
GSQL Command and Querying Language
- Context Functions - Context functions are a set of new built-in functions that provide insight into the user's information and work inside INSTALLED queries, INTERPRET queries, and GSQL functions.
- Command Updates - Added a new flag --local to the gadmin start and gadmin stop commands, reducing the time required to start or stop local services (see the example after this list).
- GSQL Data Streaming Improvements - Improvements to GSQL data streaming that reduce CPU usage, improve performance, optimize disk usage, and increase stability and cohesion.
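A quick sketch of the --local flag described above; the GPE service is used only as an example:

```
# Stop and start only the services on the local node instead of the whole cluster.
gadmin stop gpe --local
gadmin start gpe --local
```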
Loading
- Support for the Snowflake Data Warehouse - Added support for loading data from Snowflake, another popular data warehouse.
- Support for PostgreSQL - Added support for loading data from PostgreSQL, another popular data source.
- Avro Data Validation with KafkaConnect - The KafkaConnect feature flag ErrorTolerance enables data loading services to handle malformed data and report errors effectively.
- Local File and Kafka Loader Auto-Restart - The local file and Kafka loaders now restart automatically if they unexpectedly quit.
Schema
- New non-interactive mode GSQL operations - Added non-interactive modes for four operations: create user, alter password, export graph all, and import graph all.
Querying and Query Management
- [3.10.1] New impact warnings when running a schema change job - Added a warning message when running a global or local schema change job so users can understand the impact of the change on queries.
- Support for Edge Accumulators - Support for edge accumulators in single-hop distributed queries.
- Option -N for schema change jobs - Added an option to RUN SCHEMA_CHANGE JOB that skips recompiling and reinstalling queries (see the example after this list).
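A sketch of the -N option described above, invoked through the gsql CLI; the graph and job names are hypothetical, and the option's placement at the end of the statement is an assumption:

```
# Run the schema change job but skip recompiling and reinstalling queries.
gsql -g MyGraph 'RUN SCHEMA_CHANGE JOB add_age_attribute -N'
```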
System Management and Monitoring Enhancements
- [3.10.2] Enhanced Metrics Reports - Total and available capacity for CPU and memory are now reported by the /informant/metrics endpoint (see the example after this list).
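A minimal sketch of reading the endpoint named above; the default port 14240 and the assumption that a plain GET returns the report are illustrative, not confirmed:

```
# Assumption: the metrics endpoint is reachable on the default API port and answers a plain GET.
curl -s "http://localhost:14240/informant/metrics"
```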
Kubernetes Operator
- [3.10.1] Kubernetes Operator is GA - The Kubernetes Operator is generally available as of 3.10.1.
- Added new fields - A new field .spec.tigergraphConfig in the TigerGraph CR and a new option --tigergraph-config in the kubectl-tg plugin (see the example after this list).
- Updates to cluster creation using YAML - Improvements to the configurations to align with the database.
- Support for mounting multiple PVs and PVCs for pods - Added two optional fields, additionalStorages and spec.storage, to customize the PVs for TigerGraph pods.
- Support for customizing pods - Customize the pods or containers; for example, users can add customized labels and annotations or change the security context of the containers.
- Pause a running cluster - Added a new field .spec.pause in the TigerGraph CR and a new subcommand kubectl tg pause in the kubectl-tg plugin (see the example after this list). Users can set .spec.pause=true to pause a running cluster and resume it by setting .spec.pause=false.
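For reference, a sketch of the two kubectl-tg additions called out above. The --tigergraph-config option and the kubectl tg pause subcommand come from the items themselves; the cluster name, namespace, and the config key/value shown are hypothetical:

```
# Pass a database config through the operator (hypothetical key/value and names).
kubectl tg update --cluster-name test-cluster --namespace tigergraph \
  --tigergraph-config "RESTPP.Factory.DefaultQueryTimeoutSec=60"

# Pause the running cluster; per the item above, resuming is done by setting .spec.pause=false on the CR.
kubectl tg pause --cluster-name test-cluster --namespace tigergraph
```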
Security
- [3.10.1] SSO Match Strategy Extension - The SSO match strategy has been extended to allow matches via regular expression.
- [3.10.1] Added a JWT token config - Security.JWT.Audience was added so users can set up JWT token authentication that verifies whether the aud claim (the recipient for which the JWT is intended) in the token matches the configured value (see the example after this list).
- RBAC: Row Policy (Preview Feature) - Row policies are used to control access to specific rows of data in TigerGraph. See also RBAC Row Policy EBNF for examples.
- Object-Based Privileges - This mechanism allows users to grant or revoke privileges based on specific objects. See Object-Based Privilege Tables for a complete list.
- OIDC JWT Authentication - Provides token-based authentication in JSON Web Token (JWT) format, giving TigerGraph users better control over application access.
- File Input Policy - GSQL.fileInputPolicy allows users to apply restrictions on the location of local files used to load data into TigerGraph.
- Kafka Security via SSL - Kafka brokers can be secured with SSL, including the connections from Kafka clients to Kafka brokers.
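A sketch of the JWT audience check described above; the audience value is a placeholder and the blanket restart is an assumption:

```
# Incoming JWTs must carry an aud claim matching this configured value (placeholder shown).
gadmin config set Security.JWT.Audience "tigergraph-api"
gadmin config apply -y
# Assumed: restart so the authentication layer reloads the setting.
gadmin restart all -y
```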
TigerGraph Suite Updates
Admin Portal
- Change Data Capture (CDC) can be enabled in Admin Portal.
- SSO OIDC via Okta - Support for the standard OIDC Authorization Code Flow adds more security for Admin Portal user logins.
GraphStudio
- Single Graph Import and Export Support - Allows users to choose a single graph and its data when they export or import data in GraphStudio.
- New GUI configuration to disable concurrent sessions - GUI.EnableConcurrentSession allows users to disable concurrent sessions so that multiple browsers cannot log in with the same username at the same time; a new login revokes the previous session and warns that user to log in again (see the example after this list).
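A sketch of the concurrent-session setting named above; the boolean value format and the GUI restart are assumptions:

```
# Disallow the same username from being logged in from multiple browsers at once.
gadmin config set GUI.EnableConcurrentSession false
gadmin config apply -y
gadmin restart gui -y
```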
TigerGraph Insights
- Changed Single Value Widget to Value Widget - Modified the value element of Insights to support the mapping of multiple values.
- Added Markdown Widget - Allows users to add formatted text, links, images, and other rich content to dashboards.
- Conditional Styling Widget Update - Conditional styling can now be applied to edges, and an always option has been added to the condition dropdown.
- Added Scatter Chart Widget - The scatter chart provides a visual representation of the relationship between two numerical variables, allowing users to identify patterns or correlations in the data.
Fixed Issues
Fixed and Improved in 3.10.2
- Fixed an issue where local accumulators defined across multiple lines in a query were misinterpreted in the GSQL client (GLE-8259).
- Fixed an issue in the post-upgrade check that caused the upgrade to abort due to insufficient permissions on the /tmp directory (GLE-8005).
- Fixed an issue where loading jobs with a WHERE condition would hang after upgrading from an older version (GLE-7953).
- Added an error report if a schema check is requested but cannot be performed because the GPE is in warmup status (GLE-7898).
- Fixed a situation where a query containing a BREAK or CONTINUE statement could produce incorrect results (GLE-7874).
- Fixed a regression with installing queries that create lists containing mixed types of numeric data (GLE-7928).
- Resolved an intermittent deadlock in Informant that caused gadmin status to fail (TP-5930).
- Fixed an int64 value underflow error by explicitly type-casting uint64 (CORE-4108).
- Restored the ability to run the TigerGraph gcollect command on Kubernetes (TP-6351).
- Fixed an issue with database import hanging because the status of a loading job was not received by the Kafka streaming library (TP-5772).
- Allowed installation to continue on Oracle and RedHat Linux 8 even if the TigerGraph user is not listed in AllowedUsers in /etc/ssh/sshd_config (TP-5105).
Fixed and Improved in 3.10.1
Functionality
- Fixed a known issue where the attribute name memberOf was case-sensitive. It is now case-insensitive (GLE-6660).
- Clarified the error message and log produced when a global schema_change fails to add an edge because the vertex it relies on does not exist (GLE-6751).
- Fixed an issue where installation was halted because TigerGraph disks were mounted with noexec on AppRoot or DataRoot, preventing execution (TP-4929).
- Fixed an issue where there was a delay in loading response times due to the syntax detection process in GSQL (GLE-6822).
- Fixed an issue where a GPE failure was reported during query execution, prompting relocation from /tmp to System.TempRoot (GLE-5536).
- Fixed an issue where an incorrect error response occurred when the specified graph does not exist (APS-2824).
- Fixed an issue where users encountered the error "Vertex expansion failed: c.default.post is not a function" during the Explore Neighbors operation in Insights (APS-2840).
Fixed and Improved in 3.10.0
Functionality
- Fixed an issue where, if the primary node was offline, access to GraphStudio was interrupted; access resumed once the primary node was back online (APPS-258).
- Fixed an issue where some GPR and Interpret queries that specified the built-in filter() function would fail installation because of a row policy or tag filter (GLE-6448).
- Fixed an issue where restarting RESTPP resulted in the task count being greater than the actual number (TP-4498).
- Fixed an issue where versions 3.9.3 and 3.10.0 could not run a GSQL query when a single node was down in a High Availability cluster. See the workaround for versions 3.9.2 and below for more details.
- Fixed an issue where changes would not save when switching to fullscreen and back in Insights (APPS-2197).
- Fixed an issue where a vertex would not move after expanding in Explore Graph (APPS-2540).
- Fixed an issue in exception statements where, if one was placed before any query-body statements, it would cause both branches of an IF-ELSE statement to be executed (GLE-3998).
- Fixed an issue where an error in how the ACCUM clause is transformed resulted in a transformed query with a semantic error. See accumulator types for more details on valid types (GLE-5695).
- Fixed an issue where parsing a negative float parameter to the GSQL CLI in {key:value} format would create an argument error (GLE-5875).
Crashes and Deadlocks
- Fixed a GPE crash during query execution when accumulator values are not valid. See accumulator types for more details (GLE-4411).
Improvements
- Significantly reduced CPU usage when a large number of loading jobs are started at the same time (TP-4159).
- Improved the write speed of loading jobs (TP-4159).
- Improved disk usage by restricting a loading job in waiting status to consume disk resources only when it actually writes data (TP-4474).
- Improved stability and cohesion of the connector and loader, which helps create better synchronization and reduces inconsistencies in the statuses (TP-4158).
- Significantly reduced the pause time during backups from a few minutes to a couple of seconds, regardless of data size (CORE-3000).
- Improved data consistency during the backup and restore process (CORE-3000).
- Improved availability when one KSL server is in an error state (TP-4378 & TP-4593).
- Reduced the required privilege for /rebuildnow and /deleted_vertex_check to graph-level READ DATA; both endpoints can now also run on the DR cluster when the CRR feature is enabled (CORE-3291).
- Improved exception statements by adding a default exception format for cases where the exception is not defined in the query (GLE-5854).
- Improved long-running RESTPP requests, which now use less memory (CORE-3027).
- Renamed the audit log files from log.AUDIT to log.AUDIT-GSQL (GLE-6496).
- Extended the audit log timestamp format from 2023-12-20 14:42:50.25 to 2023-12-20T14:42:50.243-07:00 (GLE-6395).
- Improved the clarity of the userAgent field in audit logs when authentication fails; the audit log now records the correct user agent (GLE-6404).
- Improved audit logs by adding the operating system's username to the audit log record (GLE-6394).
- Improved the SearchFile experience by increasing GRPC_CLIENT_TIMEOUT (APPS-2711).
- Improved handling of the ExprFunction file to automatically remove the leftover "to_string" function (GLE-5834).
- Improved the retention strategy for EventQueue, enabling more timely monitoring of disk space utilization (TP-4920).
- Improved service log accuracy to show SSO users' usernames in the logs (APPS-2496).
Known Issues and Limitations
| Description | Found In | Workaround | Fixed In |
| --- | --- | --- | --- |
| Running either | 3.9.1 | After running either command, change the superuser's password to make it secure again. | TBD |
| EXPORT GRAPH ALL does not correctly handle loading jobs containing | 3.2 | TBD | |
| When using IMPORT ALL if a user's schema size in the | 3.2 | | TBD |
| If importing a role, policy, or function that has a different signature or content from the existing one, the one being imported will be skipped and not aborted. For example: | 3.10.0 | Users need to re-create (delete and create) the imported role, policy, or function manually, and make sure that the imported one meets the requirements set by the existing one. | TBD |
| Row Policy (Preview Feature) does not yet filter or check vertex attribute data in upsert operations, such as: | 3.10.0 | Users should restrict access to creating/running queries and loading jobs for roles related to row policy. | TBD |
| In a file INPUT and OUTPUT policy, if there exist two paths ( | 3.2 and 3.10.0 | Users should avoid using paths if they are nested. For example, avoid this scenario: path2 = | 3.10.1 |
| An issue has been observed where RESTPP sends a request to all GPEs; if one GPE is down, the request sent to it will | 3.10.0 | | TBD |
| While running | 3.10.0 | Restart all services in Admin Portal or the backend. | TBD |
| | 3.10.0 | TBD | |
| Upgrading from a previous version of TigerGraph has known issues. | 3.10.0 | See section Known Issues and Workarounds for more details. | TBD |
| The Input Policy feature has known limitations. | 3.10.0 | See section Input Policy Limitations for more details. | TBD |
| The Change Data Capture (CDC) feature has known limitations. | 3.10.0 | See section CDC Limitations for more details. | TBD |
| If the | 3.9.3 | This results in the error message: | TBD |
| Users may see high CPU usage caused by Kafka prefetching when there is no query or posting request. | 3.9.3 | TBD | TBD |
| The GSQL query compiler may report a false error for a valid query using a vertex set variable (e.g. | TBD | TBD | TBD |
| If a loading job is expected to load from a large batch of files or Kafka queues (e.g. more than 500), the job's status may not be updated for an extended period of time. | 3.9.3 | In this case, users should check the loader log file as an additional reference for loading status. | TBD |
| When a GPE/GSE is turned off right after initiating a loading job, the loading job is terminated internally. However, users may still observe the loading job as running on their end. | 3.9.3 | Please see Troubleshooting Loading Job Delays for additional details. | TBD |
| For v3.9.1 and v3.9.2, when inserting a new edge in | 3.9.2 | Please see Troubleshooting Loading Job Delays for additional details. | 3.9.3 |
| GSQL | TBD | TBD | TBD |
| After a global loading job has been running for a while, a failure can be encountered when getting the loading status due to | TBD | TBD | TBD |
| When memory usage approaches 100%, the system may stall because the process to elect a new GSE leader did not complete correctly. | TBD | This lockup can be cleared by restarting the GSE. | TBD |
| If CPU and memory utilization remain high for an extended period during a schema change on a cluster, a GSE follower could crash if it is asked to insert data belonging to the new schema before it has finished handling the schema update. | TBD | TBD | TBD |
| When available memory becomes very low in a cluster and there are a large number of vertex deletions to process, some remote servers might have difficulty receiving the metadata needed to be aware of all the deletions across the full cluster. The mismatched metadata will cause the GPE to go down. | TBD | TBD | TBD |
| Subqueries with SET<VERTEX> parameters cannot be run in Distributed or Interpreted mode. | TBD | (Limited Distributed mode support was added in 3.9.2.) | TBD |
| Upgrading a cluster with 10 or more nodes to v3.9.0 requires a patch. | 3.9 | Please contact TigerGraph Support if you have a cluster this large. Clusters with nine or fewer nodes do not require the patch. | 3.9.1 |
| Downsizing a cluster to have fewer nodes requires a patch. | 3.9.0 | Please contact TigerGraph Support. | TBD |
| During peak system load, loading jobs may sometimes display an inaccurate loading status. | 3.9.0 | This issue can be remediated by continuing to run | TBD |
| When managing many loading jobs, pausing a data loading job may result in a longer-than-usual response time. | TBD | TBD | TBD |
| Schema change jobs may fail if the server is experiencing a heavy workload. | TBD | To remedy this, avoid applying schema changes during peak load times. | TBD |
| User-defined types (UDTs) do not work if the string size limit is exceeded. | TBD | Avoid using UDTs for variable-length strings that cannot be limited by size. | TBD |
| Unable to handle the tab character | TBD | TBD | TBD |
| If | 3.9.0 | TBD | 3.9.1 |
| The data streaming connector does not handle NULL values; the connector may not operate properly if a NULL value is submitted. | TBD | Users should replace NULL with an alternate value, such as an empty string "" for STRING data, 0 for INT data, etc. (NULL is not a valid value for the TigerGraph graph data store.) | TBD |
| Automatic message removal is an Alpha feature of the Kafka connector. It has several known issues. | TBD | TBD | TBD |
| The | 3.9.0 | TBD | 3.9.1 |
| The LDAP keyword | 3.9 | Check the case of the keywords for | 3.10.1 |
Compatibility Issues
| Description | Version Introduced |
| --- | --- |
| Users could encounter file input/output policy violations when upgrading a TigerGraph version. See Input policy backward compatibility. | v3.10.0 |
| When a PRINT argument is an expression, the output uses the expression as the key (label) for that output value. To better support Antlr processing, PRINT now removes any spaces from that key. For example, | v3.9.3+ |
| Betweenness Centrality algorithm: | v3.9.2+ |
| For vertices with string-type primary IDs, vertices whose ID is an empty string will now be rejected. | v3.9.2+ |
| The default mode for the Kafka Connector changed from EOF="false" to EOF="true". | v3.9.2+ |
| The default retention time for two monitoring services | v3.9.2+ |
| The filter for | v3.9.2+ |
| Some user-defined functions (UDFs) may no longer be accepted due to increased security screening. | v3.9+ |
Deprecations
| Description | Deprecated | Removed |
| --- | --- | --- |
| The use of plaintext tokens in authentication is deprecated. Use OIDC JWT Authentication instead. | 3.10.0 | 4.1 |
| The command | 3.7 | 3.10.0 |
| Vertex-Level Access Control (VLAC) and VLAC methods are now deprecated. | 3.10.0 | 4.0 |
| Spark Connection via JDBC Driver is now deprecated and will no longer be supported. | 3.10.0 | TBD |
| | v3.9.3 | TBD |
| Kubernetes classic mode (non-operator) is deprecated. | v3.9 | TBD |
| The | v3.7 | TBD |