TigerGraph DB Release Notes
TigerGraph Server 4.1.1 was released on November 12, 2024.
TigerGraph Server 4.1.0 preview version was released on August 30, 2024.
TigerGraph 4’s feature set is an enhancement of what was available in 3.10.1. Therefore, features that were already in 3.10.1 are not considered New Features.
Detailed List of New and Modified Features
Backup and Restore Enhancements
- Point-in-Time Restore: Users can roll back the database to a moment they select, not only to the time of an available backup snapshot.
- Role ARN for Backup to AWS S3 Buckets: Users can use AWS Role ARNs (Amazon Resource Names) for convenient and secure management of backups.
- The --custom-tag flag gives users more control over the naming of backup files. (A hedged command sketch follows this list.)
- See also High Availability (HA) Enhancements.
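Below is a minimal sketch of how the backup items above might be exercised from a shell. The gadmin backup create/list/restore subcommands are the standard backup CLI from 3.9 onward; the placement of the new --custom-tag flag is an assumption, not confirmed syntax.

```
# Create a backup whose archive files carry a custom tag
# (--custom-tag is the 4.1 flag named above; its position here is assumed).
gadmin backup create nightly-2024-11-12 --custom-tag nightly

# List available backups and restore the chosen one.
gadmin backup list
gadmin backup restore nightly-2024-11-12
```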
Security Enhancements
- Fine-grained Privileges on Queries: gives administrators finer control over who may perform which actions on which objects (see the GSQL sketch at the end of this section):
  - Allows privileges to be granted/revoked at the query level.
  - Adds new query privileges OWNERSHIP, INSTALL, and EXECUTE, and splits WRITE into CREATE, UPDATE, and DROP.
- JWT Authentication: JWT-based authentication and authorization is more secure than the plaintext tokens used in earlier TigerGraph versions. Users can generate and use JWT tokens when sending REST requests to the GSQL or REST servers. (A request sketch follows this list.)
- Password Complexity and Rotation Rules: Administrators can specify rules for the complexity of passwords and for how long or how many times a password may be used, in order to improve security and help organizations satisfy compliance requirements.
- Expanded Audit Log Coverage: Audit logs now include gadmin activity, for more complete audit coverage.
- [4.1.1] More detailed information in audit logs. Also, an authorized admin user can configure the audit logs to include PII.
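A sketch of the query-level privilege flow described above. It assumes a GRANT/REVOKE PRIVILEGE ... ON QUERY form consistent with the fine-grained model; the exact clause order and all names used are illustrative, so confirm them against the 4.1 access-control reference before use.

```
# Hypothetical query, graph, and role names; run via the GSQL client.
gsql 'GRANT PRIVILEGE EXECUTE ON QUERY recommend_items IN GRAPH Retail TO app_readers'
gsql 'GRANT PRIVILEGE INSTALL ON QUERY recommend_items IN GRAPH Retail TO app_devs'
gsql 'REVOKE PRIVILEGE EXECUTE ON QUERY recommend_items IN GRAPH Retail FROM app_readers'
```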
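And a minimal sketch of sending a REST request with a JWT, assuming a token has already been generated and exported as TG_JWT; the host, port, and endpoint are placeholders.

```
# JWTs travel as standard Bearer tokens; 14240 is the default gateway port.
curl -s -H "Authorization: Bearer $TG_JWT" \
  "https://tg.example.com:14240/restpp/echo"
```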
High Availability (HA) Enhancements
- Data Loading HA (Beta): If the database is replicated, this data loading mode offers faster loading, reduced disk usage, and HA with automatic failover. Available for TigerGraph’s data streaming connectors for cloud storage, Kafka, data warehouses, and Spark.
- Change Data Capture (CDC) HA: CDC continues to function as long as at least one replica per partition is online. Previously, Replica 1 needed to remain operational.
- [4.1.1] CDC messages have a "graph" field to specify which graph was affected.
- [4.1.1] Cross-Region Replication now includes loading jobs.
- Backup HA: Backups remain available even if a node fails. Previously, backups were unavailable if a node failed.
Data Streaming/Connector Enhancements
- Malformed Loading Data Inspector: SHOW LOADING ERROR presents a sample of malformed data lines and their error types to an authorized user, enabling them to quickly locate and diagnose loading problems. (See the sketch after this list.)
- Data Export to AWS S3: Export query results in CSV format directly to AWS S3 buckets for efficient data sharing and analysis.
- Stream Data into Spark: Using the Spark data source reader API, users can run TigerGraph queries and stream the output to Spark as DataFrames. Streaming the output lifts the 2 GB query response size limitation that applies to non-streaming output.
- Notification for Stuck Loader: If loading is stuck or behaving abnormally, the loader sends an alert message to the console, including a diagnosis, enabling users to take timely action.
- See also High Availability (HA) Enhancements.
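A minimal sketch of the inspector, using the bare command quoted above from the GSQL client; whether it accepts additional filters (for example, a job name) is not confirmed here.

```
# After a loading job reports rejected lines, sample them with their
# error types (e.g. token count mismatch, type conversion failure).
gsql 'SHOW LOADING ERROR'
```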
System Management and Monitoring Enhancements
- New options and optimized behavior in GADMIN CONFIG APPLY, so that only those services affected by configuration changes are restarted. (See the shell sketch after this list.)
- Enhanced Metrics Reports: Total and available capacity for CPU and memory are now reported by the /informant/metrics endpoint.
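A sketch tying these two items together, assuming default ports and a real configuration key; the endpoint path is quoted from the note above, though the exact request and response shapes may vary by version.

```
# Change one setting, then apply; in 4.1 only the affected services
# (here, RESTPP) are expected to restart.
gadmin config set RESTPP.Factory.DefaultQueryTimeoutSec 60
gadmin config apply -y

# Inspect the enhanced CPU/memory capacity metrics (default port 14240).
curl -s "http://localhost:14240/informant/metrics"
```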
Kubernetes Operator Enhancements
- TigerGraph Operator 1.2.0 introduces several significant new features, including the ability to create services for sidecar containers, support for cluster storage resizing, new lifecycle hooks for TigerGraph CR, and enhanced Multi-AZ cluster resiliency. See TigerGraph K8s Operator 1.2.0 details:
  - Region Awareness with Pod Topology Spread Constraints: Improve workload distribution and availability by enabling region awareness.
  - Automatic Expansion of PVCs for TigerGraph CR: Simplify storage management with automated Persistent Volume Claim (PVC) resizing. (See the sketch after this list.)
  - New Lifecycle Hooks for TigerGraph CR: Utilize preDeleteAction and prePauseAction lifecycle hooks for better control and automation during cluster operations.
  - Service Creation for Sidecar Containers: Easily create services for sidecar containers with TigerGraph CR.
  - Enhanced Debugging Mode: Debug more effectively with the newly introduced debugging mode in the operator.
  - Customization of MaxConcurrentReconciles for the Operator: Fine-tune the TigerGraph operator’s performance by customizing the maximum number of concurrent reconciles.
- [4.1.1] Added the namespace as a suffix to the HostName in the HostList configuration for TigerGraph on K8s.
- [4.1.1] Support for customizing the external service port for the TigerGraph and sidecar listeners on K8s.
- [4.1.1] Pre-upgrade hook in the TigerGraph Operator Helm chart to refresh the CRD automatically during the Operator upgrade.
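A sketch of the storage-resizing workflow using the operator's kubectl tg plugin. The plugin and its update subcommand ship with the TigerGraph Operator, but the specific flags below are assumptions; verify them with kubectl tg update --help and the Operator 1.2.0 reference.

```
# Hypothetical flags and names -- confirm against the Operator 1.2.0 docs.
# With automatic PVC expansion, growing the requested storage should
# resize the cluster's Persistent Volume Claims in place.
kubectl tg update --cluster-name test-cluster --namespace tigergraph \
  --storage-size 200G
```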
Language Enhancements
- REST APIs for Programmatic Use: Introduces a more complete and standard set of REST APIs that:
  - provide coverage for database commands previously available only as GSQL commands;
  - allow developers and AI tools to more easily control and use TigerGraph programmatically. (A request sketch follows this list.)
- [4.1.1] Added the token function get_current_datetime() for loading jobs.
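A minimal sketch of using the programmatic surface, reusing the JWT from the Security section and the /gsql/v1 path family cited elsewhere in these notes; the host and port are placeholders, and the response shape is not specified here.

```
# List users through the GSQL server's v1 REST API.
curl -s -H "Authorization: Bearer $TG_JWT" \
  "https://tg.example.com:14240/gsql/v1/users"
```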
TigerGraph Suite
GraphStudio
- Highlight Connected Edges When Hovering Over a Vertex: shows users the relationships and structure at a glance. Also available in Insights.
- Collapsible Sidebar: lets users use their screen space more efficiently.
- Mandatory Password Change upon Expiration: A password must be changed when it expires or reaches a time/usage limit, enhancing security protocols. Not available on TigerGraph Cloud.
- Customizable Naming of Reverse Edges: enables data modelers to apply more intuitive and domain-specific names.
TigerGraph Insights
- Downloadable Query Output: as CSV or JSON.
- "Tree" View Respects Direction of Directed Edges: to depict hierarchical structures and dependencies more meaningfully.
- Support for Variables in Markdown Widget: for more context-aware and interactive dashboard displays.
AdminPortal
- Health Check Tool: The Health Check Tool in TigerGraph AdminPortal provides administrators with a comprehensive set of checks and diagnostics to ensure the system is running optimally.
- Fine-grained Query Privileges in RBAC: AdminPortal UI for the fine-grained query privileges described above.
Fixed issues
Fixed and Improved in 4.1.1
Functionality
- Fixed an issue where local accumulators defined across multiple lines in a query were misinterpreted in the GSQL client (GLE-7833).
- Fixed a compile issue for an escaped double quote inside a string literal of a loading job (GLE-8742).
- Fixed an issue where the input widget occasionally reset and lost its content (APPS-3095).
- Fixed the inability to run gcollect on a K8s cluster (TP-6291).
- Fixed Admin Portal log search when there are links to nonexistent logs (APPS-2874).
Crashes and Deadlocks
- Fixed a stall in differential backups if the preceding full backup was created before any data was loaded into the system (CORE-3833).
- Fixed an issue where a loading job was stuck and could not be cleared if it failed to start (TP-6419).
- Fixed an issue where the file loader entered an infinite loop if the source file contained only a header line (TP-6636).
Security
- Fixed an issue where users without any roles or privileges could access other users' information through the /gsql/v1/users endpoint (GLE-8477).
- Fixed an issue in the file input/output policy where symbolic links in the allow/block list were unexpectedly dereferenced (GLE-7139).
- Fixed an issue where a TG Cloud password could be changed without first validating the current password (APP-2829).
Known Issues and Limitations
Description | Found In | Workaround | Fixed In
---|---|---|---
After a cluster expansion, the GPE service may remain in the warmup state. | 4.1.1 | If this happens, run | TBD
When using Import All if the schema size in the | 3.2 | | TBD
If importing a role, policy, or function that has a different signature or content from the existing one, the one being imported will be skipped and not aborted. For example: | 3.10.0 | Users need to re-create (delete and create) the imported role, policy, or function manually, and make sure that the one being imported meets the requirements set by the existing one. | TBD
Row Policy (Preview Feature) does not yet filter or check vertex attribute data in upsert operations, such as | 3.10.0 | Users should restrict access to creating/running queries and loading jobs for roles related to row policy. | TBD
In file INPUT and OUTPUT policy, if there exist 2 paths ( | 3.2 and 3.10.0 | Users should avoid using nested paths. For example, avoid this scenario: path2 = | 3.10.1
When RESTPP sends a request to all GPEs, and if one is down, the request sent to it will | 3.10.0 | | TBD
While running | 3.10.0 | Restart all services in Admin Portal or the backend. | TBD
 | 3.10.0 | TBD | 
Upgrading from a previous version of TigerGraph has known issues. | 3.10.0 | See section Known Issues and Workarounds for more details. | TBD
Input Policy feature has known limitations. | 3.10.0 | See section Input Policy Limitations for more details. | TBD
Change Data Capture (CDC) feature has known limitations. | 3.10.0 | See section CDC Limitations for more details. | TBD
If the | 3.9.3 | This results in the error message: | TBD
Users may see high CPU usage caused by Kafka prefetching when there is no query or posting request. | 3.9.3 | TBD | TBD
GSQL query compiler may report a false error for a valid query using a vertex set variable (e.g. | TBD | TBD | TBD
If a loading job is expected to load from a large batch of files or Kafka queues (e.g. more than 500), the job’s status may not be updated for an extended period of time. | 3.9.3 | In this case, users should check the loader log file as an additional reference for loading status. | TBD
When a GPE/GSE is turned off right after initiating a loading job, the loading job is terminated internally. However, users may still observe the loading job as running on their end. | 3.9.3 | Please see Troubleshooting Loading Job Delays for additional details. | TBD
For v3.9.1 and v3.9.2, when inserting a new edge in | 3.9.2 | Please see Troubleshooting Loading Job Delays for additional details. | 3.9.3
GSQL | TBD | TBD | TBD
After a global loading job has been running for a while, a failure can be encountered when getting the loading status due to | TBD | TBD | TBD
When the memory usage approaches 100%, the system may stall because the process to elect a new GSE leader did not complete correctly. | TBD | This lockup can be cleared by restarting the GSE. | TBD
If the CPU and memory utilization remain high for an extended period during a schema change on a cluster, a GSE follower could crash if it is requested to insert data belonging to the new schema before it has finished handling the schema update. | TBD | TBD | TBD
When available memory becomes very low in a cluster and there are a large number of vertex deletions to process, some remote servers might have difficulty receiving the metadata needed to be aware of all the deletions across the full cluster. The mismatched metadata will cause the GPE to go down. | TBD | TBD | TBD
Subqueries with SET<VERTEX> parameters cannot be run in Distributed or Interpreted mode. | TBD | (Limited Distributed mode support was added in 3.9.2.) | TBD
Upgrading a cluster with 10 or more nodes to v3.9.0 requires a patch. | 3.9 | Please contact TigerGraph Support if you have a cluster this large. Clusters with nine or fewer nodes do not require the patch. | 3.9.1
Downsizing a cluster to have fewer nodes requires a patch. | 3.9.0 | Please contact TigerGraph Support. | TBD
During peak system load, loading jobs may sometimes display an inaccurate loading status. | 3.9.0 | This issue can be remediated by continuing to run | TBD
When managing many loading jobs, pausing a data loading job may result in longer-than-usual response time. | TBD | TBD | TBD
Schema change jobs may fail if the server is experiencing a heavy workload. | TBD | To remedy this, avoid applying schema changes during peak load times. | TBD
User-defined Types (UDT) do not work if the string size limit is exceeded. | TBD | Avoid using UDT for variable-length strings that cannot be limited by size. | TBD
Unable to handle the tab character | TBD | TBD | TBD
If | 3.9.0 | TBD | 3.9.1
The data streaming connector does not handle NULL values; the connector may not operate properly if a NULL value is submitted. | TBD | Users should replace NULL with an alternate value, such as empty string "" for STRING data, 0 for INT data, etc. (NULL is not a valid value for the TigerGraph graph data store.) | TBD
Automatic message removal is an Alpha feature of the Kafka connector. It has several known issues. | TBD | TBD | TBD
The | 3.9.0 | TBD | 3.9.1
The LDAP keyword | 3.9 | Check the case of the keywords for | 3.10.1
Compatibility Issues
Description | Version Introduced
---|---
In CDC messages, the format of map values has changed. | v4.1.1
A full export package now includes access policies and template queries. | v4.1.0
Users could encounter file input/output policy violations when upgrading a TigerGraph version. See Input policy backward compatibility. | v3.10.0
When a PRINT argument is an expression, the output uses the expression as the key (label) for that output value. To better support Antlr processing, PRINT now removes any spaces from that key. For example, | v3.9.3+
Betweenness Centrality algorithm: | v3.9.2+
For vertices with string-type primary IDs, vertices whose ID is an empty string will now be rejected. | v3.9.2+
The default mode for the Kafka Connector changed from EOF="false" to EOF="true". | v3.9.2+
The default retention time for two monitoring services | v3.9.2+
The filter for | v3.9.2+
Some user-defined functions (UDFs) may no longer be accepted due to increased security screening. | v3.9+
Deprecations
Description | Deprecated | Removed
---|---|---
The format for tuple structures in CDC messages will change in a future version. The future format is likely to be similar to the new format for maps. | 4.1.1 | TBD (possibly 4.2)
The use of plaintext tokens in authentication is deprecated. Use OIDC JWT Authentication instead. | 3.10.0 | TBD
Vertex-level Access Control (VLAC) and VLAC Methods are removed and are no longer available. | 3.10.0 | 4.1.0
Spark Connection via JDBC Driver is now deprecated and will no longer be supported. | 3.10.0 | TBD
 | v3.9.3 | TBD
Kubernetes classic mode (non-operator) is deprecated. | v3.9 | TBD
The | v3.7 | TBD