TigerGraph DB Release Notes
TigerGraph Server 4.2.1 LTS was released on September 5, 2025.
TigerGraph Server 4.2.0 Preview was released on April 3, 2025.
TigerGraph Server 4.2.0 Alpha was released on March 4, 2025.
TigerGraph 4.2's feature set builds on what was available in 4.1.2. Features that were already present in 4.1.2 are therefore not considered New Features.
Key New Features
- Hybrid Vector Search: scalable storage and similarity search for vector attributes associated with vertices.
  - New VECTOR attribute data type with real-time updates
  - Fast approximate nearest neighbor (ANN) search, with automatic and incremental indexing
  - Hybrid graph + vector search in GSQL (see the sketch after this list)
- Incremental Backup, which efficiently backs up only what has changed since the last backup/restore event.
- Rolling Upgrades of maintenance versions for HA clusters (Preview).
- GQL/OpenCypher path pattern syntax within GSQL syntax.
- Run ad hoc queries directly (non-procedural interpreted queries).
- Community Edition of TigerGraph Database:
  - Up to 200 GB graph data + 100 GB vector data, single server only
  - Some enterprise features not included; community support only
  - See the full comparison of editions.
- [4.2.1] Combined graph and vector data size limit for Community Edition, allowing flexible allocation of up to 300 GB for graph and vector storage.
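Below is a minimal sketch of defining a vector attribute and running a similarity search through the gsql command-line client. The graph name (MyGraph), vertex type (Account), attribute name (emb1), and dimension are hypothetical, and the exact VECTOR attribute and vectorSearch() syntax should be verified against the Vector Search documentation for your release.

```bash
# Sketch only: MyGraph, Account, emb1, and DIMENSION=4 are hypothetical names/values;
# verify the VECTOR attribute and vectorSearch() syntax in the Vector Search docs.
gsql <<'EOF'
USE GLOBAL
CREATE GLOBAL SCHEMA_CHANGE JOB add_embedding {
  ALTER VERTEX Account ADD VECTOR ATTRIBUTE emb1(DIMENSION=4);
}
RUN GLOBAL SCHEMA_CHANGE JOB add_embedding

USE GRAPH MyGraph
CREATE OR REPLACE QUERY similar_accounts(LIST<FLOAT> query_vector) {
  // top-5 approximate nearest neighbors on the emb1 vector attribute
  result = vectorSearch({Account.emb1}, query_vector, 5);
  PRINT result;
}
INSTALL QUERY similar_accounts
EOF
```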
Detailed List of New and Modified Features
System Management Enhancements
- Incremental Backup, which efficiently backs up only what has changed since the last backup/restore event.
- Rolling Upgrades of maintenance versions for HA clusters (Preview).
- Restore option --keepusers to keep the database users in the current database rather than the ones from the backup file.
- Backup to Azure and GCP cloud storage, analogous to the existing support for backup to AWS S3.
- More graceful user-initiated shutdown of the RESTPP and GPE services.
- Backup HA enhancement: backup still works even if a node or a region is down.
- Kubernetes Operator update to v1.5.0, with numerous new features:
  - CRR replication, automatic cluster monitoring during creation, upgrade pre-check support, rolling upgrades, and backoff delay for job retries to improve resilience.
- gadmin status -v now displays Role information, with support for GPE, GSE, and GSQL.
- [4.2.1] New ZK.SnapCount configuration parameter lets users fine-tune the number of committed ZooKeeper transactions that trigger a snapshot, improving memory management and stability under high transaction loads (see the sketch after this list).
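As a sketch, the new parameter can be set with gadmin; the value below is only illustrative, and the recommended setting depends on your transaction load.

```bash
# Illustrative value only; choose ZK.SnapCount based on your transaction volume.
gadmin config set ZK.SnapCount 100000
gadmin config apply -y
# Restart the affected services so the new snapshot count takes effect,
# e.g. gadmin restart all -y (a targeted ZooKeeper restart may suffice).
```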
Security and Access Control Enhancements
- JWT authentication for all REST endpoints, including GSQL endpoints.
- New built-in role globalobserver, used in Savanna, for minimal privileges: READ_SCHEMA and READ_LOADINGJOB.
- Two new access privileges, READ_LOG and APP_ACCESS_LOG, grant read-only access to log files. APP_ACCESS_LOG is specifically designed for access from GraphStudio or Savanna.
- [4.2.1] Added the Security.JWT.JWKS.URL configuration parameter, which allows JWT certificates to be fetched using the Key ID (kid) from a JWKS (JSON Web Key Set) URL (see the sketch after this list).
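A sketch of setting the new parameter with gadmin; the JWKS URL is a placeholder for your identity provider's endpoint, and any additional JWT settings your deployment requires are not shown.

```bash
# Placeholder URL; point this at your IdP's JWKS endpoint.
gadmin config set Security.JWT.JWKS.URL "https://idp.example.com/.well-known/jwks.json"
gadmin config apply -y
# Restart the affected services, e.g. gadmin restart all -y, so the change takes effect.
```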
Integration Enhancements
- Writing query output to an S3 object can now be configured at the user session level instead of at the system administration level, for better alignment of roles and privileges.
- Updated Kafka from 2.5.1 to 3.6.2 and ZooKeeper from 3.6.3 to 3.8.4.
API Enhancements
- New endpoints to fetch schema information and sample data from Snowflake.
- Added a REST endpoint to delete user-defined function files.
- The Interpret Query endpoint POST /gsql/v1/queries/interpret can now be used with queries that have a DATETIME-type parameter.
- The Edge Upsert endpoint now supports upserting multiple edges of a discriminated edge type in a single API request.
- [4.2.1] Added endpoints to list running queries and running loading jobs for a specific graph. Each endpoint shows the currently running queries or loading jobs, plus a hasRunning indicator.
- [4.2.1] The RESTPP endpoints /query_status, /query_result, /showprocesslist/{graph_name}, and /showprocesslistall now report query status and results across all server nodes in a distributed cluster, not just the node receiving the request. Use the optional disallow_redirect parameter to query only the local node, if desired (see the sketch after this list).
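For instance, a cluster-wide status check might look like the calls below; the host, port, token, graph name, and request ID parameters are placeholders, and the full parameter list for /query_status should be checked against the REST API reference.

```bash
# Placeholders: adjust host/port, token, graph name, and request ID.
# In 4.2.1+, status is gathered from all cluster nodes by default.
curl -s -H "Authorization: Bearer <token>" \
  "http://localhost:9000/query_status?graph_name=MyGraph&requestid=all"

# Restrict the check to the node receiving the request:
curl -s -H "Authorization: Bearer <token>" \
  "http://localhost:9000/query_status?graph_name=MyGraph&requestid=all&disallow_redirect=true"
```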
Query Language Enhancements
- Multiple enhancements to Syntax V3, which combines GSQL + OpenCypher + GQL pattern matching:
  - GQL/OpenCypher path pattern syntax within GSQL syntax
  - Run ad hoc OpenCypher queries and single GSQL SELECT blocks directly (non-procedural interpreted queries)
  - Comparison operators = and <> as alternatives to == and != in WHERE conditions
  - OPTIONAL MATCH, COALESCE, and IS [NOT] NULL (OpenCypher)
  - type() and labels() functions (OpenCypher)
- Map and List query parameter types are now supported.
- LIKE: <string1> LIKE <string pattern> can be used anywhere a condition is allowed, and <string pattern> can include functions.
- Bitwise Accumulators: official support for comparison and bitwise logic operators.
- Configuration parameter GSQL.Github.Enabled to enable retrieving UDFs from a GitHub repository.
- Query Profiler to analyze query execution and performance.
- [4.2.1] Added support for the "configs" object in loading jobs, enabling additional configuration parameters via the REST API.
- [4.2.1] Added two new timestamp conversion functions, gsql_ts_to_epoch_seconds_legacy() and gsql_ts_to_epoch_seconds_signed(), providing enhanced flexibility for date handling in loading jobs, including support for pre-1970 dates.
- [4.2.1] Added support for the PRINT vt TO_CSV f syntax to print a vertex set directly to a CSV file (see the sketch after this list).
- [4.2.1] File objects can now be created with an optional file permission parameter, allowing users to specify permissions using a Linux octal-encoded file permission code.
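A sketch of the new PRINT ... TO_CSV syntax in an installed query, run through the gsql client; the graph name, vertex type, and output path are placeholders.

```bash
# Placeholders: MyGraph, Account, and the output path are illustrative.
gsql -g MyGraph <<'EOF'
CREATE OR REPLACE QUERY export_accounts() FOR GRAPH MyGraph {
  FILE f ("/home/tigergraph/accounts.csv");  // file object for the output
  vt = {Account.*};                          // vertex set to export
  PRINT vt TO_CSV f;                         // new in 4.2.1
}
INSTALL QUERY export_accounts
RUN QUERY export_accounts()
EOF
```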
Data Streaming/Connector Enhancements
- Improved connector for loading from an external Kafka source, with new configuration details. The older connector, Load from External Kafka using MirrorMaker (Deprecated), is still available.
- Multi-character separators between data fields in loading jobs (see the sketch after this list).
- Schema change options -force and -warn to specify the behavior when loading jobs run during a schema change.
- Introduced the DISABLED loading job status for jobs that have a schema conflict as a result of a schema change.
- The Spark Connector supports OAuth2 authentication, enabling seamless JWT token retrieval from third-party IdPs such as Azure AD and Auth0 for improved security and token management.
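A sketch of a loading job that uses a multi-character field separator; the graph name, vertex type, file layout, and path are placeholders.

```bash
# Placeholders: MyGraph, Account, the two-column layout, and the file path are illustrative.
gsql -g MyGraph <<'EOF'
CREATE LOADING JOB load_accounts FOR GRAPH MyGraph {
  DEFINE FILENAME f1;
  // "||" is a multi-character separator, newly supported in 4.2
  LOAD f1 TO VERTEX Account VALUES ($0, $1) USING SEPARATOR="||", HEADER="true";
}
EOF
gsql -g MyGraph 'RUN LOADING JOB load_accounts USING f1="/home/tigergraph/accounts.txt"'
```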
Performance Improvements
- Query Plan Caching, to speed up the run time of repeated queries.
GraphStudio, Admin Portal, and Insights
- Added a new page for changing passwords in TigerGraph Suite.
- See the release notes for Insights 4.2.
- [4.2.1] Added a new entry on the GraphStudio homepage to browse and download pre-built solutions, making it easy to import and start projects with common graph use cases.
- [4.2.1] Added support in Insights for global parameters, enabling application-level variables to be defined once and shared across multiple pages.
Fixed issues
Fixed and Improved in 4.2.1
Functionality
- Fixed a missing error message when running a non-procedural GSQL query containing syntax errors at the end (GLE-10295).
- Eliminated a false REST-10015 error message when upserting a single vertex using the gsql-atomic-level:atomic flag via the REST API (GLE-10680).
- Fixed a GPE crash caused by removing a cluster node or by a Disaster Recovery cluster having fewer replicas than the primary cluster (CORE-4967).
- Fixed the MaxFlow algorithm to return correct results, consistent with documented examples (GLE-10737).
- Fixed a UDF parsing error that failed to detect user-defined function names that conflict with built-in functions (GLE-10326).
- Fixed an issue where the system would throw a NullPointerException and fail to create or install a query during code validation (GLE-10501).
- Fixed an issue where authentication was skipped when replaying requests in a Disaster Recovery cluster (GLE-10702).
- Improved the upgrade process by performing query and token bank compilation checks during pre-upgrade instead of post-upgrade (GLE-9459).
- Fixed GSQL startup failures during upgrade caused by errors while verifying dictionary and UDF files (GLE-11224).
- Fixed an issue where, if an interpreted non-procedural query contained an error but the text prior to the error was valid, the partial query would run and the error would be ignored (GLE-10295).
- Fixed an issue that could cause false negatives or false positives during license checks, caused by an underflow in the vertex count reporting (CORE-5117).
Improvements
- Reduced disk usage by removing on-disk indexes in cloud mode and deleting main files in both cloud and on-prem deployments (CORE-4930).
- Enhanced security for namespace-scoped TigerGraph Operator installations by separating the necessary cluster roles and namespace roles (TP-7024).
- Improved upgrade logs with a clearer message: "Installation of new TigerGraph version" (TP-8249).
- Improved the Rolling Upgrade documentation to explain how queries behave during upgrades, how retries are handled, and how to tune timeout settings, so users can plan smoother upgrades.
- Improved the Vector Search documentation to detail known limitations, including unsupported ListAccum patterns, use of the ListAccum operator, and WHERE clause restrictions.
- Added support for configuring file permissions when exporting query results with PRINT TO_CSV, instead of using hardcoded defaults (GLE-11322).
Security
- Eliminated a security vulnerability that allowed AWS credentials to be read in plain text using the gadmin config get command. These values are now masked (TP-8285).
- Fixed unauthorized exposure of graph name and creator information via the /auth/simple and /internal/info APIs (GLE-10746).
- Eliminated the potential exposure of personally identifiable information (PII) in loading job summary files by replacing detailed data with line numbers for invalid entries (TP-8575).
- Eliminated the potential exposure of personally identifiable information (PII) in error messages when a loading job semantic check failed during creation (GLE-11125).
- Addressed multiple upstream and OS-level security vulnerabilities through package upgrades and dependency patching. Fixed the following security vulnerabilities:
  - CVE-2016-20013, CVE-2016-2781
  - CVE-2017-11164, CVE-2017-7475
  - CVE-2018-18064
  - CVE-2019-6461
  - CVE-2020-8908, CVE-2020-29582
  - CVE-2021-31879
  - CVE-2022-27943, CVE-2022-3219, CVE-2022-41409, CVE-2022-4899
  - CVE-2023-29383, CVE-2023-2976, CVE-2023-34969, CVE-2023-37769,
    CVE-2023-4039, CVE-2023-45288, CVE-2023-45289, CVE-2023-45290,
    CVE-2023-45918, CVE-2023-48161, CVE-2023-50495, CVE-2023-7008
  - CVE-2024-10041, CVE-2024-2236, CVE-2024-23337, CVE-2024-23454, CVE-2024-23944,
    CVE-2024-24783, CVE-2024-24784, CVE-2024-24785, CVE-2024-24789, CVE-2024-24790, CVE-2024-24791,
    CVE-2024-34155, CVE-2024-34156, CVE-2024-34158, CVE-2024-41996, CVE-2024-45336, CVE-2024-45341,
    CVE-2024-52005, CVE-2024-52615, CVE-2024-52616, CVE-2024-56406, CVE-2024-56433, CVE-2024-6763, CVE-2024-8176
  - CVE-2025-0167, CVE-2025-0913, CVE-2025-1352, CVE-2025-1376, CVE-2025-22233, CVE-2025-22234, CVE-2025-22235,
    CVE-2025-22866, CVE-2025-22870, CVE-2025-22871, CVE-2025-22872, CVE-2025-22874, CVE-2025-23022,
    CVE-2025-27496, CVE-2025-27613, CVE-2025-27817, CVE-2025-27818, CVE-2025-27819, CVE-2025-29088, CVE-2025-30204,
    CVE-2025-32728, CVE-2025-32988, CVE-2025-32989, CVE-2025-32990, CVE-2025-3576, CVE-2025-41234, CVE-2025-41242,
    CVE-2025-4598, CVE-2025-4673, CVE-2025-46701, CVE-2025-46835, CVE-2025-46836, CVE-2025-47907, CVE-2025-48060,
    CVE-2025-48734, CVE-2025-48924, CVE-2025-48988, CVE-2025-48989, CVE-2025-49125, CVE-2025-49146, CVE-2025-49574,
    CVE-2025-52520, CVE-2025-53506, CVE-2025-55163, CVE-2025-6069, CVE-2025-6395, CVE-2025-7345, CVE-2025-8176
Fixed and Improved in 4.2.0
Functionality
- Fixed an issue where the loading job status was incorrectly cleaned up in advance due to an inaccurate loading job counter (TP-4741).
- Fixed an issue where the loader would fail silently; it now reports an error directly if it fails to start (TP-6420).
- Fixed an issue where loading an empty file triggered a failure loop (TP-6530).
- Fixed an issue with audit error codes being logged when a CDC message fails to deliver to external Kafka (CORE-4249).
- Fixed a critical disk issue caused by the rebuilder getting stuck in a partitioned cluster after dropping vertex or edge attributes (CORE-4303).
- Fixed an issue where CDC EDGE messages with an "UNKNOWN" ID were not removed when a query deleted a vertex and edge simultaneously (CORE-4457).
- Fixed an issue where the primary ID was missing when inserting a vertex implicitly from loading an edge in a single-partition cluster and the target vertex used "primary_id_as_attribute=true" (GLE-7562).
- Fixed an issue with running the TigerGraph command gcollect on Kubernetes (TP-6353).
- Added the namespace as a suffix of the HostName in HostList on Kubernetes (TP-6214).
- Fixed query compilation errors when iterating over FOREACH to retrieve an accum-type attribute with the -single or distributed keyword; ensured operations between two different GroupByAccums hold the same data types and field names (GLE-8507).
- Fixed an issue with double quotes in literal strings during the creation of a loading job (GLE-8630).
- Fixed the inability to fetch a dynamic file output policy during query execution (GLE-4847).
- Fixed an issue where sensitive information was not removed from the browser localStorage, ensuring better security (APPS-1066).
- Fixed a query installation failure for single-node queries that initialize vertex set variables in conditional branches, such as if-else or case-when statements (GLE-7369).
- Added a button to refetch real-time graph statistics on the Load Data page in GraphStudio (APPS-2565).
- Added the standard Triangle Counting algorithm to the built-in query list on the Write Query page in GraphStudio (APPS-3429).
- Fixed an issue where query creation failed with a "query not found" prompt in GraphStudio (APPS-3624).
- Removed the requirement to restart the GSQL service after modifying the SSO settings in Admin Portal (APPS-3167).
- Allowed aborting a query in GraphStudio (APPS-3487).
- Added a check that the buffer length is non-negative before reading a file (TP-6690).
- Fixed an issue where unauthorized users could access sensitive data, including superuser info, via the /gsql/v1/users API endpoint (GLE-8290).
- Fixed a NoSuchElementException thrown when removing all authentication tokens (GLE-8937).
- Fixed an issue with data source creation failing to sync to a Disaster Recovery (DR) cluster due to complex payload handling (GLE-9617).
- Fixed a bug with the to_vertex_set function (GLE-8083).
- Unblocked the usage of the keyword "function" as a vertex/edge/attribute name (GLE-9803).
- Fixed the backup restoration issue where GSQL was in a warmup state when importing GSQL data (TP-6949).
- Fixed the metrics issue where tigergraph_cpu_usage exceeded 100 and tigergraph_cpu_available was negative under high load (TP-7127).
- Fixed an issue where failed backups in corner cases caused the Engine to pause for an extended period, leading to subsequent query timeouts (CORE-4532).
- Resolved an issue with downloading GSQL output files larger than 3 GB in Admin Portal (APPS-3271).
- Fixed an issue where the GSQL-TIMEOUT header was not being forwarded in the /api/graphql/gsql endpoint (APPS-3648).
- Fixed a bug where local accumulators defined across multiple lines in a query were misinterpreted as a file in the GSQL client (GLE-8261).
- Allowed the globaldesigner role to monitor queries in Admin Portal (APPS-3172).
Improvements
- Improved performance of GSQL queries containing DELETE statements intended for deleting all vertices of a given type (GLE-8328).
- Added support for debug mode when a pod restart fails due to a PostStartHookError (TP-7228).
- Corrected the logic for the SelectVertex() function so it throws an error if the filePath is not an absolute path.
Known Issues and Limitations
Description | Found In | Workaround | Fixed In |
---|---|---|---|
If a non-procedural query has a syntax error, the GSQL interpreter ignored the invalid portion and executed the partial query before the error, without raising an error. | 4.2.0 | Check the full query before accepting it. | 4.2.1 |
A NullPointerException occurs when performing a code check for queries using statements other than a = vt.* or a = {vt.*}, causing query creation to fail. | 4.1.3 | TBD | TBD |
When | 4.1.3 | TBD | TBD |
After upgrading, the system fails to detect | 4.1.3 | TBD | TBD |
When using | 4.1.3 | TBD | TBD |
Using | 4.1.3 | Move the | TBD |
After a cluster expansion, the GPE service may remain in the warmup state. | 4.1.1 | If this happens, run | TBD |
The | 4.1 | Use legacy methods for access control: define separate graphs which span the same data but have different queries and users. | TBD |
If a RESTPP request is for a graph operation and is thus sent to the GPE, and it then fails inside the GPE, RESTPP will interpret the response from the GPE as success and report SUCCEED in the audit log. | 4.1 | Use a | TBD |
When upgrading, possible permission error for the destination folder. | 3.10.1 | Manually grant permission to | TBD |
Export does not include LDAP proxy groups. | 3.9+ | Manually recreate the proxy groups on the imported database. | |
Compatibility Issues
Description | Version Introduced |
---|---|
Long and dynamic length | v4.2.1 |
To apply a File Input Policy or File Output Policy change, now both | v4.1.3, v4.2.0 |
The | v4.2.0 |
In CDC messages, the format of tuple values has changed. | v4.2.0 |
SelectVertex() may not be used with a relative filepath. Previously this was not enforced; it is now enforced. | v4.2.0 |
The 'graph' field is now included in CDC messages generated by the TigerGraph CDC service. | v3.11.0, v4.1.1 |
In CDC messages, the format of map values has changed. | v3.11.0, v4.1.1 |
A full export package now includes access policies and template queries. | v4.1.0 |
Users could encounter file input/output policy violations when upgrading a TigerGraph version. See Input policy backward compatibility. | v3.10.0 |
When a PRINT argument is an expression, the output uses the expression as the key (label) for that output value. To better support ANTLR processing, PRINT now removes any spaces from that key. For example, | v3.9.3+ |
Betweenness Centrality algorithm: | v3.9.2+ |
For vertices with string-type primary IDs, vertices whose ID is an empty string will now be rejected. | v3.9.2+ |
The default mode for the Kafka Connector changed from EOF="false" to EOF="true". | v3.9.2+ |
The default retention time for two monitoring services | v3.9.2+ |
The filter for | v3.9.2+ |
Some user-defined functions (UDFs) may no longer be accepted due to increased security screening. | v3.9+ |
Deprecations and Removals
Description | Deprecated | Removed |
---|---|---|
The | 3.9.2 | 4.2.0 |
Streaming connector for external Kafka (3.6 to 3.9.2 version) | 3.9.3 | 4.2.0 |
The format for tuple structures in CDC messages will change in a future version. The future format is likely to be similar to the new format for maps. | 4.1.1 | 4.2 |
Access Control Lists (ACLs) are no longer supported. When upgrading from 3.x to 4.x, ACL privileges will be automatically migrated to object-based privileges. | 4.1.0 | 4.1.0 |
The use of plaintext tokens in authentication is deprecated. Use OIDC JWT Authentication instead. | 3.10.0 | TBD |
Vertex-level Access Control (VLAC) and VLAC Methods are removed and are no longer available. | 3.10.0 | 4.1.0 |
The command | 3.7 | 3.10 |
Spark Connection via JDBC Driver is now deprecated and will no longer be supported. | 3.10.0 | TBD |
 | 3.9.3 | TBD |
Kubernetes classic mode (non-operator) is deprecated. | 3.9 | TBD |
The | 3.7 | 4.1 |