Database Import/Export All Graphs
The GSQL `EXPORT GRAPH ALL` and `IMPORT GRAPH ALL` commands perform a logical backup and restore.
A database EXPORT
contains the database’s data, and optionally some types of metadata.
This data can be subsequently imported in order to recreate the same database, in the original or in a different TigerGraph platform instance.
To see how to export and import selected individual graphs in a database, refer to Import/Export Individual Graphs.
Import/export is a complement to Backup and Restore, but not a substitute. See the Known Issues for additional considerations.
Import Prerequisite
To import an exported database, ensure that the export files are from a database that was running the exact same version of TigerGraph as the database that you are importing into.
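One way to verify this, assuming shell access to both instances, is to compare the TigerGraph version on each before importing:

```
# Run on both the source and the target instance; the versions must match exactly.
gadmin version
```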
Known Issues
EXPORT GRAPH ALL
The EXPORT GRAPH ALL
command reads the data and metadata for all
graphs in the TigerGraph system and writes the information to a zip file
in the designated folder.
If no options are specified, the command performs a full export, including schema, data, template information, and user profiles.
Synopsis
```
EXPORT GRAPH ALL [<export_options>] TO "<directory_name>"

exportOptions ::=
    (-S | --SCHEMA | -T | --TEMPLATE | -D | --DATA | -U | --USERS | -P | --PASSWORD <pwd>)

-S, --SCHEMA      Only Schema will be exported
-T, --TEMPLATE    Only Schema, Queries, Loading Jobs, UDFs will be exported
-D, --DATA        Only Data and Data Sources will be exported
-U, --USERS       Includes Permissions, Secrets, and Tokens
-P, --PASSWORD    Encrypt with password. User will be prompted
--SEPARATOR       Specify a column separator
--EOL             Specify a line separator
```
The export directory should be empty before running the command.
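A minimal shell sketch for preparing the export directory (the path is only an example):

```
# Create the export directory if needed, then confirm it is empty
mkdir -p /tmp/export_graphs
ls -A /tmp/export_graphs    # should print nothing
```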
Parameters

Parameter | Description
---|---
`"<directory_name>"` | The path of the directory to output the export files to. Must be an absolute path.
Export All Options
Option | Description
---|---
`-S`, `--SCHEMA` | Only export graph schema.
`-T`, `--TEMPLATE` | Only export graph schema, queries, loading jobs and UDFs.
`-D`, `--DATA` | Only export data and data sources. Must be used with either `-S` or `-T`.
`-U`, `--USERS` | Export users, including permissions, secrets, and tokens. Must be used with either `-S` or `-T`.
`-P`, `--PASSWORD` | Encrypt the exported file with a password. Users will be prompted to enter a password when using this option.
`-p <password>` | Provide the password directly in the command, e.g., `EXPORT GRAPH ALL -p pass TO "/path/to/dir"`.
`--SEPARATOR` | Specify the column separator character. Default is ASCII char 29.
`--EOL` | Specify the line separator character. Default is ASCII char 28.

Proxy users cannot be exported.

ASCII char 30 (RS) and 31 (US) are reserved separators that cannot be used for data export.
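For example, to export the graph schema, queries, loading jobs, and UDFs together with user information, but no data (the directory path is illustrative):

```
EXPORT GRAPH ALL -T -U TO "/tmp/export_graphs/"
```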
Output
The EXPORT GRAPH ALL
command exports all graphs in the database.
The export contains the following categories of files:
- Data files in CSV format: on a single-node system, one file for each vertex type and each edge type. For cluster environments, the data files are spread across the nodes, so they are not included in the zip file.
- GSQL DDL command files created by the export command. The import command uses these files to recreate the graph schema(s) and reload the data.
- Copies of the database's queries, loading jobs, and user-defined functions.
- GSQL command files used to recreate the users and their privileges.
- An encrypted file containing the Access Control List (ACL) privilege catalog. When exporting graphs, ACLs are only exported when both queries and users are exported.
The following files are created in the specified directory when exporting, and are then zipped into a single file named `ExportedGraph.zip`.

If the file is password-protected, it can only be unzipped using the GSQL command `IMPORT GRAPH ALL`.
- A `DBImportExport_<Graph_Name>.gsql` file for each graph called `<Graph_Name>` in a multigraph database, containing a series of GSQL DDL statements that do the following:
  - Create the exported graph, along with its local vertex, edge, and tuple types
  - Create the loading jobs from the exported graphs
  - Create data source file objects
  - Create queries
- A `graph_<Graph_Name>/` folder for each graph in a multigraph database, containing data for the local vertex/edge types in `<Graph_Name>`. For each vertex or edge type called `<type>`, there is one of the following two data files:
  - `vertex_<type>.csv`
  - `edge_<type>.csv`
- `global.gsql` - DDL job to create all global vertex and edge types, and data sources.
- `tuple.gsql` - DDL job to create all user-defined tuples.
- Exported data and jobs used to restore the data:
  - `GlobalTypes/` - folder containing data for global vertex/edge types:
    - `vertex_<name>.csv`
    - `edge_<name>.csv`
  - `run_loading_jobs.gsql` - DDL created by the export command, which is used during import:
    - A temporary global schema change job to add user-defined indexes. This schema change job is dropped after it has run.
    - Loading jobs to load data for global and local vertex/edge types.
- The database's saved queries, loading jobs, and schema change jobs:
  - `SchemaChangeJob/` - folder containing DDL for schema change jobs. See the section "Schema Change Jobs" for more information.
    - `Global_Schema_Change_Jobs.gsql` contains all global schema change jobs.
    - `<Graph_Name>_Schema_Change_Jobs.gsql` contains the schema change jobs for each graph `<Graph_Name>`.
- User-defined functions:
  - `TokenBank.cpp` - copy of `<tigergraph.root.dir>/app/<VERSION_NUM>/dev/gdk/gsql/src/TokenBank/TokenBank.cpp`
  - `ExprFunctions.hpp` - copy of `<tigergraph.root.dir>/app/<VERSION_NUM>/dev/gdk/gsql/src/QueryUdf/ExprFunctions.hpp`
  - `ExprUtil.hpp` - copy of `<tigergraph.root.dir>/app/<VERSION_NUM>/dev/gdk/gsql/src/QueryUdf/ExprUtil.hpp`
- Users:
  - `users.gsql` - DDL to create all exported users, import secrets and tokens, and grant permissions.
- ACL privilege catalog:
  - `ACLDict/1/ACLPrivilegeCatalog.json` - an encrypted file containing the ACL privilege catalog.
Example:

```
EXPORT GRAPH ALL TO "/tmp/export_graphs/"
```
Insufficient disk space
If not enough disk space is available for the data to be exported, the system returns an error message indicating that not all data has been exported. Some data may have already been written to disk. If an insufficient-disk error occurs, the files will not be zipped, because the data may be corrupted and would then corrupt the zip file. The user should clear enough disk space, including deleting the partially exported data, before reattempting the export.

It is possible for all the files to be written to disk and then for the system to run out of disk space during the zip operation. In that case, the system will report this error, and the unzipped files will be present in the specified export directory.
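Before a large export, it can help to confirm that the target volume has enough free space; a simple check (the path is illustrative):

```
df -h /tmp/export_graphs
```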
Export timeout
If the timeout limit is reached during export, the system returns an error message indicating that not all data has been exported. Some data may have already been written to disk. If a timeout error occurs, the files will not be zipped. The user should delete the export files, increase the timeout limit, and then rerun the export.
The timeout limit is controlled by the session parameter `export_timeout`. The default timeout is ~138 hours. To change the timeout limit, use the command:

```
SET EXPORT_TIMEOUT = <timeout_in_ms>
```
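For example, since the parameter is in milliseconds, a two-hour limit is 2 × 60 × 60 × 1000 = 7,200,000 ms:

```
SET EXPORT_TIMEOUT = 7200000
```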
IMPORT GRAPH ALL
The `IMPORT GRAPH ALL` command unzips the `ExportedGraph.zip` file located in the designated folder, and then runs the GSQL command files it contains.

`IMPORT GRAPH ALL` erases the current database (equivalent to running `DROP ALL`). The current version does not support incremental or supplemental changes to an existing database (except for the `--keep-users` option).

`IMPORT GRAPH ALL` looks for specific filenames. If either the zip file or any of its contents are renamed by the user, `IMPORT GRAPH ALL` may fail.
Any separator character defined in the export step is automatically included in the exported file. There is no need to specify the separator during import.
Please be extra cautious when importing databases, as doing so can overwrite the current solution, resulting in the deletion of existing schemas, loading jobs, queries, and data files. Importing a new solution cannot be undone to restore the previous state, regardless of whether the import succeeds or fails. Therefore, create a complete backup beforehand in case you need to restore the database: see Back up a Database Cluster. For security purposes, TigerGraph has two …
Required privileges
`WRITE_SCHEMA`, `WRITE_QUERY`, `WRITE_LOADINGJOB`, `EXECUTE_LOADINGJOB`, `DROP ALL`, `WRITE_USERS`
Synopsis
```
IMPORT GRAPH ALL [<import_options>] FROM "<filename>"

importOptions ::= [-P | --PASSWORD] [-KU | --KEEP-USERS]

-P,  --PASSWORD     Decrypt with password. User will be prompted.
-p                  Provide the password directly, e.g., IMPORT GRAPH ALL -p pass FROM "/path/to/dir"
-KU, --KEEP-USERS   Do not delete user identities before importing
```
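For example, a password-protected export and its matching import could look like this (you are prompted for the password both times; the paths are illustrative):

```
EXPORT GRAPH ALL -P TO "/tmp/export_graphs/"
IMPORT GRAPH ALL -P FROM "/tmp/export_graphs/ExportedGraph.zip"
```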
Parameters
Parameter | Description
---|---
`"<filename>"` | The path to the zip file produced by the `EXPORT GRAPH ALL` command.
Import All Options
Option | Description
---|---
`-P`, `--PASSWORD` | Decrypt with password. You will be prompted to enter a password when using this option.
`-KU`, `--KEEP-USERS` | Keep the current users during the import operation. New users from the imported graph will still be added. Global roles from the current database are kept if you use this option. Local roles are dropped, since the graphs themselves are dropped. If you have global roles with local privileges, those privileges are dropped from the global roles as well. You must include this option if you run the …
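For instance, to import while preserving the current user identities (the zip path is illustrative):

```
IMPORT GRAPH ALL -KU FROM "/tmp/export_graphs/ExportedGraph.zip"
```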
Loading Jobs
There are two sets of loading jobs:

- Those that were in the catalog of the database which was exported. These are embedded in the file `DBImportExport_<Graph_Name>.gsql`.
- Those that were created by `EXPORT GRAPH ALL` and are used to assist with the import process. These are embedded in the file `run_loading_jobs.gsql`.

The catalog loading jobs are not needed to restore the data; they are included for archival purposes.

Some special rules apply to importing loading jobs, and some catalog loading jobs will not be imported:
- If a catalog loading job contains `DEFINE FILENAME F = "/path/to/file/"`, the path will be removed and the imported loading job will only contain `DEFINE FILENAME F`. This allows a loading job to be imported even though the file may no longer exist or the path may be different after moving to another TigerGraph instance. (See the sketch after this list.)
- If a specific file path is used directly in a LOAD statement and the file cannot be found, the loading job cannot be created and will be skipped. For example, `LOAD "/path/to/file" TO VERTEX v1` cannot be created if `/path/to/file` does not exist.
- Any file path using `$sys.data_root` will be skipped, because the value of `$sys.data_root` is not retained from an export. During an import, `$sys.data_root` is set to the root folder of the import location.
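A minimal sketch of the first rule, using a hypothetical loading job `load_person` on a hypothetical graph `MyGraph` (the filename variable keeps its name but loses its path on import):

```
# As exported from the source instance:
CREATE LOADING JOB load_person FOR GRAPH MyGraph {
  DEFINE FILENAME f1 = "/data/person.csv";
  LOAD f1 TO VERTEX person VALUES ($0, $1);
}

# As recreated by IMPORT GRAPH ALL (the path is stripped from f1):
CREATE LOADING JOB load_person FOR GRAPH MyGraph {
  DEFINE FILENAME f1;
  LOAD f1 TO VERTEX person VALUES ($0, $1);
}
```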
Schema Change Jobs
There are two sets of schema change jobs:

- Those that were in the catalog of the database which was exported. These are stored in the folder `/SchemaChangeJobs`.
- Those that were created by `EXPORT GRAPH ALL` and are used to assist with the import process. These are in the `run_loading_jobs.gsql` command file. The jobs are dropped after the import command is finished with them.
The database’s schema change jobs are not executed during the import process.
This is because if a schema change job had been run before the export, then the exported schema already reflects the result of the schema change job.
The directory `/SchemaChangeJobs` contains these files:

- `Global_Schema_Change_Jobs.gsql` contains all global schema change jobs.
- `<Graph_Name>_Schema_Change_Jobs.gsql` contains the schema change jobs for each graph `<Graph_Name>`.
Cluster export/import
Importing and exporting clusters is not fully automated in the current version. The database can be exported and imported by following some additional steps.
Export from a cluster
Rather than creating a single export zip file, the `EXPORT GRAPH ALL` command creates export files on each machine. To export, run `EXPORT GRAPH ALL` from the GSQL shell on one node.
The EXPORT GRAPH
command does not bundle all the files to one server, and it does not compress each server’s files to one zip file.
Some files, including the data files, are exported to each server, while some files are only on the local server where EXPORT GRAPH
was run.
Import into a cluster
The following are the steps to import an export file to a cluster.
You may only import to a cluster that has the same number and configuration of servers as the cluster from which the export originated.
1. Transfer files to new cluster
Transfer the export files from the export servers to the corresponding servers in the new cluster. For example, the files on the m1 node of the cluster that exported the graphs must be copied to the m1 server on the cluster that is importing the export files.
The export file on every node must share the same absolute path.
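A sketch of the transfer step for a two-node cluster, assuming the export files live under /tmp/export_graphs (hostnames, user, and paths are illustrative):

```
# Run on old m1 copying to new m1, and on old m2 copying to new m2,
# keeping the same absolute path on every node.
scp -r /tmp/export_graphs/ tigergraph@new-m1:/tmp/export_graphs/
scp -r /tmp/export_graphs/ tigergraph@new-m2:/tmp/export_graphs/
```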
2. Run IMPORT GRAPH ALL
Run the IMPORT GRAPH ALL
command from the server that corresponds to the server where EXPORT GRAPH ALL
was run.
For example, if you exported from the m2 node in a cluster, you also need to run the IMPORT GRAPH ALL
command from the m2 node of the cluster you are importing the export files into.