Legacy Backup and Restore
Note: Since 3.10.0, the gbar command described on this page is considered legacy.
Graph Backup And Restore (GBAR) is an integrated tool for backing up and restoring the data and data dictionary (schema, loading jobs, and queries) of a TigerGraph instance or cluster.
The backup feature packs TigerGraph data and configuration information into a directory on the local disk or a remote AWS S3 bucket. Multiple backup files can be archived. Later, you can use the restore feature to roll back the system to any backup point. This tool can also be integrated easily with Linux cron to perform periodic backup jobs.
The current version of GBAR is intended for restoring the same machine that was backed up. For help with cloning a database (i.e., backing up machine A and restoring the database to machine B), please open a support ticket.
Synopsis
Usage: gbar backup [options] -t <backup_tag>
gbar restore [options] <backup_tag>
gbar list [backup_tag] [-j]
gbar remove|rm <backup_tag>
gbar cleanup
gbar expand [-a] <new_nodes>
New nodes must be written in <name>:<host> pairs separated by comma
Example:
m1:192.168.1.2,m2:192.168.1.3,m3:192.168.1.4
Options:
-h, --help Show this help message and exit
-v Run with debug info dumped
-vv Run with verbose debug info dumped
-y Run without prompt
-j Print gbar list as JSON
-t BACKUP_TAG Tag for backup file, required on backup
-a, --advanced Enable advanced mode for node expansion
The -y option forces GBAR to skip interactive prompt questions by selecting the default answer.
There is currently one interactive question:
- At the start of the restore process, GBAR always asks if it is okay to stop and reset the TigerGraph services. The default answer is yes.
Configure GBAR
Before using the backup or the restore feature, GBAR must be configured.
- Run gadmin config entry system.backup. At each prompt, enter the appropriate value for each config parameter.
$ gadmin config entry system.backup
System.Backup.TimeoutSec [ 18000 ]: The backup timeout in seconds
New: 18000
System.Backup.CompressProcessNumber [ 0 ]: The number of concurrent process for compression during backup
New: 0
System.Backup.Local.Enable [ true ]: Backup data to local path
New: true
System.Backup.Local.Path [ /tmp/backup ]: The path to store the backup files
New: /data/backup
System.Backup.S3.Enable [ false ]: Backup data to S3 path
New: false
System.Backup.S3.AWSAccessKeyID [ <masked> ]: The path to store the backup files
New:
System.Backup.S3.AWSSecretAccessKey [ <masked> ]: The path to store the backup files
New:
System.Backup.S3.BucketName [ ]: The path to store the backup files
New:
- After entering the configuration values, run the following command to apply the new configuration:
gadmin config apply -y
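If you prefer to script this step instead of answering prompts, the same parameters can be set one at a time with gadmin config set. A minimal sketch, assuming the /data/backup path chosen above:
$ gadmin config set System.Backup.Local.Enable true
$ gadmin config set System.Backup.Local.Path /data/backup
$ gadmin config apply -y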
Perform a backup
Before performing a backup, ensure that there is enough free disk space for the backup files.
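This check is plain Linux rather than part of GBAR. For example, assuming the local backup path /data/backup configured above:
$ df -h /data/backup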
To perform a backup, run the following command as the TigerGraph Linux user:
gbar backup -t <backup_tag>
Depending on your configuration settings, your backup archive is output to your local backup path and/or your AWS S3 bucket. If you are running a cluster, there will be a backup archive on every node in the same path.
A backup archive is stored as several files in a folder, rather than as a single file.
The backup tag acts as a prefix for the archive name. The full name of the backup archive is <backup_tag>-<timestamp>, which is a subfolder of the backup repository. For example, the detailed example later on this page produces an archive named daily-20180607232159 from the tag daily.
- If System.Backup.Local.Enable is set to true, the archive folder is created locally on every node in a cluster, to avoid moving large amounts of data between nodes.
- If System.Backup.S3.Enable is set to true, every node uploads the data located on that node to the S3 repository. Therefore, every node in a cluster needs access to Amazon S3.
GBAR Backup performs a live backup, meaning that normal operations may continue while the backup is in progress.
When a GBAR backup starts, GBAR checks whether there are running loading jobs. If there are, it pauses loading for 1 minute to generate a snapshot and then continues the backup process. You can change the loading pause interval with the environment variable PAUSE_LOADING.
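The pause interval can be overridden per run by setting the variable in the environment of the gbar process. A sketch; the variable name comes from the paragraph above, but the unit (seconds here) is an assumption:
$ PAUSE_LOADING=120 gbar backup -t daily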
GBAR then sends a request to the admin server, which then requests the GPE and GSE to create snapshots of their data.
Per the request, the GPE and GSE store their data under GBAR’s own working directory.
GBAR also directly contacts the Dictionary and obtains a dump of its system configuration information.
In addition, GBAR gathers the TigerGraph system version and customized information including user-defined functions, token functions, schema layouts and user-uploaded icons.
Then, GBAR compresses each of these data and configuration files in tgz format and stores them in the <backup_tag>-<timestamp> subfolder on each node.
As the last step, GBAR copies that file to local storage or AWS S3, according to the Config settings, and removes all temporary files generated during backup.
The current version of GBAR Backup takes snapshots quickly to make it very likely that all the components (GPE, GSE, and Dictionary) are in a consistent state, but it does not fully guarantee consistency.
Backup does not save input message queues for REST++ or Kafka. Make sure all messages are consumed before performing a backup.
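As noted in the introduction, GBAR can be driven by Linux cron for periodic backups. A minimal sketch of a crontab entry for the TigerGraph Linux user; the binary path and log location are assumptions, so substitute the output of "which gbar" on your system:
# crontab -e (as the TigerGraph Linux user), then add a line such as:
# run a backup with tag "daily" at 02:00 every day, non-interactively (-y)
0 2 * * * /home/tigergraph/tigergraph/bin/gbar backup -y -t daily >> /home/tigergraph/gbar_cron.log 2>&1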
List Backup Files
gbar list
This command lists all generated backups in the storage location configured by the user. For each backup, it shows the full tag, the size in human-readable format, and the creation time.
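Per the synopsis above, the listing can also be filtered by tag and printed as JSON, which is convenient for scripts (the JSON structure itself is not documented here):
$ gbar list daily
$ gbar list daily -j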
Restore from a backup archive
Before restoring a backup, ensure that the backup you are restoring from was taken on the exact same version of TigerGraph that you are currently running. Also make sure you have enough free disk space to accommodate both the old graph store and the graph store to be restored.
To restore a backup, run the following command:
gbar restore <archive_name>
If GBAR can verify that the backup archive exists and that the backup’s system version is compatible with the current system version, GBAR shuts down the TigerGraph servers temporarily as it restores the backup.
After completing the restore, GBAR restarts the TigerGraph servers.
If you are running a cluster, and you have copied the backup files to each individual node in the cluster, running gbar restore on any node restores the entire cluster.
Restore is an offline operation, requiring the data services to be temporarily shut down.
The user must specify the full archive name (<backup_tag>-<timestamp>) to be restored.
Restore process
When GBAR restore begins, it first searches for a backup archive exactly matching the archive name supplied in the command line. Then it decompresses the backup files to a working directory. Next, GBAR compares the TigerGraph system version in the backup archive with the current system’s version, to make sure that the backup archive is compatible with that current system. It will then shut down the TigerGraph servers (GSE, RESTPP, etc.) temporarily.
GBAR then makes a copy of the current graph data, as a precaution. Next, GBAR copies the backup graph data into the GPE and GSE and notifies the Dictionary to load the configuration data. GBAR also notifies the GST to load the backed-up user data and copies the backed-up user-defined functions and token functions to the right location. When these actions are all done, GBAR restarts the TigerGraph servers.
Remove a backup
To remove a backup, run the gbar remove command:
$ gbar remove <backup_tag>-<timestamp>
The command removes a backup from the backup storage path.
To retrieve the full tag of a backup, including its timestamp, use the gbar list command.
Note that the tag you enter when creating a backup automatically gains a timestamp suffix, which is shown in the output of gbar list. The gbar remove command must use the full tag, including the timestamp.
Clean up temporary files
Run gbar cleanup to delete the temporary files created during backup or restore operations:
$ gbar cleanup
GBAR Detailed Example
The following walkthrough is a real example, showing the actual commands, the expected output, and the amount of time and disk space used for a given set of graph data. For this example, an Amazon EC2 instance was used, with the following specifications:
Single instance with 32 vCPU + 244GB memory + 2TB HDD.
Naturally, backup and restore time will vary depending on the hardware used.
GBAR Backup Operational Details
To run a daily backup, we tell GBAR to back up with the tag name daily.
$ gbar backup -t daily
[23:21:46] Retrieve TigerGraph system configuration
[23:21:51] Start workgroup
[23:21:59] Snapshot GPE/GSE data
[23:33:50] Snapshot DICT data
[23:33:50] Calc checksum
[23:37:19] Compress backup data
[23:46:43] Pack backup data
[23:53:18] Put archive daily-20180607232159 to repo-local
[23:53:19] Terminate workgroup
Backup to daily-20180607232159 finished in 31m33s.
The total backup process took about 31 minutes, and the generated archive is about 49 GB. Dumping the GPE + GSE data to disk took 12 minutes. Compressing the files took another 20 minutes.
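In this run, compression accounted for most of the elapsed time after the data dump. The configuration section above exposes System.Backup.CompressProcessNumber, the number of concurrent compression processes; raising it may shorten the compression phase, but the effect on your hardware is an assumption to verify:
$ gadmin config set System.Backup.CompressProcessNumber 8
$ gadmin config apply -y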
GBAR Restore Operational Details
To restore from a backup archive, a full archive name needs to be provided, such as daily-20180607232159. By default, restore asks the user for approval before continuing. If you want to pre-approve these actions, use the -y option, and GBAR will make the default choice for you.
$ gbar restore daily-20180607232159
[23:57:06] Retrieve TigerGraph system configuration
GBAR restore needs to reset TigerGraph system.
Do you want to continue?(y/N):y
[23:57:13] Start workgroup
[23:57:22] Pull archive daily-20180607232159, round #1
[23:57:57] Pull archive daily-20180607232159, round #2
[00:01:00] Pull archive daily-20180607232159, round #3
[00:01:00] Unpack cluster data
[00:06:39] Decompress backup data
[00:17:32] Verify checksum
[00:18:30] gadmin stop gpe gse
[00:18:36] Snapshot DICT data
[00:18:36] Restore cluster data
[00:18:36] Restore DICT data
[00:18:36] gadmin reset
[00:19:16] gadmin start
[00:19:41] reinstall GSQL queries
[00:19:42] recompiling loading jobs
[00:20:01] Terminate workgroup
Restore from daily-20180607232159 finished in 22m55s.
Old gstore data saved under /home/tigergraph/tigergraph/gstore with suffix -20180608001836, you need to remove them manually.
For our test, GBAR restore took about 23 minutes. Most of the time (20 minutes) was spent decompressing the backup archive.
Note that after the restore is done, GBAR informs you where the pre-restore graph data has been saved. After you have verified that the restore was successful, you may want to delete the old graph data files to free up disk space.
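For the run above, the log line gives both the path and the suffix of the saved data. A sketch for locating the saved copies before deleting them; the exact layout under gstore is an assumption, so inspect the list before running any rm:
$ find /home/tigergraph/tigergraph/gstore -maxdepth 2 -name '*-20180608001836'
# after verifying the restore succeeded, remove the paths printed above, e.g. with rm -rf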