There are two aspects to clearing the system: flushing the data and clearing the schema definitions in the catalog. Two different commands are available.
The CLEAR GRAPH STORE command flushes all the data out of the graph store (database). By default, the system asks the user to confirm that they really want to discard all the graph data. To force the clear operation and bypass the confirmation question, use the -HARD option.
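A sketch of the two forms of the command described above:

```
# Prompts for confirmation before discarding all graph data
CLEAR GRAPH STORE

# -HARD skips the confirmation prompt
CLEAR GRAPH STORE -HARD
```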
Clearing the graph store does not affect the schema.
The DROP ALL statement clears the graph store and removes all definitions from the catalog: vertex types, edge types, graph types, jobs, and queries.
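For contrast with CLEAR GRAPH STORE, a sketch of the statement:

```
# Deletes all graph data AND removes all catalog definitions:
# vertex types, edge types, graph types, jobs, and queries
DROP ALL
```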
Running a loading job executes a previously installed loading job. The job reads lines from an input source, parses each line into data tokens, and applies loading rules and conditions to create new vertex and edge instances to store in the graph data store.
TigerGraph 2.0 introduces enhanced data loading with slightly modified syntax for the CREATE and RUN statements. The previous RUN JOB syntaxes for v1.x online loading and offline loading are still supported for backward compatibility. Additionally, loading jobs can also be run by directly submitting an HTTP request to the REST++ server.
Note that the keyword LOADING is included. This makes it clearer to users and to GSQL that the job is a loading job and not some other type of job (such as a SCHEMA_CHANGE JOB).
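A minimal sketch of the statement, using a hypothetical job name:

```
# The LOADING keyword distinguishes this from other job types
RUN LOADING JOB load_videos
```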
When a concurrent loading job is submitted, it is assigned a job ID number, which is displayed on the GSQL console. The user can use this job ID to refer to the job, for a status update, to abort the job, or to re-start the job. These operations are described later in this section.
By default, the command will print several lines of status information while the loading is running. If the -noprint option is included, the job will omit the progress and summary details, but it will still display the job id and the location of the log file.
If -dryrun is used, the system will read the data files and process the data as instructed by the job, but will NOT load any data into the graph. This option can be a useful diagnostic tool.
-n [i,] j
The -n option limits the loading job to processing only a range of lines of each input data file. The -n flag accepts one or two arguments. For example,
-n 50 means read lines 1 to 50.
-n 10, 50 means read lines 10 to 50. The special symbol $ is interpreted as "last line", so
-n 10,$ means read from line 10 to the end.
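The line-range variants above can be sketched as follows (the job name is hypothetical; flags precede the job name):

```
RUN LOADING JOB -n 50 load_videos      # lines 1 to 50
RUN LOADING JOB -n 10, 50 load_videos  # lines 10 to 50
RUN LOADING JOB -n 10,$ load_videos    # line 10 to the last line
```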
The optional USING clause may contain a list of file variables. Each file variable may optionally be assigned a filepath_string, obeying the same format as in the CREATE LOADING JOB. This list of file variables determines which parts of a loading job are run and what data files are used.
When a loading job is compiled, it generates one RESTPP endpoint for each filevar and filepath_string. As a consequence, a loading job can be run in parts. When RUN LOADING JOB is executed, only those endpoints whose filevar or file identifier ("__GSQL_FILENAME_n__") is mentioned in the USING clause will be used. However, if the USING clause is omitted, then the entire loading job will be run.
If a filepath_string is given, it overrides the filepath_string defined in the loading job. If a particular filevar is not assigned a filepath_string either in the loading job or in the RUN LOADING JOB statement, then an error is reported and the job exits.
The CONCURRENCY parameter sets the maximum number of concurrent requests that the loading job may send to the GPE. The default is 256.
The BATCH_SIZE parameter sets the number of data lines included in each concurrent request sent to the GPE. The default is 1024.
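Putting the USING clause together, a sketch with hypothetical job name, file variable, and filepath:

```
# Run only the part of the job fed by file variable f1,
# overriding its filepath and tuning the GPE request settings
RUN LOADING JOB load_videos USING f1="/data/videos.csv", CONCURRENCY=128, BATCH_SIZE=500
```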
Another way to run a loading job is to submit an HTTP request to the POST /ddl/<graph_name> endpoint of the REST++ server. Since the REST++ server has more direct access to the graph processing engine, this can execute more quickly than a RUN LOADING JOB statement in GSQL.
When a CREATE LOADING JOB block is executed, the GSQL system creates one REST endpoint for each file source. Therefore, one REST request can invoke loading for one file source at a time. Running an entire loading job may take more than one REST request.
The Linux curl command is a handy way to make HTTP requests. If the data size is small, it can be included directly in the command line by using the -d flag with a data string:
If the data size is large, it is better to reference the data filename, using the --data-binary flag:
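Sketches of both curl forms, assuming a hypothetical graph named demo, job named load_videos, and file variable f1; the tag and filename query parameters identify the loading job and file source as described in the RESTPP API User Guide:

```
# Small payload: pass the data lines inline with -d
curl -X POST "http://localhost:9000/ddl/demo?tag=load_videos&filename=f1" -d "movie1,2001"

# Large payload: reference a data file with --data-binary
curl -X POST "http://localhost:9000/ddl/demo?tag=load_videos&filename=f1" --data-binary @videos.csv
```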
<filepath> should be replaced with either a file variable (from a DEFINE FILENAME statement) or a position-based file identifier ("__GSQL_FILENAME_n__") for an explicit filepath_string.
For more information about sending REST++ requests, see the RESTPP API User Guide.
Example: The code block below shows three equivalent commands for the same loading job. The first uses the GSQL command RUN LOADING JOB. The second uses the Linux curl command to submit an HTTP request, placing the parameter values in the URL's query string. The third gives the parameter values through the curl command's -d data payload option.
Starting with v2.0, there are commands to check loading job status, to abort a loading job, and to restart a loading job.
When a loading job starts, the GSQL server assigns it a job id and displays it for the user to see. The job id format is typically the name of the loading job, followed by the machine alias, followed by a code number, e.g.,
By default, an active loading job will display periodic updates of its progress. There are two ways to inhibit these automatic output displays:
Run the loading job with the -noprint option.
After the loading job has started, enter CTRL+C. This will abort the output display process, but the loading job will continue.
The command SHOW LOADING JOB shows the current status of either a specified loading job or all current jobs:
The display format is the same as that displayed during the periodic progress updates of the RUN LOADING JOB command. If you do not know the job id, but you know the job name and possibly the machine, then the ALL option is a handy way to see a list of active job ids.
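A sketch of both forms described above (the job id shown is hypothetical):

```
# Status of one job, by its job id
SHOW LOADING JOB load_videos.m1.1525091090494

# Status of all current loading jobs
SHOW LOADING JOB ALL
```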
The command ABORT LOADING JOB aborts either a specified load job or all active loading jobs:
The output will show a summary of aborted loading jobs.
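A sketch of both forms (hypothetical job id):

```
# Abort one loading job by its job id
ABORT LOADING JOB load_videos.m1.1525091090494

# Abort all active loading jobs
ABORT LOADING JOB ALL
```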
The command RESUME LOADING JOB will restart a previously-run job which ended for some reason before completion.
If the job is finished, this command will do nothing. The RESUME command should pick up where the previous run ended; that is, it should not load the same data twice.
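A sketch of the command (hypothetical job id):

```
# Restart an interrupted job from where it left off;
# previously loaded data is not loaded twice
RESUME LOADING JOB load_videos.m1.1525091090494
```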
Every loading job creates a log file. When the job starts, it will display the location of the log file. Typically, the file is located at
This file contains the following information which most users will find useful:
A list of all the parameter and option settings for the loading job
A copy of the status information that is printed
Statistics report on the number of lines successfully read and parsed
The statistics report includes how many objects of each type were created and how many lines were invalid, grouped by the reason for rejection. The report also shows which lines caused the errors. There are two types of statistics: file-level statistics (the number of lines) and object-level statistics (the number of data objects). If a file-level error occurs, e.g., a line does not have enough columns, that line of data is skipped for all LOAD statements in the loading job. If an object-level error or failed condition occurs, only the corresponding object is not created; all other objects from that loading job are still created, provided they have no object-level error or failed condition of their own.
Note that failing a WHERE clause is not necessarily a bad result. If the user's intent for the WHERE clause is to select only certain lines, then it is natural for some lines to pass and some lines to fail.
Below is an example.
The above loading job and data generate the following report:
There are a total of 7 data lines. The report shows that
Six of the lines are valid data lines
One line (Line 7) does not have enough tokens.
Of the 6 valid lines,
Three lines generate valid movie vertices.
One line has an invalid attribute (Line 1: year)
Two lines (Lines 4 and 5) do not pass the WHERE clause.