Common Applications
Example 1. Collaborative Filtering
Here is an observation about social networks: if a set of persons likes me, and many of them also like another person Z, then person Z and I probably have some things in common. The same observation applies to products and services: if a set of customers likes product X, and many of them also like product Z, then products X and Z probably have something in common. We say X and Z are "co-liked". This observation can be turned around into a search for recommendations: given a user X, find the set of users Z which are most highly co-liked. For social networks, this can be used for friend recommendation: find a highly co-liked person Z to introduce to X. For e-commerce, this can be used for purchase recommendation: someone who bought X may also be interested in buying Z. This technique of finding the top co-liked individuals is called collaborative filtering.
A graph analytics approach is a natural fit for collaborative filtering because the original problem is in a graph (social network), and the search criteria can easily be expressed as a path in the graph. We first find all people Y who like user X, then find other users Z who are liked by someone in group Y, and rank members of Z according to how many times they're liked by Y.
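The two-step traversal just described can be sketched in plain Python. This is an illustrative model of the logic only, not TigerGraph code; the likes data below reproduces the Figure 2 relationships used later in this example.

```python
from collections import Counter

# Map each user to the set of users they like (data mirrors Figure 2:
# id2, id3, and id5 all like id1).
likes = {
    "id2": {"id1", "id3"},
    "id3": {"id1", "id4"},
    "id5": {"id1", "id2", "id4"},
}

def co_liked(x, k):
    """Rank users Z by how many people like both X and Z."""
    fans = [u for u, liked in likes.items() if x in liked]   # group Y
    tally = Counter()
    for y in fans:
        for z in likes[y]:
            if z != x:                                       # exclude X itself
                tally[z] += 1
    return tally.most_common(k)

print(co_liked("id1", 3))  # id4 scores 2; id2 and id3 each score 1
```

This is exactly the Y-then-Z path search: find the fans of X, follow their likes, and rank by visit count.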
Figure 1 below shows a simple graph according to our model. The circles represent three User vertices with id values id1, id2, and id3. There are two directed edges labeled "Liked" which show that User id2 likes id1, and id2 also likes id3. (In this model, friendship is directional because in online social networks, one of the two persons initiates the friendship.) There are two more directed edges in the opposite directions labeled "Liked_By". Since id2 likes both id1 and id3, id1 and id3 are co-liked.
Quick Demo
To just see the basic operation of the TigerGraph system, follow the easy instructions below. You can then continue reading for an explanation of the command files, so you can learn to design your own examples.
Quick Demo Instructions
This example uses the graph below and asks the following query: "Who are the top co-liked persons of id1"?
Step 1: Obtain the data and command files. Create a graph model.
This example uses 4 small files: 3 command files (cf_model.gsql, cf_load.gsql, cf_query.gsql) and one data file (cf_data.csv). Their contents are shown below, so you can either copy from this document or download the files (look in the "cf" subfolder of Examples.zip).
Step 2: Load data:
The command below loads our new data.
Step 3: Install and execute the query:
The file cf_query.gsql creates a query called topCoLiked. Then we install the query. The creation step runs fast, but the installation (compiling) step may take about 1 minute. We then run the query, asking for the top 20 Users who are co-liked with User id1.
The query results should be the following. Interpretation: id4 has a score (@cnt) = 2, which means there are two persons who like both id1 and id4. Next, id2 and id3 each have a score of 1, meaning one person likes both id1 and that user.
We now begin a tutorial-style explanation of this TigerGraph example and the workflow in general.
The figure below outlines the steps to progress from an empty graph to a query solution. Each of the blocks below corresponds to one of the steps in the Quick Demo above. The tutorial below will give you a deeper understanding of each step, so you can learn how it works and so you can design your own graph solutions.
Step 1: Create a graph model.
The first step is to create a model for your data which describes the types of vertices and edges you will have.
This example is written to be compatible with older TigerGraph platforms which support only one graph model at a time (though the user can make the model simple or complex, to handle multiple needs). To clear an existing model and old data, so you can install a new one, run the DROP ALL command.
The statements below describe the vertex types and edge types in our Co-Liked model.
The first CREATE statement creates one vertex type called User. The second statement creates one directed edge type called Liked. The WITH REVERSE_EDGE clause means that for every two vertices (x,y) connected by an edge of type Liked, the system will automatically generate a corresponding edge of type Liked_By pointing from y to x, and both edges will have the same edge attributes.
After defining all your vertex and edge types, execute the following command to create a graph which binds the vertices and edges into one graph model:
The name of the graph is gsql_demo. Within the parentheses, you can either list the specific vertex and edge types (User, Liked), or you can use *, which means include everything. We chose to use * so that the same command can be used for all of our examples.
The CREATE commands can be stored in one file and executed together.
The CREATE GRAPH command is commented out for the following reason:
Our examples have been designed to run either as individual graphs or merged together into one multi-application graph. The CREATE GRAPH command may be run only once, after all the vertex and edge types have been created. (Each of our demo examples uses unique vertex and edge names, to avoid conflicts.) In other words, we run CREATE GRAPH gsql_demo(*)
as a separate command after creating all the vertex and edge types. If you decide you want to modify the schema after running CREATE GRAPH, you can create and run a SCHEMA_CHANGE JOB.
Newer TigerGraph platforms (i.e., version 1.1 or higher) can support multiple graphs, but this tutorial has been designed to be compatible with older single-graph platforms.
To execute these statements (DROP ALL, CREATE VERTEX, etc.), you can type them individually at the GSQL shell prompt, or you can first save them to a file, such as cf_model.gsql, and then run the command file. From within the shell, you would run
@cf_model.gsql
From outside the shell, you would run
gsql cf_model.gsql
Normally a user would put all their CREATE VERTEX, CREATE EDGE, and the final CREATE GRAPH statements in one file. In our example files, we have separated out the CREATE GRAPH statement because we want to merge all our example schemas together into one common graph.
The vertex, edge, and graph types become part of the catalog. To see what is currently in your catalog, type the ls command from within the GSQL shell to see a report as below:
To remove a definition from the catalog, use some version of the DROP command. Use the help command to see a summary of available GSQL commands. In our examples, we typically show keywords in ALL UPPERCASE to distinguish them from user-defined identifiers. Identifiers are case-sensitive, but keywords are not.
In this example, the vertices and edges don't have attributes. In general, a TigerGraph graph can have attributes on both vertices and edges, and it can also have different types of edges connecting the same two vertices. Please see GSQL Language Reference Part 1 - Defining Graphs and Loading Data which provides a more complete description of the graph schema definition language with additional examples.
Step 2: Load initial data.
Figure 2 shows a larger graph with five vertices and several edges. To avoid crowding the figure, only the Liked edges are shown: For every Liked edge, there is a corresponding Liked_By edge in the reverse direction.
The data file below describes the five vertices and seven edges of Figure 2.
The loading job below will read from a data file and create vertex and edge instances to put into the graph.
Now that we have defined a graph (in Step 1), GSQL commands or sessions must specify which graph to use. Line 2 (new for v1.2) sets the working graph to gsql_demo. Another way to set the working graph is to specify it each time you invoke the gsql command, e.g.,
The CREATE LOADING JOB statement (line 3) defines a job called load_cf. The job will read each line of the input file, create one vertex based on the value in the first column (referenced with column name $0), another vertex based on the value in the second column ($1), and one Liked edge pointing from the first vertex to the second vertex. In addition, since the Liked edge type definition includes the WITH REVERSE_EDGE clause, a Liked_By edge pointing in the opposite direction is also created.
After the job has been created, we run the job (line 12). The RUN LOADING JOB command line includes details about the data source: the name of the file is cf_data.csv, commas are used to separate columns, and \n is used to end each line. (Data files should not contain any extra spaces before or after the separator character.)
The TigerGraph loader automatically filters out duplicates. If either of the two column values has already been seen before, that vertex won't be created; instead, the existing vertex will be used. For example, if we read the first two data lines in data file cf_data.csv, the first line will generate two User vertices, one edge of type Liked, and one edge of type Liked_By. For the second row, however, only one new vertex will be created, since id2 has been seen already. Two edges will be created for the second row.
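The duplicate-filtering behavior just described can be modeled in a few lines of Python. This is a sketch of the loader's semantics only; the two input rows come from the example above.

```python
import csv, io

# Two lines of cf_data.csv-style input (values from the example above).
data = io.StringIO("id2,id1\nid2,id3\n")

vertices = set()          # vertex ids already created
edges = []                # (from, to, type) tuples

for src, dst in csv.reader(data):
    for v in (src, dst):
        vertices.add(v)   # a set silently ignores ids seen before
    edges.append((src, dst, "Liked"))
    edges.append((dst, src, "Liked_By"))   # effect of WITH REVERSE_EDGE

print(len(vertices), len(edges))  # 3 vertices, 4 edges
```

As in the text: the first row yields two vertices and two edges; the second row yields only one new vertex (id2 already exists) plus two more edges.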
It is okay to run a LOADING JOB again, or to run a different loading job, to add more data to a graph store which already has some data. For example, you could do the following:
After loading, you can use the GraphStudio UI to visually inspect your data. Refer to the TigerGraph GraphStudio UI Guide.
To clear all your data but to keep your graph model, run the "CLEAR GRAPH STORE -HARD" command. -HARD must be in all capital letters.
Be very careful using CLEAR GRAPH STORE; there is no UNDO command.
For the querying and updating examples in the remainder of this use case, we will assume that Figure 2 has been loaded.
This loading example is basic. The GSQL language can do complex data extraction and transformation, such as dealing with JSON input format and key-value list input, all in high-level syntax. Please see GSQL Language Reference Part 1 - Defining Graphs and Loading Data for more examples.
Step 3: Write a graph-based query solution
The GSQL language includes not only data definition and simple inspection of the data, but also advanced querying which traverses the graph and which supports aggregation and iteration.
Built-In Queries
First , we can run some simple queries to verify that the data were loaded correctly. Below are some examples of some built-in GSQL queries which can be run in GSQL shell:
Create a Query
Note on approx_count(*)
The approx_count(*) function relies on statistics which may not account for recent insertions and deletions. If there has been no recent activity, it will give accurate results. In contrast, the count(*) function ensures that recent data insertions and deletions are processed, so it returns an accurate count.
SELECT * displays information in JSON format. Below is an example of query output.
Now let's solve our original problem: find users who are co-liked with a user X. The following query demonstrates a 2-step traversal with aggregation.
The query below performs the co-liked collaborative filtering search. The concept behind this query is to describe a "graph path" which represents the relationship between a person (the starting point) and a person who is co-liked (the ending point). Figure 1 illustrates this path: id3 is a co-liked user of id1, because id2 likes both of them. The path from id1 to co-liked users is: (1) traverse a Liked_By edge to a User, and then (2) traverse a Liked edge to another User. This query also calculates the magnitude of the relationship between the starting point and each ending point. The more users like id2 there are connecting id1 and id3, the stronger the co-like relationship between id1 and id3. Counting the number of paths that end at id3 serves to calculate this magnitude.
This query is structured like a procedure with two input parameters: an input vertex and a value of k for the top-K ranking. The query contains three SELECT statements executed in order. The L0 statement defines our initial list of vertices: a set containing the single user supplied by the input_user parameter. Suppose the input user is id1. Next, the L1 statement starts from every vertex in the set L0, traverses every connected edge of type Liked_By, and returns every target vertex (that is, the other ends of the connected edges). As a result, L1 is the set of all users who liked the input user. Referring to the graph in Figure 2, the query travels backwards along every Liked edge which points to id1, arriving at id2, id3, and id5. These three vertices form L1. Next, the L2 statement starts from each user in L1, travels to every user liked by that starting user (via edges of type Liked), and increments the count for each User reached. That is, the algorithm counts how many times each vertex is visited by a query path. The WHERE condition makes sure the original input user will not be returned in the result. ORDER BY and LIMIT have the same meaning as in SQL. Below, we show how the L2 step tallies the counts for each vertex encountered:
From id2, Liked edges lead to id1 and id3. id1 is excluded due to the WHERE clause. The cnt count for id3 is incremented from 0 to 1.
From id3, Liked edges lead to id1 and id4. id1 is excluded due to the WHERE clause. The cnt count for id4 is incremented from 0 to 1.
From id5, Liked edges lead to id1, id2, and id4. id1 is excluded due to the WHERE clause. The cnt count of id2 is incremented from 0 to 1. The cnt count of id4 is incremented from 1 to 2.
The three co-liked users and their cnt scores: id3 (cnt score = 1), id4 (cnt = 2), and id2 (cnt = 1). The ORDER BY clause indicates that the sorting should be in descending order, such that the LIMIT clause trims L2 to the 20 vertices with the highest (as opposed to lowest) cnt values. For the test graph, there are only 3 vertices which are co-liked, less than the limit of 20. id4 has the strongest co-liked relationship.
Install and Run a Query
After the query is defined (in the CREATE QUERY block), it needs to be installed. The INSTALL QUERY command compiles the query.
If you have several queries, you can wait and install them with one command, which runs faster than installing each one separately. E.g.,
or
is faster than
After a query has been installed, it can be run as many times as desired. The RUN QUERY command invokes the query with the given input arguments.
Using "id1" as the starting point and allowing up to 5 vertices in the output, the RUN QUERY command and its output on our test graph is shown below:
Instead of using the RUN QUERY command within the GSQL shell, the query can be invoked from the operating system via a RESTful GET endpoint (which is automatically created by the INSTALL QUERY command):
If you followed the standard installation instructions for the TigerGraph system, hostName for the REST server is localhost and port is 9000.
As of TigerGraph 1.2, the URL for query REST endpoints includes the graph name after query/. Prior to 1.2, the URL for the example above was http://hostName:port/query/topCoLiked
Step 4 (Optional): Update Your Data.
You can update the stored graph at any time: add new vertices and edges, remove some, or update existing values. The GSQL language includes ADD, DROP, ALTER, UPSERT, and DELETE operations which are similar to the SQL operations of the same name. The UPSERT operation is a combined UPDATE-INSERT operation: if the object exists, then UPDATE; else INSERT. Note that this is the default behavior for the GSQL language's 'smart' loading described above. There are three basic types of modifications to a graph:
Adding or deleting objects
Altering the schema of the graph
Modifying the attributes of existing objects
We'll give a quick example of each type. To show the effect of each modification, we'll use the following simple built-in queries:
The current results, before making any modifications, are shown below.
Graph modification operations are performed by a distributed computing model which satisfies Sequential Consistency. For these examples, a brief one-second pause between updating and querying the graph should be sufficient.
Modification Type 1: Adding or deleting
Adding is simply running a loading job again with a new data file. More details are in the GSQL Language Reference Part 1.
Deleting: Suppose we want to delete vertex id3 and all its connections:
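The effect of a cascading delete can be sketched with a simple edge list. This is illustrative Python, not GSQL, and the edge data here is hypothetical.

```python
# Hypothetical vertex set and directed edge list.
edges = [("id2", "id1"), ("id2", "id3"), ("id3", "id4")]
vertices = {"id1", "id2", "id3", "id4"}

def delete_vertex(v):
    """Remove v and, cascading, every edge that touches it."""
    vertices.discard(v)
    edges[:] = [(s, t) for s, t in edges if v not in (s, t)]

delete_vertex("id3")
print(vertices, edges)  # id3 is gone, along with both of its incident edges
```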
The GSQL DELETE operation is a cascading deletion. If a vertex is deleted, then all of the edges which connect to it are automatically deleted as well.
Result: one fewer vertex and one fewer edge from id2.
Modification Type 2: Altering the schema
The GSQL language supports four types of schema alterations:
Adding a new type of vertex or edge:
ADD VERTEX | DIRECTED EDGE | UNDIRECTED EDGE
Removing a type of vertex or edge:
DROP VERTEX | DIRECTED EDGE | UNDIRECTED EDGE
Adding attributes to a vertex or edge type:
ALTER VERTEX vertex_type | EDGE edge_type ADD ATTRIBUTE (name type)
Removing attributes of a vertex or edge type:
ALTER VERTEX vertex_type | EDGE edge_type DROP ATTRIBUTE (name)
To make schema changes, create a SCHEMA_CHANGE job. Running the SCHEMA_CHANGE JOB will automatically stop all services, update the graph store, and restart the services. For example, suppose we wish to add a name for Users and a weight to Liked edges to indicate how much User A likes User B.
As of v1.2, the schema_change job here needs to be GLOBAL because the User vertex and Liked edge are global types (they were defined before an active graph was set.)
Changing the schema may necessitate changing queries and other tasks, such as REST endpoints. In this example, the collaborative filtering query will still run with the new weight attribute, but it will ignore the weight in its calculations.
Modification Type 3: Modifying the attributes of existing objects
Now that we have added a weight attribute, we probably want to assign some weight values to the graph. The following example updates the weight values of two edges. For edge upserts, the first two arguments in the VALUES list specify the FROM vertex id and the TO vertex id, respectively. Similarly, for vertex upserts, the first argument in the VALUES list specifies the PRIMARY_ID. Since id values may not be updated, the GSQL shell implicitly applies a conditional test: "If the specified id value(s) exist, then update the non-id attributes in the VALUES list; otherwise, insert a new data record using these values."
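The upsert rule ("if the id exists, update the non-id attributes; otherwise insert") can be sketched as follows. This is illustrative Python; the weight attribute follows the schema-change example above, and the specific ids and values are made up.

```python
# Edge store keyed by (from_id, to_id); values are the non-id attributes.
liked_edges = {("id1", "id2"): {"weight": 0.5}}

def upsert_edge(frm, to, **attrs):
    """UPDATE the edge's attributes if (frm, to) exists, else INSERT it."""
    if (frm, to) in liked_edges:
        liked_edges[(frm, to)].update(attrs)   # existing edge: update
    else:
        liked_edges[(frm, to)] = dict(attrs)   # new edge: insert

upsert_edge("id1", "id2", weight=2.0)   # updates the existing edge
upsert_edge("id2", "id4", weight=1.0)   # inserts a new edge
print(liked_edges)
```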
Other Modes for Graph Updates
In addition to making graph updates within the GSQL Shell, there are two other ways: sending a query string directly to the Standard Data Manipulation REST API, or writing a custom REST endpoint. For details about the first method, see the GET, POST, and DELETE /graph endpoints in the RESTPP API User Guide. The functionality in GSQL and in the Standard Query API is the same; GSQL commands are translated into REST GET, POST, and DELETE requests and submitted to the Standard Query API.
The REST API equivalent of the GSQL Modification 3 upsert example above is as follows:
where serverIP is the IP address of your REST server (default = localhost) and data/cf_mod3_input.json is a text file containing the following JSON-encoded data:
Example 2. Page Rank
This example shows the use of WHILE loop iteration, global variables, and the built-in outdegree attribute.
It is recommended that you do the Collaborative Filtering Use Case first, because it contains additional tips on running the TigerGraph system.
Remember that if you have a text file containing GSQL commands (e.g., commands.gsql), you can run it one of two ways:
From Linux:
gsql commands.gsql
From inside the GSQL shell:
@commands.gsql
To run a single command (such as DROP ALL):
From Linux:
gsql 'DROP ALL'
From inside the GSQL shell:
DROP ALL
Setting the working graph
If a graph has been defined, then all subsequent gsql commands need to specify which graph is being used. If your command file does not contain a "USE GRAPH" statement, then you can specify the graph when invoking gsql:
gsql -g graph_name commands.gsql
If you are always using the same graph, you can define a Linux alias to automatically include your graph name:
You can add this line to the .bashrc in your home directory so that the alias is defined each time you open a bash shell.
Step 1: Create a graph model.
In this example, there is only one type of vertex and one type of edge, and edges are directed.
Note how the Page vertex type has both a PRIMARY_ID and a page_id attribute. As will be seen in step 2, the same data will be loaded into both fields. While this seems redundant, this is a useful technique in TigerGraph graph stores. The PRIMARY_ID is not treated as an ordinary attribute. In exchange for high-performance storage, the PRIMARY_ID lacks some of the filtering and querying features available to regular attributes. The Linkto edge does not have any attributes. In general, a TigerGraph graph can have attributes on both vertices and edges, and it can also have different types of edges connecting the same two vertices.
The CREATE GRAPH command is commented out for the following reason:
Our examples have been designed to run either as individual graphs or merged together into one multi-application graph. The CREATE GRAPH command should be run only once, after all the vertex and edge types for all the examples have been created. (Naturally, every model uses unique vertex and edge names, to avoid conflicts.) In other words, run 'CREATE GRAPH gsql_demo(*)' as a separate command after you have created all your vertex and edge types.
Please see the GSQL Language Reference which provides a more complete description of the graph schema definition language with additional examples .
Step 2: Load initial data
A graph similar to the user-user network from the Collaborative Filtering example can be used here. That is, each row has two values which are node IDs, meaning that there is a connection from the first node to the second node. However, we will introduce one difference, to demonstrate the flexibility of the TigerGraph loading system: the data file uses the tab character as a field separator instead of the comma.
Loading job:
Create your loading job and load the data.
The above loading job will read each line of the input file (pagerank_data.tsv), create one vertex based on the value in the first column (referenced as $0), another vertex based on the value in the second column ($1), and one edge pointing from the first vertex to the second vertex. If either of the two column values has already been seen before, that vertex won't be created. Instead the existing vertex will be used. For example, the first row of pagerank_data.tsv, will create two vertices, with ids 1 and 2, and one edge (1, 2). The second row, however, will create only one new vertex, id 3, and one edge (1, 3), because id 1 already exists.
Note how the LOAD statement specifies the SEPARATOR character is the tab character.
Step 3: Write a Graph-based query solution
GSQL includes not only data definition and simple inspection of the data, but also advanced querying which traverses the graph and which supports aggregation and iteration. This example uses iteration, repeating the computation block until the maximum score change at any vertex is no more than a user-provided threshold, or until it reaches a user-specified maximum number of allowed iterations. Note the arrow -> in the FROM clause, used to represent the direction of a directed edge.
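The iteration scheme just described can be sketched in Python. This is a generic model of the loop logic, using the standard PageRank scoring form; the exact constants and formula in the GSQL query may differ.

```python
def pagerank(edges, damping=0.85, max_change=0.001, max_iter=100):
    """Iterate until every vertex's score changes by <= max_change,
    or until max_iter iterations have run."""
    verts = {v for e in edges for v in e}
    outdeg = {v: sum(1 for s, _ in edges if s == v) for v in verts}
    score = {v: 1.0 for v in verts}
    for _ in range(max_iter):
        received = {v: 0.0 for v in verts}
        for s, t in edges:                     # each page shares its score
            received[t] += score[s] / outdeg[s]
        new = {v: (1 - damping) + damping * received[v] for v in verts}
        change = max(abs(new[v] - score[v]) for v in verts)
        score = new
        if change <= max_change:               # convergence test
            break
    return score

# A tiny symmetric 2-cycle converges immediately to equal scores of 1.0.
print(pagerank([("a", "b"), ("b", "a")]))
```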
For JSON output API v2, the PRINT syntax for a vertex set variable is different than the v1 syntax.
After executing the CREATE QUERY command, remember to install the query, either by itself or together with other queries:
Run the query:
We will use the typical dampingFactor of 0.15 and iterate until the pagerank values change by less than 0.001, up to a maximum of 100 iterations. For these conditions, the PageRank values for the 4 vertices (1,2,3,4) are (0.65551, 0.93379, 1.22156, 1.18914), respectively.
Step 4 (Optional): Update Your Data.
Details about updating were discussed in Use Case 1 (Collaborative Filtering). We will go right to the graph modification examples for the PageRank case.
To show the effect of each modification, we use two built-in queries. The first one lists all the Page vertices. The second one lists all the edges which start at Page 4.
These are the results of the diagnostic queries, before any graph modifications. There are 4 vertices total and 2 edges which start at page 4.
Modification 1: Adding or deleting
Adding is simply running a loading job again with a new data file.
Deleting: Suppose we want to delete vertex url4 and all its connections:
The GSQL DELETE operation is a cascading deletion. If a vertex is deleted, then all of the edges which connect to it are automatically deleted as well.
Result: one fewer vertex and one fewer edge from Page 4.
Modification Type 2: Altering the schema
For example, suppose we wish to add an attribute to the Page vertices to classify what type of Page it is and also a date to the edges.
Changing the schema may necessitate revising and reinstalling loading jobs and queries. In this case, adding the pageType attribute does not harm the pageRank query.
This schema_change job is GLOBAL because the Page vertex and Linkto edge types are global (defined before setting an active graph).
Modification Type 3: Modifying the attributes of existing objects
The following example updates the type values of two vertices and one edge.
Other Modes for Graph Updates
In addition to making graph updates within the GSQL Shell, there are two other ways: sending a query string directly to the Standard Data Manipulation REST API, or writing a custom REST endpoint. For details about the first method, see the Standard Data Manipulation REST API User Guide . The functionality in GSQL and in the Standard Query API is essentially the same; GSQL commands are translated into REST GET, POST, and DELETE requests and submitted to the Standard Query API.
The REST API equivalent of the GSQL Modification 3 upsert example above is as follows:
where hostName is the IP address of your REST server, and data/pagerank_mod3_input.json is a text file containing the following JSON-encoded data:
Example 3. Simple Product Recommendation
This example introduces the technique of flattening – splitting a data field which contains a set of elements into individual vertices and edges, one for each element.
Input Data: A list of products. Each Product has a 64-bit image hash value and a list of words describing the product. Query Task: Find the products which are most similar to a given product. Formally, given a product id P and an integer K, return the top K products similar to product P. The similarity between a product P and another product Q is based on the number of words found in the product descriptions of both P and Q.
Step 1: Create a graph model for the use case, using the data definition language (DDL) aspect of the GSQL language.
Then run
The above statements create two types of vertices, Product and DescWord, and one type of edge connecting the two vertex types. The edge is undirected so that you can just as easily traverse from a Product to its descriptive words or from a descriptive word to Products which are described by it.
The generated graph schema for this case is shown below. The GSQL Language Reference manual provides a more complete description of the language with more examples .
Step 2: Load Input Data.
In this example, the input data are all stored in a single file having a 3-column format with a header line. Below are the test data:
Column 1 is the product id; column 2 is the image hash code, and column 3 is a list of words describing the product. Note how double quotation marks are used to enclose the list of words. Each row from the input file may lead to the creation of one Product vertex, multiple DescWord vertices, and multiple edges, one edge connecting the Product to each DescWord vertex.
The loading job below makes use of several features of the loading language to intelligently transform this data file into the appropriate vertices and edges.
The HEADER="true" option tells the loader that the data file's first line contains column headings instead of data. It will read the column headings and permit these heading names to be used instead of index numbers $1, $2, etc.
DEFINE HEADER and USER_DEFINED_HEADER allow the loading job to define its own names for the columns ("id", "hash", "words"), instead of the index numbers ($0, $1, $2), overriding the file's own headings.
QUOTE="double" informs the loader that double quotation marks enclose strings. This allows the separator character (e.g., comma) to appear in the string, without triggering the end of the token. QUOTE="single" is also available.
The special TEMP_TABLE and flatten() function are used to split the list of tokens into separate items and to store them temporarily. The temporary items are then used to assemble the final edge objects.
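The flatten step can be modeled in Python as below. This is an illustration of the transformation only, not the loader itself; the hash value, the word list, and the edge type name are placeholders (the product id is taken from the query example later in this section).

```python
import csv, io

# One quoted, comma-separated row in the style described above
# (hash and words are hypothetical).
row_text = '62abcax334,0x3af1,"soft,cotton,shirt"\n'
pid, img_hash, words = next(csv.reader(io.StringIO(row_text)))

# flatten(): split the quoted word list into one (product, word) pair
# per word, as if each pair were a row in a TEMP_TABLE.
temp_table = [(pid, w) for w in words.split(",")]

# Each temporary row then becomes one Product-DescWord edge.
edges = [{"from": p, "to": w, "type": "describes"} for p, w in temp_table]
print(edges)
```

Note how the QUOTE setting matters: the csv reader treats the quoted field as a single token even though it contains the separator character.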
In general, the GSQL language can map and transform multiple input files to multiple vertex and edge types. More advanced data transformation and filtering features are also available. See the GSQL Language Reference manual for more information.
An example of the resulting data graph is shown below. Products (P1, P2, etc.) connect to various DescWords (Word1, Word2, etc.). Each Product connects to many DescWords, and each DescWord is used in multiple Products.
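The similarity search over this bipartite Product-DescWord graph can be sketched as follows. This is illustrative Python with made-up product and word names; the actual solution is the GSQL query in the next step.

```python
from collections import Counter

# Hypothetical bipartite data: product -> set of descriptive words.
desc = {
    "P1": {"w1", "w2", "w3"},
    "P2": {"w2", "w3"},
    "P3": {"w3"},
    "P4": {"w4"},
}

def top_similar(p, k, min_shared=1):
    """Rank other products by the number of description words shared with p,
    keeping only those with more than min_shared words in common."""
    shared = Counter()
    for q, words in desc.items():
        if q != p:
            shared[q] = len(desc[p] & words)
    ranked = [(q, n) for q, n in shared.most_common() if n > min_shared]
    return ranked[:k]

print(top_similar("P1", 3))  # only P2 shares more than one word with P1
```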
Step 3: Write a graph-based solution using TigerGraph's high-level GSQL query language to solve the use case and auto-generate the REST GET/POST endpoints for real-time access to the TigerGraph system.
Query Result:
For product id= 62abcax334, find the top 3 similar products, which have more than 1 descriptive word in common with product 62abcax334.
When the above GSQL query is installed, a REST GET endpoint for this query is automatically generated. Instead of running the query as a GSQL command, clients can also invoke it by formatting the query as an HTTP request query string and sending a GET request, e.g.,
Example 4. Same Name Search
This example introduces the CASE...WHEN...THEN structure, which can also be used as an if...then block.
Input Data: A social network, where each person has a first and last name and may also display a picture of themselves.
Query Task: Find the users who are most "similar" to a user X. Specifically, a user X searches for other users whose first or last name matches user X's name. The query returns the list of users (Y1, Y2, ..., Yk) within two steps (two steps means friend-of-friend), who have matching names, and who offer a picture. The list is sorted and ranked by the relevance score between X and another user Yi, where the score is a linear function of four factors:
depth: how far X is from Yi (the shortest distance)
count: the number of shortest paths between X and Yi
match: whether Yi matches the input first name (match=1), the input last name (match=2), or both input names (match=3)
profile: whether Yi has a profile picture
(For the standard TigerGraph configuration, hostName:port is localhost:9000.)
Using the graph above as an example, suppose we want to compute relevance scores for the social network of the Tom Smith on the left.
match=1 for Tom Lee (first names are the same)
match=2 for May Smith (last names are the same)
match=3 for Tom Smith on the right (both names are the same).
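The match factor can be written as a small if/else chain, the Python analogue of the query's CASE...WHEN structure. The names are from the example above; returning 0 for no match is an assumption here (the query only scores users with matching names).

```python
def match_value(x_first, x_last, y_first, y_last):
    """match=1: first names match; match=2: last names; match=3: both."""
    if x_first == y_first and x_last == y_last:
        return 3
    elif x_last == y_last:
        return 2
    elif x_first == y_first:
        return 1
    else:
        return 0   # assumption: no match at all

print(match_value("Tom", "Smith", "Tom", "Lee"))    # 1
print(match_value("Tom", "Smith", "May", "Smith"))  # 2
print(match_value("Tom", "Smith", "Tom", "Smith"))  # 3
```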
There is no direct connection to the other Tom Smith, but there are several paths:
Paths with a depth = 2:
Tom Smith → Ming Wu → Tom Smith
Tom Smith → Ron Glass → Tom Smith
Tom Smith → Tom Lee → Tom Smith
Tom Smith → May Smith → Tom Smith
There are also some longer paths (e.g., Tom Smith → Ron Glass → Tom Lee → Tom Smith), but since they are longer, they are not of interest. Therefore, for the relationship (Tom Smith, Tom Smith), depth = 2 and count = 4.
The four factors (depth, count, match, hasPicture) are combined to compute an overall relevance score:
The clause hasPicture ? 200 : 0 uses the ternary conditional operator: if hasPicture is TRUE, it evaluates to 200; otherwise, it evaluates to 0.
To design the graph schema, consider what attributes are needed for each vertex and edge type. The User vertices need to have a first name, a last name, and a profile picture. We assume that the social network is stored in two data files, one for vertices and one for edges.
Vertex file format: id, firstname, lastname, img_url
Edge file format: user1, user2
The following code creates the schema and loads the data:
Test data files
The query algorithm is a bit long but straightforward:
Select all the depth=1 neighbors. For each neighbor:
Use a CASE structure to check for matching first and last names and assign a match value.
Check for an image.
We know depth=1 and count=1, so compute the relevance score.
Starting from the depth=1 neighbors, move to the depth=2 neighbors. For each such neighbor:
Use a CASE structure to check for matching first and last names and assign a match value.
Use ACCUM to count up the paths.
Check for an image.
Knowing depth=2, and using the accumulated path count, compute the relevance score.
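The steps above can be sketched in GSQL roughly as follows; the vertex, edge, and parameter names, the accumulators, and the scoring details are assumptions, not the tutorial's actual query:

```
// Hypothetical sketch of the two-hop traversal.
SumAccum<INT> @cnt;     // number of shortest paths reaching this vertex
SumAccum<INT> @match;   // 1 = first name, 2 = last name, 3 = both

Start = {inputUser};
// Depth-1 neighbors: one path each, so @cnt = 1.
Hop1 = SELECT t FROM Start:s -(Friend:e)-> User:t
       ACCUM t.@cnt += 1
       POST-ACCUM
         CASE
           WHEN t.firstName == fName AND t.lastName == lName THEN t.@match += 3
           WHEN t.firstName == fName THEN t.@match += 1
           WHEN t.lastName == lName THEN t.@match += 2
         END;
// Depth-2 neighbors: sum the path counts arriving through each depth-1 neighbor.
Hop2 = SELECT t FROM Hop1:s -(Friend:e)-> User:t
       WHERE t != inputUser
       ACCUM t.@cnt += s.@cnt;
```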
The complete query is shown below:
Query result
Starting from user 0, who is named "Michael Jackson", find the top 100 most similar persons, according to the scoring function described above.
Example 5. Content-Based Filtering Recommendation of Videos
This example demonstrates conditional loading to be selective about which data records to load into which vertices or edges.
Input Data: A network of video programs, a set of tags which describe each video, and a set of users who have watched and rated videos.
Query Task: Recommend video programs that a given user might like.
Step 1: Create Graph Schema
The principle behind content-based recommendation is that people are often interested in products whose attributes are similar to those of products they have selected in the past. Suppose we have a video store. If the store tracks which videos each customer has selected in the past, and also records attributes about its videos, it can use this data to recommend more videos to the customer. Formally, for an input user (the seed), first find which videos the user has watched. Then, from all the watched videos, find the top k attributes. From the top k attributes, find the top n videos that the seed user has not yet watched.
This suggests that we should have a graph with three types of vertices: user, video, and attributes (of a video). The schema is shown below.
Then run
Step 2: Load Input Data
In this example, there is one data file which contains data for all three types of vertices – VidUser, Video, and AttributeTag. The first field of each line indicates the vertex type. Similarly, there is one edge data file for the two types of edges – User_Video and Video_AttributeTag. The WHERE clause is used to conditionally load only certain lines into each type of vertex or edge. Further, these data files do not contain values for every attribute. When "_" is used in the VALUES list of a LOAD statement, it means no data is loaded for that attribute from the input; the default value is written instead (or the existing value is kept, if there is already a vertex or edge with that ID).
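A loading job of this shape might look like the following sketch; the graph name, file path, and attribute lists are assumptions, not the tutorial's actual job:

```
// Hypothetical loading job: the first field ($0) routes each line
// to the right vertex type; "_" keeps an attribute at its default.
CREATE LOADING JOB loadVertices FOR GRAPH videoDemo {
  DEFINE FILENAME f = "/data/vertex_data.csv";
  LOAD f
    TO VERTEX VidUser      VALUES ($1, _)  WHERE $0 == "user",
    TO VERTEX Video        VALUES ($1, $2) WHERE $0 == "video",
    TO VERTEX AttributeTag VALUES ($1, _)  WHERE $0 == "tag"
    USING SEPARATOR=",";
}
```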
Test data files
Loading jobs
Step 3: Query the data
The query has the three stages characteristic of content-based recommendation:
Find products (videos) previously selected
Find the top attributes of those products
Find the products which have the most attributes in common with the seed products
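The three stages can be sketched roughly as below; the type names, the reverse edge Tag_Video_Rev, and the limits are assumptions, not the tutorial's actual query:

```
// Hypothetical skeleton of the three recommendation stages.
OrAccum @watched;
SumAccum<INT> @cnt;

Seed    = {inputUser};
// Stage 1: mark everything the user has already watched.
Watched = SELECT v FROM Seed:s -(User_Video:e)-> Video:v
          ACCUM v.@watched += TRUE;
// Stage 2: top 10 attribute tags among the watched videos.
TopTags = SELECT t FROM Watched:v -(Video_AttributeTag:e)-> AttributeTag:t
          ACCUM t.@cnt += 1
          ORDER BY t.@cnt DESC
          LIMIT 10;
// Stage 3: unwatched videos sharing the most top tags.
Recs    = SELECT v FROM TopTags:t -(Tag_Video_Rev:e)-> Video:v
          WHERE v.@watched == FALSE
          ACCUM v.@cnt += 1
          ORDER BY v.@cnt DESC
          LIMIT 10;
```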
Query result
Recommend up to 10 videos to user 0, using the top 10 attributes from the client's favorite videos.
Example 6. People You May Know
This example shows a technique for passing intermediate results from one stage to another.
Input Data : A social network with weighted connections.
Query Task: Recommend the Top K people you may know but who are not yet in your set of connections. Scoring is based on a variation of cosine similarity of two users:
This is a way to "transport" a value as the query travels through the graph.
The graph schema and loading jobs:
Step 1
This example shows that the computation of a moderately complex formula is simple in the GSQL language. It also demonstrates a technique of copying an attribute from an edge or a source vertex to the (temporary) accumulator of the edge's target vertex:
Then
Step 2
Test data:
Step 3
If you have worked through the previous examples, you can perhaps now see that we need a two-stage query: from A to A's neighbors, and then from A's neighbors to their neighbors. You may also realize that we will use the ACCUM clause to perform the summation in the second stage. But how, during the second stage, will we know the weight of the first-stage edge? By storing a copy of the edge's weight in an accumulator attached to the edge's target vertex, which becomes a source vertex in the second stage.
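The technique can be sketched as follows; the vertex and edge type names and accumulators are assumptions:

```
// Stage 1: copy each first-hop edge weight onto its target vertex.
L1 = SELECT t FROM Start:s -(Connect:e)-> Person:t
     ACCUM t.@inWeight += e.weight;
// Stage 2: the stored weight is now readable from the source side,
// so both hops' weights can enter the score.
L2 = SELECT t FROM L1:s -(Connect:e)-> Person:t
     ACCUM t.@score += s.@inWeight * e.weight;
```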
In JSON output API v2, the PRINT syntax for a vertex set variable is different from the v1 syntax.
Query result:
Recommend up to 10 persons whom Person 1 might like to get to know.
Example 7. More Social Network Queries
Input Data: A social network in which each user has two attributes (besides their name): the time that they joined the network, and a boolean flag which says whether they are active or not.
Query Tasks: We show several query examples, making use of the time attribute and the directed links between users.
Part 1: Create Graph
Then
Part 2: Load Data
Test data:
We have two data files. For variety, we will create two loading jobs, one for each file. Moreover, we will define the specific file locations in the loading jobs themselves, so it is not necessary to provide the filepaths in the RUN LOADING JOB statements. Also, the file social_users.csv has a header, so we can use the column headings to refer to the columns.
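A header-aware loading job might look like the following sketch; the graph, vertex, and column names are assumptions:

```
// Hypothetical loading job. HEADER="true" lets $"column" refer to
// columns by their header names instead of by position.
CREATE LOADING JOB loadSocialUsers FOR GRAPH socialDemo {
  DEFINE FILENAME f = "/data/social_users.csv";
  LOAD f TO VERTEX SocialUser
       VALUES ($"id", $"name", $"joinTime", $"active")
       USING SEPARATOR=",", HEADER="true";
}
```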
Part 3 : Create, install, and run queries.
This case study presents four queries and their results, one at a time, so there are four separate "INSTALL QUERY" commands. Alternatively, all four can be installed at once, which executes faster than separate install commands:
INSTALL QUERY socialFromUser, socialToUser, socialMutualConnections, socialOneWay
or
INSTALL QUERY ALL
Q1 (socialFromUser): find users who have a direct connection from a given input user, with some filtering conditions on the candidate users' attributes
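A sketch of such a query is shown below; the vertex and edge type names, attributes, and parameters are assumptions, not the tutorial's actual code:

```
// Hypothetical sketch of Q1: one-hop traversal over outgoing edges,
// with filtering conditions on the candidates' attributes.
Start  = {inputUser};
Result = SELECT t FROM Start:s -(Connect:e)-> SocialUser:t
         WHERE t.active == TRUE AND t.joinTime >= minTime;
PRINT Result;
```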
Test query and result:
Q2 (socialToUser): similar to Q1, but return users who have a connection pointing to the input user.
Test query and result:
Q3 (socialMutualConnections): return the set of users who have connections from both input user A and input user B.
Test query and result:
Q4 (socialOneWay): find all A->B user relationships such that there is an edge from A to B but no edge from B to A, and such that A and B connect to at least a given number of common friends.
Test query and result: There are three such pairs:
From vertex 0 to 1. Vertices 0 and 1 have 3 neighbors in common.
From vertex 0 to 8. Vertices 0 and 8 have 2 neighbors in common.
From vertex 0 to 9. Vertices 0 and 9 have 1 neighbor in common.
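One way to detect one-way edges is to record each vertex's incoming sources in a first pass and filter in a second pass; all names below are assumptions, and the common-neighbor count check is omitted:

```
SetAccum<VERTEX> @inFrom;   // who points at this vertex

// Pass 1: record the source of every edge on its target.
S1 = SELECT t FROM All:s -(Connect:e)-> SocialUser:t
     ACCUM t.@inFrom += s;
// Pass 2: keep s -> t only when there is no t -> s edge,
// i.e., t never appears among s's recorded sources.
S2 = SELECT t FROM All:s -(Connect:e)-> SocialUser:t
     WHERE NOT s.@inFrom.contains(t);
```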
Suggested variant query:
socialTwoWay: Find all A↔B relationships such that there are connected edges both from A to B and from B to A, and A and B have some common neighbors.
Test query and result:
There is one such pair (0, 7), but the query reports it twice: first as (7, 0) and then as (0, 7). Vertices 7 and 0 have 3 neighbors in common.