GSQL Graph Algorithm Library

Updated Jan 21, 2019

Graph algorithms are functions for measuring characteristics of graphs, vertices, or relationships. Graph algorithms can provide insights into the role or relevance of individual entities in a graph. For example: How centrally located is this vertex? How much influence does this vertex exert over the others?

Some graph algorithms measure or identify global characteristics: What are the natural community groupings in the graph? What is the density of connections?

Using the GSQL Graph Algorithm Library

The GSQL Graph Algorithm Library is a collection of high-performance GSQL queries, each of which implements a standard graph algorithm. Each algorithm is ready to be installed and used, either as a stand-alone query or as a building block of a larger analytics application.

GSQL running on the TigerGraph platform is particularly well-suited for graph algorithms:

  • Turing-complete with full support for imperative and procedural programming, ideal for algorithmic computation.

  • Parallel and Distributed Processing, enabling computations on larger graphs.

  • User-Extensible. Because the algorithms are written in standard GSQL and compiled by the user, they are easy to modify and customize.

  • Open-Source. Users can study the GSQL implementations to learn by example, and they can develop and submit additions to the library.

Library Structure

You can download the library from github: https://github.com/tigergraph/ecosys/tree/master/graph_algorithms

The library contains two main sections: Algorithms and Tests. The Algorithms folder contains template algorithms and scripts to help you customize and install them. The Tests folder contains small sample graphs that you can use to experiment with the algorithms. In this document, we use the test graphs to show you the expected result for each algorithm. The graphs are small enough that you can manually calculate and sometimes intuitively see what the answers should be.

Installing an Algorithm

Remember that GSQL graph algorithms are simply GSQL queries. However, since we do not know what vertices or edges you want to analyze, or how you want to receive output, the algorithms are in a template format. You need to run a script to personalize your algorithm and then to install it.

  1. Within the Algorithms folder is a script install.sh. When you run the script, it will first ask you which graph schema you wish to work on. (The TigerGraph platform supports multiple concurrent graphs.)

  2. It then asks you to choose from a menu of available algorithms.

  3. Once it knows your graph schema and chosen algorithm, the installer can give you some guidance in selecting appropriate vertex types and edge types. Note that these do not have to be all the vertex or edge types in your graph. For example, you may have a social graph with three categories of persons and five types of relationships. You might decide to compute PageRank using Member and Guest vertices and Recommended edges.

  4. It will then ask you what type of output you would like. It will proceed to create up to three versions of your algorithm, based on the three ways of receiving the algorithm's output:

    1. Stream the output in JSON format, the default behavior for most GSQL queries.

    2. Save the output value(s) in CSV format to a file. For some algorithms, this option will add an input parameter to the query, to let the user specify how many total values to output.

    3. Store the results as vertex or edge attribute values. The attributes must already exist in the graph schema, and the installer will ask you which attributes to use.

  5. After creating queries for one algorithm, the installer will loop back to let you choose another algorithm (returning to step 2 above).

  6. If you choose to exit, the installer makes a last request: Do you want to install your queries? Installation is when the code is compiled and bound into the query engine. It takes a few minutes, so it is best to create all your personalized queries at once and then install them as a group.


After the algorithms are installed, you will see them listed among the rest of your GSQL queries.

Running an Algorithm

Running an algorithm is the same as running a GSQL query. For example, if you selected the JSON option for pageRank, you could run it from GSQL as below:
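(A hypothetical form is shown here; it assumes the personalized query kept the name pageRank and takes damping, maxChange, and maxIter parameters, in that order. Check the signature of your installed query, since the installer generates the actual parameter list.)

    GSQL > RUN QUERY pageRank(0.85, 0.001, 25)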

Installing a query also creates a REST endpoint, so the same query can also be invoked as an HTTP request to the TigerGraph REST++ server, with the parameters passed in the request.

GSQL lets you run queries from within other queries. This means you can use a library algorithm as a building block for more complex analytics.

Library Overview

The following algorithms are currently available. The algorithms are grouped into four classes:

  • Path

  • Centrality

  • Community

  • Similarity (NEW)

Moreover, each algorithm may or may not currently be available for graphs with Undirected Edges, Directed Edges, and Weighted Edges.

  • Coming soon means that TigerGraph plans to release this variant of the algorithm soon.

  • n/a means that this variant of the algorithm is typically not used.

Computational Complexity

Computational Complexity is a formal mathematical term, referring to how an algorithm's time requirements scale according to the size of the data or other key parameters. For graphs, there are two key data parameters:

  • V (or sometimes n), the number of vertices

  • E (or sometimes m), the number of edges

The notation O(V^2) (read "big O V squared") means that when V is large, the computational time is proportional to V^2.
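For example, if an O(V^2) algorithm takes 1 second on a graph with 10,000 vertices, then on a graph with 100,000 vertices (10 times as many), you should expect the running time to grow by a factor of about 10^2 = 100, to roughly 100 seconds.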

Path Algorithms

These algorithms help find the shortest path or evaluate the availability and quality of routes.

Single-Source Shortest Path, Unweighted

Description and Uses

If a graph has unweighted edges, then finding the shortest path from one vertex to another is the same as finding the path with the fewest hops. Think of Six Degrees of Separation and Friend of a Friend. Unweighted Shortest Path answers the question "How are you two related?" The two entities do not have to be persons. Shortest Path is useful in a host of applications, from estimating influences or knowledge transfer, to criminal investigation.

When the graph is unweighted, we can use a "greedy" approach to find the shortest path. In computer science, a greedy algorithm makes intermediate choices based on the data being considered at the moment, and then does not revisit those choices later on. In this case, once the algorithm reaches a vertex T for the first time, it is certain that the path it used is a shortest path to T.

Specifications

Example

In the graph below, we do not consider the edge weights. Using vertex A as the source vertex, the algorithm discovers that the shortest path from A to B is A-B, and the shortest path from A to C is A-D-C, etc.

Single-Source Shortest Path, Weighted

Description and Uses

Finding shortest paths in a graph with weighted edges is algorithmically harder than in an unweighted graph, because finding a path to a vertex T does not guarantee that it is a shortest path. If edge weights are always positive, then you must keep trying until you have considered every in-edge to T. If edge weights can be negative, then it is even harder: you must consider all possible paths.

A classic application for weighted shortest path is finding the shortest travel route to get from A to B. (Think of route planning "GPS" apps.) In general, any application where you are looking for the cheapest route is a possible fit.

Specifications

The shortest path algorithm can be optimized if we know all the weights are nonnegative. If there can be negative weights, then sometimes a longer path will have a lower cumulative weight. Therefore, we have two versions of this algorithm.

Example

The graph below has only positive edge weights. Using vertex A as the source vertex, the algorithm discovers that the shortest weighted path from A to B is A-D-B, with distance 8. The shortest weighted path from A to C is A-D-B-C with distance 9.

The graph below has both positive and negative edge weights. Using vertex A as the source vertex, the algorithm discovers that the shortest weighted path from A to E is A-D-C-B-E, with a cumulative score of 7 - 3 - 2 - 4 = -2.

Single-Pair Shortest Path

The Single-Pair Shortest Path task seeks the shortest path between a source vertex S and a target vertex T. If the edges are unweighted, then use the query in our tutorial document GSQL Demo Examples.

If the edges are weighted, then use the Single-Source Shortest Path algorithm. In the worst case, it takes the same computational effort to find the shortest path for one pair as to find the shortest paths for all pairs from the same source S. The reason is that you cannot know whether you have found the shortest (least weight) path until you have explored the full graph. If the weights are always positive, however, then a more efficient algorithm is possible. You can stop searching when you have found paths that use each of the in-edges to T.

All-Pairs Shortest Path

The All-Pairs Shortest Path (APSP) task seeks to find shortest paths between every pair of vertices in the entire graph. In principle, this task can be handled by running the Single-Source Shortest Path (SSSP) algorithm for each input vertex, e.g.,
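The sketch below illustrates the idea. It is not the library code: it assumes a graph named social with a Person vertex type and an installed single-source query named shortest_ss_any_wt_file(source, topK), similar to what the installer generates for the FILE output option.

    CREATE QUERY all_pairs_shortest() FOR GRAPH social {
      start = {Person.*};
      result = SELECT s FROM start:s
               POST-ACCUM shortest_ss_any_wt_file(s, 10);  // run the SSSP query once per source vertex
    }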

This example highlights one of the strengths of GSQL: treating queries as stored procedures which can be called from within other queries.

For large graphs (with millions of vertices or more), however, this is an enormous task. While the massively parallel processing of the TigerGraph platform can speed up the computation by 10x or 100x, consider what it takes just to store or report the results. If there are 1 million vertices, then there are nearly 1 trillion output values.

There are more efficient methods than calling the single-source shortest path algorithm n times, such as the Floyd-Warshall algorithm, which computes APSP in O(V^3) time.

Our recommendation:

  • If you have a smaller graph (perhaps thousands or tens of thousands of vertices), the APSP task may be tractable.

  • If you have a large graph, avoid using APSP.

Centrality Algorithms

Centrality algorithms determine the importance of each vertex within a network. Typical applications:

PageRank is designed for directed edges. The classic interpretation is to find the most "important" web pages, based on hyperlink referrals, but it can be used for any other network where entities make positive referrals to one another.

Closeness Centrality and Betweenness Centrality both deal with the idea of "centrally located."

PageRank

Description and Uses

The PageRank algorithm measures the influence of each vertex on every other vertex. PageRank influence is defined recursively: a vertex's influence is based on the influence of the vertices which refer to it. A vertex's influence tends to increase if (1) it has more referring vertices or if (2) its referring vertices have higher influence. The analogy to social influence is clear.

A common way of interpreting PageRank value is through the Random Network Surfer model. A vertex's pageRank score is proportional to the probability that a random network surfer will be at that vertex at any given time. A vertex with a high pageRank score is a vertex that is frequently visited, assuming that vertices are visited according to the following Random Surfer scheme:

  • Assume a person travels or surfs across a network's structure, moving from vertex to vertex in a long series of rounds.

  • The surfer can start anywhere. This start-anywhere property is part of the magic of PageRank, meaning the score is a truly fundamental property of the graph structure itself.

  • Each round, the surfer randomly picks one of the outward connections from the surfer's current location. The surfer repeats this random walk for a long time.

  • But wait. The surfer doesn't always follow the network's connection structure. There is a probability (1 - damping, to be precise) that the surfer will ignore the structure and magically teleport to a random vertex. The update rule sketched after this list captures the whole scheme.
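Putting the scheme above into equation form, each iteration updates every vertex's score roughly as follows, where d is the damping factor, the sum runs over the vertices u that have an edge pointing to v, and outdeg(u) is u's number of outgoing edges (a standard formulation; the library's exact update may differ in minor details):

    score(v) = (1 - d) + d \sum_{u \to v} \frac{score(u)}{outdeg(u)}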

Specifications

Example

We ran pageRank on our test10 graph (using Friend edges) with the following parameter values: damping=0.85, maxChange=0.001, and maxIter=25. We see that Ivy (center bottom) has the highest pageRank score (1.12). This makes sense, since there are 3 neighboring persons who point to Ivy, more than for any other person. Eddie and Justin have scores of exactly 1 because they do not have any out-edges. This is an artifact of our particular version of pageRank. Likewise, Alex has a score of 0.15, which is (1 - damping), because Alex has no in-edges.

Closeness Centrality

We all have an intuitive understanding of what we mean when we say a home, an office, or a store is "centrally located." Closeness Centrality provides a precise measure of how "centrally located" a vertex is. The steps below describe the computation for one vertex v.

These steps are repeated for every vertex in the graph.
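In its standard formulation (which this description assumes), the computation finds the shortest-path distance from v to every other reachable vertex, averages those distances, and takes the reciprocal, so a smaller average distance yields a higher centrality:

    closeness(v) = \frac{n - 1}{\sum_{u \neq v} dist(v, u)}

where dist(v, u) is the shortest-path distance from v to u and n is the number of vertices reachable from v (including v itself). The library's version may differ in details such as how unreachable vertices are handled.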

Specifications

Parameters

Example

Closeness centrality can be measured for either directed edges (from v to others) or for undirected edges. Directed graphs may seem less intuitive, however, because if the distance from Alex to Bob is 1, it does not mean the distance from Bob to Alex is also 1.

For our example, we wanted to use the topology of the Likes graph, but to have undirected edges. We emulated an undirected graph by using both Friend and Also_Friend (reverse direction) edges.

Community Algorithms

These algorithms evaluate how a group is clustered or partitioned, as well as its tendency to strengthen or break apart.

Connected Components

Description and Uses

A component is a maximal set of vertices, plus their connecting edges, which are interconnected; that is, you can reach each vertex from each other vertex. In the example figure below, there are three components.

This particular algorithm deals with undirected edges. If the same definition (each vertex can reach each other vertex) is applied to directed edges, then the components are called Strongly Connected Components. If you have directed edges but ignore the direction (permitting traversal in either direction), then the algorithm finds Weakly Connected Components.

Specifications

Example

It is easy to see in this small graph that the algorithm correctly groups the vertices:

  • Alex, Bob and Justin all have Community ID = 2097152

  • Chase, Damon, and Eddie all have Community ID = 5242880

  • Fiona, George, Howard, and Ivy all have Community ID = 0

Our algorithm uses the TigerGraph engine's internal vertex ID numbers as community IDs, so the specific values cannot be predicted in advance.

Label Propagation

Description and Uses

Label Propagation is a heuristic method for determining communities. The idea is simple: if a plurality of your neighbors bear the label X, then you should label yourself as also a member of X. The algorithm begins with each vertex having its own unique label. Then we iteratively update labels based on the neighbor influence described above. It is important that the order in which vertices are updated be random. The algorithm is favored for its efficiency and simplicity, but it is not guaranteed to produce the same results every time.
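In equation form, one update step assigns each vertex the label that is most common among its neighbors, with ties broken randomly:

    label(v) \leftarrow \arg\max_{\ell} \; |\{\, u \in N(v) : label(u) = \ell \,\}|

where N(v) is the set of v's neighbors.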

In a variant version, some vertices may initially be known to belong to the same community. If they are well-connected to one another, they are likely to preserve their common membership and influence their neighbors.

Specifications

Example

This is the same graph that was used in the Connected Component example. The results are different, though. The quartet of Fiona, George, Howard, and Ivy has been split into 2 groups. You can see the symmetry:

  • (George & Ivy) each connect to (Fiona & Howard) and to one another.

  • (Fiona & Howard) each connect to (George & Ivy) but not to one another.

Label Propagation tries to find natural clusters and separations within connected components. That is, it looks at the quality and pattern of connections. The Connected Component algorithm simply asks the Yes or No question: are these two vertices connected?

We set maxIter to 10, but the algorithm reached steady state after 3 iterations.

Louvain Modularity for Community Detection

Description and Uses

The modularity score for a partitioned graph assesses the difference between the density of links within a partition and the density of links crossing from one partition to another. The assumption is that in a good partitioning (that is, a good way of dividing the graph into communities or clusters), the within-partition density is high and the cross-partition density is low.
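For reference, the standard (Newman) modularity of a partition is

    Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)

where A_{ij} is the weight of the edge between vertices i and j (1 or 0 in an unweighted graph), k_i is the total weight of edges attached to i, m is the total edge weight in the graph, c_i is the community assigned to i, and \delta(c_i, c_j) is 1 when i and j are in the same community and 0 otherwise.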

Also, we use changes in modularity to guide optimization of the partitioning. That is, we begin with a candidate partitioning and measure its modularity. Then we make an incremental change and confirm that the modularity has improved.

The most efficient and empirically effective method for calculating modularity was published by a team of researchers at the University of Louvain. The Louvain method uses agglomeration and hierarchical optimization:

  1. Optimize modularity for small local communities.

  2. Treat each optimized local group as one unit, and repeat the modularity operation for groups of these condensed units.

Specifications

Example

The results are the same as those from the Label Propagation example. This is not surprising, as they have the same high-level goal: to find the natural communities in a graph. A larger and more complex graph would likely show some differences.

Triangle Counting

Description and Uses

Why triangles? Think of it in terms of a social network:

  • If A knows B, and A also knows C, then we complete the triangle if B knows C. If this situation is common, it indicates a community with a lot of interaction.

  • The triangle is in fact the smallest multi-edge "complete subgraph," where every vertex connects to every other vertex.

Triangle count (or density) is a measure of community and connectedness. In particular, it addresses the question of transitive relationships: If A--> B and B-->C, then what is the likelihood of A--> C?

Note that this algorithm computes a single number: how many triangles are in this graph? It does not find communities within a graph.

It is not common to count triangles in directed graphs, though it is certainly possible. If you choose to do so, you need to be very specific about the direction of interest: in a directed graph, if A-->B and B-->C, then

  • if A-->C, we have a "shortcut".

  • if C-->A, then we have a feedback loop.

Specifications

We present two different algorithms for counting triangles. The first, tri_count(), is the classic edge-iterator algorithm. For each edge and its two endpoint vertices S and T, count the overlap between S's neighbors and T's neighbors.

One side effect of the simple edge-iterator algorithm is that it ends up considering each of the three sides of a triangle. The count needs to be divided by 3, meaning we did 3 times more work than a smarter algorithm would have.
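In equation form, the simple edge-iterator count is

    triangles = \frac{1}{3} \sum_{(s,t) \in E} |N(s) \cap N(t)|

where N(x) is the set of x's neighbors: each triangle is discovered once from each of its three edges, hence the division by 3.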

tri_count_fast() is a smarter algorithm which does two passes over the edges. In the first pass we mark which of the two endpoint vertices has fewer neighbors. In the second pass, we count the overlap only between marked vertices. The result is that we eliminate 1/3 of the neighborhood matching, the slowest 1/3, but at the cost of some additional memory.

Example

In the social10 graph with Coworker edges, there are clearly 4 triangles.

Similarity Algorithms

There are many ways to measure the similarity between two vertices in a graph, but all of them compare either (1) the features of the vertices themselves, (2) the relationships of each of the two vertices, or (3) both. We use a graph called movie to demonstrate our similarity algorithms.

Cosine Similarity of Neighborhoods, Single Source

Description and Uses

To compare two vertices by cosine similarity, first selected properties of each vertex are represented as a vector. For example, a property vector for a Person vertex could have the elements (age, height, weight). Then the cosine function is applied to the two vectors.

The cosine similarity of two vectors A and B is defined as follows:
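Written out, with A \cdot B the dot product and \|A\| the Euclidean norm:

    \cos(A, B) = \frac{A \cdot B}{\|A\|\,\|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\;\sqrt{\sum_{i=1}^{n} B_i^2}}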

If A and B are identical, then cos(A, B) = 1. As expected for a cosine function, the value can also be negative or zero. In fact, cosine similarity is closely related to the Pearson correlation coefficient.

For this library function, each vertex's feature vector is the set of edge weights between that vertex and its neighbors.

In the movie graph shown in the figure below, there are Person vertices and Movie vertices. Each person may give a rating to some of the movies. The rating score is stored on the Likes edge using the weight attribute. For example, in the graph below, Alex gives a rating of 10 to the movie "Free Solo".

Specifications

The output size is always K (if K <= N), so the algorithm may arbitrarily choose to output one vertex over another if there are tied similarity scores.

Example

Given one person's name, this algorithm calculates the cosine similarity between this person and each other person with whom there is at least one movie they have both rated.

In the previous example, if the input is Alex, and topK is set to 5, then we calculate the cosine similarity between him and two other persons, Jing and Kevin. The JSON output shows the top k similar vertices and their similarity scores in descending order. The output limit is 5 persons, but we have only 2 qualifying persons:

The FILE version output is not necessarily in descending order. It looks like the following:

The ATTR version inserts an edge into the graph with the similarity score as an edge attribute whenever the score is larger than zero. The result looks like this:

Cosine Similarity of Neighborhoods, All Pairs

Description and Uses

This algorithm computes the same similarity scores as the cosine similarity of neighborhoods, single source algorithm (cosine_nbor_ss), except that it considers ALL pairs of vertices in the graph (for the vertex and edge types selected by the user). Naturally, this algorithm will take longer to run. For very large and very dense graphs, this may not be a practical choice.

Specifications

Example

Using the movie graph, calculate the cosine similarity between all pairs and show the top 2 similar vertices for each vertex: cosine_nbor_ap(2). This is the JSON result:

The FILE output of cosine_nbor_ap_file(2) is:

The ATTR version will create at most k edges for each vertex, where the "Similarity" edge should be a directed edge:

Jaccard Similarity of Neighborhoods, Single Source

Description and Uses

The Jaccard index measures the relative overlap between two sets. To compare two vertices by Jaccard similarity, first select a set of values for each vertex. For example, a set of values for a Person could be the cities the Person has lived in. Then the Jaccard index is computed for the two sets.

The Jaccard index of two sets A and B is defined as follows:
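Written out, where |X| denotes the number of elements in set X:

    Jaccard(A, B) = \frac{|A \cap B|}{|A \cup B|}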

The value ranges from 0 to 1. If A and B are identical, then Jaccard(A, B) = 1. If both A and B are empty, we define the value to be 0.

Specifications

In the current version, the algorithm will not output more than K vertices, so it may arbitrarily choose to output one vertex over another if there are tied similarity scores.

Example

Using the movie graph, we run jaccard_nbor_ss("Neil", 5):

If the source vertex (person) doesn't have any common neighbors (movies) with any other vertex (person), such as Elena in our example, the result will be an empty list:

Jaccard Similarity of Neighborhoods, All Pairs

Description and Uses

This algorithm computes the same similarity scores as the Jaccard similarity of neighborhoods, single source algorithm (jaccard_nbor_ss), except that it considers ALL pairs of vertices in the graph (for the vertex and edge types selected by the user). Naturally, this algorithm will take longer to run. For very large and very dense graphs, this algorithm may not be a practical choice.

Specifications

The algorithm will not output more than K vertex pairs, so it may arbitrarily choose to output one vertex pair over another if there are tied similarity scores.

Example

For the movie graph, calculate the Jaccard similarity and show the top 2 similar vertices for each vertex: jaccard_nbor_ap(2). This is the JSON output: