The TigerGraph system produces extensive and detailed logs about each of its components. Starting with TigerGraph 3.2, TigerGraph provides a gadmin
utility that allows users to easily view log files through an Elasticsearch, Kibana, and Filebeat setup. This page offers a step-by-step guide to setting up log viewing for all components in a TigerGraph cluster with Elasticsearch, Kibana, and Filebeat.
Install Elasticsearch on a machine that is running TigerGraph.
If you have a TigerGraph cluster, you only need to install Elasticsearch on one node.
Install Kibana on the same machine where you installed Elasticsearch.
If you have a TigerGraph cluster, you need to install Filebeat on all nodes in the cluster.
The default Elasticsearch settings only allow the Elasticsearch service to be accessed from the machine it runs on. To allow Elasticsearch to receive log files from other servers in the cluster, you need to make the following edits to the file /etc/elasticsearch/elasticsearch.yml:
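As a sketch, assuming the server's private IP is 10.128.0.97 (an example value; substitute your own), the relevant settings might look like this:

```yaml
# /etc/elasticsearch/elasticsearch.yml
# Bind to the private IP so other nodes in the cluster can reach Elasticsearch.
network.host: 10.128.0.97   # example value; use your server's private IP
http.port: 9200             # default HTTP port
# Binding to a non-loopback address switches Elasticsearch into production
# mode, which requires a discovery setting; a single Elasticsearch node can use:
discovery.type: single-node
```

After saving, restart the service, for example with sudo systemctl restart elasticsearch on systemd-based distributions.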
After editing the configurations, restart the Elasticsearch service.
Elasticsearch is a memory-intensive service. For more information on memory management for Elasticsearch, see Managing and Troubleshooting Elasticsearch Memory.
You need to make the following changes to the file at /etc/kibana/kibana.yml:
To allow remote access, change the value of server.host to the IP address or DNS name of the Kibana server. Since the Kibana server is on the same machine as Elasticsearch, this value should be the same private IP that you specified as Elasticsearch's network.host.

Additionally, you need to provide the address of the Elasticsearch server in the elasticsearch.hosts setting. By default, Elasticsearch is on port 9200, so the value for this setting should be ["server_private_ip:9200"].
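As a sketch, with the same example private IP as before (note that recent Kibana versions expect a full URL, including the http:// scheme, in elasticsearch.hosts):

```yaml
# /etc/kibana/kibana.yml
server.host: "10.128.0.97"                        # example private IP of this host
elasticsearch.hosts: ["http://10.128.0.97:9200"]  # same host, default Elasticsearch port
```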
After editing the configurations, restart the Kibana service.
Finally, we need to configure Filebeat to have each component on each node send its logs to the Elasticsearch server. To do so, run the following gadmin command:
The command outputs a Filebeat configuration file, filebeat.yml. The following options are available:

--host=<ip_list>
    Required. The list of IP addresses of the nodes whose logs you want to send to the Elasticsearch server.
    Example: --host=10.128.0.97,10.128.0.99,10.128.0.100

--from-beginning
    Optional. If this flag is provided, Filebeat will harvest all log files, including the oldest. If not included, Filebeat will only harvest the logs written since the most recent time each service started.

--path=<path_to_file>
    Optional. The path to output the configuration file. By default, the command outputs the configuration file filebeat.yml to the current directory.

--service=<service_list>
    Optional. The services you want Filebeat to monitor. By default, all services are included. Example: --service=

After generating the filebeat.yml file, copy it to the directory /etc/filebeat on every node, and restart the Filebeat service on each node. After the service restarts, you should be able to view the logs through Kibana's user interface in your browser at server_ip:5601.
TigerGraph Database captures key information about activity across its components through log functions that write to log files. These log files are helpful not only for troubleshooting but also serve as an audit resource. This document gives a high-level overview of TigerGraph's logging structure and lists common pieces of information you might need when monitoring your database services, along with where to find them in the logs.
Logs in TigerGraph are stored at <tigergraph_root_dir>/log/. The logs are divided into folders, one per internal component, and log formats vary across components. In folders whose logs are checked often, such as restpp, gsql, and admin, there are three symbolic links that help you quickly get to the most recent log file of each category:
log.INFO
    Contains regular output and errors.

log.ERROR
    Contains errors only.

<component_name>.out
    Contains all output from the component process.

log.WARNING or log.DEBUG
    log.WARNING contains warnings. In the gsql folder, log.DEBUG contains very specific information that you only need when certain errors happen.
Knowing where certain activities are recorded allows you to use tools such as the Linux grep command to easily obtain critical information from your database.
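For instance, here is a minimal sketch of filtering a log file with grep. The file path and log lines below are fabricated for illustration; real TigerGraph log entries differ in format:

```shell
# Create a small sample log file (contents are made up for this demo).
cat > /tmp/sample_log.INFO <<'EOF'
I0101 12:00:00 query "pageRank" finished in 42 ms
E0101 12:00:01 query "shortestPath" failed: timeout
EOF

# Keep only the lines that record a failure.
grep "failed" /tmp/sample_log.INFO
```

The same pattern applies to the real log directories: point grep at the log.INFO symlink of the component you are interested in.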
In a TigerGraph cluster, each node will only keep logs of activities that took place on the node itself. For example, the GSQL logs on the m1 node will only record events for m1 and are not replicated across the cluster.
For GSQL specifically, the cluster will elect a leader to which all GSQL requests will be forwarded. To check which node is the leader, start by checking the GSQL logs of the m1 node: check the most recent lines of log.INFO and look for lines containing information about a leader switch. For example, the logs below recorded a GSQL leader switch from m2 to m1:
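A hedged sketch of such a search follows. The log line below is fabricated, and the exact wording of leader-switch messages varies by version, so "leader" is used only as a plausible filter term:

```shell
# Fabricated GSQL log containing a leader-switch message.
cat > /tmp/gsql_log.INFO <<'EOF'
I@20240101 12:00:00 (LeaderElection) GSQL leader switched from m2 to m1
EOF

# Case-insensitive search for leader-related lines.
grep -i "leader" /tmp/gsql_log.INFO
```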
All requests made to TigerGraph's REST endpoints are recorded by the RESTPP logs and Nginx logs. Information available in the logs includes:
Timestamp of the request
API request parameters
Request status
User information (when RESTPP authentication is turned on)
RESTPP is responsible for many tasks in TigerGraph's internal architecture and records many internal API calls, which can be hard to distinguish from manual requests. When RESTPP authentication is on, the RESTPP log records user information and marks a call if it is made by an internal API. Therefore, you can use the command below to filter for manual requests:
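A sketch of such a filter, assuming internal calls are tagged with a marker string. The sample lines and the |internal| tag below are assumptions made for illustration; check your own log output for the exact marker your version uses:

```shell
# Fabricated RESTPP log: one internal call, one user-initiated call.
cat > /tmp/restpp_log.INFO <<'EOF'
RequestInfo|196608.RESTPP_1_1.1700000000000.N|/echo|internal|
RequestInfo|196609.RESTPP_1_1.1700000000001.N|/query/demo|user:tigergraph|
EOF

# Keep request lines, then drop those marked as internal API calls.
grep "RequestInfo" /tmp/restpp_log.INFO | grep -v "|internal|"
```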
RequestInfo contains the ID of the request, which you can use to look up more information on the request. Here is an example of using a request ID to look up a request in the restpp log:
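As a sketch, the lookup is a grep for the ID string. The request ID and log lines below are fabricated; real IDs come from the RequestInfo entries of your own log:

```shell
# Fabricated log entries for two different requests.
cat > /tmp/restpp_lookup.INFO <<'EOF'
196609.RESTPP_1_1.1700000000001.N received /query/demo
196609.RESTPP_1_1.1700000000001.N returned status 200
196610.RESTPP_1_1.1700000000002.N received /echo
EOF

# Pull every line recorded for one request by its ID.
grep "196609.RESTPP_1_1.1700000000001.N" /tmp/restpp_lookup.INFO
```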
User management activities, such as logins and role and privilege changes, are recorded in the GSQL logs in the folder gsql.
To view recent activities, use the symlink log.INFO. There is a lot of information in the logs; to filter for the information that you need, you can use Linux commands such as grep and tail. For example, to view recent changes in roles, you can run the following command in the gsql log directory:
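A minimal sketch of such a search follows. The log lines are fabricated, and the search term "role" is an assumption about how role changes are worded in your version's logs:

```shell
# Fabricated GSQL log with one role change and one login event.
cat > /tmp/gsql_admin_log.INFO <<'EOF'
I@20240101 09:00:00 User tigergraph granted role admin on graph social
I@20240101 09:05:00 Successful login user tigergraph
EOF

# Case-insensitive search for role-related lines.
grep -i "role" /tmp/gsql_admin_log.INFO
```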
To view login activities, search log.INFO for "login" instead.