Observability

Observability allows measuring a system’s current state based on the data it generates, such as logs, metrics, and traces.

Logs

PostgREST logs basic request information to stdout, including the authenticated user if available, the requesting IP address and user agent, the URL requested, and HTTP response status.

127.0.0.1 - user [26/Jul/2021:01:56:38 -0500] "GET /clients HTTP/1.1" 200 - "" "curl/7.64.0"
127.0.0.1 - anonymous [26/Jul/2021:01:56:48 -0500] "GET /unexistent HTTP/1.1" 404 - "" "curl/7.64.0"

For diagnostic information about the server itself, PostgREST logs to stderr:

  • The full version of the connected PostgreSQL database.

  • Schema Cache statistics.

  • The messages received by the Listener.

06/May/2024:08:16:11 -0500: Starting PostgREST 12.1...
06/May/2024:08:16:11 -0500: Attempting to connect to the database...
06/May/2024:08:16:11 -0500: Successfully connected to PostgreSQL 14.10 (Ubuntu 14.10-0ubuntu0.22.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
06/May/2024:08:16:11 -0500: Listening on port 3000
06/May/2024:08:16:11 -0500: Listening for notifications on the "pgrst" channel
06/May/2024:08:16:11 -0500: Config reloaded
06/May/2024:08:16:11 -0500: Schema cache queried in 3.8 milliseconds
06/May/2024:08:16:11 -0500: Schema cache loaded 15 Relations, 8 Relationships, 8 Functions, 0 Domain Representations, 4 Media Type Handlers
06/May/2024:14:11:27 -0500: Received a config reload message on the "pgrst" channel
06/May/2024:14:11:27 -0500: Config reloaded

Database Logs

Currently PostgREST doesn’t log the SQL commands executed against the underlying database.

To find the SQL operations, you can watch the database logs. By default PostgreSQL does not keep these logs, so you’ll need to make the configuration changes below.

Find postgresql.conf inside your PostgreSQL data directory (to locate the data directory, run show data_directory; in psql). Either find the settings scattered throughout the file and change them to the following values, or append this block of settings to the end of the configuration file.
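For example, assuming you can connect as the postgres superuser:

psql -U postgres -c "show data_directory;"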

# send logs where the collector can access them
log_destination = 'stderr'

# collect stderr output to log files
logging_collector = on

# save logs in pg_log/ under the pg data directory
log_directory = 'pg_log'

# (optional) new log file per day
log_filename = 'postgresql-%Y-%m-%d.log'

# log every kind of SQL statement
log_statement = 'all'

Restart the database and watch the log file in real-time to understand how HTTP requests are being translated into SQL commands.
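A minimal sketch, assuming a systemd-managed PostgreSQL service and that $PGDATA points at the data directory (adjust the service name and path to your setup):

# restart so the new settings take effect
sudo systemctl restart postgresql

# follow today's log file (name matches the log_filename pattern above)
tail -f "$PGDATA/pg_log/postgresql-$(date +%F).log"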

Note

On Docker you can enable the logs by using a custom init.sh:

#!/bin/sh
echo "log_statement = 'all'" >> /var/lib/postgresql/data/postgresql.conf

After that you can start the container and check the logs with docker logs.

docker run -e POSTGRES_PASSWORD=mysecretpassword \
  -v "$(pwd)/init.sh":"/docker-entrypoint-initdb.d/init.sh" -d postgres
docker logs -f <container-id>

Metrics

The Admin Server exposes a metrics endpoint that provides metrics in Prometheus text format.
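The Admin Server is enabled by setting admin-server-port in the configuration, matching the port used below:

# postgrest.conf
admin-server-port = 3001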

curl "http://localhost:3001/metrics"

# HELP pgrst_schema_cache_query_time_seconds The query time in seconds of the last schema cache load
# TYPE pgrst_schema_cache_query_time_seconds gauge
pgrst_schema_cache_query_time_seconds 1.5937927e-2
# HELP pgrst_schema_cache_loads_total The total number of times the schema cache was loaded
# TYPE pgrst_schema_cache_loads_total counter
pgrst_schema_cache_loads_total 1.0
...

Schema Cache Metrics

Metrics related to the Schema Cache.

pgrst_schema_cache_query_time_seconds

  Type: Gauge

  The query time in seconds of the last schema cache load.

pgrst_schema_cache_loads_total

  Type: Counter
  Labels: status = SUCCESS | FAIL

  The total number of times the schema cache was loaded.

Connection Pool Metrics

Metrics related to the Connection Pool.

pgrst_db_pool_timeouts_total

  Type: Counter

  The total number of pool connection timeouts.

pgrst_db_pool_available

  Type: Gauge

  Available connections in the pool.

pgrst_db_pool_waiting

  Type: Gauge

  Requests waiting to acquire a pool connection.

pgrst_db_pool_max

  Type: Gauge

  Max pool connections.
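For example, to check only the pool metrics from the Admin Server:

curl -s "http://localhost:3001/metrics" | grep pgrst_db_pool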

Traces

Server Version Header

When debugging a problem it’s important to verify the running PostgREST version. For this you can look at the Server HTTP response header that is returned on every request.

HEAD /users HTTP/1.1

Server: postgrest/11.0.1
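For example, with curl, -I sends a HEAD request and prints the response headers:

curl -I "http://localhost:3000/users"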

Trace Header

You can enable tracing HTTP requests by setting server-trace-header. Send the configured header in the request and the server will include it in the response.

server-trace-header = "X-Request-Id"

curl "http://localhost:3000/users" \
  -H "X-Request-Id: 123"

HTTP/1.1 200 OK
X-Request-Id: 123

Server-Timing Header

You can enable the Server-Timing header by setting server-timing-enabled to true. This header communicates metrics of the different phases in the request-response cycle.
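For example, in postgrest.conf:

server-timing-enabled = true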

curl "http://localhost:3000/users" -i
HTTP/1.1 200 OK

Server-Timing: jwt;dur=14.9, parse;dur=71.1, plan;dur=109.0, transaction;dur=353.2, response;dur=4.4
  • All the durations (dur) are in milliseconds.

  • The jwt stage is when JWT-Based User Impersonation is done. This duration can be lowered with JWT Caching (see the configuration sketch after this list).

  • On the parse stage, the URL Grammar is parsed.

  • On the plan stage, the Schema Cache is used to generate the Main query of the transaction.

  • The transaction stage corresponds to the database transaction. See Transactions.

  • The response stage is where the response status and headers are computed.
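As a sketch, assuming a recent PostgREST version where the JWT cache is available, caching can be enabled with the jwt-cache-max-lifetime setting (the maximum cache entry lifetime in seconds):

# postgrest.conf
# assumes your PostgREST version supports jwt-cache-max-lifetime;
# cache validated JWTs for up to one hour
jwt-cache-max-lifetime = 3600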

Note

We’re working on lowering the duration of the parse and plan stages on https://github.com/PostgREST/postgrest/issues/2816.

Execution plan

You can get the EXPLAIN execution plan of a request by adding the Accept: application/vnd.pgrst.plan header. This is enabled by db-plan-enabled (false by default).

curl "http://localhost:3000/users?select=name&order=id" \
  -H "Accept: application/vnd.pgrst.plan"
Aggregate  (cost=73.65..73.68 rows=1 width=112)
  ->  Index Scan using users_pkey on users  (cost=0.15..60.90 rows=850 width=36)

The output of the plan is generated in text format by default but you can change it to JSON by using the +json suffix.

curl "http://localhost:3000/users?select=name&order=id" \
  -H "Accept: application/vnd.pgrst.plan+json"
[
  {
    "Plan": {
      "Node Type": "Aggregate",
      "Strategy": "Plain",
      "Partial Mode": "Simple",
      "Parallel Aware": false,
      "Async Capable": false,
      "Startup Cost": 73.65,
      "Total Cost": 73.68,
      "Plan Rows": 1,
      "Plan Width": 112,
      "Plans": [
        {
          "Node Type": "Index Scan",
          "Parent Relationship": "Outer",
          "Parallel Aware": false,
          "Async Capable": false,
          "Scan Direction": "Forward",
          "Index Name": "users_pkey",
          "Relation Name": "users",
          "Alias": "users",
          "Startup Cost": 0.15,
          "Total Cost": 60.90,
          "Plan Rows": 850,
          "Plan Width": 36
        }
      ]
    }
  }
]

By default the plan is assumed to generate the JSON representation of a resource (application/json), but you can obtain the plan for the different representations that PostgREST supports by adding them to the for parameter. For instance, to obtain the plan for a text/xml response, you would use Accept: application/vnd.pgrst.plan; for="text/xml".
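For example:

curl "http://localhost:3000/users?select=name" \
  -H 'Accept: application/vnd.pgrst.plan; for="text/xml"'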

The other available parameters are analyze, verbose, settings, buffers and wal, which correspond to the EXPLAIN command options. To use the analyze and wal parameters, for example, you would add them like Accept: application/vnd.pgrst.plan; options=analyze|wal.
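For example:

curl "http://localhost:3000/users?select=name" \
  -H 'Accept: application/vnd.pgrst.plan; options=analyze|wal'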

Note that, as with EXPLAIN ANALYZE, the statement is executed and its changes are committed when using the analyze option. To avoid this, you can use the db-tx-end setting and the Prefer: tx=rollback header.
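A sketch, assuming db-tx-end = "commit-allow-override" is set in the configuration so the preference is honored:

curl "http://localhost:3000/users?id=eq.1" -X DELETE \
  -H 'Accept: application/vnd.pgrst.plan; options=analyze' \
  -H "Prefer: tx=rollback"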

Securing the Execution Plan

It’s recommended to only activate db-plan-enabled in testing environments, since it reveals internal database details. However, if you choose to use it in production, you can add a db-pre-request function to filter the requests that can use this feature.

For example, to only allow requests from a specific IP address to get the execution plans:

-- Assuming a proxy (Nginx, Cloudflare, etc.) passes an "X-Forwarded-For" header (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For)
create or replace function filter_plan_requests()
returns void as $$
declare
  headers   json := current_setting('request.headers', true)::json;
  client_ip text := coalesce(headers->>'x-forwarded-for', '');
  accept    text := coalesce(headers->>'accept', '');
begin
  if accept like 'application/vnd.pgrst.plan%' and client_ip != '144.96.121.73' then
    raise insufficient_privilege using
      message = 'Not allowed to use application/vnd.pgrst.plan';
  end if;
end; $$ language plpgsql;

-- set this function on your postgrest.conf
-- db-pre-request = filter_plan_requests