Commit
Merge upstream changes
Sync Fork committed Sep 29, 2023
1 parent 8590470 commit 326143b
Showing 26 changed files with 851 additions and 11 deletions.
5 changes: 5 additions & 0 deletions .changeset/bright-worms-own.md
@@ -0,0 +1,5 @@
---
'@nhost/docs': patch
---

updated postgres and graphql documentation
5 changes: 5 additions & 0 deletions .changeset/unlucky-starfishes-share.md
@@ -0,0 +1,5 @@
---
'@nhost/docs': patch
---

docs: added storage/antivirus documentation
147 changes: 147 additions & 0 deletions docs/docs/database/extensions.mdx
@@ -0,0 +1,147 @@
---
title: 'Extensions'
sidebar_position: 4
---

## postgis

PostGIS extends the capabilities of the PostgreSQL relational database by adding support for storing, indexing, and querying geographic data.
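
As a quick sketch of the kind of queries this enables once the extension is installed (see below); the `places` table and the coordinates are assumed for illustration:

```sql
-- Hypothetical table storing points of interest
CREATE TABLE places (
  id serial PRIMARY KEY,
  name text NOT NULL,
  location geography(Point, 4326)
);

-- Spatial index to speed up proximity queries
CREATE INDEX places_location_idx ON places USING gist (location);

-- Find all places within 1 km of a given longitude/latitude
SELECT name
FROM places
WHERE ST_DWithin(
  location,
  ST_SetSRID(ST_MakePoint(-73.99, 40.73), 4326)::geography,
  1000
);
```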

### Managing

To install the extension, you can create a migration with the following contents:

```sql
SET ROLE postgres;
CREATE EXTENSION postgis;
```

To uninstall it, you can use the following migration:

```sql
SET ROLE postgres;
DROP EXTENSION postgis;
```

### Resources

* [Official website](https://postgis.net)

## pgvector

Open-source vector similarity search for Postgres. Store your vectors with the rest of your data. Supports:

* exact and approximate nearest neighbor search
* L2 distance, inner product, and cosine distance
* any language with a Postgres client

Plus ACID compliance, point-in-time recovery, JOINs, and all of the other great features of Postgres.
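
A minimal sketch of what this looks like in practice, assuming the extension is installed (see below); the `items` table and the vectors are made up for illustration:

```sql
-- Hypothetical table with a 3-dimensional embedding column
CREATE TABLE items (
  id bigserial PRIMARY KEY,
  embedding vector(3)
);

INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');

-- Five nearest neighbors by L2 distance (the <-> operator)
SELECT id
FROM items
ORDER BY embedding <-> '[2,3,4]'
LIMIT 5;
```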

### Managing

To install the extension, you can create a migration with the following contents:

```sql
SET ROLE postgres;
CREATE EXTENSION vector;
```

To uninstall it, you can use the following migration:

```sql
SET ROLE postgres;
DROP EXTENSION vector;
```

### Resources

* [GitHub](https://github.com/pgvector/pgvector)

## pg_cron

pg_cron is a simple cron-based job scheduler for PostgreSQL (10 or higher) that runs inside the database as an extension. It uses the same syntax as regular cron, but it allows you to schedule PostgreSQL commands directly from the database. You can also use '[1-59] seconds' to schedule a job based on an interval.
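
For example, once the extension is installed (see below), jobs are managed with plain SQL. The job names and the `heartbeats` table below are assumptions for illustration:

```sql
-- Run VACUUM every night at 03:00 using standard cron syntax
SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM');

-- Run a statement every 30 seconds using the interval syntax
SELECT cron.schedule('heartbeat', '30 seconds', $$INSERT INTO heartbeats DEFAULT VALUES$$);

-- Remove a job by name
SELECT cron.unschedule('nightly-vacuum');
```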

### Managing

To install the extension, you can create a migration with the following contents:

```sql
SET ROLE postgres;
CREATE EXTENSION pg_cron;
```

To uninstall it, you can use the following migration:

```sql
SET ROLE postgres;
DROP EXTENSION pg_cron;
```

### Resources

* [GitHub](https://github.com/citusdata/pg_cron)

## hypopg

HypoPG is a PostgreSQL extension adding support for hypothetical indexes.

A hypothetical, or virtual, index is an index that doesn't really exist and thus doesn't cost CPU, disk, or any other resource to create. Hypothetical indexes are useful for finding out whether specific indexes can improve problematic queries: you can check whether PostgreSQL would use them without having to spend the resources to create them.
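
A minimal sketch of the workflow, assuming the extension is installed (see below); the `orders` table and its column are made up for illustration:

```sql
-- Create a hypothetical index; it exists only in this session's memory
SELECT * FROM hypopg_create_index('CREATE INDEX ON orders (customer_id)');

-- Plain EXPLAIN (without ANALYZE) takes hypothetical indexes into account
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Drop all hypothetical indexes when done
SELECT hypopg_reset();
```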

### Managing

To install the extension, you can create a migration with the following contents:

```sql
SET ROLE postgres;
CREATE EXTENSION hypopg;
```

To uninstall it, you can use the following migration:

```sql
SET ROLE postgres;
DROP EXTENSION hypopg;
```

### Resources

* [GitHub](https://github.com/HypoPG/hypopg)
* [Documentation](https://hypopg.readthedocs.io)

## timescaledb

TimescaleDB is an open-source database designed to make SQL scalable for time-series data. It is engineered up from PostgreSQL and packaged as a PostgreSQL extension, providing automatic partitioning across time and space (partitioning key), as well as full SQL support.
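
For instance, once the extension is installed (see below), a regular table is turned into a partitioned hypertable with a single call; the `conditions` table here is an assumed example:

```sql
-- Hypothetical time-series table
CREATE TABLE conditions (
  time        timestamptz NOT NULL,
  device_id   text,
  temperature double precision
);

-- Convert it into a hypertable automatically partitioned by time
SELECT create_hypertable('conditions', 'time');
```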

### Managing

To install the extension, you can create a migration with the following contents:

```sql
SET ROLE postgres;
CREATE EXTENSION timescaledb;
```

To uninstall it, you can use the following migration:

```sql
SET ROLE postgres;
DROP EXTENSION timescaledb;
```

### Resources

* [GitHub](https://github.com/timescale/timescaledb)
* [Documentation](https://docs.timescale.com)
* [Website](https://www.timescale.com)

## pg_stat_statements

The pg_stat_statements module provides a means for tracking planning and execution statistics of all SQL statements executed by a server.

### Managing

Enabled by default.
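
Since the module is already enabled, you can query its view directly. A sketch, using the column names from PostgreSQL 13 and later:

```sql
-- Top ten statements by total execution time
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Reset the collected statistics
SELECT pg_stat_statements_reset();
```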

### Resources

* [Documentation](https://www.postgresql.org/docs/14/pgstatstatements.html)
84 changes: 84 additions & 0 deletions docs/docs/database/performance.mdx
@@ -0,0 +1,84 @@
---
title: 'Performance'
sidebar_position: 4
---

Ensuring a healthy and performant PostgreSQL service is crucial as it directly impacts the overall response time and stability of your backend. Since Postgres serves as the centerpiece of your backend, prioritize the optimization and maintenance of your Postgres service to achieve the desired performance and reliability.

In case your Postgres service is not meeting your performance expectations, you can explore the following options:

1. Consider upgrading your [dedicated compute](/platform/compute) resources to provide more processing power and memory to the Postgres server.

2. Fine-tune the configuration parameters of Postgres to optimize its performance. Adjust settings such as `shared_buffers`, `work_mem`, and `effective_cache_size` to better align with your workload and server resources.

3. Identify and analyze slow-performing queries using tools like query logs or query monitoring extensions. Optimize or rewrite these queries to improve their efficiency.

4. Evaluate the usage of indexes in your database. Identify queries that could benefit from additional indexes and strategically add them to improve query performance.

By implementing these steps, you can effectively address performance concerns and enhance the overall performance of your Postgres service.

## Upgrade to our latest postgres image

Before trying anything else, upgrade to our latest postgres image. You can find the available images in the dashboard, under your database settings.

## Upgrading dedicated compute

Increasing CPU and memory is the simplest way to address performance issues. You can read more about compute resources [here](/platform/compute).

## Fine-tune configuration parameters

When optimizing your Postgres setup, you can consider adjusting various Postgres settings. You can find a list of these parameters [here](/database/settings). Keep in mind that the optimal values for these parameters will depend on factors such as available resources, workload, and data distribution.

To help you get started, you can use [pgtune](https://pgtune.leopard.in.ua) as a reference tool. Pgtune can generate recommended configuration settings based on your system specifications. By providing information about your system, it can suggest parameter values that may be a good starting point for optimization.

However, it's important to note that the generated settings from pgtune are not guaranteed to be the best for your specific environment. It's always recommended to review and customize the suggested settings based on your particular requirements, performance testing, and ongoing monitoring of your Postgres database.

## Identifying slow queries

Monitoring slow queries is a highly effective method for tackling performance issues. Several tools leverage [pg_stat_statements](https://www.postgresql.org/docs/14/pgstatstatements.html), a PostgreSQL extension, to provide constant monitoring. You can employ these tools to identify and address slow queries in real-time.
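
You can also inspect the statistics by hand before reaching for a dedicated tool. A sketch of such a query, assuming the column names from PostgreSQL 13 and later:

```sql
-- Statements with the highest average execution time
SELECT query, calls, mean_exec_time, rows
FROM pg_stat_statements
WHERE calls > 10 -- ignore one-off statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```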

### pghero

[PgHero](https://github.com/ankane/pghero) is one such tool you can use to identify and address slow queries. You can easily run pghero alongside your postgres with [Nhost Run](/run):

1. Click on this [one-click install link](https://app.nhost.io:/run-one-click-install?config=eyJuYW1lIjoicGdoZXJvIiwiaW1hZ2UiOnsiaW1hZ2UiOiJkb2NrZXIuaW8vYW5rYW5lL3BnaGVybzpsYXRlc3QifSwiY29tbWFuZCI6W10sInJlc291cmNlcyI6eyJjb21wdXRlIjp7ImNwdSI6MTI1LCJtZW1vcnkiOjI1Nn0sInN0b3JhZ2UiOltdLCJyZXBsaWNhcyI6MX0sImVudmlyb25tZW50IjpbeyJuYW1lIjoiREFUQUJBU0VfVVJMIiwidmFsdWUiOiJwb3N0Z3JlczovL3Bvc3RncmVzOltQQVNTV09SRF1AcG9zdGdyZXMtc2VydmljZTo1NDMyL1tTVUJET01BSU5dP3NzbG1vZGU9ZGlzYWJsZSJ9LHsibmFtZSI6IlBHSEVST19VU0VSTkFNRSIsInZhbHVlIjoiW1VTRVJdIn0seyJuYW1lIjoiUEdIRVJPX1BBU1NXT1JEIiwidmFsdWUiOiJbUEFTU1dPUkRdIn1dLCJwb3J0cyI6W3sicG9ydCI6ODA4MCwidHlwZSI6Imh0dHAiLCJwdWJsaXNoIjp0cnVlfV19)
2. Select your project:
![select your project](/img/database/performance/pghero_01.png)
3. Replace the placeholders with your postgres password, your subdomain, and a username and password to protect your pghero service. Finally, click on create.
![fill run service details](/img/database/performance/pghero_02.png)
4. After confirming the service, copy the URL:
![run service details](/img/database/performance/pghero_03.png)

5. Finally, you can open the link you just copied to access pghero:

![pghero](/img/database/performance/pghero_04.png)


:::info
When you create a new service, it can take a few minutes for the DNS (Domain Name System) to propagate. If your browser displays an error stating that it couldn't find the server or website, simply wait for a couple of minutes and then try again.
:::

After successfully setting up pghero, it will begin displaying slow queries, suggesting index proposals, and offering other valuable information. Utilize this data to enhance your service's performance.

## Adding indexes

Indexes can significantly enhance the speed of data retrieval. However, it's essential to be aware that they introduce additional overhead during mutations. Therefore, understanding your workload is crucial before opting to add an index.
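
Once you have identified a candidate, the index can be added with a regular migration. A sketch, with an assumed table and column; note that `CONCURRENTLY` avoids blocking writes while the index builds, but it cannot run inside a transaction block, which can matter for migration tooling:

```sql
-- Build the index without locking out writes on the table
CREATE INDEX CONCURRENTLY orders_customer_id_idx
  ON orders (customer_id);
```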

There are tools you can use to help analyze your workload and detect missing indexes.

### pghero

[PgHero](https://github.com/ankane/pghero), in addition to helping with slow queries, can also help find missing and duplicate indexes. See the previous section on how to deploy pghero with [Nhost Run](/run).

### dexter

[Dexter](https://github.com/ankane/dexter) can leverage both [pg_stat_statements](https://www.postgresql.org/docs/14/pgstatstatements.html) and [hypopg](https://hypopg.readthedocs.io/en/rel1_stable/) to find and evaluate indexes. You can run dexter directly from your machine:

1. Enable [hypopg](/database/extensions#hypopg)
2. Execute the command `docker run --rm -it ankane/dexter [POSTGRES_CONN_STRING] --pg-stat-statements`

```text
$ docker run --rm -it ankane/dexter [POSTGRES_CONN_STRING] --pg-stat-statements
Processing 1631 new query fingerprints
No new indexes found
```
82 changes: 82 additions & 0 deletions docs/docs/database/settings.mdx
@@ -0,0 +1,82 @@
---
title: 'Settings'
sidebar_position: 3
---

import Tabs from '@theme/Tabs'
import TabItem from '@theme/TabItem'

Below you can find the official schema (CUE) and an example configuration for your postgres database:

<Tabs groupId="package-manager">
<TabItem value="schema" label="schema">

```cue
#Postgres: {
version: string | *"14.6-20230705-1"
// Resources for the service, optional
resources?: #Resources & {
replicas: 1
}
// postgres settings of the same name in camelCase, optional
settings?: {
maxConnections: int32 | *100
sharedBuffers: string | *"128MB"
effectiveCacheSize: string | *"4GB"
maintenanceWorkMem: string | *"64MB"
checkpointCompletionTarget: number | *0.9
walBuffers: int32 | *-1
defaultStatisticsTarget: int32 | *100
randomPageCost: number | *4.0
effectiveIOConcurrency: int32 | *1
workMem: string | *"4MB"
hugePages: string | *"try"
minWalSize: string | *"80MB"
maxWalSize: string | *"1GB"
maxWorkerProcesses: int32 | *8
maxParallelWorkersPerGather: int32 | *2
maxParallelWorkers: int32 | *8
maxParallelMaintenanceWorkers: int32 | *2
}
}
```
</TabItem>
<TabItem value="toml" label="toml" default>

```toml
[postgres]
version = '14.6-20230925-1'

[postgres.resources.compute]
cpu = 1000
memory = 2048

[postgres.settings]
maxConnections = 100
sharedBuffers = '256MB'
effectiveCacheSize = '768MB'
maintenanceWorkMem = '64MB'
checkpointCompletionTarget = 0.9
walBuffers = -1
defaultStatisticsTarget = 100
randomPageCost = 1.1
effectiveIOConcurrency = 200
workMem = '1310kB'
hugePages = 'off'
minWalSize = '80MB'
maxWalSize = '1GB'
maxWorkerProcesses = 8
maxParallelWorkersPerGather = 2
maxParallelWorkers = 8
maxParallelMaintenanceWorkers = 2
```

</TabItem>
</Tabs>


:::info
At the time of writing, postgres settings are only supported via the [configuration file](https://nhost.io/blog/config).
:::
