diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 0d782474f9..bf7d4ec08e 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -2,5 +2,5 @@ When submitting a new feature or fix: - Add a new entry to the CHANGELOG - https://github.com/PostgREST/postgrest/blob/main/CHANGELOG.md#unreleased -- If relevant, update the docs - https://github.com/PostgREST/postgrest-docs +- If relevant, update the docs --> diff --git a/.github/workflows/docs.yaml b/.github/workflows/docs.yaml new file mode 100644 index 0000000000..f9bf2fd3a5 --- /dev/null +++ b/.github/workflows/docs.yaml @@ -0,0 +1,50 @@ +name: Docs + +on: + push: + branches: + - main + - rel-* + pull_request: + branches: + - main + - rel-* + +jobs: + build: + name: Build docs + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + - uses: cachix/install-nix-action@v22 + - run: nix-env -f docs/default.nix -iA build + - run: postgrest-docs-build + + spellcheck: + name: Run spellcheck + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + - uses: cachix/install-nix-action@v22 + - run: nix-env -f docs/default.nix -iA spellcheck + - run: postgrest-docs-spellcheck + + dictcheck: + name: Run dictcheck + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + - uses: cachix/install-nix-action@v22 + - run: nix-env -f docs/default.nix -iA dictcheck + - run: postgrest-docs-dictcheck + + linkcheck: + name: Run linkcheck + if: github.base_ref == 'main' + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + - uses: cachix/install-nix-action@v22 + - run: nix-env -f docs/default.nix -iA linkcheck + - run: postgrest-docs-linkcheck + diff --git a/.readthedocs.yaml b/.readthedocs.yaml new file mode 100644 index 0000000000..e7e973dcee --- /dev/null +++ b/.readthedocs.yaml @@ -0,0 +1,6 @@ +version: 2 +sphinx: + configuration: docs/conf.py +python: + install: + - requirements: docs/requirements.txt diff --git a/CONTRIBUTING.md 
b/CONTRIBUTING.md
new file mode 100644
index 0000000000..be92f4d7a9
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,3 @@
+This repository follows the same contribution guidelines as the main PostgREST repository:
+
+https://github.com/PostgREST/postgrest/blob/main/.github/CONTRIBUTING.md
diff --git a/docs/.gitignore b/docs/.gitignore
new file mode 100644
index 0000000000..24f3c27992
--- /dev/null
+++ b/docs/.gitignore
@@ -0,0 +1,6 @@
+_build
+Pipfile.lock
+*.aux
+*.log
+_diagrams/db.pdf
+misspellings
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000000..29050fb9a6
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,20 @@
+# PostgREST documentation https://postgrest.org/
+
+PostgREST docs use the reStructuredText format; check this [cheatsheet](https://github.com/ralsina/rst-cheatsheet/blob/master/rst-cheatsheet.rst) to get acquainted with it.
+
+To build the docs locally, use [nix](https://nixos.org/nix/):
+
+```bash
+ nix-shell
+```
+
+Once in the nix-shell you have the following commands available:
+
+- `postgrest-docs-build`: Build the docs.
+- `postgrest-docs-serve`: Build the docs and start a livereload server on `http://localhost:5500`.
+- `postgrest-docs-spellcheck`: Run aspell.
+
+## Documentation structure
+
+This documentation is structured according to tutorials-howtos-topics-references. For more details on the rationale of this structure,
+see https://www.divio.com/blog/documentation.
diff --git a/docs/_diagrams/README.md b/docs/_diagrams/README.md
new file mode 100644
index 0000000000..408e767f27
--- /dev/null
+++ b/docs/_diagrams/README.md
@@ -0,0 +1,40 @@
+## ERD
+
+The ER diagrams were created with https://github.com/BurntSushi/erd/.
+
+You can download erd from https://github.com/BurntSushi/erd/releases and then do:
+
+```bash
+./erd_static-x86-64 -i film.er -o ../_static/film.png
+```
+
+## LaTeX
+
+The schema structure diagram is done with LaTeX.
You can use a GUI like https://www.mathcha.io/editor to create the .tex file. + +Then use this command to generate the png file. + +```bash +pdflatex --shell-escape -halt-on-error db.tex + +## and move it to the static folder(it's not easy to do it in one go with the pdflatex) +mv db.png ../_static/ +``` + +LaTeX is used because it's a tweakable plain text format. + +You can install the full latex suite with `nix`: + +``` +nix-env -iA texlive.combined.scheme-full +``` + +To tweak the file with a live reload environment use: + +```bash +# open the pdf(zathura used as an example) +zathura db.pdf & + +# live reload with entr +echo db.tex | entr pdflatex --shell-escape -halt-on-error db.tex +``` diff --git a/docs/_diagrams/db.tex b/docs/_diagrams/db.tex new file mode 100644 index 0000000000..f4580f8915 --- /dev/null +++ b/docs/_diagrams/db.tex @@ -0,0 +1,71 @@ +\documentclass[convert]{standalone} +\usepackage{amsmath} +\usepackage{tikz} +\usepackage{mathdots} +\usepackage{yhmath} +\usepackage{cancel} +\usepackage{color} +\usepackage{siunitx} +\usepackage{array} +\usepackage{multirow} +\usepackage{amssymb} +\usepackage{gensymb} +\usepackage{tabularx} +\usepackage{booktabs} +\usetikzlibrary{fadings} +\usetikzlibrary{patterns} +\usetikzlibrary{shadows.blur} +\usetikzlibrary{shapes} + +\begin{document} + +\newcommand\customScale{0.35} + +\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1, scale=\customScale, every node/.style={scale=\customScale}] + +%Shape: Can [id:dp7234864758664346] +\draw [fill={rgb, 255:red, 47; green, 97; blue, 144 } ,fill opacity=1 ] (497.5,51.5) -- (497.5,255.5) .. controls (497.5,275.66) and (423.18,292) .. (331.5,292) .. controls (239.82,292) and (165.5,275.66) .. (165.5,255.5) -- (165.5,51.5) .. controls (165.5,31.34) and (239.82,15) .. (331.5,15) .. controls (423.18,15) and (497.5,31.34) .. (497.5,51.5) .. controls (497.5,71.66) and (423.18,88) .. (331.5,88) .. controls (239.82,88) and (165.5,71.66) .. 
(165.5,51.5) ; +%Shape: Rectangle [id:dp7384065579958246] +\draw [fill={rgb, 255:red, 236; green, 227; blue, 227 } ,fill opacity=1 ] (189,115) -- (252.5,115) -- (252.5,155) -- (189,155) -- cycle ; +%Shape: Rectangle [id:dp24763906430298177] +\draw [fill={rgb, 255:red, 236; green, 227; blue, 227 } ,fill opacity=1 ] (292,118) -- (362,118) -- (362,158) -- (292,158) -- cycle ; +%Shape: Rectangle [id:dp3775601612537265] +\draw [fill={rgb, 255:red, 236; green, 227; blue, 227 } ,fill opacity=1 ] (397,114) -- (467,114) -- (467,154) -- (397,154) -- cycle ; +%Shape: Rectangle [id:dp7071457022893852] +\draw [fill={rgb, 255:red, 248; green, 231; blue, 28 } ,fill opacity=1 ] (269,199) -- (397.5,199) -- (397.5,273) -- (269,273) -- cycle ; +%Straight Lines [id:da8846759047437789] +\draw (268,234) -- (226.44,155.77) ; +\draw [shift={(225.5,154)}, rotate = 422.02] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; +%Straight Lines [id:da6908444738113828] +\draw (309.5,198) -- (307.6,161) ; +\draw [shift={(307.5,159)}, rotate = 447.06] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; +%Straight Lines [id:da7168757864413169] +\draw (398.5,233) -- (431.72,154.84) ; +\draw [shift={(432.5,153)}, rotate = 473.03] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. 
(10.93,3.29) ; +%Up Down Arrow [id:dp14059754167108496] +\draw [fill={rgb, 255:red, 126; green, 211; blue, 33 } ,fill opacity=1 ] (312.5,288.5) -- (330,273) -- (347.5,288.5) -- (338.75,288.5) -- (338.75,319.5) -- (347.5,319.5) -- (330,335) -- (312.5,319.5) -- (321.25,319.5) -- (321.25,288.5) -- cycle ; + +% Text Node +\draw (201,129) node [anchor=north west][inner sep=0.75pt] [align=left] {tables}; +% Text Node +\draw (307,130) node [anchor=north west][inner sep=0.75pt] [align=left ] {tables}; +% Text Node +\draw (414,127) node [anchor=north west][inner sep=0.75pt] [align=left] {tables}; +% Text Node +\draw (272,203) node [anchor=north west][inner sep=0.75pt] [color={rgb, 255:red, 0; green, 0; blue, 0 } ,opacity=1 ] [align=center] { \\ views \\ + \\ \ \ stored procedures}; + +% Text Node +\draw (322,178) node [anchor=north west][inner sep=0.75pt] [color={rgb, 255:red, 255; green, 255; blue, 255 } ,opacity=1 ] [align=left] {\large\textbf{api}}; +% Text Node +\draw (190,97) node [anchor=north west][inner sep=0.75pt] [color={rgb, 255:red, 255; green, 255; blue, 255 } ,opacity=1 ] [align=left] {\large\textbf{internal}}; +% Text Node +\draw (300,99) node [anchor=north west][inner sep=0.75pt] [color={rgb, 255:red, 255; green, 255; blue, 255 } ,opacity=1 ] [align=left] {\large\textbf{private}}; +% Text Node +\draw (417,101) node [anchor=north west][inner sep=0.75pt] [color={rgb, 255:red, 255; green, 255; blue, 255 } ,opacity=1 ] [align=left] {\large\textbf{core}}; +% Text Node +\draw (358,306) node [anchor=north west][inner sep=0.75pt] [align=left] {REST}; + +\end{tikzpicture} + + +\end{document} diff --git a/docs/_diagrams/film.er b/docs/_diagrams/film.er new file mode 100644 index 0000000000..d19fdf61b5 --- /dev/null +++ b/docs/_diagrams/film.er @@ -0,0 +1,40 @@ +[Films] +*id ++director_id +title +year +rating +language + +[Directors] +*id +first_name +last_name + +[Actors] +*id +first_name +last_name + +[Roles] +*+film_id +*+actor_id +character + +[Competitions] +*id 
+name +year + +[Nominations] +*+competition_id +*+film_id +rank + +Roles *--1 Actors +Roles *--1 Films + +Nominations *--1 Competitions +Nominations *--1 Films + +Films *--1 Directors diff --git a/docs/_diagrams/orders.er b/docs/_diagrams/orders.er new file mode 100644 index 0000000000..bdd93de2ef --- /dev/null +++ b/docs/_diagrams/orders.er @@ -0,0 +1,15 @@ +[Addresses] +*id +name +city +state +postal_code + +[Orders] +*id +name ++billing_address_id ++shipping_address_id + +Orders *--1 Addresses +Orders *--1 Addresses diff --git a/docs/_static/2ndquadrant.png b/docs/_static/2ndquadrant.png new file mode 100644 index 0000000000..3b6a755891 Binary files /dev/null and b/docs/_static/2ndquadrant.png differ diff --git a/docs/_static/css/custom.css b/docs/_static/css/custom.css new file mode 100644 index 0000000000..fc7f2edb68 --- /dev/null +++ b/docs/_static/css/custom.css @@ -0,0 +1,67 @@ +.wy-nav-content { + max-width: initial; +} + +#postgrest-documentation > h1 { + display: none; +} + +div.wy-menu.rst-pro { + display: none !important; +} + +div.highlight { + background: #fff !important; +} + +div.line-block { + margin-bottom: 0px !important; +} + +#sponsors { + text-align: center; +} + +#sponsors h2 { + text-align: left; +} + +#sponsors img{ + margin: 10px; +} + +#thanks{ + text-align: center; +} + +#thanks img{ + margin: 10px; +} + +#thanks h2{ + text-align: left; +} + +#thanks p{ + text-align: left; +} + +#thanks ul{ + text-align: left; +} + +.image-container { + max-width: 800px; + display: block; + margin-left: auto; + margin-right: auto; + margin-bottom: 24px; +} + +.wy-table-responsive table td { + white-space: normal !important; +} + +.wy-table-responsive { + overflow: visible !important; +} diff --git a/docs/_static/cybertec-new.png b/docs/_static/cybertec-new.png new file mode 100644 index 0000000000..15ec8d4abc Binary files /dev/null and b/docs/_static/cybertec-new.png differ diff --git a/docs/_static/cybertec.png b/docs/_static/cybertec.png new file mode 
100644 index 0000000000..4bb395027f Binary files /dev/null and b/docs/_static/cybertec.png differ diff --git a/docs/_static/db.png b/docs/_static/db.png new file mode 100644 index 0000000000..a3dd2d85ab Binary files /dev/null and b/docs/_static/db.png differ diff --git a/docs/_static/empty.png b/docs/_static/empty.png new file mode 100644 index 0000000000..99fabe47fd Binary files /dev/null and b/docs/_static/empty.png differ diff --git a/docs/_static/favicon.ico b/docs/_static/favicon.ico new file mode 100644 index 0000000000..a9e16d3a8b Binary files /dev/null and b/docs/_static/favicon.ico differ diff --git a/docs/_static/film.png b/docs/_static/film.png new file mode 100644 index 0000000000..99843b8709 Binary files /dev/null and b/docs/_static/film.png differ diff --git a/docs/_static/gnuhost.png b/docs/_static/gnuhost.png new file mode 100644 index 0000000000..79a4c9d728 Binary files /dev/null and b/docs/_static/gnuhost.png differ diff --git a/docs/_static/logo.png b/docs/_static/logo.png new file mode 100644 index 0000000000..7d23fffc4a Binary files /dev/null and b/docs/_static/logo.png differ diff --git a/docs/_static/oblivious.jpg b/docs/_static/oblivious.jpg new file mode 100644 index 0000000000..955e1a57d3 Binary files /dev/null and b/docs/_static/oblivious.jpg differ diff --git a/docs/_static/orders.png b/docs/_static/orders.png new file mode 100644 index 0000000000..db709c873d Binary files /dev/null and b/docs/_static/orders.png differ diff --git a/docs/_static/retool.png b/docs/_static/retool.png new file mode 100644 index 0000000000..abf26a1eff Binary files /dev/null and b/docs/_static/retool.png differ diff --git a/docs/_static/security-anon-choice.png b/docs/_static/security-anon-choice.png new file mode 100644 index 0000000000..ea02a237e6 Binary files /dev/null and b/docs/_static/security-anon-choice.png differ diff --git a/docs/_static/security-roles.png b/docs/_static/security-roles.png new file mode 100644 index 0000000000..f45ba8e9e1 Binary files 
/dev/null and b/docs/_static/security-roles.png differ diff --git a/docs/_static/supabase.png b/docs/_static/supabase.png new file mode 100644 index 0000000000..9c3686bd04 Binary files /dev/null and b/docs/_static/supabase.png differ diff --git a/docs/_static/timescaledb.png b/docs/_static/timescaledb.png new file mode 100644 index 0000000000..d6403efa4c Binary files /dev/null and b/docs/_static/timescaledb.png differ diff --git a/docs/_static/tuts/tut0-request-flow.png b/docs/_static/tuts/tut0-request-flow.png new file mode 100644 index 0000000000..24f2986e81 Binary files /dev/null and b/docs/_static/tuts/tut0-request-flow.png differ diff --git a/docs/_static/tuts/tut1-jwt-io.png b/docs/_static/tuts/tut1-jwt-io.png new file mode 100644 index 0000000000..488b87f103 Binary files /dev/null and b/docs/_static/tuts/tut1-jwt-io.png differ diff --git a/docs/_static/win-err-dialog.png b/docs/_static/win-err-dialog.png new file mode 100644 index 0000000000..e60a71c450 Binary files /dev/null and b/docs/_static/win-err-dialog.png differ diff --git a/docs/admin.rst b/docs/admin.rst new file mode 100644 index 0000000000..36b9e5e6f2 --- /dev/null +++ b/docs/admin.rst @@ -0,0 +1,385 @@ +.. _admin: + +Hardening PostgREST +=================== + +PostgREST is a fast way to construct a RESTful API. Its default behavior is great for scaffolding in development. When it's time to go to production it works great too, as long as you take precautions. PostgREST is a small sharp tool that focuses on performing the API-to-database mapping. We rely on a reverse proxy like Nginx for additional safeguards. + +The first step is to create an Nginx configuration file that proxies requests to an underlying PostgREST server. + +.. code-block:: nginx + + http { + # ... + # upstream configuration + upstream postgrest { + server localhost:3000; + } + # ... + server { + # ... 
+      # expose to the outside world
+      location /api/ {
+        default_type application/json;
+        proxy_hide_header Content-Location;
+        add_header Content-Location /api/$upstream_http_content_location;
+        proxy_set_header Connection "";
+        proxy_http_version 1.1;
+        proxy_pass http://postgrest/;
+      }
+      # ...
+    }
+  }
+
+.. note::
+
+  For Ubuntu, if you already installed Nginx through :code:`apt`, you can add this to the config file in
+  :code:`/etc/nginx/sites-enabled/default`.
+
+.. _block_fulltable:
+
+Block Full-Table Operations
+---------------------------
+
+Each table in the admin-selected schema gets exposed as a top-level route. Client requests are executed by certain database roles depending on their authentication. All HTTP verbs that correspond to actions permitted to the role are supported. For instance, if the active role can drop rows of the table, then the DELETE verb is allowed for clients. Here's an API request to delete old rows from a hypothetical logs table:
+
+.. tabs::
+
+  .. code-tab:: http
+
+    DELETE /logs?time=lt.1991-08-06 HTTP/1.1
+
+  .. code-tab:: bash Curl
+
+    curl "http://localhost:3000/logs?time=lt.1991-08-06" -X DELETE
+
+However, it's very easy to delete the **entire table** by omitting the query parameter!
+
+.. tabs::
+
+  .. code-tab:: http
+
+    DELETE /logs HTTP/1.1
+
+  .. code-tab:: bash Curl
+
+    curl "http://localhost:3000/logs" -X DELETE
+
+This can happen accidentally, such as by switching a request from a GET to a DELETE. To protect against accidental operations, use the `pg-safeupdate `_ PostgreSQL extension. It raises an error if UPDATE or DELETE are executed without specifying conditions. To install it you can use the `PGXN `_ network:
+
+.. code-block:: bash
+
+  sudo -E pgxn install safeupdate
+
+  # then add this to postgresql.conf:
+  # shared_preload_libraries='safeupdate';
+
+This does not protect against malicious actions, since someone can add a URL parameter that does not affect the result set.
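+For instance (an illustrative request — the filter is hypothetical), a tautological condition such as :code:`not.is.null` matches every row, so the safeupdate check passes while the whole table is still emptied:
+
+.. code-block:: http
+
+  DELETE /logs?time=not.is.null HTTP/1.1
+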
To prevent this you must turn to database permissions, forbidding the wrong people from deleting rows, and using `row-level security `_ if finer access control is required.
+
+Count-Header DoS
+----------------
+
+For the convenience of client-side pagination controls, PostgREST supports counting and reporting total table size in its response. As described in :ref:`limits`, responses ordinarily include a range but leave the total unspecified, like
+
+.. code-block:: http
+
+  HTTP/1.1 200 OK
+  Range-Unit: items
+  Content-Range: 0-14/*
+
+However, including the request header :code:`Prefer: count=exact` calculates and includes the full count:
+
+.. code-block:: http
+
+  HTTP/1.1 206 Partial Content
+  Range-Unit: items
+  Content-Range: 0-14/3573458
+
+This is fine for small tables, but count performance degrades in big tables due to the MVCC architecture of PostgreSQL. For very large tables it can take a very long time to retrieve the results, which allows a denial-of-service attack. The solution is to strip this header from all requests:
+
+.. code-block:: nginx
+
+  # Pending nginx config: remove any Prefer header which contains the word count
+
+.. _https:
+
+HTTPS
+-----
+
+PostgREST aims to do one thing well: add an HTTP interface to a PostgreSQL database. To keep the code small and focused we do not implement HTTPS. Use a reverse proxy such as NGINX to add this, `here's how `_. Note that some Platforms as a Service like Heroku also add SSL automatically in their load balancer.
+
+Rate Limiting
+-------------
+
+Nginx supports "leaky bucket" rate limiting (see `official docs `_). Using standard Nginx configuration, routes can be grouped into *request zones* for rate limiting. For instance, we can define a zone for login attempts:
+
+.. code-block:: nginx
+
+  limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
+
+This creates a shared memory zone called "login" to store a log of IP addresses that access the rate-limited URLs. The space reserved, 10 MB (:code:`10m`), will give us enough space to store a history of 160k requests. We have chosen to allow only one request per second (:code:`1r/s`).
+
+Next we apply the zone to certain routes, like a hypothetical stored procedure called :code:`login`.
+
+.. code-block:: nginx
+
+  location /rpc/login/ {
+    # apply rate limiting
+    limit_req zone=login burst=5;
+  }
+
+The burst argument tells Nginx to start dropping requests if more than five queue up from a specific IP.
+
+Nginx rate limiting is general and indiscriminate. To rate limit each authenticated request individually you will need to add logic in a :ref:`Custom Validation ` function.
+
+.. _external_connection_poolers:
+
+Using External Connection Poolers
+---------------------------------
+
+PostgREST manages its :ref:`own pool of connections ` and uses prepared statements by default in order to increase performance. However, this default is incompatible with external connection poolers such as PgBouncer working in transaction pooling mode. In this case, you need to set the :ref:`db-prepared-statements` config option to ``false``. On the other hand, session pooling is fully compatible with PostgREST, while statement pooling is not compatible at all.
+
+.. note::
+
+  If prepared statements are enabled, PostgREST will quit after detecting that transaction or statement pooling is being used.
+
+You should also set the :ref:`db-channel-enabled` config option to ``false``, because the ``LISTEN`` command is not compatible with transaction pooling, although it should not give any errors if it's left enabled by default.
+
+Debugging
+=========
+
+Server Version
+--------------
+
+When debugging a problem it's important to verify the PostgREST version. At any time you can make a request to the running server and determine exactly which version is deployed. Look for the :code:`Server` HTTP response header, which contains the version number.
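+For instance (an illustrative exchange — the version string shown is a placeholder), a :code:`HEAD` request is enough to read the header:
+
+.. code-block:: http
+
+  HEAD / HTTP/1.1
+
+  HTTP/1.1 200 OK
+  Server: postgrest/<version>
+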
+
+Errors
+------
+
+See the :doc:`Errors ` reference page for detailed information on the errors that PostgREST returns.
+
+.. _pgrst_logging:
+
+Logging
+-------
+
+PostgREST logs basic request information to ``stdout``, including the authenticated user if available, the requesting IP address and user agent, the URL requested, and HTTP response status.
+
+.. code::
+
+  127.0.0.1 - user [26/Jul/2021:01:56:38 -0500] "GET /clients HTTP/1.1" 200 - "" "curl/7.64.0"
+  127.0.0.1 - anonymous [26/Jul/2021:01:56:48 -0500] "GET /unexistent HTTP/1.1" 404 - "" "curl/7.64.0"
+
+For diagnostic information about the server itself, PostgREST logs to ``stderr``.
+
+.. code::
+
+  12/Jun/2021:17:47:39 -0500: Attempting to connect to the database...
+  12/Jun/2021:17:47:39 -0500: Listening on port 3000
+  12/Jun/2021:17:47:39 -0500: Connection successful
+  12/Jun/2021:17:47:39 -0500: Config re-loaded
+  12/Jun/2021:17:47:40 -0500: Schema cache loaded
+
+.. note::
+
+  When running it in an SSH session you must detach it from stdout or it will be terminated when the session closes. The easiest technique is redirecting the output to a log file or to the syslog:
+
+  .. code-block:: bash
+
+    ssh foo@example.com \
+      'postgrest foo.conf > /var/log/postgrest.log 2>&1 &'
+
+    # another option is to pipe the output into "logger -t postgrest"
+
+PostgREST logging provides limited information for debugging server errors. It's helpful to get full information about both client requests and the corresponding SQL commands executed against the underlying database.
+
+HTTP Requests
+-------------
+
+A great way to inspect incoming HTTP requests including headers and query parameters is to sniff the network traffic on the port where PostgREST is running. For instance, on a development server bound to port 3000 on localhost, run this:
+
+.. code:: bash
+
+  # sudo access is necessary for watching the network
+  sudo ngrep -d lo0 port 3000
+
+The options to ngrep vary depending on the address and host on which you've bound the server. The binding is described in the :ref:`configuration` section. The ngrep output isn't particularly pretty, but it's legible.
+
+.. _automatic_recovery:
+
+Automatic Connection Recovery
+-----------------------------
+
+When PostgREST loses the connection to the database, it retries the connection using capped exponential backoff, with 32 seconds being the maximum backoff time.
+
+This retry behavior is triggered immediately after the connection is lost if :ref:`db-channel-enabled` is set to true (the default), otherwise it will be activated once a request is made.
+
+To let the client know when the next reconnection attempt will happen, PostgREST responds with ``503 Service Unavailable`` and the ``Retry-After: x`` header, where ``x`` is the number of seconds programmed for the next retry.
+
+Database Logs
+-------------
+
+Once you've verified that requests are as you expect, you can get more information about the server operations by watching the database logs. By default PostgreSQL does not keep these logs, so you'll need to make the configuration changes below. Find :code:`postgresql.conf` inside your PostgreSQL data directory (to find that, issue the command :code:`show data_directory;`). Either find the settings scattered throughout the file and change them to the following values, or append this block of code to the end of the configuration file.
+
+.. code:: ini
+
+  # send logs where the collector can access them
+  log_destination = "stderr"
+
+  # collect stderr output to log files
+  logging_collector = on
+
+  # save logs in pg_log/ under the pg data directory
+  log_directory = "pg_log"
+
+  # (optional) new log file per day
+  log_filename = "postgresql-%Y-%m-%d.log"
+
+  # log every kind of SQL statement
+  log_statement = "all"
+
+Restart the database and watch the log file in real-time to understand how HTTP requests are being translated into SQL commands.
+
+.. note::
+
+  On Docker you can enable the logs by using a custom ``init.sh``:
+
+  .. code:: bash
+
+    #!/bin/sh
+    echo "log_statement = 'all'" >> /var/lib/postgresql/data/postgresql.conf
+
+  After that you can start the container and check the logs with ``docker logs``.
+
+  .. code:: bash
+
+    docker run -v "$(pwd)/init.sh":"/docker-entrypoint-initdb.d/init.sh" -d postgres
+    docker logs -f <container>
+
+Schema Reloading
+----------------
+
+Changing the schema while the server is running can lead to errors due to a stale schema cache. To learn how to refresh the cache see :ref:`schema_reloading`.
+
+.. _health_check:
+
+Health Check
+------------
+
+You can enable a minimal health check to verify if PostgREST is available for client requests and to check the status of its internal state.
+
+To do this, set the configuration variable :ref:`admin-server-port` to the port number of your preference. Two endpoints, ``live`` and ``ready``, will then be available.
+
+The ``live`` endpoint verifies if PostgREST is running on its configured port. A request will return ``200 OK`` if PostgREST is alive or ``503`` otherwise.
+
+The ``ready`` endpoint also checks the state of both the database connection and the :ref:`schema_cache`. A request will return ``200 OK`` if it is ready or ``503`` if not.
+
+For instance, to verify if PostgREST is running at ``localhost:3000`` while the ``admin-server-port`` is set to ``3001``:
+
+.. tabs::
+
+  .. code-tab:: http
+
+    GET localhost:3001/live HTTP/1.1
+
+  .. code-tab:: bash Curl
+
+    curl -I "http://localhost:3001/live"
+
+.. code-block:: http
+
+  HTTP/1.1 200 OK
+
+If you have a machine with multiple network interfaces and multiple PostgREST instances on the same port, you need to specify a unique :ref:`hostname ` in the configuration of each PostgREST instance for the health check to work correctly. Don't use the special values (``!4``, ``*``, etc.) in this case because the health check could report a false positive.
+
+Daemonizing
+===========
+
+For Linux distributions that use **systemd** (Ubuntu, Debian, Arch Linux) you can create a daemon in the following way.
+
+First, create the PostgREST configuration in ``/etc/postgrest/config``
+
+.. code-block:: ini
+
+  db-uri = "postgres://<user>:<password>@localhost:5432/<database>"
+  db-schemas = "<schema>"
+  db-anon-role = "<anon_role>"
+  jwt-secret = "<jwt_secret>"
+
+Then create the systemd service file in ``/etc/systemd/system/postgrest.service``
+
+.. code-block:: ini
+
+  [Unit]
+  Description=REST API for any PostgreSQL database
+  After=postgresql.service
+
+  [Service]
+  ExecStart=/bin/postgrest /etc/postgrest/config
+  ExecReload=/bin/kill -SIGUSR1 $MAINPID
+
+  [Install]
+  WantedBy=multi-user.target
+
+After that, you can enable the service at boot time and start it with:
+
+.. code-block:: bash
+
+  systemctl enable postgrest
+  systemctl start postgrest
+
+  ## For reloading the service
+  ## systemctl restart postgrest
+
+.. _file_descriptors:
+
+File Descriptors
+----------------
+
+File descriptors are kernel resources that are used by HTTP connections (among others). File descriptors are limited per process. The kernel default limit is 1024, which is increased in some Linux distributions.
+When under heavy traffic, PostgREST can reach this limit and start showing ``No file descriptors available`` errors. To clear these errors, you can increase the process' file descriptor limit.
+
+..
code-block:: ini
+
+  [Service]
+  LimitNOFILE=10000
+
+Alternate URL Structure
+=======================
+
+As discussed in :ref:`singular_plural`, there are no special URL forms for singular resources in PostgREST, only operators for filtering. Thus there are no URLs like :code:`/people/1`. It would be specified instead as
+
+.. tabs::
+
+  .. code-tab:: http
+
+    GET /people?id=eq.1 HTTP/1.1
+    Accept: application/vnd.pgrst.object+json
+
+  .. code-tab:: bash Curl
+
+    curl "http://localhost:3000/people?id=eq.1" \
+      -H "Accept: application/vnd.pgrst.object+json"
+
+This allows compound primary keys and makes the intent for singular response independent of a URL convention.
+
+Nginx rewrite rules allow you to simulate the familiar URL convention. The following example adds a rewrite rule for all table endpoints, but you'll want to restrict it to those tables that have a simple numeric primary key named "id."
+
+.. code-block:: nginx
+
+  # support /endpoint/:id url style
+  location ~ ^/([a-z_]+)/([0-9]+) {
+
+    # make the response singular
+    proxy_set_header Accept 'application/vnd.pgrst.object+json';
+
+    # assuming an upstream named "postgrest"
+    proxy_pass http://postgrest/$1?id=eq.$2;
+
+  }
+
+.. TODO
+.. Administration
+.. API Versioning
+.. HTTP Caching
+.. Upgrading
diff --git a/docs/api.rst b/docs/api.rst
new file mode 100644
index 0000000000..a9483635e7
--- /dev/null
+++ b/docs/api.rst
@@ -0,0 +1,3085 @@
+.. role:: sql(code)
+  :language: sql
+
+Tables and Views
+================
+
+All views and tables in the exposed schema and accessible by the active database role for a request are available for querying. They are exposed in one-level deep routes. For instance, the full contents of a table `people` is returned at
+
+.. tabs::
+
+  .. code-tab:: http
+
+    GET /people HTTP/1.1
+
+  .. code-tab:: bash Curl
+
+    curl "http://localhost:3000/people"
+
+There are no deeply/nested/routes. Each route provides OPTIONS, GET, HEAD, POST, PATCH, and DELETE verbs depending entirely on database permissions.
+
+.. note::
+
+  Why not provide nested routes? Many APIs allow nesting to retrieve related information, such as :code:`/films/1/director`. We offer a more flexible mechanism (inspired by GraphQL) to embed related information. It can handle one-to-many and many-to-many relationships. This is covered in the section about :ref:`resource_embedding`.
+
+.. _h_filter:
+
+Horizontal Filtering (Rows)
+---------------------------
+
+You can filter result rows by adding conditions on columns. For instance, to return people under 13 years old:
+
+.. tabs::
+
+  .. code-tab:: http
+
+    GET /people?age=lt.13 HTTP/1.1
+
+  .. code-tab:: bash Curl
+
+    curl "http://localhost:3000/people?age=lt.13"
+
+You can evaluate multiple conditions on columns by adding more query string parameters. For instance, to return people who are 18 or older **and** are students:
+
+.. tabs::
+
+  .. code-tab:: http
+
+    GET /people?age=gte.18&student=is.true HTTP/1.1
+
+  .. code-tab:: bash Curl
+
+    curl "http://localhost:3000/people?age=gte.18&student=is.true"
+
+..

.. _operators:

Operators
~~~~~~~~~

These operators are available:

============ ======================== ==================================================================================
Abbreviation In PostgreSQL            Meaning
============ ======================== ==================================================================================
eq           :code:`=`                equals
gt           :code:`>`                greater than
gte          :code:`>=`               greater than or equal
lt           :code:`<`                less than
lte          :code:`<=`               less than or equal
neq          :code:`<>` or :code:`!=` not equal
like         :code:`LIKE`             LIKE operator (to avoid `URL encoding `_ you can use ``*`` as an
                                      alias of the percent sign ``%`` for the pattern)
ilike        :code:`ILIKE`            ILIKE operator (to avoid `URL encoding `_ you can use ``*`` as an
                                      alias of the percent sign ``%`` for the pattern)
match        :code:`~`                ~ operator, see :ref:`pattern_matching`
imatch       :code:`~*`               ~* operator, see :ref:`pattern_matching`
in           :code:`IN`               one of a list of values, e.g. :code:`?a=in.(1,2,3)`
                                      – also supports commas in quoted strings like
                                      :code:`?a=in.("hi,there","yes,you")`
is           :code:`IS`               checking for exact equality (null,true,false,unknown)
fts          :code:`@@`               :ref:`fts` using to_tsquery
plfts        :code:`@@`               :ref:`fts` using plainto_tsquery
phfts        :code:`@@`               :ref:`fts` using phraseto_tsquery
wfts         :code:`@@`               :ref:`fts` using websearch_to_tsquery
cs           :code:`@>`               contains e.g. :code:`?tags=cs.{example, new}`
cd           :code:`<@`               contained in e.g. :code:`?values=cd.{1,2,3}`
ov           :code:`&&`               overlap (have points in common), e.g. :code:`?period=ov.[2017-01-01,2017-06-30]` –
                                      also supports array types, use curly braces instead of square brackets e.g.
                                      :code:`?arr=ov.{1,3}`
sl           :code:`<<`               strictly left of, e.g. :code:`?range=sl.(1,10)`
sr           :code:`>>`               strictly right of
nxr          :code:`&<`               does not extend to the right of, e.g. :code:`?range=nxr.(1,10)`
nxl          :code:`&>`               does not extend to the left of
adj          :code:`-|-`              is adjacent to, e.g. :code:`?range=adj.(1,10)`
not          :code:`NOT`              negates another operator, see :ref:`logical_operators`
or           :code:`OR`               logical :code:`OR`, see :ref:`logical_operators`
and          :code:`AND`              logical :code:`AND`, see :ref:`logical_operators`
============ ======================== ==================================================================================

For more complicated filters you will have to create a new view in the database, or use a stored procedure. For instance, here's a view to show "today's stories" including possibly older pinned stories:

.. code-block:: postgresql

  CREATE VIEW fresh_stories AS
  SELECT *
    FROM stories
   WHERE pinned = true
      OR published > now() - interval '1 day'
  ORDER BY pinned DESC, published DESC;

The view will provide a new endpoint:

.. tabs::

  .. code-tab:: http

    GET /fresh_stories HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/fresh_stories"

.. _logical_operators:

Logical operators
~~~~~~~~~~~~~~~~~

Multiple conditions on columns are evaluated using ``AND`` by default, but you can combine them using ``OR`` with the ``or`` operator. For example, to return people under 18 **or** over 21:

.. tabs::

  .. code-tab:: http

    GET /people?or=(age.lt.18,age.gt.21) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?or=(age.lt.18,age.gt.21)"

To **negate** any operator, you can prefix it with :code:`not` like :code:`?a=not.eq.2` or :code:`?not.and=(a.gte.0,a.lte.100)`.

You can also apply complex logic to the conditions:

.. tabs::

  .. code-tab:: http

    GET /people?grade=gte.90&student=is.true&or=(age.eq.14,not.and(age.gte.11,age.lte.17)) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?grade=gte.90&student=is.true&or=(age.eq.14,not.and(age.gte.11,age.lte.17))"
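
A logical group like ``or=(age.lt.18,age.gt.21)`` is just one query parameter whose value is a parenthesized, comma-separated list of conditions. As a hypothetical sketch (``logic_param`` is not part of PostgREST), building one client-side looks like:

```python
def logic_param(op, *conditions):
    # conditions are already-formatted "column.operator.value" strings;
    # the whole group becomes a single query parameter such as
    # or=(age.lt.18,age.gt.21)
    return f"{op}=({','.join(conditions)})"

logic_param("or", "age.lt.18", "age.gt.21")
# → "or=(age.lt.18,age.gt.21)"
```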

.. _pattern_matching:

Pattern Matching
~~~~~~~~~~~~~~~~

The pattern-matching operators (:code:`like`, :code:`ilike`, :code:`match`, :code:`imatch`) exist to support filtering data using patterns instead of concrete strings, as described in the `PostgreSQL docs `__.

To ensure best performance on larger data sets, an `appropriate index `__ should be used and even then, it depends on the pattern value and actual data statistics whether an existing index will be used by the query planner or not.

.. _fts:

Full-Text Search
~~~~~~~~~~~~~~~~

The :code:`fts` filter mentioned above has a number of options to support flexible textual queries, namely the choice of plain vs phrase search and the language used for stemming. Suppose that :code:`tsearch` is a table with column :code:`my_tsv`, of type `tsvector `_. The following examples illustrate the possibilities.

.. tabs::

  .. code-tab:: http

    GET /tsearch?my_tsv=fts(french).amusant HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/tsearch?my_tsv=fts(french).amusant"

.. tabs::

  .. code-tab:: http

    GET /tsearch?my_tsv=plfts.The%20Fat%20Cats HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/tsearch?my_tsv=plfts.The%20Fat%20Cats"

.. tabs::

  .. code-tab:: http

    GET /tsearch?my_tsv=not.phfts(english).The%20Fat%20Cats HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/tsearch?my_tsv=not.phfts(english).The%20Fat%20Cats"

.. tabs::

  .. code-tab:: http

    GET /tsearch?my_tsv=not.wfts(french).amusant HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/tsearch?my_tsv=not.wfts(french).amusant"

Using ``websearch_to_tsquery`` requires PostgreSQL 11 or later and will raise an error on earlier versions.
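
The examples above share a common shape: an optional language configuration in parentheses, then a dot, then the percent-encoded search text. As a minimal sketch (the ``fts_filter`` helper is hypothetical), a client can build these filters like so:

```python
from urllib.parse import quote

def fts_filter(column, query, config=None, variant="fts"):
    # variant is one of: fts, plfts, phfts, wfts;
    # config is an optional text-search configuration such as "french".
    op = f"{variant}({config})" if config else variant
    return f"{column}={op}.{quote(query)}"

fts_filter("my_tsv", "The Fat Cats", variant="plfts")
# → "my_tsv=plfts.The%20Fat%20Cats"
```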

.. _v_filter:

Vertical Filtering (Columns)
----------------------------

When certain columns are wide (such as those holding binary data), it is more efficient for the server to withhold them in a response. The client can specify which columns are required using the :sql:`select` parameter.

.. tabs::

  .. code-tab:: http

    GET /people?select=first_name,age HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?select=first_name,age"

.. code-block:: json

  [
    {"first_name": "John", "age": 30},
    {"first_name": "Jane", "age": 20}
  ]

The default is :sql:`*`, meaning all columns. This value will become more important below in :ref:`resource_embedding`.

Renaming Columns
~~~~~~~~~~~~~~~~

You can rename the columns by prefixing them with an alias followed by the colon ``:`` operator.

.. tabs::

  .. code-tab:: http

    GET /people?select=fullName:full_name,birthDate:birth_date HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?select=fullName:full_name,birthDate:birth_date"

.. code-block:: json

  [
    {"fullName": "John Doe", "birthDate": "04/25/1988"},
    {"fullName": "Jane Doe", "birthDate": "01/12/1998"}
  ]

.. _casting_columns:

Casting Columns
~~~~~~~~~~~~~~~

Casting the columns is possible by suffixing them with the double colon ``::`` plus the desired type.

.. tabs::

  .. code-tab:: http

    GET /people?select=full_name,salary::text HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?select=full_name,salary::text"

.. code-block:: json

  [
    {"full_name": "John Doe", "salary": "90000.00"},
    {"full_name": "Jane Doe", "salary": "120000.00"}
  ]

.. _json_columns:

JSON Columns
------------

You can specify a path for a ``json`` or ``jsonb`` column using the arrow operators (``->`` or ``->>``) as per the `PostgreSQL docs `__.

.. code-block:: postgres

  CREATE TABLE people (
    id int,
    json_data json
  );

.. tabs::

  .. code-tab:: http

    GET /people?select=id,json_data->>blood_type,json_data->phones HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?select=id,json_data->>blood_type,json_data->phones"

.. code-block:: json

  [
    { "id": 1, "blood_type": "A-", "phones": [{"country_code": "61", "number": "917-929-5745"}] },
    { "id": 2, "blood_type": "O+", "phones": [{"country_code": "43", "number": "512-446-4988"}, {"country_code": "43", "number": "213-891-5979"}] }
  ]

.. tabs::

  .. code-tab:: http

    GET /people?select=id,json_data->phones->0->>number HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?select=id,json_data->phones->0->>number"

.. code-block:: json

  [
    { "id": 1, "number": "917-929-5745"},
    { "id": 2, "number": "512-446-4988"}
  ]

This also works with filters:

.. tabs::

  .. code-tab:: http

    GET /people?select=id,json_data->blood_type&json_data->>blood_type=eq.A- HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?select=id,json_data->blood_type&json_data->>blood_type=eq.A-"

.. code-block:: json

  [
    { "id": 1, "blood_type": "A-" },
    { "id": 3, "blood_type": "A-" },
    { "id": 7, "blood_type": "A-" }
  ]

Note that ``->>`` is used to compare ``blood_type`` as ``text``. To compare with an integer value use ``->``:

.. tabs::

  .. code-tab:: http

    GET /people?select=id,json_data->age&json_data->age=gt.20 HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?select=id,json_data->age&json_data->age=gt.20"

.. code-block:: json

  [
    { "id": 11, "age": 25 },
    { "id": 12, "age": 30 },
    { "id": 15, "age": 35 }
  ]

.. _composite_array_columns:

Composite / Array Columns
-------------------------

The arrow operators (``->``, ``->>``) can also be used for accessing composite fields and array elements.

.. code-block:: postgres

  CREATE TYPE coordinates AS (
    lat decimal(8,6),
    long decimal(9,6)
  );

  CREATE TABLE countries (
    id int,
    location coordinates,
    languages text[]
  );

.. tabs::

  .. code-tab:: http

    GET /countries?select=id,location->>lat,location->>long,primary_language:languages->0&location->lat=gte.19 HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/countries?select=id,location->>lat,location->>long,primary_language:languages->0&location->lat=gte.19"

.. code-block:: json

  [
    {
      "id": 5,
      "lat": "19.741755",
      "long": "-155.844437",
      "primary_language": "en"
    }
  ]

.. important::

  When using the ``->`` and ``->>`` operators, PostgREST uses a query like ``to_jsonb()->'field'``. To make filtering and ordering on those nested fields use an index, the index needs to be created on the same expression, including the ``to_jsonb(...)`` call:

  .. code-block:: postgres

    CREATE INDEX ON mytable ((to_jsonb(data) -> 'identification' ->> 'registration_number'));

.. _computed_cols:

Computed / Virtual Columns
--------------------------

Filters may be applied to computed columns (**a.k.a. virtual columns**) as well as actual table/view columns, even though the computed columns will not appear in the output. For example, to search first and last names at once we can create a computed column that will not appear in the output but can be used in a filter:

.. code-block:: postgres

  CREATE TABLE people (
    fname text,
    lname text
  );

  CREATE FUNCTION full_name(people) RETURNS text AS $$
    SELECT $1.fname || ' ' || $1.lname;
  $$ LANGUAGE SQL;

  -- (optional) add an index to speed up anticipated query
  CREATE INDEX people_full_name_idx ON people
    USING GIN (to_tsvector('english', full_name(people)));

A full-text search on the computed column:

.. tabs::

  .. code-tab:: http

    GET /people?full_name=fts.Beckett HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?full_name=fts.Beckett"

As mentioned, computed columns do not appear in the output by default. However you can include them by listing them in the vertical filtering :code:`select` parameter:

.. tabs::

  .. code-tab:: http

    GET /people?select=*,full_name HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?select=*,full_name"

.. important::

  Computed columns must be created in the :ref:`exposed schema ` or in a schema in the :ref:`extra search path ` to be used in this way. When placing the computed column in the :ref:`exposed schema ` you can use an **unnamed** argument, as in the example above, to prevent it from being exposed as an :ref:`RPC ` under ``/rpc``.

Unicode support
---------------

PostgREST supports Unicode in schemas, tables, columns and values. To access a table with a Unicode name, use percent encoding.

To request this:

.. code-block:: http

  GET /موارد HTTP/1.1

Do this:

.. tabs::

  .. code-tab:: http

    GET /%D9%85%D9%88%D8%A7%D8%B1%D8%AF HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/%D9%85%D9%88%D8%A7%D8%B1%D8%AF"

.. _tabs-cols-w-spaces:

Table / Columns with spaces
~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can request table/columns with spaces in them by percent encoding the spaces with ``%20``:

.. tabs::

  .. code-tab:: http

    GET /Order%20Items?Unit%20Price=lt.200 HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/Order%20Items?Unit%20Price=lt.200"

.. _reserved-chars:

Reserved characters
~~~~~~~~~~~~~~~~~~~

If filters include PostgREST reserved characters (``,``, ``.``, ``:``, ``()``) you'll have to surround them in percent-encoded double quotes ``%22`` for correct processing.

Here ``Hebdon,John`` and ``Williams,Mary`` are values.

.. tabs::

  .. code-tab:: http

    GET /employees?name=in.(%22Hebdon,John%22,%22Williams,Mary%22) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/employees?name=in.(%22Hebdon,John%22,%22Williams,Mary%22)"

Here ``information.cpe`` is a column name.

.. tabs::

  .. code-tab:: http

    GET /vulnerabilities?%22information.cpe%22=like.*MS* HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/vulnerabilities?%22information.cpe%22=like.*MS*"

If the value filtered by the ``in`` operator has a double quote (``"``), you can escape it using a backslash, e.g. ``"\""``. A backslash itself can be escaped with a double backslash ``"\\"``.

Here ``Quote:"`` and ``Backslash:\`` are percent-encoded values. Note that ``%5C`` is the percent-encoded backslash.

.. tabs::

  .. code-tab:: http

    GET /marks?name=in.(%22Quote:%5C%22%22,%22Backslash:%5C%5C%22) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/marks?name=in.(%22Quote:%5C%22%22,%22Backslash:%5C%5C%22)"

.. note::

  Some HTTP libraries might encode URLs automatically (e.g. :code:`axios`). In these cases you should use double quotes
  :code:`""` directly instead of :code:`%22`.

.. _ordering:

Ordering
--------

The reserved word :sql:`order` reorders the response rows. It uses a comma-separated list of columns and directions:

.. tabs::

  .. code-tab:: http

    GET /people?order=age.desc,height.asc HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?order=age.desc,height.asc"

If no direction is specified it defaults to ascending order:

.. tabs::

  .. code-tab:: http

    GET /people?order=age HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?order=age"

If you care where nulls are sorted, add ``nullsfirst`` or ``nullslast``:

.. tabs::

  .. code-tab:: http

    GET /people?order=age.nullsfirst HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?order=age.nullsfirst"

.. tabs::

  .. code-tab:: http

    GET /people?order=age.desc.nullslast HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?order=age.desc.nullslast"

You can also use :ref:`computed_cols` to order the results, even though the computed columns will not appear in the output. You can sort by nested fields of :ref:`json_columns` with the JSON operators.

.. _limits:

Limits and Pagination
---------------------

PostgREST uses HTTP range headers to describe the size of results. Every response contains the current range and, if requested, the total number of results:

.. code-block:: http

  HTTP/1.1 200 OK
  Range-Unit: items
  Content-Range: 0-14/*

Here items zero through fourteen are returned. This information is available in every response and can help you render pagination controls on the client. This is an RFC7233-compliant solution that keeps the response JSON cleaner.

There are two ways to apply a limit and offset to rows: through request headers or query parameters. When using headers you specify the range of rows desired. This request gets the first twenty people.

.. tabs::

  .. code-tab:: http

    GET /people HTTP/1.1
    Range-Unit: items
    Range: 0-19

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people" -i \
      -H "Range-Unit: items" \
      -H "Range: 0-19"

Note that the server may respond with fewer rows than requested if it is unable to satisfy the range:

.. code-block:: http

  HTTP/1.1 200 OK
  Range-Unit: items
  Content-Range: 0-17/*

You may also request open-ended ranges for an offset with no limit, e.g. :code:`Range: 10-`.

The other way to request a limit or offset is with query parameters. For example:

.. tabs::

  .. code-tab:: http

    GET /people?limit=15&offset=30 HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people?limit=15&offset=30"

This method is also useful for embedded resources, which we will cover in another section. The server always responds with range headers even if you use query parameters to limit the query.
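
A client rendering pagination controls needs to take the ``Content-Range`` value apart. A minimal sketch (the ``parse_content_range`` helper is hypothetical; it deliberately ignores the empty-range ``*/0`` form):

```python
import re

def parse_content_range(value):
    # Parses values like "0-24/3573458" or "0-14/*" (total not requested).
    m = re.fullmatch(r"(\d+)-(\d+)/(\d+|\*)", value)
    if m is None:
        raise ValueError(f"unrecognized Content-Range: {value!r}")
    first, last, total = m.groups()
    return int(first), int(last), None if total == "*" else int(total)

parse_content_range("0-14/*")  # → (0, 14, None)
```

With a known total, the last page offset is then simple arithmetic on ``total`` and the page size.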

.. _exact_count:

Exact Count
-----------

In order to obtain the total size of the table or view (such as when rendering the last page link in a pagination control), specify ``Prefer: count=exact`` as a request header:

.. tabs::

  .. code-tab:: http

    HEAD /bigtable HTTP/1.1
    Range-Unit: items
    Range: 0-24
    Prefer: count=exact

  .. code-tab:: bash Curl

    curl "http://localhost:3000/bigtable" -I \
      -H "Range-Unit: items" \
      -H "Range: 0-24" \
      -H "Prefer: count=exact"

Note that the larger the table the slower this query runs in the database. The server will respond with the selected range and total:

.. code-block:: http

  HTTP/1.1 206 Partial Content
  Range-Unit: items
  Content-Range: 0-24/3573458

.. _planned_count:

Planned Count
-------------

To avoid the shortcomings of :ref:`exact count `, PostgREST can leverage PostgreSQL statistics and get a fairly accurate and fast count.
To do this, specify the ``Prefer: count=planned`` header.

.. tabs::

  .. code-tab:: http

    HEAD /bigtable?limit=25 HTTP/1.1
    Prefer: count=planned

  .. code-tab:: bash Curl

    curl "http://localhost:3000/bigtable?limit=25" -I \
      -H "Prefer: count=planned"

.. code-block:: http

  HTTP/1.1 206 Partial Content
  Content-Range: 0-24/3572000

Note that the accuracy of this count depends on how up-to-date the PostgreSQL statistics are.
For example, in this case, to increase the accuracy of the count you can do ``ANALYZE bigtable``.
See `ANALYZE `_ for more details.

.. _estimated_count:

Estimated Count
---------------

When you are interested in the count, the relative error is important. If you have a :ref:`planned count ` of 1000000 and the exact count is
1001000, the error is small enough to be ignored. But with a planned count of 7, an exact count of 28 would be a huge misprediction.

In general, for smaller row counts the estimated count should be as close to the exact count as possible.

To help with these cases, PostgREST can get the exact count up until a threshold and get the planned count when
that threshold is surpassed. To use this behavior, you can specify the ``Prefer: count=estimated`` header. The **threshold** is
defined by :ref:`db-max-rows`.

Here's an example. Suppose we set ``db-max-rows=1000`` and ``smalltable`` has 321 rows; then we'll get the exact count:

.. tabs::

  .. code-tab:: http

    HEAD /smalltable?limit=25 HTTP/1.1
    Prefer: count=estimated

  .. code-tab:: bash Curl

    curl "http://localhost:3000/smalltable?limit=25" -I \
      -H "Prefer: count=estimated"

.. code-block:: http

  HTTP/1.1 206 Partial Content
  Content-Range: 0-24/321

If we make a similar request on ``bigtable``, which has 3573458 rows, we would get the planned count:

.. tabs::

  .. code-tab:: http

    HEAD /bigtable?limit=25 HTTP/1.1
    Prefer: count=estimated

  .. code-tab:: bash Curl

    curl "http://localhost:3000/bigtable?limit=25" -I \
      -H "Prefer: count=estimated"

.. code-block:: http

  HTTP/1.1 206 Partial Content
  Content-Range: 0-24/3572000

.. _res_format:

Response Format
---------------

PostgREST uses proper HTTP content negotiation (`RFC7231 `_) to deliver the desired representation of a resource. That is to say the same API endpoint can respond in different formats like JSON or CSV depending on the client request.

Use the Accept request header to specify the acceptable format (or formats) for the response:

.. tabs::

  .. code-tab:: http

    GET /people HTTP/1.1
    Accept: application/json

  .. code-tab:: bash Curl

    curl "http://localhost:3000/people" \
      -H "Accept: application/json"

The current possibilities are:

* ``*/*``
* ``text/csv``
* ``application/json``
* ``application/openapi+json``
* ``application/geo+json``

and in the special case of a single-column select, the following three additional formats;
also see the section :ref:`scalar_return_formats`:

* ``application/octet-stream``
* ``text/plain``
* ``text/xml``

The server will default to JSON for API endpoints and OpenAPI on the root.

.. _singular_plural:

Singular or Plural
------------------

By default PostgREST returns all JSON results in an array, even when there is only one item. For example, requesting :code:`/items?id=eq.1` returns

.. code:: json

  [
    { "id": 1 }
  ]

This can be inconvenient for client code. To return the first result as an object unenclosed by an array, specify :code:`vnd.pgrst.object` as part of the :code:`Accept` header

.. tabs::

  .. code-tab:: http

    GET /items?id=eq.1 HTTP/1.1
    Accept: application/vnd.pgrst.object+json

  .. code-tab:: bash Curl

    curl "http://localhost:3000/items?id=eq.1" \
      -H "Accept: application/vnd.pgrst.object+json"

This returns

.. code:: json

  { "id": 1 }

When a singular response is requested but no entries are found, the server responds with an error message and 406 Not Acceptable status code rather than the usual empty array and 200 status:

.. code-block:: json

  {
    "message": "JSON object requested, multiple (or no) rows returned",
    "details": "Results contain 0 rows, application/vnd.pgrst.object+json requires 1 row",
    "hint": null,
    "code": "PGRST505"
  }

.. note::

  Many APIs distinguish plural and singular resources using a special nested URL convention, e.g. ``/stories`` vs ``/stories/1``. Why do we use ``/stories?id=eq.1``?
  The answer is because a singular resource is (for us) a row determined by a primary key, and primary keys can be compound (meaning defined across more than one column). The more familiar nested URLs consider only a degenerate case of simple and overwhelmingly numeric primary keys. These so-called artificial keys are often introduced automatically by Object Relational Mapping libraries.

  Admittedly PostgREST could detect when there is an equality condition holding on all columns constituting the primary key and automatically convert to singular. However this could lead to a surprising change of format that breaks unwary client code just by filtering on an extra column. Instead we allow manually specifying singular vs plural to decouple that choice from the URL format.

.. _resource_embedding:

Resource Embedding
==================

In addition to providing RESTful routes for each table and view, PostgREST allows related resources to be included together in a single
API call. This reduces the need for multiple API requests. The server uses **foreign keys** to determine which tables and views can be
returned together. For example, consider a database of films and their awards:

.. image:: _static/film.png

.. important::

  Whenever FOREIGN KEY constraints change in the database schema you must refresh PostgREST's schema cache for Resource Embedding to work properly. See the section :ref:`schema_reloading`.

.. _many-to-one:

Many-to-one relationships
-------------------------

Since ``films`` has a **foreign key** referencing ``directors``, this establishes a many-to-one relationship between them. Because of this, we're able
to request all the films and the director for each film.

.. tabs::

  .. code-tab:: http

    GET /films?select=title,directors(id,last_name) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/films?select=title,directors(id,last_name)"

.. code-block:: json

  [
    { "title": "Workers Leaving The Lumière Factory In Lyon",
      "directors": {
        "id": 2,
        "last_name": "Lumière"
      }
    },
    { "title": "The Dickson Experimental Sound Film",
      "directors": {
        "id": 1,
        "last_name": "Dickson"
      }
    },
    { "title": "The Haunted Castle",
      "directors": {
        "id": 3,
        "last_name": "Méliès"
      }
    }
  ]

Note that the embedded ``directors`` is returned as a JSON object because of the "to-one" end.

Since the table name is plural, we can be more accurate by making it singular with an alias.

.. tabs::

  .. code-tab:: http

    GET /films?select=title,director:directors(id,last_name) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/films?select=title,director:directors(id,last_name)"

.. code-block:: json

  [
    { "title": "Workers Leaving The Lumière Factory In Lyon",
      "director": {
        "id": 2,
        "last_name": "Lumière"
      }
    },
    ".."
  ]

.. _one-to-many:

One-to-many relationships
-------------------------

The inverse one-to-many relationship between ``directors`` and ``films`` is detected based on the **foreign key** reference. In this case, the embedded ``films`` are returned as a JSON array because of the "to-many" end.

.. tabs::

  .. code-tab:: http

    GET /directors?select=last_name,films(title) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/directors?select=last_name,films(title)"

.. code-block:: json

  [
    { "last_name": "Lumière",
      "films": [
        {"title": "Workers Leaving The Lumière Factory In Lyon"}
      ]
    },
    { "last_name": "Dickson",
      "films": [
        {"title": "The Dickson Experimental Sound Film"}
      ]
    },
    { "last_name": "Méliès",
      "films": [
        {"title": "The Haunted Castle"}
      ]
    }
  ]

.. _many-to-many:

Many-to-many relationships
--------------------------

Many-to-many relationships are detected based on the join table. The join table must contain foreign keys to the other two tables
and they must be part of its composite key.

For the many-to-many relationship between ``films`` and ``actors``, the join table ``roles`` would be:

.. code-block:: postgresql

  create table roles(
    film_id int references films(id)
  , actor_id int references actors(id)
  , primary key(film_id, actor_id)
  );

  -- the join table can also be detected if the composite key has additional columns

  create table roles(
    id int generated always as identity
  , film_id int references films(id)
  , actor_id int references actors(id)
  , primary key(id, film_id, actor_id)
  );

.. tabs::

  .. code-tab:: http

    GET /actors?select=first_name,last_name,films(title) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/actors?select=first_name,last_name,films(title)"

.. code-block:: json

  [
    { "first_name": "Willem",
      "last_name": "Dafoe",
      "films": [
        {"title": "The Lighthouse"}
      ]
    },
    ".."
  ]

.. _one-to-one:

One-to-one relationships
------------------------

One-to-one relationships are detected if there's a unique constraint on a foreign key.

.. code-block:: postgresql

  CREATE TABLE technical_specs(
    film_id INT REFERENCES films UNIQUE,
    runtime TIME,
    camera TEXT,
    sound TEXT
  );

Or if the foreign key is also a primary key.

.. code-block:: postgresql

  -- references films using the primary key as a foreign key
  CREATE TABLE technical_specs(
    film_id INT PRIMARY KEY REFERENCES films,
    runtime TIME,
    camera TEXT,
    sound TEXT
  );

.. tabs::

  .. code-tab:: http

    GET /films?select=title,technical_specs(camera) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/films?select=title,technical_specs(camera)"

.. code-block:: json

  [
    {
      "title": "Pulp Fiction",
      "technical_specs": {"camera": "Arriflex 35-III"}
    },
    ".."
  ]
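
The nested ``select`` syntax used throughout this section — plain column names mixed with ``embed_name(...)`` groups — is easy to generate from a tree structure. A hypothetical client-side sketch (``render_select`` is not part of PostgREST):

```python
def render_select(columns):
    # columns is a list mixing plain column names (strings) with embeds,
    # written as {"embed_name": [nested columns...]} dictionaries.
    parts = []
    for col in columns:
        if isinstance(col, dict):
            parts.extend(f"{name}({render_select(sub)})"
                         for name, sub in col.items())
        else:
            parts.append(col)
    return ",".join(parts)

render_select(["title", {"directors": ["id", "last_name"]}])
# → "title,directors(id,last_name)"
```

Because the function recurses, the same structure also renders the nested embeddings shown later, e.g. ``roles(character,films(title,year))``.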

.. _computed_relationships:

Computed relationships
----------------------

You can manually define relationships between resources. This is useful for database objects that can't define foreign keys, like `Foreign Data Wrappers `_.
To do this, you can create functions similar to :ref:`computed_cols`.

Assume there's a foreign table ``premieres`` that we want to relate to ``films``.

.. code-block:: postgres

  create foreign table premieres (
    id integer,
    location text,
    "date" date,
    film_id integer
  ) server import_csv options ( filename '/tmp/directors.csv', format 'csv');

  create function film(premieres) returns setof films rows 1 as $$
    select * from films where id = $1.film_id
  $$ stable language sql;

The above function defines a relationship between ``premieres`` (the parameter) and ``films`` (the return type), and since there's a ``rows 1``, this defines a many-to-one relationship.
The name of the function ``film`` is arbitrary and can be used to do the embedding:

.. tabs::

  .. code-tab:: http

    GET /premieres?select=location,film(name) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/premieres?select=location,film(name)"

.. code-block:: json

  [
    {
      "location": "Cannes Film Festival",
      "film": {"name": "Pulp Fiction"}
    },
    ".."
  ]

Now let's define the opposite one-to-many relationship with another function.

.. code-block:: postgres

  create function premieres(films) returns setof premieres as $$
    select * from premieres where film_id = $1.id
  $$ stable language sql;

Similarly, this function defines a relationship between the parameter ``films`` and the return type ``premieres``.
In this case there's an implicit ``ROWS 1000`` defined by PostgreSQL (`search "result_rows" on this PostgreSQL doc `_);
we consider any value greater than 1 as "many", so this defines a one-to-many relationship.

.. tabs::

  .. code-tab:: http

    GET /films?select=name,premieres(location) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/films?select=name,premieres(location)"

.. code-block:: json

  [
    {
      "name": "Pulp Fiction",
      "premieres": [{"location": "Cannes Festival"}]
    },
    ".."
  ]

Computed relationships also allow you to override the ones that are automatically detected by PostgREST.

For example, to override the :ref:`many-to-one relationship ` between ``films`` and ``directors``:

.. code-block:: postgres

  create function directors(films) returns setof directors rows 1 as $$
    select * from directors where id = $1.director_id
  $$ stable language sql;

Taking advantage of overloaded functions, you can use the same function name for different parameters and thus define relationships from other tables/views to ``directors``.

.. code-block:: postgres

  create function directors(film_schools) returns setof directors as $$
    select * from directors where film_school_id = $1.id
  $$ stable language sql;

Computed relationships perform well because their intended design follows the `Inlining conditions for table functions `_.

.. warning::

  - Always use ``SETOF`` when creating computed relationships. Functions can return a table without using ``SETOF``, but bear in mind that they will not be inlined.

  - Make sure to correctly label the ``to-one`` part of the relationship. When using the ``ROWS 1`` estimation, PostgREST will expect a single row to be returned. If that is not the case, then it will unnest the embedding and return repeated values for the top level resource.

.. _nested_embedding:

Nested Embedding
----------------

If you want to embed through join tables but need more control on the intermediate resources, you can do nested embedding. For instance, you can request the Actors, their Roles and the Films for those Roles:

.. tabs::

  .. code-tab:: http

    GET /actors?select=roles(character,films(title,year)) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/actors?select=roles(character,films(title,year))"

.. _embed_filters:

Embedded Filters
----------------

Embedded resources can be shaped similarly to their top-level counterparts. To do so, prefix the query parameters with the name of the embedded resource. For instance, to order the actors in each film:

.. tabs::

  .. code-tab:: http

    GET /films?select=*,actors(*)&actors.order=last_name,first_name HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/films?select=*,actors(*)&actors.order=last_name,first_name"

This sorts the list of actors in each film but does *not* change the order of the films themselves. To filter the roles returned with each film:

.. tabs::

  .. code-tab:: http

    GET /films?select=*,roles(*)&roles.character=in.(Chico,Harpo,Groucho) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/films?select=*,roles(*)&roles.character=in.(Chico,Harpo,Groucho)"

Once again, this restricts the roles included to certain characters but does not filter the films in any way. Films without any of those characters would be included along with empty character lists.

An ``or`` filter can be used for a similar operation:

.. tabs::

  .. code-tab:: http

    GET /films?select=*,roles(*)&roles.or=(character.eq.Gummo,character.eq.Zeppo) HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/films?select=*,roles(*)&roles.or=(character.eq.Gummo,character.eq.Zeppo)"

Limit and offset operations are possible:

.. tabs::

  .. code-tab:: http

    GET /films?select=*,actors(*)&actors.limit=10&actors.offset=2 HTTP/1.1

  .. code-tab:: bash Curl

    curl "http://localhost:3000/films?select=*,actors(*)&actors.limit=10&actors.offset=2"

Embedded resources can be aliased and filters can be applied on these aliases:

.. tabs::
code-tab:: http + + GET /films?select=*,90_comps:competitions(name),91_comps:competitions(name)&90_comps.year=eq.1990&91_comps.year=eq.1991 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/films?select=*,90_comps:competitions(name),91_comps:competitions(name)&90_comps.year=eq.1990&91_comps.year=eq.1991" + +Filters can also be applied on nested embedded resources: + +.. tabs:: + + .. code-tab:: http + + GET /films?select=*,roles(*,actors(*))&roles.actors.order=last_name&roles.actors.first_name=like.*Tom* HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/films?select=*,roles(*,actors(*))&roles.actors.order=last_name&roles.actors.first_name=like.*Tom*" + +The result will show the nested actors named Tom and order them by last name. Aliases can also be used instead of the resource names to filter the nested tables. + +.. _embedding_top_level_filter: + +Embedding with Top-level Filtering +---------------------------------- + +By default, :ref:`embed_filters` don't change the top-level resource (``films``) rows at all: + +.. tabs:: + + .. code-tab:: http + + GET /films?select=title,actors(first_name,last_name)&actors.first_name=eq.Jehanne HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/films?select=title,actors(first_name,last_name)&actors.first_name=eq.Jehanne" + +.. code-block:: json + + [ + { + "title": "Workers Leaving The Lumière Factory In Lyon", + "actors": [] + }, + { + "title": "The Dickson Experimental Sound Film", + "actors": [] + }, + { + "title": "The Haunted Castle", + "actors": [ + { + "first_name": "Jehanne", + "last_name": "d'Alcy" + } + ] + } + ] + +In order to filter the top-level rows, you need to add ``!inner`` to the embedded resource. For instance, to get **only** the films that have an actor named ``Jehanne``: + +.. tabs:: + + .. code-tab:: http + + GET /films?select=title,actors!inner(first_name,last_name)&actors.first_name=eq.Jehanne HTTP/1.1 + + ..
code-tab:: bash Curl + + curl "http://localhost:3000/films?select=title,actors!inner(first_name,last_name)&actors.first_name=eq.Jehanne" + +.. code-block:: json + + [ + { + "title": "The Haunted Castle", + "actors": [ + { + "first_name": "Jehanne", + "last_name": "d'Alcy" + } + ] + } + ] + +.. _embedding_partitioned_tables: + +Embedding Partitioned Tables +---------------------------- + +Embedding can also be done between `partitioned tables `_ and other tables. + +For example, let's create the ``box_office`` partitioned table that has the gross daily revenue of a film: + +.. code-block:: postgres + + CREATE TABLE box_office ( + bo_date DATE NOT NULL, + film_id INT REFERENCES test.films NOT NULL, + gross_revenue DECIMAL(12,2) NOT NULL, + PRIMARY KEY (bo_date, film_id) + ) PARTITION BY RANGE (bo_date); + + -- Let's also create partitions for each month of 2021. + -- Note that the upper bound of a range partition is exclusive. + + CREATE TABLE box_office_2021_01 PARTITION OF test.box_office + FOR VALUES FROM ('2021-01-01') TO ('2021-02-01'); + + CREATE TABLE box_office_2021_02 PARTITION OF test.box_office + FOR VALUES FROM ('2021-02-01') TO ('2021-03-01'); + + -- and so on until December 2021 + +Since it contains the ``film_id`` foreign key, it is possible to embed ``box_office`` and ``films``: + +.. tabs:: + + .. code-tab:: http + + GET /box_office?select=bo_date,gross_revenue,films(title)&gross_revenue=gte.1000000 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/box_office?select=bo_date,gross_revenue,films(title)&gross_revenue=gte.1000000" + +.. note:: + * Embedding on partitions is not allowed because it leads to ambiguity errors (see :ref:`embed_disamb`) between them and their parent partitioned table (more details at `#1783(comment) `_). :ref:`custom_queries` can be used if this is needed. + + * Partitioned tables can reference other tables since PostgreSQL 11 but can only be referenced by other tables since PostgreSQL 12. + +..
_embedding_views: + +Embedding Views +--------------- + +PostgREST will infer the relationships of a view based on its source tables. Source tables are the ones referenced in the ``FROM`` and ``JOIN`` clauses of the view definition. The foreign keys of the relationships must be present in the top ``SELECT`` clause of the view for this to work. + +For instance, the following view has ``nominations``, ``films`` and ``competitions`` as source tables: + +.. code-block:: postgres + + CREATE VIEW nominations_view AS + SELECT + films.title as film_title + , competitions.name as competition_name + , nominations.rank + , nominations.film_id as nominations_film_id + , films.id as film_id + FROM nominations + JOIN films ON films.id = nominations.film_id + JOIN competitions ON competitions.id = nominations.competition_id; + +Since this view contains ``nominations.film_id``, which has a **foreign key** relationship to ``films``, we can embed the ``films`` table. Similarly, because the view contains ``films.id``, we can also embed the ``roles`` and the ``actors`` tables (the last one in a many-to-many relationship): + +.. tabs:: + + .. code-tab:: http + + GET /nominations_view?select=film_title,films(language),roles(character),actors(last_name,first_name)&rank=eq.5 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/nominations_view?select=film_title,films(language),roles(character),actors(last_name,first_name)&rank=eq.5" + +It's also possible to embed `Materialized Views `_. + +.. important:: + + - It's not guaranteed that all kinds of views will be embeddable. In particular, views that contain UNIONs will not be embeddable. + + + Why? PostgREST detects source table foreign keys in the view by querying and parsing `pg_rewrite `_. + This may fail depending on the complexity of the view. + + As a workaround, you can use :ref:`computed_relationships` to define manual relationships for views.
+ + - If view definitions change, you must refresh PostgREST's schema cache for this to work properly. See the section :ref:`schema_reloading`. + +.. _embedding_view_chains: + +Embedding Chains of Views +------------------------- + +Views can also depend on other views, which in turn depend on the actual source table. For PostgREST to pick up those chains recursively to any depth, all the views must be in the search path, so either in the exposed schema (:ref:`db-schemas`) or in one of the schemas set in :ref:`db-extra-search-path`. This does not apply to the source table, which could be in a private schema as well. See :ref:`schema_isolation` for more details. + +.. _s_proc_embed: + +Embedding on Stored Procedures +------------------------------ + +If you have a :ref:`Stored Procedure ` that returns a table type, you can embed its related resources. + +Here's a sample function (notice the ``RETURNS SETOF films``). + +.. code-block:: plpgsql + + CREATE FUNCTION getallfilms() RETURNS SETOF films AS $$ + SELECT * FROM films; + $$ LANGUAGE SQL STABLE; + +A request with ``directors`` embedded: + +.. tabs:: + + .. code-tab:: http + + GET /rpc/getallfilms?select=title,directors(id,last_name)&title=like.*Workers* HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/getallfilms?select=title,directors(id,last_name)&title=like.*Workers*" + +.. code-block:: json + + [ + { + "title": "Workers Leaving The Lumière Factory In Lyon", + "directors": { + "id": 2, + "last_name": "Lumière" + } + } + ] + +.. _mutation_embed: + +Embedding after Insertions/Updates/Deletions +-------------------------------------------- + +You can embed related resources after doing :ref:`insert`, :ref:`update` or :ref:`delete`. + +Say you want to insert a **film** and then get some of its attributes plus embed its **director**. + +.. tabs:: + + ..
code-tab:: http + + POST /films?select=title,year,director:directors(first_name,last_name) HTTP/1.1 + Prefer: return=representation + + { + "id": 100, + "director_id": 40, + "title": "127 hours", + "year": 2010, + "rating": 7.6, + "language": "english" + } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/films?select=title,year,director:directors(first_name,last_name)" \ + -H "Prefer: return=representation" \ + -d @- << EOF + { + "id": 100, + "director_id": 40, + "title": "127 hours", + "year": 2010, + "rating": 7.6, + "language": "english" + } + EOF + +Response: + +.. code-block:: json + + { + "title": "127 hours", + "year": 2010, + "director": { + "first_name": "Danny", + "last_name": "Boyle" + } + } + +.. _embed_disamb: + +Embedding Disambiguation +------------------------ + +For doing resource embedding, PostgREST infers the relationship between two tables based on a foreign key between them. +However, in cases where there's more than one foreign key between two tables, it's not possible to infer the relationship unambiguously +by just specifying the table names. + +.. _target_disamb: + +Target Disambiguation +~~~~~~~~~~~~~~~~~~~~~ + +For example, suppose you have the following ``orders`` and ``addresses`` tables: + +.. image:: _static/orders.png + +And you try to embed ``orders`` with ``addresses`` (this is the **target**): + +.. tabs:: + + .. code-tab:: http + + GET /orders?select=*,addresses(*) HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/orders?select=*,addresses(*)" -i + +Since the ``orders`` table has two foreign keys to the ``addresses`` table — an order has a billing address and a shipping address — +the request is ambiguous and PostgREST will respond with an error: + +.. code-block:: http + + HTTP/1.1 300 Multiple Choices + + {..} + +If this happens, you need to disambiguate the request by adding precision to the **target**.
+Instead of the **table name**, you can specify the **foreign key constraint name** or the **column name** that is part of the foreign key. + +Let's try first with the **foreign key constraint name**. To make it clearer we can name it: + +.. code-block:: postgres + + ALTER TABLE orders + ADD CONSTRAINT billing_address foreign key (billing_address_id) references addresses(id), + ADD CONSTRAINT shipping_address foreign key (shipping_address_id) references addresses(id); + + -- Or if the constraint names were already generated by PostgreSQL we can rename them + -- ALTER TABLE orders RENAME CONSTRAINT orders_billing_address_id_fkey TO billing_address; + -- ALTER TABLE orders RENAME CONSTRAINT orders_shipping_address_id_fkey TO shipping_address; + +Now we can unambiguously embed the billing address by specifying the ``billing_address`` foreign key constraint as the **target**. + +.. tabs:: + + .. code-tab:: http + + GET /orders?select=name,billing_address(name) HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/orders?select=name,billing_address(name)" + +.. code-block:: json + + [ + { + "name": "Personal Water Filter", + "billing_address": { + "name": "32 Glenlake Dr. Dearborn, MI 48124" + } + } + ] + +Alternatively, you can specify the **column name** of the foreign key constraint as the **target**. This can be aliased to make +the result clearer. + +.. tabs:: + + .. code-tab:: http + + GET /orders?select=name,billing_address:billing_address_id(name) HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/orders?select=name,billing_address:billing_address_id(name)" + +.. code-block:: json + + [ + { + "name": "Personal Water Filter", + "billing_address": { + "name": "32 Glenlake Dr. Dearborn, MI 48124" + } + } + ] + +.. _hint_disamb: + +Hint Disambiguation +~~~~~~~~~~~~~~~~~~~ + +If specifying the **target** is not enough for unambiguous embedding, you can add a **hint**.
For example, let's assume we create +two views of ``addresses``: ``central_addresses`` and ``eastern_addresses``. + +PostgREST cannot detect a view as an embedded resource by using a column name or foreign key name as targets; that is why we need to use the view name ``central_addresses`` instead. But, still, this is not enough for an unambiguous embed. + +.. tabs:: + + .. code-tab:: http + + GET /orders?select=*,central_addresses(*) HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/orders?select=*,central_addresses(*)" -i + +.. code-block:: http + + HTTP/1.1 300 Multiple Choices + +To solve this case, in addition to the **target**, we can add a **hint**. +Here, we still specify ``central_addresses`` as the **target** and use the ``billing_address`` foreign key as the **hint**: + +.. tabs:: + + .. code-tab:: http + + GET /orders?select=*,central_addresses!billing_address(*) HTTP/1.1 + + .. code-tab:: bash Curl + + curl 'http://localhost:3000/orders?select=*,central_addresses!billing_address(*)' -i + +.. code-block:: http + + HTTP/1.1 200 OK + + [ ... ] + +Similarly to the **target**, the **hint** can be a **table name**, **foreign key constraint name** or **column name**. + +Hints also work alongside ``!inner`` if top-level filtering is needed. From the above example: + +.. tabs:: + + .. code-tab:: http + + GET /orders?select=*,central_addresses!billing_address!inner(*)&central_addresses.code=eq.AB1000 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/orders?select=*,central_addresses!billing_address!inner(*)&central_addresses.code=eq.AB1000" + +.. note:: + + If the relationship is so complex that hint disambiguation does not solve it, you can use :ref:`computed_relationships`. + +.. _insert: + +Insertions +========== + +All tables and `auto-updatable views `_ can be modified through the API, subject to permissions of the requester's database role.
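For reference, a view is auto-updatable when it reads from a single table and uses no aggregates, ``DISTINCT``, ``GROUP BY`` or set operations. A minimal sketch (the ``films_titles`` view is illustrative, not part of the example schema):

```sql
-- A simple single-table view is auto-updatable, so it accepts
-- POST /films_titles just like the underlying films table would.
CREATE VIEW films_titles AS
  SELECT id, title
    FROM films;
```

Views that don't meet these conditions can still be made writable with ``INSTEAD OF`` triggers.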
+ +To create a row in a database table, post a JSON object whose keys are the names of the columns you would like to create. Missing properties will be set to default values when applicable. + +.. tabs:: + + .. code-tab:: http + + POST /table_name HTTP/1.1 + + { "col1": "value1", "col2": "value2" } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/table_name" \ + -X POST -H "Content-Type: application/json" \ + -d '{ "col1": "value1", "col2": "value2" }' + +If the table has a primary key, the response can contain a :code:`Location` header describing where to find the new object by including the header :code:`Prefer: return=headers-only` in the request. Make sure that the table is not write-only; otherwise constructing the :code:`Location` header will cause a permissions error. + +On the other end of the spectrum you can get the full created object back in the response to your request by including the header :code:`Prefer: return=representation`. That way you won't have to make another HTTP call to discover properties that may have been filled in on the server side. You can also apply the standard :ref:`v_filter` to these results. + +URL-encoded payloads can be posted with ``Content-Type: application/x-www-form-urlencoded``. + +.. tabs:: + + .. code-tab:: http + + POST /people HTTP/1.1 + Content-Type: application/x-www-form-urlencoded + + name=John+Doe&age=50&weight=80 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/people" \ + -X POST -H "Content-Type: application/x-www-form-urlencoded" \ + -d "name=John+Doe&age=50&weight=80" + +.. note:: + + When inserting a row you must post a JSON object, not quoted JSON. + + .. code:: + + Yes + { "a": 1, "b": 2 } + + No + "{ \"a\": 1, \"b\": 2 }" + + Some JavaScript libraries will post the data incorrectly if you're not careful. For best results try one of the :ref:`clientside_libraries` built for PostgREST. + +..
_bulk_insert: + +Bulk Insert +----------- + +Bulk insert works exactly like single row insert except that you provide either a JSON array of objects having uniform keys, or lines in CSV format. This not only minimizes the HTTP requests required but uses a single INSERT statement on the back-end for efficiency. + +To bulk insert CSV, simply post to a table route with :code:`Content-Type: text/csv` and include the names of the columns as the first row. For instance: + +.. tabs:: + + .. code-tab:: http + + POST /people HTTP/1.1 + Content-Type: text/csv + + name,age,height + J Doe,62,70 + Jonas,10,55 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/people" \ + -X POST -H "Content-Type: text/csv" \ + --data-binary @- << EOF + name,age,height + J Doe,62,70 + Jonas,10,55 + EOF + +An empty field (:code:`,,`) is coerced to an empty string and the reserved word :code:`NULL` is mapped to the SQL null value. Note that there should be no spaces between the column names and commas. + +To bulk insert JSON, post an array of objects having all-matching keys: + +.. tabs:: + + .. code-tab:: http + + POST /people HTTP/1.1 + Content-Type: application/json + + [ + { "name": "J Doe", "age": 62, "height": 70 }, + { "name": "Janus", "age": 10, "height": 55 } + ] + + .. code-tab:: bash Curl + + curl "http://localhost:3000/people" \ + -X POST -H "Content-Type: application/json" \ + -d @- << EOF + [ + { "name": "J Doe", "age": 62, "height": 70 }, + { "name": "Janus", "age": 10, "height": 55 } + ] + EOF + +.. _specify_columns: + +Specifying Columns +------------------ + +By using the :code:`columns` query parameter, it's possible to specify the payload keys that will be inserted and ignore the rest of the payload. + +.. tabs:: + + ..
code-tab:: http + + POST /datasets?columns=source,publication_date,figure HTTP/1.1 + Content-Type: application/json + + { + "source": "Natural Disaster Prevention and Control", + "publication_date": "2015-09-11", + "figure": 1100, + "location": "...", + "comment": "...", + "extra": "...", + "stuff": "..." + } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/datasets?columns=source,publication_date,figure" \ + -X POST -H "Content-Type: application/json" \ + -d @- << EOF + { + "source": "Natural Disaster Prevention and Control", + "publication_date": "2015-09-11", + "figure": 1100, + "location": "...", + "comment": "...", + "extra": "...", + "stuff": "..." + } + EOF + +In this case, only **source**, **publication_date** and **figure** will be inserted. The rest of the JSON keys will be ignored. + +Using this also has the side-effect of being more efficient for :ref:`bulk_insert` since PostgREST will not process the JSON and +will send it directly to PostgreSQL. + +.. _update: + +Updates +======= + +To update a row or rows in a table, use the PATCH verb. Use :ref:`h_filter` to specify which record(s) to update. Here is an example query setting the :code:`category` column to :code:`child` for all people below a certain age. + +.. tabs:: + + .. code-tab:: http + + PATCH /people?age=lt.13 HTTP/1.1 + + { "category": "child" } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/people?age=lt.13" \ + -X PATCH -H "Content-Type: application/json" \ + -d '{ "category": "child" }' + +Updates also support :code:`Prefer: return=representation` plus :ref:`v_filter`. + +.. warning:: + + Beware of accidentally updating every row in a table. To learn to prevent that see :ref:`block_fulltable`. + +.. _upsert: + +Upsert +====== + +You can make an upsert with :code:`POST` and the :code:`Prefer: resolution=merge-duplicates` header: + +.. tabs:: + + ..
code-tab:: http + + POST /employees HTTP/1.1 + Prefer: resolution=merge-duplicates + + [ + { "id": 1, "name": "Old employee 1", "salary": 30000 }, + { "id": 2, "name": "Old employee 2", "salary": 42000 }, + { "id": 3, "name": "New employee 3", "salary": 50000 } + ] + + .. code-tab:: bash Curl + + curl "http://localhost:3000/employees" \ + -X POST -H "Content-Type: application/json" \ + -H "Prefer: resolution=merge-duplicates" \ + -d @- << EOF + [ + { "id": 1, "name": "Old employee 1", "salary": 30000 }, + { "id": 2, "name": "Old employee 2", "salary": 42000 }, + { "id": 3, "name": "New employee 3", "salary": 50000 } + ] + EOF + +By default, upsert operates based on the primary key columns; you must specify all of them. You can also choose to ignore the duplicates with :code:`Prefer: resolution=ignore-duplicates`. This works best when the primary key is natural, but it's also possible to use it if the primary key is surrogate (example: "id serial primary key"). For more details read `this issue `_. + +.. important:: + After creating a table or changing its primary key, you must refresh PostgREST's schema cache for upsert to work properly. To learn how to refresh the cache see :ref:`schema_reloading`. + +.. _on_conflict: + +On Conflict +----------- + +By specifying the ``on_conflict`` query parameter, you can make upsert work on a column(s) that has a UNIQUE constraint. + +.. tabs:: + + .. code-tab:: http + + POST /employees?on_conflict=name HTTP/1.1 + Prefer: resolution=merge-duplicates + + [ + { "name": "Old employee 1", "salary": 40000 }, + { "name": "Old employee 2", "salary": 52000 }, + { "name": "New employee 3", "salary": 60000 } + ] + + ..
code-tab:: bash Curl + + curl "http://localhost:3000/employees?on_conflict=name" \ + -X POST -H "Content-Type: application/json" \ + -H "Prefer: resolution=merge-duplicates" \ + -d @- << EOF + [ + { "name": "Old employee 1", "salary": 40000 }, + { "name": "Old employee 2", "salary": 52000 }, + { "name": "New employee 3", "salary": 60000 } + ] + EOF + +.. _upsert_put: + +PUT +--- + +A single row upsert can be done by using :code:`PUT` and filtering the primary key columns with :code:`eq`: + +.. tabs:: + + .. code-tab:: http + + PUT /employees?id=eq.4 HTTP/1.1 + + { "id": 4, "name": "Sara B.", "salary": 60000 } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/employees?id=eq.4" \ + -X PUT -H "Content-Type: application/json" \ + -d '{ "id": 4, "name": "Sara B.", "salary": 60000 }' + +All the columns must be specified in the request body, including the primary key columns. + +.. _delete: + +Deletions +========= + +To delete rows in a table, use the DELETE verb plus :ref:`h_filter`. For instance, deleting inactive users: + +.. tabs:: + + .. code-tab:: http + + DELETE /user?active=is.false HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/user?active=is.false" -X DELETE + +Deletions also support :code:`Prefer: return=representation` plus :ref:`v_filter`. + +.. tabs:: + + .. code-tab:: http + + DELETE /user?id=eq.1 HTTP/1.1 + Prefer: return=representation + + .. code-tab:: bash Curl + + curl "http://localhost:3000/user?id=eq.1" -X DELETE \ + -H "Prefer: return=representation" + +.. code-block:: json + + {"id": 1, "email": "johndoe@email.com"} + +.. warning:: + + Beware of accidentally deleting all rows in a table. To learn to prevent that see :ref:`block_fulltable`. + +.. _limited_update_delete: + +Limited Updates/Deletions +========================= + +You can limit the number of affected rows by :ref:`update` or :ref:`delete` with the ``limit`` query parameter. For this, you must add an explicit ``order`` on a unique column (or columns). + +..
tabs:: + + .. code-tab:: http + + PATCH /users?limit=10&order=id&last_login=lt.2017-01-01 HTTP/1.1 + + { "status": "inactive" } + + .. code-tab:: bash Curl + + curl -X PATCH "http://localhost:3000/users?limit=10&order=id&last_login=lt.2017-01-01" \ + -H "Content-Type: application/json" \ + -d '{ "status": "inactive" }' + +.. tabs:: + + .. code-tab:: http + + DELETE /users?limit=10&order=id&status=eq.inactive HTTP/1.1 + + .. code-tab:: bash Curl + + curl -X DELETE "http://localhost:3000/users?limit=10&order=id&status=eq.inactive" + +If your table has no unique columns, you can use the `ctid `_ system column. + +Using ``offset`` to target a different subset of rows is also possible. + +.. note:: + + There is no native ``UPDATE...LIMIT`` or ``DELETE...LIMIT`` support in PostgreSQL; the generated query simulates that behavior and is based on `this Crunchy Data blog post `_. + +.. _custom_queries: + +Custom Queries +============== + +The PostgREST URL grammar limits the kinds of queries clients can perform. It prevents arbitrary, potentially poorly constructed and slow client queries. It's good for quality of service, but means database administrators must create custom views and stored procedures to provide richer endpoints. The most common causes for custom endpoints are: + +* Table unions +* More complicated joins than those provided by `Resource Embedding`_ +* Geo-spatial queries that require an argument, like "points near (lat,lon)" + +.. _s_procs: + +Stored Procedures +================= + +Every stored procedure in the API-exposed database schema is accessible under the :code:`/rpc` prefix. The API endpoint supports POST (and in some cases GET) to execute the function. + +.. tabs:: + + .. code-tab:: http + + POST /rpc/function_name HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/function_name" -X POST + +Such functions can perform any operations allowed by PostgreSQL (read data, modify data, and even DDL operations).
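To sketch that flexibility, a function can both modify data and report on what it did; the ``set_default_ratings`` name is hypothetical, with the ``rating`` column assumed from the earlier ``films`` examples:

```sql
-- Hypothetical function: fills in missing ratings and returns the number
-- of rows it touched when called via POST /rpc/set_default_ratings.
CREATE FUNCTION set_default_ratings() RETURNS integer AS $$
  WITH updated AS (
    UPDATE films SET rating = 0 WHERE rating IS NULL
    RETURNING 1
  )
  SELECT count(*)::integer FROM updated;
$$ LANGUAGE SQL;
```

Because the function is ``VOLATILE`` by default, it can only be called with POST, not GET (see the next sections).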
+ +To supply arguments in an API call, include a JSON object in the request payload and each key/value of the object will become an argument. + +For instance, assume we have created this function in the database. + +.. code-block:: plpgsql + + CREATE FUNCTION add_them(a integer, b integer) + RETURNS integer AS $$ + SELECT a + b; + $$ LANGUAGE SQL IMMUTABLE; + +.. important:: + + Whenever you create or change a function you must refresh PostgREST's schema cache. See the section :ref:`schema_reloading`. + +The client can call it by posting an object like + +.. tabs:: + + .. code-tab:: http + + POST /rpc/add_them HTTP/1.1 + + { "a": 1, "b": 2 } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/add_them" \ + -X POST -H "Content-Type: application/json" \ + -d '{ "a": 1, "b": 2 }' + +.. code-block:: json + + 3 + + +Procedures must be declared with named parameters. Procedures declared like + +.. code-block:: plpgsql + + CREATE FUNCTION non_named_args(integer, text, integer) ... + +cannot be called with PostgREST, since we use `named notation `_ internally. + +Note that PostgreSQL converts identifier names to lowercase unless you quote them like: + +.. code-block:: postgres + + CREATE FUNCTION "someFunc"("someParam" text) ... + +PostgreSQL has four procedural languages that are part of the core distribution: PL/pgSQL, PL/Tcl, PL/Perl, and PL/Python. There are many other procedural languages distributed as additional extensions. Also, plain SQL can be used to write functions (as shown in the example above). + +.. note:: + + Why the ``/rpc`` prefix? One reason is to avoid name collisions between views and procedures. It also helps emphasize to API consumers that these functions are not normal restful things. The functions can have arbitrary and surprising behavior, not the standard "post creates a resource" thing that users expect from the other routes. 
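To round out the quoting example above, a complete version of such a function could look like this (the body is illustrative):

```sql
-- The quotes preserve case, so the endpoint is /rpc/someFunc and the
-- JSON payload key must be "someParam":
--   curl "http://localhost:3000/rpc/someFunc" -X POST \
--     -H "Content-Type: application/json" -d '{ "someParam": "hi" }'
CREATE FUNCTION "someFunc"("someParam" text) RETURNS text AS $$
  SELECT "someParam";
$$ LANGUAGE SQL IMMUTABLE;
```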
+ +Immutable and stable functions +------------------------------ + +PostgREST executes POST requests in a read/write transaction except for functions marked as ``IMMUTABLE`` or ``STABLE``. Those must not modify the database and are executed in a read-only transaction compatible with read replicas. + +Procedures that do not modify the database can be called with the HTTP GET verb as well, if desired. PostgREST executes all GET requests in a read-only transaction. Modifying the database inside read-only transactions is not possible, so calling volatile functions with GET will fail. + +.. note:: + + The `volatility marker `_ is a promise about the behavior of the function. PostgreSQL will let you mark a function that modifies the database as ``IMMUTABLE`` or ``STABLE`` without failure. However, because of the read-only transaction this would still fail with PostgREST. + +Because ``add_them`` is ``IMMUTABLE``, we can alternatively call the function with a GET request: + +.. tabs:: + + .. code-tab:: http + + GET /rpc/add_them?a=1&b=2 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/add_them?a=1&b=2" + +The function parameter names match the JSON object keys in the POST case; in the GET case they match the query parameters ``?a=1&b=2``. + +.. _s_proc_single_json: + +Calling functions with a single JSON parameter +---------------------------------------------- + +You can also call a function that takes a single parameter of type JSON by sending the header :code:`Prefer: params=single-object` with your request. That way the JSON request body will be used as the single argument. + +.. code-block:: plpgsql + + CREATE FUNCTION mult_them(param json) RETURNS int AS $$ + SELECT (param->>'x')::int * (param->>'y')::int + $$ LANGUAGE SQL; + +.. tabs:: + + .. code-tab:: http + + POST /rpc/mult_them HTTP/1.1 + Prefer: params=single-object + + { "x": 4, "y": 2 } + + ..
code-tab:: bash Curl + + curl "http://localhost:3000/rpc/mult_them" \ + -X POST -H "Content-Type: application/json" \ + -H "Prefer: params=single-object" \ + -d '{ "x": 4, "y": 2 }' + +.. code-block:: json + + 8 + +.. _s_proc_single_unnamed: + +Calling functions with a single unnamed parameter +------------------------------------------------- + +You can make a POST request to a function with a single unnamed parameter to send raw ``json/jsonb``, ``bytea``, ``text`` or ``xml`` data. + +To send raw JSON, the function must have a single unnamed ``json`` or ``jsonb`` parameter and the header ``Content-Type: application/json`` must be included in the request. + +.. code-block:: plpgsql + + CREATE FUNCTION mult_them(json) RETURNS int AS $$ + SELECT ($1->>'x')::int * ($1->>'y')::int + $$ LANGUAGE SQL; + +.. tabs:: + + .. code-tab:: http + + POST /rpc/mult_them HTTP/1.1 + Content-Type: application/json + + { "x": 4, "y": 2 } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/mult_them" \ + -X POST -H "Content-Type: application/json" \ + -d '{ "x": 4, "y": 2 }' + +.. code-block:: json + + 8 + +.. note:: + + If an overloaded function has a single ``json`` or ``jsonb`` unnamed parameter, PostgREST will call this function as a fallback provided that no other overloaded function is found with the parameters sent in the POST request. + +To send raw XML, the parameter type must be ``xml`` and the header ``Content-Type: text/xml`` must be included in the request. + +To send raw binary, the parameter type must be ``bytea`` and the header ``Content-Type: application/octet-stream`` must be included in the request. + +.. code-block:: plpgsql + + CREATE TABLE files(blob bytea); + + CREATE FUNCTION upload_binary(bytea) RETURNS void AS $$ + INSERT INTO files(blob) VALUES ($1); + $$ LANGUAGE SQL; + +.. tabs:: + + .. code-tab:: http + + POST /rpc/upload_binary HTTP/1.1 + Content-Type: application/octet-stream + + file_name.ext + + .. 
code-tab:: bash Curl + + curl "http://localhost:3000/rpc/upload_binary" \ + -X POST -H "Content-Type: application/octet-stream" \ + --data-binary "@file_name.ext" + +.. code-block:: http + + HTTP/1.1 200 OK + + [ ... ] + +To send raw text, the parameter type must be ``text`` and the header ``Content-Type: text/plain`` must be included in the request. + +.. _s_procs_array: + +Calling functions with array parameters +--------------------------------------- + +You can call a function that takes an array parameter: + +.. code-block:: postgres + + create function plus_one(arr int[]) returns int[] as $$ + SELECT array_agg(n + 1) FROM unnest($1) AS n; + $$ language sql; + +.. tabs:: + + .. code-tab:: http + + POST /rpc/plus_one HTTP/1.1 + Content-Type: application/json + + {"arr": [1,2,3,4]} + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/plus_one" \ + -X POST -H "Content-Type: application/json" \ + -d '{"arr": [1,2,3,4]}' + +.. code-block:: json + + [2,3,4,5] + +For calling the function with GET, you can pass the array as an `array literal `_, +as in ``{1,2,3,4}``. Note that the curly brackets have to be urlencoded (``{`` is ``%7B`` and ``}`` is ``%7D``). + +.. tabs:: + + .. code-tab:: http + + GET /rpc/plus_one?arr=%7B1,2,3,4%7D HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/plus_one?arr=%7B1,2,3,4%7D" + +.. note:: + + For versions prior to PostgreSQL 10, to pass a PostgreSQL native array on a POST payload, you need to quote it and use an array literal: + + .. tabs:: + + .. code-tab:: http + + POST /rpc/plus_one HTTP/1.1 + + { "arr": "{1,2,3,4}" } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/plus_one" \ + -X POST -H "Content-Type: application/json" \ + -d '{ "arr": "{1,2,3,4}" }' + + In these versions we recommend using function parameters of type JSON to accept arrays from the client. + +..
_s_procs_variadic: + +Calling variadic functions +-------------------------- + +You can call a variadic function by passing a JSON array in a POST request: + +.. code-block:: postgres + + create function plus_one(variadic v int[]) returns int[] as $$ + SELECT array_agg(n + 1) FROM unnest($1) AS n; + $$ language sql; + +.. tabs:: + + .. code-tab:: http + + POST /rpc/plus_one HTTP/1.1 + Content-Type: application/json + + {"v": [1,2,3,4]} + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/plus_one" \ + -X POST -H "Content-Type: application/json" \ + -d '{"v": [1,2,3,4]}' + +.. code-block:: json + + [2,3,4,5] + +In a GET request, you can repeat the same parameter name: + +.. tabs:: + + .. code-tab:: http + + GET /rpc/plus_one?v=1&v=2&v=3&v=4 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/plus_one?v=1&v=2&v=3&v=4" + +Repeating also works in POST requests with ``Content-Type: application/x-www-form-urlencoded``: + +.. tabs:: + + .. code-tab:: http + + POST /rpc/plus_one HTTP/1.1 + Content-Type: application/x-www-form-urlencoded + + v=1&v=2&v=3&v=4 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/plus_one" \ + -X POST -H "Content-Type: application/x-www-form-urlencoded" \ + -d 'v=1&v=2&v=3&v=4' + +Scalar functions +---------------- + +PostgREST will detect if the function is scalar or table-valued and will shape the response format accordingly: + +.. tabs:: + + .. code-tab:: http + + GET /rpc/add_them?a=1&b=2 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/add_them?a=1&b=2" + +.. code-block:: json + + 3 + +.. tabs:: + + .. code-tab:: http + + GET /rpc/best_films_2017 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/best_films_2017" + +.. 
code-block:: json + + [ + { "title": "Okja", "rating": 7.4}, + { "title": "Call me by your name", "rating": 8}, + { "title": "Blade Runner 2049", "rating": 8.1} + ] + +To manually choose a return format such as binary, plain text or XML, see the section :ref:`scalar_return_formats`. + + +.. _bulk_call: + +Bulk Call + --------- + +It's possible to call a function in a bulk way, analogously to :ref:`bulk_insert`. To do this, you need to add the +``Prefer: params=multiple-objects`` header to your request. + +.. tabs:: + + .. code-tab:: http + + POST /rpc/add_them HTTP/1.1 + Content-Type: text/csv + Prefer: params=multiple-objects + + a,b + 1,2 + 3,4 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/add_them" \ + -X POST -H "Content-Type: text/csv" \ + -H "Prefer: params=multiple-objects" \ + --data-binary @- << EOF + a,b + 1,2 + 3,4 + EOF + +.. code-block:: json + + [ 3, 7 ] + +If you have large payloads to process, it's preferable to instead use a function with an :ref:`array parameter ` or JSON parameter, as this will be more efficient. + +It's also possible to :ref:`Specify Columns ` on function calls. + +Function filters + ---------------- + +A function that returns a table type can have its response shaped using the same filters as the ones used for tables and views: + +.. code-block:: postgres + + CREATE FUNCTION best_films_2017() RETURNS SETOF films .. + +.. tabs:: + + .. code-tab:: http + + GET /rpc/best_films_2017?select=title,director:directors(*) HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/best_films_2017?select=title,director:directors(*)" + +.. tabs:: + + .. code-tab:: http + + GET /rpc/best_films_2017?rating=gt.8&order=title.desc HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/best_films_2017?rating=gt.8&order=title.desc" + +Overloaded functions + -------------------- + +You can call overloaded functions with a different number of arguments. + +.. 
code-block:: postgres + + CREATE FUNCTION rental_duration(customer_id integer) .. + + CREATE FUNCTION rental_duration(customer_id integer, from_date date) .. + +.. tabs:: + + .. code-tab:: http + + GET /rpc/rental_duration?customer_id=232 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/rental_duration?customer_id=232" + +.. tabs:: + + .. code-tab:: http + + GET /rpc/rental_duration?customer_id=232&from_date=2018-07-01 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/rental_duration?customer_id=232&from_date=2018-07-01" + +.. important:: + + Overloaded functions with the same argument names but different types are not supported. + +.. _scalar_return_formats: + +Response Formats For Scalar Responses + ===================================== + +For scalar return values such as + +* single-column selects on tables or +* scalar functions, + +you can set the additional content types + +* ``application/octet-stream`` +* ``text/plain`` +* ``text/xml`` + +as part of the :code:`Accept` header. + +Example 1: If you want to return raw binary data from a :code:`bytea` column, you must specify :code:`application/octet-stream` as part of the :code:`Accept` header +and select a single column :code:`?select=bin_data`. + +.. tabs:: + + .. code-tab:: http + + GET /items?select=bin_data&id=eq.1 HTTP/1.1 + Accept: application/octet-stream + + .. code-tab:: bash Curl + + curl "http://localhost:3000/items?select=bin_data&id=eq.1" \ + -H "Accept: application/octet-stream" + +Example 2: You can request XML output when calling `Stored Procedures`_ that return a scalar value of type ``xml``. You are not forced to use ``select`` for this case. + +.. code-block:: postgres + + CREATE FUNCTION generate_xml_content(..) RETURNS xml .. + +.. tabs:: + + .. code-tab:: http + + POST /rpc/generate_xml_content HTTP/1.1 + Accept: text/xml + + .. 
code-tab:: bash Curl + + curl "http://localhost:3000/rpc/generate_xml_content" \ + -X POST -H "Accept: text/xml" + +Example 3: If the stored procedure returns non-scalar values, you need to do a :code:`select` in the same way as for GET binary output. + +.. code-block:: sql + + CREATE FUNCTION get_descriptions(..) RETURNS TABLE(id int, description text) .. + +.. tabs:: + + .. code-tab:: http + + POST /rpc/get_descriptions?select=description HTTP/1.1 + Accept: text/plain + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/get_descriptions?select=description" \ + -X POST -H "Accept: text/plain" + +.. note:: + + If more than one row would be returned, the binary/plain-text/xml results will be concatenated with no delimiter. + + +.. _open-api: + +OpenAPI Support + =============== + +Every API hosted by PostgREST automatically serves a full `OpenAPI `_ description on the root path. This provides a list of all endpoints (tables, foreign tables, views, functions), along with supported HTTP verbs and example payloads. + +.. note:: + + By default, this output depends on the permissions of the role that is contained in the JWT role claim (or the :ref:`db-anon-role` if no JWT is sent). If you need to show all the endpoints disregarding the role's permissions, set the :ref:`openapi-mode` config to :code:`ignore-privileges`. + +For extra customization, the OpenAPI output contains a "description" field for every `SQL comment `_ on any database object. For instance, + +.. 
code-block:: sql + + COMMENT ON SCHEMA mammals IS + 'A warm-blooded vertebrate animal of a class that is distinguished by the secretion of milk by females for the nourishment of the young'; + + COMMENT ON TABLE monotremes IS + 'Freakish mammals lay the best eggs for breakfast'; + + COMMENT ON COLUMN monotremes.has_venomous_claw IS + 'Sometimes breakfast is not worth it'; + +These unsavory comments will appear in the generated JSON as the fields ``info.description``, ``definitions.monotremes.description`` and ``definitions.monotremes.properties.has_venomous_claw.description``. + +If you wish to generate a ``summary`` field as well, you can do so with a multi-line comment: the ``summary`` will be the first line and the ``description`` the lines that follow it: + +.. code-block:: plpgsql + + COMMENT ON TABLE entities IS + $$Entities summary + + Entities description that + spans + multiple lines$$; + +If you need to include the ``security`` and ``securityDefinitions`` options, set the :ref:`openapi-security-active` configuration to ``true``. + +You can use a tool like `Swagger UI `_ to create beautiful documentation from the description and to host an interactive web-based dashboard. The dashboard allows developers to make requests against a live PostgREST server, and provides guidance with request headers and example request bodies. + +.. important:: + + The OpenAPI information can go out of date as the schema changes under a running server. To learn how to refresh the cache see :ref:`schema_reloading`. + +.. _options_requests: + +OPTIONS + ======= + +You can verify which HTTP methods are allowed on endpoints for tables and views by using an OPTIONS request. These methods are allowed depending on what operations *can* be done on the table or view, not on the database permissions assigned to them. + +For a table named ``people``, OPTIONS would show: + +.. tabs:: + + .. code-tab:: http + + OPTIONS /people HTTP/1.1 + + .. 
code-tab:: bash Curl + + curl "http://localhost:3000/people" -X OPTIONS -i + +.. code-block:: http + + HTTP/1.1 200 OK + Allow: OPTIONS,GET,HEAD,POST,PUT,PATCH,DELETE + +For a view, the methods are determined by the presence of INSTEAD OF TRIGGERS. + +.. table:: + :widths: auto + + +--------------------+-------------------------------------------------------------------------------------------------+ + | Method allowed | View's requirements | + +====================+=================================================================================================+ + | OPTIONS, GET, HEAD | None (Always allowed) | + +--------------------+-------------------------------------------------------------------------------------------------+ + | POST | INSTEAD OF INSERT TRIGGER | + +--------------------+-------------------------------------------------------------------------------------------------+ + | PUT | INSTEAD OF INSERT TRIGGER, INSTEAD OF UPDATE TRIGGER, also requires the presence of a | + | | primary key | + +--------------------+-------------------------------------------------------------------------------------------------+ + | PATCH | INSTEAD OF UPDATE TRIGGER | + +--------------------+-------------------------------------------------------------------------------------------------+ + | DELETE | INSTEAD OF DELETE TRIGGER | + +--------------------+-------------------------------------------------------------------------------------------------+ + | All the above methods are allowed for | + | `auto-updatable views `_ | + +--------------------+-------------------------------------------------------------------------------------------------+ + +For functions, the methods depend on their volatility. ``VOLATILE`` functions allow only ``OPTIONS,POST``, whereas the rest also permit ``GET,HEAD``. + +.. 
important:: + + Whenever you add or remove tables or views, or modify a view's INSTEAD OF TRIGGERS on the database, you must refresh PostgREST's schema cache for OPTIONS requests to work properly. See the section :ref:`schema_reloading`. + +CORS + ---- + +PostgREST applies highly permissive cross-origin resource sharing (CORS) settings, which is why it accepts Ajax requests from any domain. + +It also handles `preflight requests `_ done by the browser, which are cached using the returned ``Access-Control-Max-Age: 86400`` header (86400 seconds = 24 hours). This is useful to reduce the latency of subsequent requests. + +A ``POST`` preflight request would look like this: + +.. tabs:: + + .. code-tab:: http + + OPTIONS /items HTTP/1.1 + Origin: http://example.com + Access-Control-Request-Method: POST + Access-Control-Request-Headers: Content-Type + + .. code-tab:: bash Curl + + curl -i "http://localhost:3000/items" \ + -X OPTIONS \ + -H "Origin: http://example.com" \ + -H "Access-Control-Request-Method: POST" \ + -H "Access-Control-Request-Headers: Content-Type" + +.. code-block:: http + + HTTP/1.1 200 OK + Access-Control-Allow-Origin: http://example.com + Access-Control-Allow-Credentials: true + Access-Control-Allow-Methods: GET, POST, PATCH, PUT, DELETE, OPTIONS, HEAD + Access-Control-Allow-Headers: Authorization, Content-Type, Accept, Accept-Language, Content-Language + Access-Control-Max-Age: 86400 + +.. _multiple-schemas: + +Switching Schemas + ================= + +You can switch schemas at runtime with the ``Accept-Profile`` and ``Content-Profile`` headers. You can only switch to a schema that is included in :ref:`db-schemas`. + +For GET or HEAD, the schema to be used can be selected through the ``Accept-Profile`` header: + +.. tabs:: + + .. code-tab:: http + + GET /items HTTP/1.1 + Accept-Profile: tenant2 + + .. 
code-tab:: bash Curl + + curl "http://localhost:3000/items" \ + -H "Accept-Profile: tenant2" + +For POST, PATCH, PUT and DELETE, you can use the ``Content-Profile`` header for selecting the schema: + +.. tabs:: + + .. code-tab:: http + + POST /items HTTP/1.1 + Content-Profile: tenant2 + + {...} + + .. code-tab:: bash Curl + + curl "http://localhost:3000/items" \ + -X POST -H "Content-Type: application/json" \ + -H "Content-Profile: tenant2" \ + -d '{...}' + +You can also select the schema for :ref:`s_procs` and :ref:`open-api`. + +.. note:: + + These headers are based on the nascent "Content Negotiation by Profile" spec: https://www.w3.org/TR/dx-prof-conneg + +.. _http_context: + +HTTP Context +============ + +.. _guc_req_headers_cookies_claims: + +Accessing Request Headers, Cookies and JWT claims +------------------------------------------------- + +You can access request headers, cookies and JWT claims by reading GUC variables set by PostgREST per request. They are named :code:`request.headers`, :code:`request.cookies` and :code:`request.jwt.claims`. + +.. code-block:: postgresql + + -- To read the value of the User-Agent request header: + SELECT current_setting('request.headers', true)::json->>'user-agent'; + + -- To read the value of sessionId in a cookie: + SELECT current_setting('request.cookies', true)::json->>'sessionId'; + + -- To read the value of the email claim in a jwt: + SELECT current_setting('request.jwt.claims', true)::json->>'email'; + + -- To get all the headers sent in the request + SELECT current_setting('request.headers', true)::json; + +.. note:: + + The ``role`` in ``request.jwt.claims`` defaults to the value of :ref:`db-anon-role`. + +.. _guc_legacy_names: + +Legacy GUC variable names +~~~~~~~~~~~~~~~~~~~~~~~~~ + +For PostgreSQL versions below 14, PostgREST will take into consideration the :ref:`db-use-legacy-gucs` config, which is set to true by default. 
This means that the interface for accessing these GUCs is `the same as in older versions `_. You can opt in to use the JSON GUCs mentioned above by setting ``db-use-legacy-gucs`` to false. + +.. _guc_req_path_method: + +Accessing Request Path and Method + --------------------------------- + +You can also access the request path and method with :code:`request.path` and :code:`request.method`. + +.. code-block:: postgresql + + -- You can get the path of the request with + SELECT current_setting('request.path', true); + + -- You can get the method of the request with + SELECT current_setting('request.method', true); + +.. _guc_resp_hdrs: + +Setting Response Headers + ------------------------ + +PostgREST reads the ``response.headers`` SQL variable to add extra headers to the HTTP response. Stored procedures can modify this variable. For instance, this statement would add caching headers to the response: + +.. code-block:: sql + + -- tell client to cache response for two days + + SELECT set_config('response.headers', + '[{"Cache-Control": "public"}, {"Cache-Control": "max-age=259200"}]', true); + +Notice that the variable should be set to an *array* of single-key objects rather than a single multiple-key object. This is because headers such as ``Cache-Control`` or ``Set-Cookie`` need to be repeated when setting multiple values and an object would not allow the repeated key. + +.. note:: + + PostgREST-provided headers such as ``Content-Type``, ``Location``, etc. can be overridden this way. Note that irrespective of the overridden ``Content-Type`` response header, the content will still be converted to JSON, unless you also set :ref:`raw-media-types` to something like ``text/html``. + +.. _pre_req_headers: + +Setting headers via pre-request + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +By using a :ref:`db-pre-request` function, you can add headers to GET/POST/PATCH/PUT/DELETE responses. 
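The value assigned to ``response.headers`` must use the array-of-single-key-objects shape described above. A plain-Python sketch (illustrative only, not part of PostgREST) shows why a single JSON object would not work:

```python
import json

# A single JSON object cannot hold a repeated key: when parsed, the
# later value silently replaces the earlier one, losing a header.
collapsed = json.loads('{"Cache-Control": "public", "Cache-Control": "max-age=259200"}')
print(collapsed)  # {'Cache-Control': 'max-age=259200'}

# An array of single-key objects preserves every header, including
# repeats, which headers like Cache-Control or Set-Cookie require.
headers = [{"Cache-Control": "public"}, {"Cache-Control": "max-age=259200"}]
pairs = [(k, v) for obj in headers for k, v in obj.items()]
print(pairs)  # [('Cache-Control', 'public'), ('Cache-Control', 'max-age=259200')]
```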
+ +As an example, let's add some cache headers for all requests that come from an Internet Explorer (6 or 7) browser. + +.. code-block:: postgresql + + create or replace function custom_headers() returns void as $$ + declare + user_agent text := current_setting('request.headers', true)::json->>'user-agent'; + begin + if user_agent similar to '%MSIE (6.0|7.0)%' then + perform set_config('response.headers', + '[{"Cache-Control": "no-cache, no-store, must-revalidate"}]', false); + end if; + end; $$ language plpgsql; + + -- set this function in postgrest.conf + -- db-pre-request = custom_headers + +Now when you make a GET request to a table or view, you'll get the cache headers. + +.. tabs:: + + .. code-tab:: http + + GET /people HTTP/1.1 + User-Agent: Mozilla/4.01 (compatible; MSIE 6.0; Windows NT 5.1) + + .. code-tab:: bash Curl + + curl "http://localhost:3000/people" -i \ + -H "User-Agent: Mozilla/4.01 (compatible; MSIE 6.0; Windows NT 5.1)" + +.. code-block:: http + + HTTP/1.1 200 OK + Content-Type: application/json; charset=utf-8 + Cache-Control: no-cache, no-store, must-revalidate + +.. _guc_resp_status: + +Setting Response Status Code + ---------------------------- + +You can set the ``response.status`` GUC to override the default status code PostgREST provides. For instance, the following function would replace the default ``200`` status code. + +.. code-block:: postgres + + create or replace function teapot() returns json as $$ + begin + perform set_config('response.status', '418', true); + return json_build_object('message', 'The requested entity body is short and stout.', + 'hint', 'Tip it over and pour it out.'); + end; + $$ language plpgsql; + +.. tabs:: + + .. code-tab:: http + + GET /rpc/teapot HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/teapot" -i + +.. code-block:: http + + HTTP/1.1 418 I'm a teapot + + { + "message" : "The requested entity body is short and stout.", + "hint" : "Tip it over and pour it out." 
+ } + +If the status code is standard, PostgREST will complete the status message (**I'm a teapot** in this example). + +.. _raise_error: + +Raise errors with HTTP Status Codes + ----------------------------------- + +Stored procedures can return non-200 HTTP status codes by raising SQL exceptions. For instance, here's a saucy function that always responds with an error: + +.. code-block:: postgresql + + CREATE OR REPLACE FUNCTION just_fail() RETURNS void + LANGUAGE plpgsql + AS $$ + BEGIN + RAISE EXCEPTION 'I refuse!' + USING DETAIL = 'Pretty simple', + HINT = 'There is nothing you can do.'; + END + $$; + +Calling the function returns HTTP 400 with the body + +.. code-block:: json + + { + "message":"I refuse!", + "details":"Pretty simple", + "hint":"There is nothing you can do.", + "code":"P0001" + } + +.. note:: + + Keep in mind that ``RAISE EXCEPTION`` will abort the transaction and roll back all changes. If you don't want this, you can instead use the :ref:`response.status GUC `. + +One way to customize the HTTP status code is by raising particular exceptions according to the PostgREST :ref:`error to status code mapping `. For example, :code:`RAISE insufficient_privilege` will respond with HTTP 401/403 as appropriate. + +For even greater control of the HTTP status code, raise an exception of the ``PTxyz`` type. For instance, to respond with HTTP 402, raise 'PT402': + +.. code-block:: sql + + RAISE sqlstate 'PT402' using + message = 'Payment Required', + detail = 'Quota exceeded', + hint = 'Upgrade your plan'; + +Returns: + +.. code-block:: http + + HTTP/1.1 402 Payment Required + Content-Type: application/json; charset=utf-8 + + { + "message": "Payment Required", + "details": "Quota exceeded", + "hint": "Upgrade your plan", + "code": "PT402" + } + +.. _explain_plan: + +Execution plan + -------------- + +You can get the `EXPLAIN execution plan `_ of a request by adding the ``Accept: application/vnd.pgrst.plan`` header when :ref:`db-plan-enabled` is set to ``true``. 
+ +.. tabs:: + + .. code-tab:: http + + GET /users?select=name&order=id HTTP/1.1 + Accept: application/vnd.pgrst.plan + + .. code-tab:: bash Curl + + curl "http://localhost:3000/users?select=name&order=id" \ + -H "Accept: application/vnd.pgrst.plan" + +.. code-block:: psql + + Aggregate (cost=73.65..73.68 rows=1 width=112) + -> Index Scan using users_pkey on users (cost=0.15..60.90 rows=850 width=36) + +The output of the plan is generated in ``text`` format by default but you can change it to JSON by using the ``+json`` suffix. + +.. tabs:: + + .. code-tab:: http + + GET /users?select=name&order=id HTTP/1.1 + Accept: application/vnd.pgrst.plan+json + + .. code-tab:: bash Curl + + curl "http://localhost:3000/users?select=name&order=id" \ + -H "Accept: application/vnd.pgrst.plan+json" + +.. code-block:: json + + [ + { + "Plan": { + "Node Type": "Aggregate", + "Strategy": "Plain", + "Partial Mode": "Simple", + "Parallel Aware": false, + "Async Capable": false, + "Startup Cost": 73.65, + "Total Cost": 73.68, + "Plan Rows": 1, + "Plan Width": 112, + "Plans": [ + { + "Node Type": "Index Scan", + "Parent Relationship": "Outer", + "Parallel Aware": false, + "Async Capable": false, + "Scan Direction": "Forward", + "Index Name": "users_pkey", + "Relation Name": "users", + "Alias": "users", + "Startup Cost": 0.15, + "Total Cost": 60.90, + "Plan Rows": 850, + "Plan Width": 36 + } + ] + } + } + ] + +By default the plan is assumed to generate the JSON representation of a resource (``application/json``), but you can obtain the plan for the :ref:`different representations that PostgREST supports ` by adding them to the ``for`` parameter. For instance, to obtain the plan for ``text/xml``, you would use ``Accept: application/vnd.pgrst.plan; for="text/xml"``. + +The other available parameters are ``analyze``, ``verbose``, ``settings``, ``buffers`` and ``wal``, which correspond to the `EXPLAIN command options `_. 
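As a client-side illustration, the ``+json`` plan above can be inspected programmatically, e.g. to read the planner's cost estimate (a plain-Python sketch; ``plan`` is a trimmed copy of the response shown and ``total_cost`` is a hypothetical helper, not part of PostgREST):

```python
import json

# Trimmed copy of an Accept: application/vnd.pgrst.plan+json response
plan = json.loads("""
[
  {
    "Plan": {
      "Node Type": "Aggregate",
      "Total Cost": 73.68,
      "Plans": [
        {
          "Node Type": "Index Scan",
          "Index Name": "users_pkey",
          "Total Cost": 60.90
        }
      ]
    }
  }
]
""")

def total_cost(explain_output: list) -> float:
    """Cost estimate of the root plan node (the same number shown in text format)."""
    return explain_output[0]["Plan"]["Total Cost"]

print(total_cost(plan))  # 73.68
```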
To use the ``analyze`` and ``wal`` parameters for example, you would add them like ``Accept: application/vnd.pgrst.plan; options=analyze|wal``. + +Note that akin to the EXPLAIN command, the changes will be committed when using the ``analyze`` option. To avoid this, you can use the :ref:`db-tx-end` and the ``Prefer: tx=rollback`` header. diff --git a/docs/auth.rst b/docs/auth.rst new file mode 100644 index 0000000000..eeb55c0b05 --- /dev/null +++ b/docs/auth.rst @@ -0,0 +1,496 @@ +.. _roles: + +Overview of Role System +======================= + +PostgREST is designed to keep the database at the center of API security. All authorization happens through database roles and permissions. It is PostgREST's job to **authenticate** requests -- i.e. verify that a client is who they say they are -- and then let the database **authorize** client actions. + +Authentication Sequence +----------------------- + +There are three types of roles used by PostgREST, the **authenticator**, **anonymous** and **user** roles. The database administrator creates these roles and configures PostgREST to use them. + +.. image:: _static/security-roles.png + +The authenticator should be created :code:`NOINHERIT` and configured in the database to have very limited access. It is a chameleon whose job is to "become" other users to service authenticated HTTP requests. The picture below shows how the server handles authentication. If auth succeeds, it switches into the user role specified by the request, otherwise it switches into the anonymous role (if it's set in :ref:`db-anon-role`). + +.. image:: _static/security-anon-choice.png + +Here are the technical details. We use `JSON Web Tokens `_ to authenticate API requests. As you'll recall a JWT contains a list of cryptographically signed claims. All claims are allowed but PostgREST cares specifically about a claim called role. + +.. 
code:: json + + { + "role": "user123" + } + +When a request contains a valid JWT with a role claim, PostgREST will switch to the database role with that name for the duration of the HTTP request. + +.. code:: sql + + SET LOCAL ROLE user123; + +Note that the database administrator must allow the authenticator role to switch into this user by previously executing + +.. code:: sql + + GRANT user123 TO authenticator; + +If the client included no JWT (or one without a role claim) then PostgREST switches into the anonymous role whose actual database-specific name, like that of the authenticator role, is specified in the PostgREST server configuration file. The database administrator must set anonymous role permissions correctly to prevent anonymous users from seeing or changing things they shouldn't. + +Users and Groups + ---------------- + +PostgreSQL manages database access permissions using the concept of roles. A role can be thought of as either a database user, or a group of database users, depending on how the role is set up. + +Roles for Each Web User + ~~~~~~~~~~~~~~~~~~~~~~~ + +PostgREST can accommodate either viewpoint. If you treat a role as a single user then the JWT-based role switching described above does most of what you need. When an authenticated user makes a request, PostgREST will switch into the role for that user, which in addition to restricting queries, is available to SQL through the :code:`current_user` variable. + +You can use row-level security to flexibly restrict visibility and access for the current user. Here is an `example `_ from Tomas Vondra, a chat table storing messages sent between users. Users can insert rows into it to send messages to other users, and query it to see messages sent to them by other users. + +.. 
code-block:: postgres + + CREATE TABLE chat ( + message_uuid UUID PRIMARY KEY DEFAULT uuid_generate_v4(), + message_time TIMESTAMP NOT NULL DEFAULT now(), + message_from NAME NOT NULL DEFAULT current_user, + message_to NAME NOT NULL, + message_subject VARCHAR(64) NOT NULL, + message_body TEXT + ); + + ALTER TABLE chat ENABLE ROW LEVEL SECURITY; + +We want to enforce a policy that ensures a user can see only those messages sent by them or intended for them. We also want to prevent a user from forging the message_from column with another person's name. + +PostgreSQL allows us to set this policy with row-level security: + +.. code-block:: postgres + + CREATE POLICY chat_policy ON chat + USING ((message_to = current_user) OR (message_from = current_user)) + WITH CHECK (message_from = current_user); + +Anyone accessing the generated API endpoint for the chat table will see exactly the rows they should, without our needing custom imperative server-side coding. + +.. warning:: + + Roles are namespaced per-cluster rather than per-database so they may be prone to collision. + +Web Users Sharing Role + ~~~~~~~~~~~~~~~~~~~~~~ + +Alternatively, database roles can represent groups instead of (or in addition to) individual users. You may choose that all signed-in users for a web app share the role webuser. You can distinguish individual users by including extra claims in the JWT such as email. + +.. code:: json + + { + "role": "webuser", + "email": "john@doe.com" + } + +SQL code can access claims through GUC variables set by PostgREST per request. For instance, to get the email claim, use this expression: + +For PostgreSQL server version >= 14 + +.. code:: sql + + current_setting('request.jwt.claims', true)::json->>'email'; + + +For PostgreSQL server version < 14 + +.. code:: sql + + current_setting('request.jwt.claim.email', true); + +This allows JWT generation services to include extra information and your database code to react to it. 
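Outside the database, the same claim lookup can be sketched in plain Python (illustrative only; the JSON string stands in for what PostgREST stores in the ``request.jwt.claims`` GUC for the example token above):

```python
import json

# What current_setting('request.jwt.claims', true) would return
# for the example JWT payload shown above
claims_guc = '{"role": "webuser", "email": "john@doe.com"}'

claims = json.loads(claims_guc)
# ->>'email' extracts the claim as text; a missing claim maps to SQL NULL (None here)
print(claims.get("email"))  # john@doe.com
print(claims.get("exp"))    # None
```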
For instance the RLS example could be modified to use this current_setting rather than current_user. The second 'true' argument tells current_setting to return NULL if the setting is missing from the current configuration. + +Hybrid User-Group Roles +~~~~~~~~~~~~~~~~~~~~~~~ + +You can mix the group and individual role policies. For instance we could still have a webuser role and individual users which inherit from it: + +.. code-block:: postgres + + CREATE ROLE webuser NOLOGIN; + -- grant this role access to certain tables etc + + CREATE ROLE user000 NOLOGIN; + GRANT webuser TO user000; + -- now user000 can do whatever webuser can + + GRANT user000 TO authenticator; + -- allow authenticator to switch into user000 role + -- (the role itself has nologin) + +.. _custom_validation: + +Custom Validation +----------------- + +PostgREST honors the :code:`exp` claim for token expiration, rejecting expired tokens. However it does not enforce any extra constraints. An example of an extra constraint would be to immediately revoke access for a certain user. The configuration file parameter :code:`db-pre-request` specifies a stored procedure to call immediately after the authenticator switches into a new role and before the main query itself runs. + +Here's an example. In the config file specify a stored procedure: + +.. code:: ini + + db-pre-request = "public.check_user" + +In the function you can run arbitrary code to check the request and raise an exception to block it if desired. + +.. code-block:: postgres + + CREATE OR REPLACE FUNCTION check_user() RETURNS void AS $$ + BEGIN + IF current_user = 'evil_user' THEN + RAISE EXCEPTION 'No, you are evil' + USING HINT = 'Stop being so evil and maybe you can log in'; + END IF; + END + $$ LANGUAGE plpgsql; + +.. _client_auth: + +Client Auth +=========== + +To make an authenticated request the client must include an :code:`Authorization` HTTP header with the value :code:`Bearer `. For instance: + +.. tabs:: + + .. 
code-tab:: http + + GET /foo HTTP/1.1 + Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiamRvZSIsImV4cCI6MTQ3NTUxNjI1MH0.GYDZV3yM0gqvuEtJmfpplLBXSGYnke_Pvnl0tbKAjB4 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/foo" \ + -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiamRvZSIsImV4cCI6MTQ3NTUxNjI1MH0.GYDZV3yM0gqvuEtJmfpplLBXSGYnke_Pvnl0tbKAjB4" + +The ``Bearer`` header value can be used with or without capitalization (``bearer``). + +JWT Generation + -------------- + +You can create a valid JWT either from inside your database or via an external service. Each token is cryptographically signed with a secret key. In the case of symmetric cryptography the signer and verifier share the same secret passphrase. In asymmetric cryptography the signer uses the private key and the verifier the public key. PostgREST supports both symmetric and asymmetric cryptography. + +JWT from SQL + ~~~~~~~~~~~~ + +You can create JWTs in SQL using the `pgjwt extension `_. It's simple and requires only pgcrypto. If you're on an environment like Amazon RDS which doesn't support installing new extensions, you can still manually run the `SQL inside pgjwt `_ (you'll need to replace ``@extschema@`` with another schema or just delete it) which creates the functions you will need. + +Next, write a stored procedure that returns the token. The one below returns a token with a hard-coded role, which expires five minutes after it was issued. Note this function has a hard-coded secret as well. + +.. code-block:: postgres + + CREATE TYPE jwt_token AS ( + token text + ); + + CREATE FUNCTION jwt_test() RETURNS public.jwt_token AS $$ + SELECT public.sign( + row_to_json(r), 'reallyreallyreallyreallyverysafe' + ) AS token + FROM ( + SELECT + 'my_role'::text as role, + extract(epoch from now())::integer + 300 AS exp + ) r; + $$ LANGUAGE sql; + +PostgREST exposes this function to clients via a POST request to ``/rpc/jwt_test``. + +.. 
note::

   To avoid hard-coding the secret in stored procedures, save it as a property of the database.

   .. code-block:: postgres

      -- run this once
      ALTER DATABASE mydb SET "app.jwt_secret" TO 'reallyreallyreallyreallyverysafe';

      -- then all functions can refer to app.jwt_secret
      SELECT sign(
        row_to_json(r), current_setting('app.jwt_secret')
      ) AS token
      FROM ...

JWT from Auth0
~~~~~~~~~~~~~~

An external service like `Auth0 `_ can do the hard work of transforming OAuth from GitHub, Twitter, Google, etc. into a JWT suitable for PostgREST. Auth0 can also handle email signup and password reset flows.

To use Auth0, create `an application `_ for your app and `an API `_ for your PostgREST server. Auth0 supports both the HS256 and RS256 schemes for the tokens it issues for APIs. For simplicity, you may first try the HS256 scheme while creating your API on Auth0. Your application should use your PostgREST API's `API identifier `_ by setting it with the `audience parameter `_ during the authorization request. This ensures that Auth0 issues an access token for your PostgREST API. For PostgREST to verify the access token, you will need to set ``jwt-secret`` in the PostgREST config file to your API's signing secret.

.. note::

   Our code requires a database role in the JWT. To add it you need to save the database role in Auth0 `app metadata `_. Then, you will need to write `a rule `_ that extracts the role from the user's app_metadata and sets it as a `custom claim `_ in the access token. Note that you may use Auth0's `core authorization feature `_ for more complex use cases. The metadata solution is mentioned here for simplicity.

   .. 
code:: javascript

      function (user, context, callback) {

        // Follow the documentation at
        // https://postgrest.org/en/latest/configuration.html#db-role-claim-key
        // to set a custom role claim on PostgREST
        // and use it as a custom claim attribute in this rule
        const myRoleClaim = 'https://myapp.com/role';

        user.app_metadata = user.app_metadata || {};
        context.accessToken[myRoleClaim] = user.app_metadata.role;
        callback(null, user, context);
      }

.. _asym_keys:

Asymmetric Keys
~~~~~~~~~~~~~~~

As described in the :ref:`configuration` section, PostgREST accepts a ``jwt-secret`` config file parameter. If it is set to a simple string value like "reallyreallyreallyreallyverysafe" then PostgREST interprets it as an HMAC-SHA256 passphrase. However, you can also specify a literal JSON Web Key (JWK) or set. For example, you can use an RSA-256 public key encoded as a JWK:

.. code-block:: json

   {
     "alg":"RS256",
     "e":"AQAB",
     "key_ops":["verify"],
     "kty":"RSA",
     "n":"9zKNYTaYGfGm1tBMpRT6FxOYrM720GhXdettc02uyakYSEHU2IJz90G_MLlEl4-WWWYoS_QKFupw3s7aPYlaAjamG22rAnvWu-rRkP5sSSkKvud_IgKL4iE6Y2WJx2Bkl1XUFkdZ8wlEUR6O1ft3TS4uA-qKifSZ43CahzAJyUezOH9shI--tirC028lNg767ldEki3WnVr3zokSujC9YJ_9XXjw2hFBfmJUrNb0-wldvxQbFU8RPXip-GQ_JPTrCTZhrzGFeWPvhA6Rqmc3b1PhM9jY7Dur1sjYWYVyXlFNCK3c-6feo5WlRfe1aCWmwZQh6O18eTmLeT4nWYkDzQ"
   }

.. note::

   This could also be a JSON Web Key Set (JWKS) if it was contained within an array assigned to a ``keys`` member, e.g. ``{ keys: [jwk1, jwk2] }``.

Just pass it in as a single line string, escaping the quotes:

.. code-block:: ini

   jwt-secret = "{ \"alg\":\"RS256\", … }"

To generate such a public/private key pair use a utility like `latchset/jose `_.

.. 
code-block:: bash

   jose jwk gen -i '{"alg": "RS256"}' -o rsa.jwk
   jose jwk pub -i rsa.jwk -o rsa.jwk.pub

   # now rsa.jwk.pub contains the desired JSON object

You can specify the literal value as we saw earlier, or reference a filename to load the JWK from a file:

.. code-block:: ini

   jwt-secret = "@rsa.jwk.pub"

JWT security
~~~~~~~~~~~~

There are at least three types of common critiques against using JWT: 1) against the standard itself, 2) against using libraries with known security vulnerabilities, and 3) against using JWT for web sessions. We'll briefly explain each critique, how PostgREST deals with it, and give recommendations for appropriate user action.

The critique against the `JWT standard `_ is voiced in detail `elsewhere on the web `_. The most relevant part for PostgREST is the so-called :code:`alg=none` issue. Some servers implementing JWT allow clients to choose the algorithm used to sign the JWT. In this case, an attacker could set the algorithm to :code:`none`, removing the need for any signature at all, and gain unauthorized access. The current implementation of PostgREST, however, does not allow clients to set the signature algorithm in the HTTP request, making this attack irrelevant. The critique against the standard is that it requires :code:`alg=none` to be implemented at all.

Critiques against JWT libraries are only relevant to PostgREST via the library it uses. As mentioned above, not allowing clients to choose the signature algorithm in HTTP requests removes the greatest risk. Another, more subtle attack is possible where servers use asymmetric algorithms like RSA for signatures. Once again this is not relevant to PostgREST since it is not supported. Curious readers can find more information in `this article `_. Recommendations about high quality libraries for usage in API clients can be found on `jwt.io `_.

The last type of critique focuses on the misuse of JWT for maintaining web sessions. 
The basic recommendation is to `stop using JWT for sessions `_ because most, if not all, solutions to the problems that arise when you do `do not work `_. The linked articles discuss the problems in depth, but the essence is that JWT was not designed to be a secure and stateful unit for client-side storage and is therefore not suited to session management.

PostgREST uses JWT mainly for authentication and authorization purposes and encourages users to do the same. For web sessions, using cookies over HTTPS is good enough and well catered for by standard web frameworks.

Schema Isolation
================

You can isolate your api schema from internal implementation details, as explained in :ref:`schema_isolation`. For an example of wrapping a private table with a public view see the :ref:`public_ui` section below.

.. _sql_user_management:

SQL User Management
===================

Storing Users and Passwords
---------------------------

As mentioned, an external service can provide user management and coordinate with the PostgREST server using JWT. It's also possible to support logins entirely through SQL. It's a fair bit of work, so get ready.

The following table, functions, and triggers will live in a :code:`basic_auth` schema that you shouldn't expose publicly in the API. The public views and functions will live in a different schema which internally references this hidden information.

First we'll need a table to keep track of our users:

.. code:: sql

   -- We put things inside the basic_auth schema to hide
   -- them from public view. Certain public procs/views will
   -- refer to helpers and tables inside. 
+ create schema if not exists basic_auth; + + create table if not exists + basic_auth.users ( + email text primary key check ( email ~* '^.+@.+\..+$' ), + pass text not null check (length(pass) < 512), + role name not null check (length(role) < 512) + ); + +We would like the role to be a foreign key to actual database roles, however PostgreSQL does not support these constraints against the :code:`pg_roles` table. We'll use a trigger to manually enforce it. + +.. code-block:: plpgsql + + create or replace function + basic_auth.check_role_exists() returns trigger as $$ + begin + if not exists (select 1 from pg_roles as r where r.rolname = new.role) then + raise foreign_key_violation using message = + 'unknown database role: ' || new.role; + return null; + end if; + return new; + end + $$ language plpgsql; + + drop trigger if exists ensure_user_role_exists on basic_auth.users; + create constraint trigger ensure_user_role_exists + after insert or update on basic_auth.users + for each row + execute procedure basic_auth.check_role_exists(); + +Next we'll use the pgcrypto extension and a trigger to keep passwords safe in the :code:`users` table. + +.. code-block:: plpgsql + + create extension if not exists pgcrypto; + + create or replace function + basic_auth.encrypt_pass() returns trigger as $$ + begin + if tg_op = 'INSERT' or new.pass <> old.pass then + new.pass = crypt(new.pass, gen_salt('bf')); + end if; + return new; + end + $$ language plpgsql; + + drop trigger if exists encrypt_pass on basic_auth.users; + create trigger encrypt_pass + before insert or update on basic_auth.users + for each row + execute procedure basic_auth.encrypt_pass(); + +With the table in place we can make a helper to check a password against the encrypted column. It returns the database role for a user if the email and password are correct. + +.. 
code-block:: plpgsql + + create or replace function + basic_auth.user_role(email text, pass text) returns name + language plpgsql + as $$ + begin + return ( + select role from basic_auth.users + where users.email = user_role.email + and users.pass = crypt(user_role.pass, users.pass) + ); + end; + $$; + +.. _public_ui: + +Public User Interface +--------------------- + +In the previous section we created an internal table to store user information. Here we create a login function which takes an email address and password and returns JWT if the credentials match a user in the internal table. + +Permissions +~~~~~~~~~~~ + +Your database roles need access to the schema, tables, views and functions in order to service HTTP requests. +Recall from the `Overview of Role System`_ that PostgREST uses special roles to process requests, namely the authenticator and +anonymous roles. Below is an example of permissions that allow anonymous users to create accounts and attempt to log in. + +.. code-block:: postgres + + -- the names "anon" and "authenticator" are configurable and not + -- sacred, we simply choose them for clarity + create role anon noinherit; + create role authenticator noinherit; + grant anon to authenticator; + +Then, add ``db-anon-role`` to the configuration file to allow anonymous requests. + +.. code:: ini + + db-anon-role = "anon" + +Logins +~~~~~~ + +As described in `JWT from SQL`_, we'll create a JWT inside our login function. Note that you'll need to adjust the secret key which is hard-coded in this example to a secure (at least thirty-two character) secret of your choosing. + +.. 
code-block:: postgres + + -- add type + CREATE TYPE basic_auth.jwt_token AS ( + token text + ); + + -- login should be on your exposed schema + create or replace function + login(email text, pass text) returns basic_auth.jwt_token as $$ + declare + _role name; + result basic_auth.jwt_token; + begin + -- check email and password + select basic_auth.user_role(email, pass) into _role; + if _role is null then + raise invalid_password using message = 'invalid user or password'; + end if; + + select sign( + row_to_json(r), 'reallyreallyreallyreallyverysafe' + ) as token + from ( + select _role as role, login.email as email, + extract(epoch from now())::integer + 60*60 as exp + ) r + into result; + return result; + end; + $$ language plpgsql security definer; + + grant execute on function login(text,text) to anon; + +Since the above :code:`login` function is defined as `security definer `_, +the anonymous user :code:`anon` doesn't need permission to read the :code:`basic_auth.users` table. It doesn't even need permission to access the :code:`basic_auth` schema. +:code:`grant execute on function` is included for clarity but it might not be needed, see :ref:`func_privs` for more details. + +An API request to call this function would look like: + +.. tabs:: + + .. code-tab:: http + + POST /rpc/login HTTP/1.1 + + { "email": "foo@bar.com", "pass": "foobar" } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/login" \ + -X POST -H "Content-Type: application/json" \ + -d '{ "email": "foo@bar.com", "pass": "foobar" }' + +The response would look like the snippet below. Try decoding the token at `jwt.io `_. (It was encoded with a secret of :code:`reallyreallyreallyreallyverysafe` as specified in the SQL code above. You'll want to change this secret in your app!) + +.. 
code:: json + + { + "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImZvb0BiYXIuY29tIiwicGFzcyI6ImZvb2JhciJ9.37066TTRlh-1hXhnA9oO9Pj6lgL6zFuJU0iCHhuCFno" + } + + +Alternatives +~~~~~~~~~~~~ + +See the how-to :ref:`sql-user-management-using-postgres-users-and-passwords` for a similar way that completely avoids the table :code:`basic_auth.users`. diff --git a/docs/conf.py b/docs/conf.py new file mode 100644 index 0000000000..39165ee23d --- /dev/null +++ b/docs/conf.py @@ -0,0 +1,298 @@ +# -*- coding: utf-8 -*- +# +# PostgREST documentation build configuration file, created by +# sphinx-quickstart on Sun Oct 9 16:53:00 2016. +# +# This file is execfile()d with the current directory set to its +# containing dir. +# +# Note that not all possible configuration values are present in this +# autogenerated file. +# +# All configuration values have a default; values that are commented out +# serve to show the default. + +import sys +import os + +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +# sys.path.insert(0, os.path.abspath('.')) + +# -- General configuration ------------------------------------------------ + +# If your documentation needs a minimal Sphinx version, state it here. +# needs_sphinx = '1.0' + +# Add any Sphinx extension module names here, as strings. They can be +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom +# ones. +extensions = ["sphinx_tabs.tabs", "sphinx_copybutton"] + +# Add any paths that contain templates here, relative to this directory. +templates_path = ["_templates"] + +# The suffix(es) of source filenames. +# You can specify multiple suffix as a list of string: +# source_suffix = ['.rst', '.md'] +source_suffix = ".rst" + +# The encoding of source files. +# source_encoding = 'utf-8-sig' + +# The master toctree document. 
+master_doc = "index" + +# General information about the project. +project = "PostgREST" +author = "Joe Nelson, Steve Chavez" +copyright = "2017, " + author + +# The version info for the project you're documenting, acts as replacement for +# |version| and |release|, also used in various other places throughout the +# built documents. +# +# The short X.Y version. +version = "10.2" +# The full version, including alpha/beta/rc tags. +release = "10.2.0" + +# The language for content autogenerated by Sphinx. Refer to documentation +# for a list of supported languages. +# +# This is also used if you do content translation via gettext catalogs. +# Usually you set "language" from the command line for these cases. +language = None + +# There are two options for replacing |today|: either, you set today to some +# non-false value, then it is used: +# today = '' +# Else, today_fmt is used as the format for a strftime call. +# today_fmt = '%B %d, %Y' + +# List of patterns, relative to source directory, that match files and +# directories to ignore when looking for source files. +# This patterns also effect to html_static_path and html_extra_path +exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] + +# The reST default role (used for this markup: `text`) to use for all +# documents. +# default_role = None + +# If true, '()' will be appended to :func: etc. cross-reference text. +# add_function_parentheses = True + +# If true, the current module name will be prepended to all description +# unit titles (such as .. function::). +# add_module_names = True + +# If true, sectionauthor and moduleauthor directives will be shown in the +# output. They are ignored by default. +# show_authors = False + +# The name of the Pygments (syntax highlighting) style to use. +pygments_style = "sphinx" + +# A list of ignored prefixes for module index sorting. +# modindex_common_prefix = [] + +# If true, keep warnings as "system message" paragraphs in the built documents. 
+# keep_warnings = False + +# If true, `todo` and `todoList` produce output, else they produce nothing. +todo_include_todos = False + + +# -- Options for HTML output ---------------------------------------------- + +# The theme to use for HTML and HTML Help pages. See the documentation for +# a list of builtin themes. +html_theme = "sphinx_rtd_theme" + +# Theme options are theme-specific and customize the look and feel of a theme +# further. For a list of options available for each theme, see the +# documentation. +# html_theme_options = {} + +# Add any paths that contain custom themes here, relative to this directory. +# html_theme_path = [] + +# The name for this set of Sphinx documents. +# " v documentation" by default. +# html_title = u'PostgREST v0.4.0.0' + +# A shorter title for the navigation bar. Default is the same as html_title. +# html_short_title = None + +# The name of an image file (relative to this directory) to place at the top +# of the sidebar. +# html_logo = None + +# The name of an image file (relative to this directory) to use as a favicon of +# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 +# pixels large. +html_favicon = "_static/favicon.ico" + +# Add any paths that contain custom static files (such as style sheets) here, +# relative to this directory. They are copied after the builtin static files, +# so a file named "default.css" will overwrite the builtin "default.css". +html_static_path = ["_static"] + +# Add any extra paths that contain custom files (such as robots.txt or +# .htaccess) here, relative to this directory. These files are copied +# directly to the root of the documentation. +# html_extra_path = [] + +# If not None, a 'Last updated on:' timestamp is inserted at every page +# bottom, using the given strftime format. +# The empty string is equivalent to '%b %d, %Y'. 
+# html_last_updated_fmt = None + +# If true, SmartyPants will be used to convert quotes and dashes to +# typographically correct entities. +# html_use_smartypants = True + +# Custom sidebar templates, maps document names to template names. +# html_sidebars = {} + +# Additional templates that should be rendered to pages, maps page names to +# template names. +# html_additional_pages = {} + +# If false, no module index is generated. +# html_domain_indices = True + +# If false, no index is generated. +# html_use_index = True + +# If true, the index is split into individual pages for each letter. +# html_split_index = False + +# If true, links to the reST sources are added to the pages. +# html_show_sourcelink = True + +# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. +# html_show_sphinx = True + +# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. +# html_show_copyright = True + +# If true, an OpenSearch description file will be output, and all pages will +# contain a tag referring to it. The value of this option must be the +# base URL from which the finished HTML is served. +# html_use_opensearch = '' + +# This is the file name suffix for HTML files (e.g. ".xhtml"). +# html_file_suffix = None + +# Language to be used for generating the HTML full-text search index. +# Sphinx supports the following languages: +# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' +# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh' +# html_search_language = 'en' + +# A dictionary with options for the search language support, empty by default. +# 'ja' uses this config value. +# 'zh' user can custom change `jieba` dictionary path. +# html_search_options = {'type': 'default'} + +# The name of a javascript file (relative to the configuration directory) that +# implements a search results scorer. If empty, the default will be used. +# html_search_scorer = 'scorer.js' + +# Output file base name for HTML help builder. 
+htmlhelp_basename = "PostgRESTdoc" + +# -- Options for LaTeX output --------------------------------------------- + +latex_elements = { + # The paper size ('letterpaper' or 'a4paper'). + #'papersize': 'letterpaper', + # The font size ('10pt', '11pt' or '12pt'). + #'pointsize': '10pt', + # Additional stuff for the LaTeX preamble. + #'preamble': '', + # Latex figure (float) alignment + #'figure_align': 'htbp', +} + +# Grouping the document tree into LaTeX files. List of tuples +# (source start file, target name, title, +# author, documentclass [howto, manual, or own class]). +latex_documents = [ + (master_doc, "PostgREST.tex", "PostgREST Documentation", author, "manual"), +] + +# The name of an image file (relative to this directory) to place at the top of +# the title page. +# latex_logo = None + +# For "manual" documents, if this is true, then toplevel headings are parts, +# not chapters. +# latex_use_parts = False + +# If true, show page references after internal links. +# latex_show_pagerefs = False + +# If true, show URL addresses after external links. +# latex_show_urls = False + +# Documents to append as an appendix to all manuals. +# latex_appendices = [] + +# If false, no module index is generated. +# latex_domain_indices = True + + +# -- Options for manual page output --------------------------------------- + +# One entry per manual page. List of tuples +# (source start file, name, description, authors, manual section). +man_pages = [(master_doc, "postgrest", "PostgREST Documentation", [author], 1)] + +# If true, show URL addresses after external links. +# man_show_urls = False + + +# -- Options for Texinfo output ------------------------------------------- + +# Grouping the document tree into Texinfo files. 
List of tuples +# (source start file, target name, title, author, +# dir menu entry, description, category) +texinfo_documents = [ + ( + master_doc, + "PostgREST", + "PostgREST Documentation", + author, + "PostgREST", + "REST API for any PostgreSQL database", + "Web", + ), +] + +# Documents to append as an appendix to all manuals. +# texinfo_appendices = [] + +# If false, no module index is generated. +# texinfo_domain_indices = True + +# How to display URL addresses: 'footnote', 'no', or 'inline'. +# texinfo_show_urls = 'footnote' + +# If true, do not generate a @detailmenu in the "Top" node's menu. +# texinfo_no_detailmenu = False + +# -- Custom setup --------------------------------------------------------- + + +def setup(app): + app.add_css_file("css/custom.css") + + +# taken from https://github.com/sphinx-doc/sphinx/blob/82dad44e5bd3776ecb6fd8ded656bc8151d0e63d/sphinx/util/requests.py#L42 +user_agent = "Mozilla/5.0 (X11; Linux x86_64; rv:25.0) Gecko/20100101 Firefox/25.0" + +# sphinx-tabs configuration +sphinx_tabs_disable_tab_closing = True diff --git a/docs/configuration.rst b/docs/configuration.rst new file mode 100644 index 0000000000..a2b41a303d --- /dev/null +++ b/docs/configuration.rst @@ -0,0 +1,728 @@ +.. _configuration: + +Configuration +============= + +Without configuration, PostgREST won't be able to serve requests. At the minimum it needs either :ref:`a role to serve anonymous requests with ` - or :ref:`a secret to use for JWT authentication `. Config parameters can be provided via :ref:`file_config`, via :ref:`env_variables_config` or through :ref:`in_db_config`. + +To connect to a database it uses a `libpq connection string `_. The connection string can be set in the configuration file or via environment variable or can be read from an external file. See :ref:`db-uri` for details. Any parameter that is not set in the connection string is read from `libpq environment variables `_. 
The default connection string is ``postgresql://``, which reads **all** parameters from the environment. + +The user with whom PostgREST connects to the database is also known as the authenticator role. For more information about the anonymous vs authenticator roles see :ref:`roles`. + +Config parameters are read in the following order: + +1. From the config file. +2. From environment variables, overriding values from the config file. +3. From the database, overriding values from both the config file and environment variables. + +.. _file_config: + +Config File +----------- + +PostgREST can read a config file. There is no predefined location for this file, you must specify the file path as the one and only argument to the server: + +.. code:: bash + + ./postgrest /path/to/postgrest.conf + +.. note:: + + Configuration can be reloaded without restarting the server. See :ref:`config_reloading`. + +The configuration file must contain a set of key value pairs: + +.. code:: + + # postgrest.conf + + # The standard connection URI format, documented at + # https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING + db-uri = "postgres://user:pass@host:5432/dbname" + + # The database role to use when no client authentication is provided. + # Should differ from authenticator + db-anon-role = "anon" + + # The secret to verify the JWT for authenticated requests with. + # Needs to be 32 characters minimum. + jwt-secret = "reallyreallyreallyreallyverysafe" + jwt-secret-is-base64 = False + + # Port the postgrest process is listening on for http requests + server-port = 80 + +You can run ``postgrest --example`` to display all possible configuration parameters and how to use them in a configuration file. + +.. _env_variables_config: + +Environment Variables +--------------------- + +You can also set these :ref:`configuration parameters ` using environment variables. They are capitalized, have a ``PGRST_`` prefix, and use underscores. 
For example: ``PGRST_DB_URI`` corresponds to ``db-uri`` and ``PGRST_APP_SETTINGS_*`` to ``app.settings.*``.

.. _in_db_config:

In-Database Configuration
-------------------------

By adding settings to the **authenticator** role (see :ref:`roles`), you can make the database the single source of truth for PostgREST's configuration.
This is enabled by :ref:`db-config`.

For example, you can configure :ref:`db-schemas` and :ref:`jwt-secret` like this:

.. code:: postgresql

   ALTER ROLE authenticator SET pgrst.db_schemas = 'tenant1, tenant2, tenant3';
   ALTER ROLE authenticator IN DATABASE your_database SET pgrst.jwt_secret = 'REALLYREALLYREALLYREALLYVERYSAFE';

You can use both database-specific settings with ``IN DATABASE`` and cluster-wide settings without it. Database-specific settings override cluster-wide settings if both are used for the same parameter.

Note that underscores (``_``) need to be used instead of dashes (``-``) for the in-database config parameters.

.. important::

   Altering a role in this way requires a SUPERUSER. You might not be able to use this configuration mode on cloud-hosted databases.

When using both the configuration file and the in-database configuration, the latter takes precedence.

.. danger::

   If direct connections to the database are allowed, then it's not safe to use the in-db configuration for storing the :ref:`jwt-secret`.
   The settings of every role are PUBLIC - they can be viewed by any user that queries the ``pg_catalog.pg_db_role_setting`` table.
   In this case you should keep the :ref:`jwt-secret` in the configuration file or as environment variables.

.. _config_reloading:

Configuration Reloading
=======================

It's possible to reload PostgREST's configuration without restarting the server. You can do this :ref:`via signal <config_reloading_signal>` or :ref:`via notification <config_reloading_notify>`. 
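The in-database parameters described above pair naturally with reloading, since both the change and the reload can be done over a plain database connection, with no access to the server's host machine. A sketch (``pgrst.db_max_rows`` stands in for any reloadable parameter):

.. code:: postgresql

   -- change a reloadable parameter on the authenticator role...
   ALTER ROLE authenticator SET pgrst.db_max_rows = '100';

   -- ...then ask running PostgREST instances to re-read their configuration
   NOTIFY pgrst, 'reload config';

The ``NOTIFY`` mechanism is covered in more detail below.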
+ +It's not possible to change :ref:`env_variables_config` for a running process and reloading a Docker container configuration will not work. In these cases, you need to restart the PostgREST server or use :ref:`in_db_config` as an alternative. + +.. important:: + + The following settings will not be reloaded. You will need to restart PostgREST to change those. + + * :ref:`admin-server-port` + * :ref:`db-uri` + * :ref:`db-pool` + * :ref:`db-pool-acquisition-timeout` + * :ref:`db-pool-max-lifetime` + * :ref:`server-host` + * :ref:`server-port` + * :ref:`server-unix-socket` + * :ref:`server-unix-socket-mode` + +.. _config_reloading_signal: + +Reload with signal +------------------ + +To reload the configuration via signal, send a SIGUSR2 signal to the server process. + +.. code:: bash + + killall -SIGUSR2 postgrest + +.. _config_reloading_notify: + +Reload with NOTIFY +------------------ + +To reload the configuration from within the database, you can use a NOTIFY command. + +.. code:: postgresql + + NOTIFY pgrst, 'reload config' + +The ``"pgrst"`` notification channel is enabled by default. For configuring the channel, see :ref:`db-channel` and :ref:`db-channel-enabled`. + +.. 
_config_full_list:

List of parameters
==================

=========================== ======= ================= ==========
Name                        Type    Default           Reloadable
=========================== ======= ================= ==========
admin-server-port           Int
app.settings.*              String                    Y
db-anon-role                String                    Y
db-channel                  String  pgrst             Y
db-channel-enabled          Boolean True              Y
db-config                   Boolean True              Y
db-extra-search-path        String  public            Y
db-max-rows                 Int     ∞                 Y
db-plan-enabled             Boolean False             Y
db-pool                     Int     10
db-pool-acquisition-timeout Int     10
db-pool-max-lifetime        Int     1800
db-pre-request              String                    Y
db-prepared-statements      Boolean True              Y
db-schemas                  String  public            Y
db-tx-end                   String  commit
db-uri                      String  postgresql://
db-use-legacy-gucs          Boolean True              Y
jwt-aud                     String                    Y
jwt-role-claim-key          String  .role             Y
jwt-secret                  String                    Y
jwt-secret-is-base64        Boolean False             Y
log-level                   String  error             Y
openapi-mode                String  follow-privileges Y
openapi-security-active     Boolean False             Y
openapi-server-proxy-uri    String                    Y
raw-media-types             String                    Y
server-host                 String  !4
server-port                 Int     3000
server-unix-socket          String
server-unix-socket-mode     String  660
=========================== ======= ================= ==========

.. _admin-server-port:

admin-server-port
-----------------

  =============== =======================
  **Environment** PGRST_ADMIN_SERVER_PORT
  **In-Database** `n/a`
  =============== =======================

  Specifies the port for the :ref:`health_check` endpoints.

.. _app.settings.*:

app.settings.*
--------------

  =============== ====================
  **Environment** PGRST_APP_SETTINGS_*
  **In-Database** pgrst.app_settings_*
  =============== ====================

  Arbitrary settings that can be used to pass in secret keys directly as strings, or via OS environment variables. 
For instance: :code:`app.settings.jwt_secret = "$(MYAPP_JWT_SECRET)"` will take :code:`MYAPP_JWT_SECRET` from the environment and make it available to PostgreSQL functions as :code:`current_setting('app.settings.jwt_secret')`.

.. _db-anon-role:

db-anon-role
------------

  =============== ==================
  **Environment** PGRST_DB_ANON_ROLE
  **In-Database** `n/a`
  =============== ==================

  The database role to use when executing commands on behalf of unauthenticated clients. For more information, see :ref:`roles`.

  When unset, anonymous access will be blocked.

.. _db-channel:

db-channel
----------

  =============== ================
  **Environment** PGRST_DB_CHANNEL
  **In-Database** `n/a`
  =============== ================

  The name of the notification channel that PostgREST uses for :ref:`schema_reloading` and configuration reloading.

.. _db-channel-enabled:

db-channel-enabled
------------------

  =============== ========================
  **Environment** PGRST_DB_CHANNEL_ENABLED
  **In-Database** `n/a`
  =============== ========================

  When this is set to :code:`true`, the notification channel specified in :ref:`db-channel` is enabled.

  You should set this to ``false`` when using PostgreSQL behind an external connection pooler such as PgBouncer working in transaction pooling mode. See :ref:`this section ` for more information.

.. _db-config:

db-config
---------

  =============== ===============
  **Environment** PGRST_DB_CONFIG
  **In-Database** `n/a`
  =============== ===============

  Enables the in-database configuration.

.. _db-extra-search-path:

db-extra-search-path
--------------------

  =============== ==========================
  **Environment** PGRST_DB_EXTRA_SEARCH_PATH
  **In-Database** pgrst.db_extra_search_path
  =============== ==========================

  Extra schemas to add to the `search_path `_ of every request. 
These schemas' tables, views and stored procedures **don't get API endpoints**; they can only be referenced from the database objects inside your :ref:`db-schemas`. + + This parameter was meant to make it easier to use **PostgreSQL extensions** (like PostGIS) that are outside of the :ref:`db-schemas`. + + Multiple schemas can be added in a comma-separated string, e.g. ``public, extensions``. + +.. _db-max-rows: + +db-max-rows +----------- + + *For backwards compatibility, this config parameter is also available without prefix as "max-rows".* + + =============== ================= + **Environment** PGRST_DB_MAX_ROWS + **In-Database** pgrst.db_max_rows + =============== ================= + + A hard limit to the number of rows PostgREST will fetch from a view, table, or stored procedure. Limits payload size for accidental or malicious requests. + +.. _db-plan-enabled: + +db-plan-enabled +--------------- + + =============== ===================== + **Environment** PGRST_DB_PLAN_ENABLED + **In-Database** pgrst.db_plan_enabled + =============== ===================== + + When this is set to :code:`true`, the execution plan of a request can be retrieved by using the :code:`Accept: application/vnd.pgrst.plan` header. See :ref:`explain_plan`. + + It's recommended to use this in testing environments only since it reveals internal database details. + However, if you choose to use it in production you can add a :ref:`db-pre-request` to filter the requests that can use this feature. + + For example, to only allow requests from an IP address to get the execution plans: + + .. 
code-block:: postgresql + + -- Assuming a proxy(Nginx, Cloudflare, etc) passes an "X-Forwarded-For" header(https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For) + create or replace function filter_plan_requests() + returns void as $$ + declare + headers json := current_setting('request.headers', true)::json; + client_ip text := coalesce(headers->>'x-forwarded-for', ''); + accept text := coalesce(headers->>'accept', ''); + begin + if accept like 'application/vnd.pgrst.plan%' and client_ip != '144.96.121.73' then + raise insufficient_privilege using + message = 'Not allowed to use application/vnd.pgrst.plan'; + end if; + end; $$ language plpgsql; + + -- set this function on your postgrest.conf + -- db-pre-request = filter_plan_requests + +.. _db-pool: + +db-pool +------- + + =============== ================= + **Environment** PGRST_DB_POOL + **In-Database** `n/a` + =============== ================= + + Number of connections to keep open in PostgREST's database pool. Having enough here for the maximum expected simultaneous client connections can improve performance. Note it's pointless to set this higher than the :code:`max_connections` GUC in your database. + +.. _db-pool-acquisition-timeout: + +db-pool-acquisition-timeout +--------------------------- + + =============== ================= + **Environment** PGRST_DB_POOL_ACQUISITION_TIMEOUT + **In-Database** `n/a` + =============== ================= + + Specifies the maximum time in seconds that the request will wait for the pool to free up a connection slot to the database. If it times out without acquiring a connection, then the request is aborted and a ``504`` error is returned. + +.. _db-pool-max-lifetime: + +db-pool-max-lifetime +-------------------- + + =============== ================= + **Environment** PGRST_DB_POOL_MAX_LIFETIME + **In-Database** `n/a` + =============== ================= + + Specifies the maximum time in seconds of an existing connection in the pool. 
When this lifetime is reached, the connection will be closed and a new one will be created. + +.. _db-pre-request: + +db-pre-request +-------------- + + *For backwards compatibility, this config parameter is also available without prefix as "pre-request".* + + =============== ================= + **Environment** PGRST_DB_PRE_REQUEST + **In-Database** pgrst.db_pre_request + =============== ================= + + A schema-qualified stored procedure name to call right after switching roles for a client request. This provides an opportunity to modify SQL variables or raise an exception to prevent the request from completing. + +.. _db-prepared-statements: + +db-prepared-statements +---------------------- + + =============== ================= + **Environment** PGRST_DB_PREPARED_STATEMENTS + **In-Database** pgrst.db_prepared_statements + =============== ================= + + Enables or disables prepared statements. + + When disabled, the generated queries will be parameterized (invulnerable to SQL injection) but they will not be prepared (cached in the database session). Not using prepared statements will noticeably decrease performance, so it's recommended to always have this setting enabled. + + You should only set this to ``false`` when using PostgreSQL behind an external connection pooler such as PgBouncer working in transaction pooling mode. See :ref:`this section ` for more information. + +.. _db-schemas: + +db-schemas +---------- + + *For backwards compatibility, this config parameter is also available in singular as "db-schema".* + + =============== ================= + **Environment** PGRST_DB_SCHEMAS + **In-Database** pgrst.db_schemas + =============== ================= + + The database schema to expose to REST clients. Tables, views and stored procedures in this schema will get API endpoints. + + .. code:: bash + + db-schemas = "api" + + This schema gets added to the `search_path `_ of every request. 
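+   For example, any object created inside the exposed schema automatically gets an endpoint (a minimal sketch; the ``api`` schema and ``products`` table are hypothetical names):
+
+   .. code-block:: postgresql
+
+      -- database objects inside the exposed schema get API endpoints
+      create schema api;
+      create table api.products (id serial primary key, name text);
+
+      -- with db-schemas = "api", the table is then reachable at the /products endpoint
+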
+ +List of schemas +~~~~~~~~~~~~~~~ + + You can also specify a list of schemas that can be used for **schema-based multitenancy** and **api versioning** by :ref:`multiple-schemas`. Example: + + .. code:: bash + + db-schemas = "tenant1, tenant2" + + If you don't :ref:`Switch Schemas `, the first schema in the list(``tenant1`` in this case) is chosen as the default schema. + + *Only the chosen schema* gets added to the `search_path `_ of every request. + + .. warning:: + + Never expose private schemas in this way. See :ref:`schema_isolation`. + +.. _db-tx-end: + +db-tx-end +--------- + + =============== ================= + **Environment** PGRST_DB_TX_END + **In-Database** pgrst.db_tx_end + =============== ================= + + Specifies how to terminate the database transactions. + + .. code:: bash + + # The transaction is always committed + db-tx-end = "commit" + + # The transaction is committed unless a "Prefer: tx=rollback" header is sent + db-tx-end = "commit-allow-override" + + # The transaction is always rolled back + db-tx-end = "rollback" + + # The transaction is rolled back unless a "Prefer: tx=commit" header is sent + db-tx-end = "rollback-allow-override" + +.. _db-uri: + +db-uri +------ + + =============== ================= + **Environment** PGRST_DB_URI + **In-Database** `n/a` + =============== ================= + + The standard connection PostgreSQL `URI format `_. Symbols and unusual characters in the password or other fields should be percent encoded to avoid a parse error. If enforcing an SSL connection to the database is required you can use `sslmode `_ in the URI, for example ``postgres://user:pass@host:5432/dbname?sslmode=require``. + + When running PostgREST on the same machine as PostgreSQL, it is also possible to connect to the database using a `Unix socket `_ and the `Peer Authentication method `_ as an alternative to TCP/IP communication and authentication with a password, this also grants higher performance. 
To do this you can omit the host and the password, e.g. ``postgres://user@/dbname``, see the `libpq connection string `_ documentation for more details. + + Choosing a value for this parameter beginning with the at sign such as ``@filename`` (e.g. ``@./configs/my-config``) loads the connection string out of an external file. + +.. _db-use-legacy-gucs: + +db-use-legacy-gucs +------------------ + + =============== ================= + **Environment** PGRST_DB_USE_LEGACY_GUCS + **In-Database** pgrst.db_use_legacy_gucs + =============== ================= + + Determine if GUC request settings for headers, cookies and jwt claims use the `legacy names `_ (string with dashes, invalid starting from PostgreSQL v14) with text values instead of the :ref:`new names ` (string without dashes, valid on all PostgreSQL versions) with json values. + + On PostgreSQL versions 14 and above, this parameter is ignored. + +.. _jwt-aud: + +jwt-aud +------- + + =============== ================= + **Environment** PGRST_JWT_AUD + **In-Database** pgrst.jwt_aud + =============== ================= + + Specifies the `JWT audience claim `_. If this claim is present in the client provided JWT then you must set this to the same value as in the JWT, otherwise verifying the JWT will fail. + +.. _jwt-role-claim-key: + +jwt-role-claim-key +------------------ + + *For backwards compatibility, this config parameter is also available without prefix as "role-claim-key".* + + =============== ================= + **Environment** PGRST_JWT_ROLE_CLAIM_KEY + **In-Database** pgrst.jwt_role_claim_key + =============== ================= + + A JSPath DSL that specifies the location of the :code:`role` key in the JWT claims. This can be used to consume a JWT provided by a third party service like Auth0, Okta or Keycloak. Usage examples: + + .. 
code:: bash + + # {"postgrest":{"roles": ["other", "author"]}} + # the DSL accepts characters that are alphanumerical or one of "_$@" as keys + jwt-role-claim-key = ".postgrest.roles[1]" + + # {"https://www.example.com/role": { "key": "author" }} + # non-alphanumerical characters can go inside quotes (escaped in the config value) + jwt-role-claim-key = ".\"https://www.example.com/role\".key" + +.. _jwt-secret: + +jwt-secret +---------- + + =============== ================= + **Environment** PGRST_JWT_SECRET + **In-Database** pgrst.jwt_secret + =============== ================= + + The secret or `JSON Web Key (JWK) (or set) `_ used to decode JWT tokens clients provide for authentication. For security the key must be **at least 32 characters long**. If this parameter is not specified then PostgREST refuses authentication requests. Choosing a value for this parameter beginning with the at sign such as :code:`@filename` (e.g. ``@./configs/my-config``) loads the secret out of an external file. This is useful for automating deployments. Note that any binary secrets must be base64 encoded. Both symmetric and asymmetric cryptography are supported. For more info see :ref:`asym_keys`. + + .. warning:: + + When using the :ref:`file_config`, if the ``jwt-secret`` contains a ``$`` character by itself it will give errors. In this case, use ``$$`` and PostgREST will interpret it as a single ``$`` character. + +.. _jwt-secret-is-base64: + +jwt-secret-is-base64 +-------------------- + + =============== ================= + **Environment** PGRST_JWT_SECRET_IS_BASE64 + **In-Database** pgrst.jwt_secret_is_base64 + =============== ================= + + When this is set to :code:`true`, the value derived from :code:`jwt-secret` will be treated as a base64 encoded secret. + +.. 
_log-level: + +log-level +--------- + + =============== ================= + **Environment** PGRST_LOG_LEVEL + **In-Database** `n/a` + =============== ================= + + Specifies the level of information to be logged while running PostgREST. + + .. code:: bash + + # Only startup and db connection recovery messages are logged + log-level = "crit" + + # All the "crit" level events plus server errors (status 5xx) are logged + log-level = "error" + + # All the "error" level events plus request errors (status 4xx) are logged + log-level = "warn" + + # All the "warn" level events plus all requests (every status code) are logged + log-level = "info" + + + Because there's currently no buffering for logging, the levels with minimal logging (``crit``/``error``) will increase throughput. + +.. _openapi-mode: + +openapi-mode +------------ + + =============== ================= + **Environment** PGRST_OPENAPI_MODE + **In-Database** pgrst.openapi_mode + =============== ================= + + Specifies how the OpenAPI output should be displayed. + + .. code:: bash + + # Follows the privileges of the JWT role claim (or from db-anon-role if the JWT is not sent) + # Shows information depending on the permissions that the role making the request has + openapi-mode = "follow-privileges" + + # Ignores the privileges of the JWT role claim (or from db-anon-role if the JWT is not sent) + # Shows all the exposed information, regardless of the permissions that the role making the request has + openapi-mode = "ignore-privileges" + + # Disables the OpenAPI output altogether. + # Throws a `404 Not Found` error when accessing the API root path + openapi-mode = "disabled" + +.. 
_openapi-security-active: + +openapi-security-active +----------------------- + + =============== ============================= + **Environment** PGRST_OPENAPI_SECURITY_ACTIVE + **In-Database** pgrst.openapi_security_active + =============== ============================= + +When this is set to :code:`true`, security options are included in the :ref:`OpenAPI output `. + +.. _openapi-server-proxy-uri: + +openapi-server-proxy-uri +------------------------ + + =============== ================= + **Environment** PGRST_OPENAPI_SERVER_PROXY_URI + **In-Database** pgrst.openapi_server_proxy_uri + =============== ================= + + Overrides the base URL used within the OpenAPI self-documentation hosted at the API root path. Use a complete URI syntax :code:`scheme:[//[user:password@]host[:port]][/]path[?query][#fragment]`. Ex. :code:`https://postgrest.com` + + .. code:: json + + { + "swagger": "2.0", + "info": { + "version": "0.4.3.0", + "title": "PostgREST API", + "description": "This is a dynamic API generated by PostgREST" + }, + "host": "postgrest.com:443", + "basePath": "/", + "schemes": [ + "https" + ] + } + +.. _raw-media-types: + +raw-media-types +--------------- + + =============== ================= + **Environment** PGRST_RAW_MEDIA_TYPES + **In-Database** pgrst.raw_media_types + =============== ================= + + This serves to extend the `Media Types `_ that PostgREST currently accepts through an ``Accept`` header. + + These media types can be requested by following the same rules as the ones defined in :ref:`scalar_return_formats`. + + As an example, the below config would allow you to request an **image** and a **font** file by doing a request with ``Accept: image/png`` + or ``Accept: font/woff2``, respectively. + + .. code:: bash + + raw-media-types="image/png, font/woff2" + +.. 
_server-host: + +server-host +----------- + + =============== ================= + **Environment** PGRST_SERVER_HOST + **In-Database** `n/a` + =============== ================= + + Where to bind the PostgREST web server. In addition to the usual address options, PostgREST interprets these reserved addresses with special meanings: + + * :code:`*` - any IPv4 or IPv6 hostname + * :code:`*4` - any IPv4 or IPv6 hostname, IPv4 preferred + * :code:`!4` - any IPv4 hostname + * :code:`*6` - any IPv4 or IPv6 hostname, IPv6 preferred + * :code:`!6` - any IPv6 hostname + +.. _server-port: + +server-port +----------- + + =============== ================= + **Environment** PGRST_SERVER_PORT + **In-Database** `n/a` + =============== ================= + + The TCP port to bind the web server. + +.. _server-unix-socket: + +server-unix-socket +------------------ + + =============== ================= + **Environment** PGRST_SERVER_UNIX_SOCKET + **In-Database** `n/a` + =============== ================= + + `Unix domain socket `_ where to bind the PostgREST web server. + If specified, this takes precedence over :ref:`server-port`. Example: + + .. code:: bash + + server-unix-socket = "/tmp/pgrst.sock" + +.. _server-unix-socket-mode: + +server-unix-socket-mode +----------------------- + + =============== ================= + **Environment** PGRST_SERVER_UNIX_SOCKET_MODE + **In-Database** `n/a` + =============== ================= + + `Unix file mode `_ to be set for the socket specified in :ref:`server-unix-socket` + Needs to be a valid octal between 600 and 777. + + .. code:: bash + + server-unix-socket-mode = "660" diff --git a/docs/default.nix b/docs/default.nix new file mode 100644 index 0000000000..b986ca6ccb --- /dev/null +++ b/docs/default.nix @@ -0,0 +1,91 @@ +let + # Commit of the Nixpkgs repository that we want to use. 
+ nixpkgsVersion = { + date = "2021-06-02"; + rev = "84aa23742f6c72501f9cc209f29c438766f5352d"; + tarballHash = "0h7xl6q0yjrbl9vm3h6lkxw692nm8bg3wy65gm95a2mivhrdjpxp"; + }; + + # Nix files that describe the Nixpkgs repository. We evaluate the expression + # using `import` below. + pkgs = import + (fetchTarball { + url = "https://github.com/nixos/nixpkgs/archive/${nixpkgsVersion.rev}.tar.gz"; + sha256 = nixpkgsVersion.tarballHash; + }) + { }; + + sphinxTabsPkg = ps: ps.callPackage ./extensions/sphinx-tabs.nix { }; + sphinxCopybuttonPkg = ps: ps.callPackage ./extensions/sphinx-copybutton.nix { }; + + python = pkgs.python3.withPackages (ps: [ ps.sphinx ps.sphinx_rtd_theme ps.livereload (sphinxTabsPkg ps) (sphinxCopybuttonPkg ps) ]); +in +{ + inherit pkgs; + + build = + pkgs.writeShellScriptBin "postgrest-docs-build" + '' + set -euo pipefail + cd "$(${pkgs.git}/bin/git rev-parse --show-toplevel)/docs" + + # clean previous build, otherwise some errors might be supressed + rm -rf _build + + ${python}/bin/sphinx-build --color -W -b html -a -n . _build + ''; + + serve = + pkgs.writeShellScriptBin "postgrest-docs-serve" + '' + set -euo pipefail + cd "$(${pkgs.git}/bin/git rev-parse --show-toplevel)/docs" + + # livereload_docs.py needs to find "sphinx-build" + PATH=${python}/bin:$PATH + + ${python}/bin/python livereload_docs.py + ''; + + spellcheck = + pkgs.writeShellScriptBin "postgrest-docs-spellcheck" + '' + set -euo pipefail + cd "$(${pkgs.git}/bin/git rev-parse --show-toplevel)/docs" + + FILES=$(find . -type f -iname '*.rst' | tr '\n' ' ') + + cat $FILES \ + | grep -v '^\(\.\.\| \)' \ + | sed 's/`.*`//g' \ + | ${pkgs.aspell}/bin/aspell -d ${pkgs.aspellDicts.en}/lib/aspell/en_US -p ./postgrest.dict list \ + | sort -f \ + | tee misspellings + test ! 
-s misspellings + ''; + + # dictcheck detects obsolete entries in postgrest.dict, that are not used anymore + dictcheck = + pkgs.writeShellScriptBin "postgrest-docs-dictcheck" + '' + set -euo pipefail + cd "$(${pkgs.git}/bin/git rev-parse --show-toplevel)/docs" + + FILES=$(find . -type f -iname '*.rst' | tr '\n' ' ') + + cat postgrest.dict \ + | tail -n+2 \ + | tr '\n' '\0' \ + | xargs -0 -n 1 -i \ + sh -c "grep \"{}\" $FILES > /dev/null || echo \"{}\"" + ''; + + linkcheck = + pkgs.writeShellScriptBin "postgrest-docs-linkcheck" + '' + set -euo pipefail + cd "$(${pkgs.git}/bin/git rev-parse --show-toplevel)/docs" + + ${python}/bin/sphinx-build --color -b linkcheck . _build + ''; +} diff --git a/docs/ecosystem.rst b/docs/ecosystem.rst new file mode 100644 index 0000000000..2fe80a94e7 --- /dev/null +++ b/docs/ecosystem.rst @@ -0,0 +1,128 @@ +.. _community_tutorials: + +Community Tutorials +------------------- + +* `Building a Contacts List with PostgREST and Vue.js `_ - + In this video series, DigitalOcean shows how to build and deploy an Nginx + PostgREST(using a managed PostgreSQL database) + Vue.js webapp in an Ubuntu server droplet. + +* `PostgREST + Auth0: Create REST API in mintutes, and add social login using Auth0 `_ - A step-by-step tutorial to show how to dockerize and integrate Auth0 to PostgREST service. + +* `PostgREST + PostGIS API tutorial in 5 minutes `_ - + In this tutorial, GIS • OPS shows how to perform PostGIS calculations through PostgREST :ref:`s_procs` interface. + +* `"CodeLess" backend using postgres, postgrest and oauth2 authentication with keycloak `_ - + A step-by-step tutorial for using PostgREST with KeyCloak(hosted on a managed service). + +* `How PostgreSQL triggers work when called with a PostgREST PATCH HTTP request `_ - A tutorial to see how the old and new values are set or not when doing a PATCH request to PostgREST. + +.. 
_templates: + +Templates +--------- + +* `compose-postgrest `_ - docker-compose setup with Nginx and HTML example +* `svelte-postgrest-template `_ - Svelte/SvelteKit, PostgREST, EveryLayout and social auth + +.. _eco_example_apps: + +Example Apps +------------ + +* `chronicle `_ - tracking a tree of personal memories +* `code-du-travail-backoffice `_ - data administration portal for the official French Labor Code and Agreements +* `delibrium-postgrest `_ - example school API and front-end in Vue.js +* `elm-workshop `_ - building a simple database query UI +* `ember-postgrest-dynamic-ui `_ - generating Ember forms to edit data +* `ETH-transactions-storage `_ - indexer for Ethereum to get transaction list by ETH address +* `ext-postgrest-crud `_ - browser-based spreadsheet +* `general `_ - example auth back-end +* `goodfilm `_ - example film API +* `guild-operators `_ - example queries and functions that the Cardano Community uses for their Guild Operators' Repository +* `handsontable-postgrest `_ - an excel-like database table editor +* `heritage-near-me `_ - Elm and PostgREST with PostGIS +* `ng-admin-postgrest `_ - automatic database admin panel +* `pgrst-dev-setup `_ - docker-compose and tmuxp setup for experimentation. +* `postgres-postgrest-cloudflared-example `_ - docker-compose setup exposing PostgREST using cloudfared +* `postgrest-demo `_ - multi-tenant logging system +* `postgrest-example `_ - sqitch versioning for API +* `postgrest-sessions-example `_ - example for cookie-based sessions +* `postgrest-translation-proxy `_ - calling to external translation service +* `postgrest-ui `_ - ClojureScript UI components for PostgREST +* `postgrest-vercel `_ - run PostgREST on Vercel (Serverless/AWS Lambda) +* `PostgrestSkeleton `_ - Docker Compose, PostgREST, Nginx and Auth0 +* `PostGUI `_ - React Material UI admin panel +* `prospector `_ - data warehouse and visualization platform + +.. 
_devops: + +DevOps +------ + +* `cloudgov-demo-postgrest `_ - demo for a federally-compliant REST API on cloud.gov +* `cloudstark/helm-charts `_ - helm chart to deploy PostgREST to a Kubernetes cluster via a Deployment and Service +* `jbkarle/postgrest `_ - helm chart with a demo database for development and test purposes +* `Limezest/postgrest-cloud-run `_ - expose a PostgreSQL database on Cloud SQL using Cloud Run + +.. _eco_external_notification: + +External Notification +--------------------- + +These are PostgreSQL bridges that propagate LISTEN/NOTIFY to external queues for further processing. This allows stored procedures to initiate actions outside the database such as sending emails. + +* `pg-bridge `_ - Amazon SNS +* `pg-kinesis-bridge `_ - Amazon Kinesis +* `pg-notify-webhook `_ - trigger webhooks from PostgreSQL's LISTEN/NOTIFY +* `pgsql-listen-exchange `_ - RabbitMQ +* `postgres-websockets `_ - expose web sockets for PostgreSQL's LISTEN/NOTIFY +* `postgresql-to-amqp `_ - AMQP +* `postgresql2websocket `_ - Websockets +* `skeeter `_ - ZeroMQ + + +.. _eco_extensions: + +Extensions +---------- + +* `aiodata `_ - Python, event-based proxy and caching client. +* `pg-safeupdate `_ - prevent full-table updates or deletes +* `postgrest-auth (criles25) `_ - email based auth/signup +* `postgrest-node `_ - Run a PostgREST server in Node.js via npm module +* `postgrest-oauth `_ - OAuth2 WAI middleware +* `postgrest-oauth/api `_ - OAuth2 server +* `PostgREST-writeAPI `_ - generate Nginx rewrite rules to fit an OpenAPI spec +* `spas `_ - allow file uploads and basic auth + +.. _clientside_libraries: + +Client-Side Libraries +--------------------- + +* `aor-postgrest-client `_ - JS, admin-on-rest +* `elm-postgrest `_ - Elm +* `general-angular `_ - TypeScript, generate UI from API description +* `jarvus-postgrest-apikit `_ - JS, Sencha framework +* `mithril-postgrest `_ - JS, Mithril +* `ng-postgrest `_ - Angular app for browsing, editing data exposed over PostgREST. 
+* `postgrest-client `_ - JS +* `postgrest-csharp `_ - C# +* `postgrest-dart `_ - Dart +* `postgrest-ex `_ - Elixir +* `postgrest-go `_ - Go +* `postgrest-js `_ - TypeScript/JavaScript +* `postgrest-kt `_ - Kotlin +* `postgrest-py `_ - Python +* `postgrest-request `_ - JS, SuperAgent +* `postgrest-rs `_ - Rust +* `postgrest-sharp-client `_ (needs maintainer) - C#, RestSharp +* `postgrest-swift `_ - Swift +* `postgrest-url `_ - JS, just for generating query URLs +* `postgrest_python_requests_client `_ - Python +* `postgrester `_ - JS + Typescript +* `postgrestR `_ - R +* `py-postgrest `_ - Python +* `redux-postgrest `_ - TypeScript/JS, client integrated with (React) Redux. +* `vue-postgrest `_ - Vue.js + diff --git a/docs/errors.rst b/docs/errors.rst new file mode 100644 index 0000000000..f981d5e4fb --- /dev/null +++ b/docs/errors.rst @@ -0,0 +1,298 @@ +.. _error_source: + +Error Source +============ + +For the most part, error messages will come directly from the database with the same `structure that PostgreSQL uses `_. PostgREST will convert the ``MESSAGE``, ``DETAIL``, ``HINT`` and ``ERRCODE`` from the PostgreSQL error to JSON format and add an HTTP status code to the response (see :ref:`status_codes`). For instance, this is the error you will get when querying a nonexistent table: + +.. code-block:: http + + GET /nonexistent_table?id=eq.1 HTTP/1.1 + +.. code-block:: http + + HTTP/1.1 404 Not Found + Content-Type: application/json; charset=utf-8 + +.. code-block:: json + + { + "hint": null, + "details": null, + "code": "42P01", + "message": "relation \"api.nonexistent_table\" does not exist" + } + +However, some errors do come from PostgREST itself (such as those related to the :ref:`schema_cache`). These have the same structure as the PostgreSQL errors but are differentiated by the ``PGRST`` prefix in the ``code`` field (see :ref:`pgrst_errors`). For instance, when querying a function that does not exist, the error will be: + +.. 
code-block:: http + + POST /rpc/nonexistent_function HTTP/1.1 + +.. code-block:: http + + HTTP/1.1 404 Not Found + Content-Type: application/json; charset=utf-8 + +.. code-block:: json + + { + "hint": "If a new function was created in the database with this name and parameters, try reloading the schema cache.", + "details": null, + "code": "PGRST202", + "message": "Could not find the api.nonexistent_function() function in the schema cache" + } + +.. _status_codes: + +HTTP Status Codes +================= + +PostgREST translates `PostgreSQL error codes `_ into HTTP status as follows: + ++--------------------------+-------------------------+---------------------------------+ +| PostgreSQL error code(s) | HTTP status | Error description | ++==========================+=========================+=================================+ +| 08* | 503 | pg connection err | ++--------------------------+-------------------------+---------------------------------+ +| 09* | 500 | triggered action exception | ++--------------------------+-------------------------+---------------------------------+ +| 0L* | 403 | invalid grantor | ++--------------------------+-------------------------+---------------------------------+ +| 0P* | 403 | invalid role specification | ++--------------------------+-------------------------+---------------------------------+ +| 23503 | 409 | foreign key violation | ++--------------------------+-------------------------+---------------------------------+ +| 23505 | 409 | uniqueness violation | ++--------------------------+-------------------------+---------------------------------+ +| 25006 | 405 | read only sql transaction | ++--------------------------+-------------------------+---------------------------------+ +| 25* | 500 | invalid transaction state | ++--------------------------+-------------------------+---------------------------------+ +| 28* | 403 | invalid auth specification | 
++--------------------------+-------------------------+---------------------------------+ +| 2D* | 500 | invalid transaction termination | ++--------------------------+-------------------------+---------------------------------+ +| 38* | 500 | external routine exception | ++--------------------------+-------------------------+---------------------------------+ +| 39* | 500 | external routine invocation | ++--------------------------+-------------------------+---------------------------------+ +| 3B* | 500 | savepoint exception | ++--------------------------+-------------------------+---------------------------------+ +| 40* | 500 | transaction rollback | ++--------------------------+-------------------------+---------------------------------+ +| 53* | 503 | insufficient resources | ++--------------------------+-------------------------+---------------------------------+ +| 54* | 413 | too complex | ++--------------------------+-------------------------+---------------------------------+ +| 55* | 500 | obj not in prerequisite state | ++--------------------------+-------------------------+---------------------------------+ +| 57* | 500 | operator intervention | ++--------------------------+-------------------------+---------------------------------+ +| 58* | 500 | system error | ++--------------------------+-------------------------+---------------------------------+ +| F0* | 500 | config file error | ++--------------------------+-------------------------+---------------------------------+ +| HV* | 500 | foreign data wrapper error | ++--------------------------+-------------------------+---------------------------------+ +| P0001 | 400 | default code for "raise" | ++--------------------------+-------------------------+---------------------------------+ +| P0* | 500 | PL/pgSQL error | ++--------------------------+-------------------------+---------------------------------+ +| XX* | 500 | internal error | 
++--------------------------+-------------------------+---------------------------------+ +| 42883 | 404 | undefined function | ++--------------------------+-------------------------+---------------------------------+ +| 42P01 | 404 | undefined table | ++--------------------------+-------------------------+---------------------------------+ +| 42501 | | if authenticated 403, | insufficient privileges | +| | | else 401 | | ++--------------------------+-------------------------+---------------------------------+ +| other | 400 | | ++--------------------------+-------------------------+---------------------------------+ + +.. _pgrst_errors: + +PostgREST Error Codes +===================== + +PostgREST error codes have the form ``PGRSTgxx``, where ``PGRST`` is the prefix that differentiates the error from a PostgreSQL error, ``g`` is the group where the error belongs and ``xx`` is the number that identifies the error in the group. + +.. _pgrst0**: + +Group 0 - Connection +-------------------- + +Related to the connection with the database. + ++---------------+-------------+-------------------------------------------------------------+ +| Code | HTTP status | Description | ++===============+=============+=============================================================+ +| .. _pgrst000: | 503 | Could not connect with the database due to an incorrect | +| | | :ref:`db-uri` or due to the PostgreSQL service not running. | +| PGRST000 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst001: | 503 | Could not connect with the database due to an internal | +| | | error. | +| PGRST001 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst002: | 503 | Could not connect with the database when building the | +| | | :ref:`schema_cache` due to the PostgreSQL service not | +| PGRST002 | | running. 
| ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst003: | 504 | The request timed out waiting for a pool connection | +| | | to be available. See :ref:`db-pool-acquisition-timeout`. | +| PGRST003 | | | ++---------------+-------------+-------------------------------------------------------------+ + +.. _pgrst1**: + +Group 1 - Api Request +--------------------- + +Related to the HTTP request elements. + ++---------------+-------------+-------------------------------------------------------------+ +| Code | HTTP status | Description | ++===============+=============+=============================================================+ +| .. _pgrst100: | 400 | Parsing error in the query string parameter. | +| | | See :ref:`h_filter`, :ref:`operators` and :ref:`ordering`. | +| PGRST100 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst101: | 405 | For :ref:`functions `, only ``GET`` and ``POST`` | +| | | verbs are allowed. Any other verb will throw this error. | +| PGRST101 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst102: | 400 | An invalid request body was sent(e.g. an empty body or | +| | | malformed JSON). | +| PGRST102 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst103: | 416 | An invalid range was specified for :ref:`limits`. | +| | | | +| PGRST103 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst105: | 405 | An invalid :ref:`PUT ` request was done | +| | | | +| PGRST105 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst106: | 406 | The schema specified when | +| | | :ref:`switching schemas ` is not present | +| PGRST106 | | in the :ref:`db-schemas` configuration variable. 
| ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst107: | 415 | The ``Content-Type`` sent in the request is invalid. | +| | | | +| PGRST107 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst108: | 400 | The filter is applied to a embedded resource that is not | +| | | specified in the ``select`` part of the query string. | +| PGRST108 | | See :ref:`embed_filters`. | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst109: | 400 | Restricting a Deletion or an Update using limits must | +| | | include the ordering of a unique column. | +| PGRST109 | | See :ref:`limited_update_delete`. | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst110: | 400 | When restricting a Deletion or an Update using limits | +| | | modifies more rows than the maximum specified in the limit. | +| PGRST110 | | See :ref:`limited_update_delete`. | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst111: | 500 | An invalid ``response.headers`` was set. | +| | | See :ref:`guc_resp_hdrs`. | +| PGRST111 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst112: | 500 | The status code must be a positive integer. | +| | | See :ref:`guc_resp_status`. | +| PGRST112 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst113: | 406 | More than one column was returned for a scalar result. | +| | | See :ref:`scalar_return_formats`. | +| PGRST113 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst114: | 400 | For an :ref:`UPSERT using PUT `, when | +| | | :ref:`limits and offsets ` are used. 
| +| PGRST114 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst115: | 400 | For an :ref:`UPSERT using PUT `, when the | +| | | primary key in the query string and the body are different. | +| PGRST115 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst116: | 406 | More than 1 or no items where returned when requesting | +| | | a singular response. See :ref:`singular_plural`. | +| PGRST116 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst117: | 405 | The HTTP verb used in the request in not supported. | +| | | | +| PGRST117 | | | ++---------------+-------------+-------------------------------------------------------------+ + +.. _pgrst2**: + +Group 2 - Schema Cache +---------------------- + +Related to a :ref:`stale schema cache `. Most of the time, these errors are solved by :ref:`reloading the schema cache `. + ++---------------+-------------+-------------------------------------------------------------+ +| Code | HTTP status | Description | ++===============+=============+=============================================================+ +| .. _pgrst200: | 400 | Caused by :ref:`stale_fk_relationships`, otherwise any of | +| | | the embedding resources or the relationship itself may not | +| PGRST200 | | exist in the database. | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst201: | 300 | An ambiguous embedding request was made. | +| | | See :ref:`embed_disamb`. | +| PGRST201 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst202: | 404 | Caused by a :ref:`stale_function_signature`, otherwise | +| | | the function may not exist in the database. | +| PGRST202 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. 
_pgrst203: | 300 | Caused by requesting overloaded functions with the same | +| | | argument names but different types, or by using a ``POST`` | +| PGRST203 | | verb to request overloaded functions with a ``JSON`` or | +| | | ``JSONB`` type unnamed parameter. The solution is to rename | +| | | the function or add/modify the names of the arguments. | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst204: | 400 | Caused when the :ref:`column specified ` | +| | | in the ``columns`` query parameter is not found. | +| PGRST204 | | | ++---------------+-------------+-------------------------------------------------------------+ + +.. _pgrst3**: + +Group 3 - JWT +------------- + +Related to the authentication process using JWT. You can follow the :ref:`tut1` for an example on how to implement authentication and the :doc:`Authentication page ` for more information on this process. + ++---------------+-------------+-------------------------------------------------------------+ +| Code | HTTP status | Description | ++===============+=============+=============================================================+ +| .. _pgrst300: | 500 | A :ref:`JWT secret ` is missing from the | +| | | configuration. | +| PGRST300 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst301: | 401 | Any error related to the verification of the JWT, | +| | | which means that the JWT provided is invalid in some way. | +| PGRST301 | | | ++---------------+-------------+-------------------------------------------------------------+ +| .. _pgrst302: | 401 | Attempted to do a request without | +| | | :ref:`authentication ` when the anonymous role | +| PGRST302 | | is disabled by not setting it in :ref:`db-anon-role`. | ++---------------+-------------+-------------------------------------------------------------+ + +.. The Internal Errors Group X** is always at the end + +.. 
_pgrst_X**: + +Group X - Internal +------------------ + +Internal errors. If you encounter any of these, you may have stumbled on a PostgREST bug, please `open an issue `_ and we'll be glad to fix it. + ++---------------+-------------+-------------------------------------------------------------+ +| Code | HTTP status | Description | ++===============+=============+=============================================================+ +| .. _pgrstX00: | 500 | Internal errors related to the library used for connecting | +| | | to the database. | +| PGRSTX00 | | | ++---------------+-------------+-------------------------------------------------------------+ diff --git a/docs/extensions/sphinx-copybutton.nix b/docs/extensions/sphinx-copybutton.nix new file mode 100644 index 0000000000..8d408d300c --- /dev/null +++ b/docs/extensions/sphinx-copybutton.nix @@ -0,0 +1,33 @@ +{ lib +, buildPythonPackage +, fetchFromGitHub +, sphinx +}: + +buildPythonPackage rec { + pname = "sphinx-copybutton"; + version = "0.4.0"; + + src = fetchFromGitHub { + owner = "executablebooks"; + repo = "sphinx-copybutton"; + rev = "v${version}"; + sha256 = "sha256-vrEIvQeP7AMXSme1PBp0ox5k8Q1rz+1cbHIO+o17Jqc="; + fetchSubmodules = true; + }; + + propagatedBuildInputs = [ + sphinx + ]; + + doCheck = false; # no tests + + pythonImportsCheck = [ "sphinx_copybutton" ]; + + meta = with lib; { + description = "A small sphinx extension to add a \"copy\" button to code blocks"; + homepage = "https://github.com/executablebooks/sphinx-copybutton"; + license = licenses.mit; + maintainers = with maintainers; [ Luflosi ]; + }; +} diff --git a/docs/extensions/sphinx-tabs.nix b/docs/extensions/sphinx-tabs.nix new file mode 100644 index 0000000000..fbc164419b --- /dev/null +++ b/docs/extensions/sphinx-tabs.nix @@ -0,0 +1,29 @@ +{ lib +, buildPythonPackage +, fetchPypi +, sphinx +}: + +buildPythonPackage rec { + pname = "sphinx-tabs"; + version = "3.2.0"; + + src = fetchPypi { + inherit pname version; + sha256 = 
"sha256:1970aahi6sa7c37cpz8nwgdb2xzf21rk6ykdd1m6w9wvxla7j4rk"; + }; + + propagatedBuildInputs = [ + sphinx + ]; + + doCheck = false; + + pythonImportsCheck = [ "sphinx_tabs" ]; + + meta = with lib; { + description = "Create tabbed content in Sphinx documentation when building HTML"; + homepage = "https://sphinx-tabs.readthedocs.io"; + license = licenses.mit; + }; +} diff --git a/docs/how-tos/create-soap-endpoint.rst b/docs/how-tos/create-soap-endpoint.rst new file mode 100644 index 0000000000..62f0d0412b --- /dev/null +++ b/docs/how-tos/create-soap-endpoint.rst @@ -0,0 +1,222 @@ +.. _create_soap_endpoint: + +Create a SOAP endpoint +====================== + +:author: `fjf2002 `_ + +PostgREST now has XML support. With a bit of work, SOAP endpoints become possible. + +Please note that PostgREST supports just ``text/xml`` MIME type in request/response headers ``Content-Type`` and ``Accept``. +If you have to use other MIME types such as ``application/soap+xml``, you could manipulate the headers in your reverse proxy. + + + +Minimal Example +--------------- +This example will simply return the request body, inside a tag ``therequestbodywas``. + +Add the following function to your PostgreSQL database: + +.. code-block:: postgres + + CREATE OR REPLACE FUNCTION my_soap_endpoint(xml) RETURNS xml AS $$ + DECLARE + nsarray CONSTANT text[][] := ARRAY[ + ARRAY['soapenv', 'http://schemas.xmlsoap.org/soap/envelope/'] + ]; + BEGIN + RETURN xmlelement( + NAME "soapenv:Envelope", + XMLATTRIBUTES('http://schemas.xmlsoap.org/soap/envelope/' AS "xmlns:soapenv"), + xmlelement(NAME "soapenv:Header"), + xmlelement( + NAME "soapenv:Body", + xmlelement( + NAME theRequestBodyWas, + (xpath('/soapenv:Envelope/soapenv:Body', $1, nsarray))[1] + ) + ) + ); + END; + $$ LANGUAGE plpgsql; + +Do not forget to refresh the :ref:`PostgREST schema cache `. + +Use ``curl`` for a first test: + +.. 
code-block:: bash

   curl http://localhost:3000/rpc/my_soap_endpoint \
     --header 'Content-Type: text/xml' \
     --header 'Accept: text/xml' \
     --data-binary @- <<XML
   <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
     <soapenv:Header/>
     <soapenv:Body>
       My SOAP Content
     </soapenv:Body>
   </soapenv:Envelope>
   XML

The output should contain the original request body within the ``therequestbodywas`` entity,
and should roughly look like:

.. code-block:: xml

   <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
     <soapenv:Header/>
     <soapenv:Body>
       <therequestbodywas>
         <soapenv:Body>
           My SOAP Content
         </soapenv:Body>
       </therequestbodywas>
     </soapenv:Body>
   </soapenv:Envelope>

Unfortunately, the ``Accept: text/xml`` header is currently mandatory for PostgREST; otherwise it will respond
with a ``Content-Type: application/json`` header and enclose the response in quotes.
(You can check the returned headers by adding ``-v`` to the curl call.)

If your SOAP clients do not send the ``Accept: text/xml`` header, you can fix that in your nginx reverse proxy
by adding something like the following to your ``location`` configuration:

.. code-block:: nginx

   set $accept $http_accept;
   if ($content_type ~ "^text/xml($|;)") {
     set $accept "text/xml";
   }
   proxy_set_header Accept $accept;

(The given example sets the ``Accept`` header for each request with Content-Type ``text/xml``.
Note that nginx's built-in variable is ``$content_type``.)


A more elaborate example
------------------------

Here we have a SOAP service that converts a fraction to a decimal value,
with pass-through of PostgreSQL errors to the SOAP response.
Please note that in production you should probably not pass plain database errors through,
since they can disclose internals to the client; handle the errors directly instead.


..
code-block:: postgres + + -- helper function + CREATE OR REPLACE FUNCTION _soap_envelope(body xml) + RETURNS xml + LANGUAGE sql + AS $function$ + SELECT xmlelement( + NAME "soapenv:Envelope", + XMLATTRIBUTES('http://schemas.xmlsoap.org/soap/envelope/' AS "xmlns:soapenv"), + xmlelement(NAME "soapenv:Header"), + xmlelement(NAME "soapenv:Body", body) + ); + $function$; + + -- helper function + CREATE OR REPLACE FUNCTION _soap_exception( + faultcode text, + faultstring text + ) + RETURNS xml + LANGUAGE sql + AS $function$ + SELECT _soap_envelope( + xmlelement(NAME "soapenv:Fault", + xmlelement(NAME "faultcode", faultcode), + xmlelement(NAME "faultstring", faultstring) + ) + ); + $function$; + + CREATE OR REPLACE FUNCTION fraction_to_decimal(xml) + RETURNS xml + LANGUAGE plpgsql + AS $function$ + DECLARE + nsarray CONSTANT text[][] := ARRAY[ + ARRAY['soapenv', 'http://schemas.xmlsoap.org/soap/envelope/'] + ]; + exc_msg text; + exc_detail text; + exc_hint text; + exc_sqlstate text; + BEGIN + -- simulating a statement that results in an exception: + RETURN _soap_envelope(xmlelement( + NAME "decimalValue", + ( + (xpath('/soapenv:Envelope/soapenv:Body/fraction/numerator/text()', $1, nsarray))[1]::text::int + / + (xpath('/soapenv:Envelope/soapenv:Body/fraction/denominator/text()', $1, nsarray))[1]::text::int + )::text::xml + )); + EXCEPTION WHEN OTHERS THEN + GET STACKED DIAGNOSTICS + exc_msg := MESSAGE_TEXT, + exc_detail := PG_EXCEPTION_DETAIL, + exc_hint := PG_EXCEPTION_HINT, + exc_sqlstate := RETURNED_SQLSTATE; + RAISE WARNING USING + MESSAGE = exc_msg, + DETAIL = exc_detail, + HINT = exc_hint; + RETURN _soap_exception(faultcode => exc_sqlstate, faultstring => concat(exc_msg, ', DETAIL: ', exc_detail, ', HINT: ', exc_hint)); + END + $function$; + +Let's test the ``fraction_to_decimal`` service with illegal values: + +.. 
code-block:: bash

   curl http://localhost:3000/rpc/fraction_to_decimal \
     --header 'Content-Type: text/xml' \
     --header 'Accept: text/xml' \
     --data-binary @- <<XML
   <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
     <soapenv:Header/>
     <soapenv:Body>
       <fraction>
         <numerator>42</numerator>
         <denominator>0</denominator>
       </fraction>
     </soapenv:Body>
   </soapenv:Envelope>
   XML

The output should roughly look like:

.. code-block:: xml

   <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
     <soapenv:Header/>
     <soapenv:Body>
       <soapenv:Fault>
         <faultcode>22012</faultcode>
         <faultstring>division by zero, DETAIL: , HINT: </faultstring>
       </soapenv:Fault>
     </soapenv:Body>
   </soapenv:Envelope>

References
----------
For more information concerning PostgREST, cf.

- :ref:`s_proc_single_unnamed`
- :ref:`scalar_return_formats`
- :ref:`Nginx reverse proxy `

For SOAP reference, visit

- the specification at https://www.w3.org/TR/soap/
- shorter, more practical advice at https://www.w3schools.com/xml/xml_soap.asp

diff --git a/docs/how-tos/providing-images-for-img.rst b/docs/how-tos/providing-images-for-img.rst new file mode 100644 index 0000000000..f1d892e0d6 --- /dev/null +++ b/docs/how-tos/providing-images-for-img.rst @@ -0,0 +1,96 @@

.. _providing_img:

Providing images for ``<img>``
==============================

:author: `pkel `_

In this how-to, you will learn how to create an endpoint for providing images to HTML :code:`<img>` tags without client-side JavaScript. In fact, the presented technique is suitable for providing not only images, but arbitrary files.

We will start with a minimal example that highlights the general concept.
Afterwards, we present a more detailed solution that fixes a few shortcomings of the first approach.

.. warning::

   Be careful when saving binaries in the database; having a separate storage service for these is preferable in most cases. See `Storing Binary files in the Database `_.

Minimal Example
---------------

First, we need a public table for storing the files.

.. code-block:: postgres

   create table files(
     id int primary key
   , blob bytea
   );

Let's assume this table contains an image of two cute kittens with id 42. 
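For trying this out locally, you could seed such a row yourself. This is a sketch that is not part of the original how-to; the hex literal is an arbitrary placeholder (the ASCII bytes for ``RIFF``), not a real image, but any ``bytea`` value is enough to test the plumbing:

.. code-block:: postgres

   -- hypothetical test row; replace the hex literal with real image bytes
   -- ('52494646' decodes to the ASCII string "RIFF", not a valid image)
   insert into files (id, blob)
   values (42, decode('52494646', 'hex'));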
We can retrieve this image in binary format from our PostgREST API by requesting :code:`/files?select=blob&id=eq.42` with the :code:`Accept: application/octet-stream` header.
Unfortunately, putting the URL into the :code:`src` of an :code:`<img>` tag will not work.
That's because browsers do not send the required :code:`Accept: application/octet-stream` header.

Luckily, we can specify the accepted media types in the :ref:`raw-media-types` configuration variable.
In this case, the :code:`Accept: image/webp` header is sent by many web browsers by default, so let's add it to the configuration variable, like this: :code:`raw-media-types="image/webp"`.
Now, the image will be displayed in the HTML page:

.. code-block:: html

   <img src="http://localhost:3000/files?select=blob&id=eq.42" alt="Cute Kittens" />

Improved Version
----------------

The basic solution has some shortcomings:

1. The response :code:`Content-Type` header is set to :code:`image/webp`.
   This might be a problem if you want to specify a different format for the file.
2. Download requests (e.g. Right Click -> Save Image As) to :code:`/files?select=blob&id=eq.42` will propose :code:`files` as the filename.
   This might confuse users.
3. Requests to the binary endpoint are not cached.
   This will cause unnecessary load on the database.

The following improved version addresses these problems.
First, in addition to the minimal example, we need to store the media types and names of our files in the database.

.. code-block:: postgres

   alter table files
     add column type text,
     add column name text;

Next, we set up an RPC endpoint that sets the content type and filename.
We use this opportunity to configure some basic client-side caching.
For production, you probably want to configure additional caches, e.g. on the :ref:`reverse proxy `.

..
code-block:: postgres + + create function file(id int) returns bytea as + $$ + declare headers text; + declare blob bytea; + begin + select format( + '[{"Content-Type": "%s"},' + '{"Content-Disposition": "inline; filename=\"%s\""},' + '{"Cache-Control": "max-age=259200"}]' + , files.type, files.name) + from files where files.id = file.id into headers; + perform set_config('response.headers', headers, true); + select files.blob from files where files.id = file.id into blob; + if found + then return(blob); + else raise sqlstate 'PT404' using + message = 'NOT FOUND', + detail = 'File not found', + hint = format('%s seems to be an invalid file id', file.id); + end if; + end + $$ language plpgsql; + +With this, we can obtain the cat image from :code:`/rpc/file?id=42`. Thus, the resulting HTML will be: + +.. code-block:: html + + Cute Kittens diff --git a/docs/how-tos/sql-user-management-using-postgres-users-and-passwords.rst b/docs/how-tos/sql-user-management-using-postgres-users-and-passwords.rst new file mode 100644 index 0000000000..4dec8ed1fa --- /dev/null +++ b/docs/how-tos/sql-user-management-using-postgres-users-and-passwords.rst @@ -0,0 +1,341 @@ +.. _sql-user-management-using-postgres-users-and-passwords: + +SQL User Management using postgres' users and passwords +======================================================= + +:author: `fjf2002 `_ + + +This is an alternative to chapter :ref:`sql_user_management`, solely using the PostgreSQL built-in table `pg_catalog.pg_authid `_ for user management. This means + +- no dedicated user table (aside from :code:`pg_authid`) is required + +- PostgreSQL's users and passwords (i. e. the stuff in :code:`pg_authid`) are also used at the PostgREST level. + +.. note:: + Only PostgreSQL users with SCRAM-SHA-256 password hashes (the default since PostgreSQL v14) are supported. + +.. warning:: + + This is experimental. We can't give you any guarantees, especially concerning security. Use at your own risk. 
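Since only SCRAM-SHA-256 hashes are supported, it is worth checking up front which hash format your existing roles use. A query along these lines (a sketch, run as a superuser; not part of the original how-to) shows the hash prefix per role — ``rolpassword`` starts with the literal string ``SCRAM-SHA-256`` for supported hashes:

.. code-block:: postgres

   -- 'SCRAM-SHA-256' means the role works with this how-to;
   -- an 'md5...' prefix marks a legacy hash that will not work
   SELECT rolname, left(rolpassword, 13) AS hash_format
   FROM pg_catalog.pg_authid
   WHERE rolpassword IS NOT NULL;

Roles with legacy hashes can be migrated by re-running ``ALTER ROLE ... PASSWORD`` with ``password_encryption`` set to ``scram-sha-256``.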
+ + + +Working with pg_authid and SCRAM-SHA-256 hashes +----------------------------------------------- + +As in :ref:`sql_user_management`, we create a :code:`basic_auth` schema: + +.. code-block:: postgres + + -- We put things inside the basic_auth schema to hide + -- them from public view. Certain public procs/views will + -- refer to helpers and tables inside. + CREATE SCHEMA IF NOT EXISTS basic_auth; + + +As in :ref:`sql_user_management`, we create the :code:`pgcrypto` and :code:`pgjwt` extensions. Here we prefer to put the extensions in its own schemas: + +.. code-block:: postgres + + CREATE SCHEMA ext_pgcrypto; + ALTER SCHEMA ext_pgcrypto OWNER TO postgres; + CREATE EXTENSION IF NOT EXISTS pgcrypto WITH SCHEMA ext_pgcrypto; + + +Concerning the `pgjwt extension `_, please cf. to :ref:`client_auth`. + +.. code-block:: postgres + + CREATE SCHEMA ext_pgjwt; + ALTER SCHEMA ext_pgjwt OWNER TO postgres; + CREATE EXTENSION IF NOT EXISTS pgjwt WITH SCHEMA ext_pgjwt; + + +In order to be able to work with postgres' SCRAM-SHA-256 password hashes, we also need the PBKDF2 key derivation function. Luckily there is `a PL/pgSQL implementation on stackoverflow `_: + +.. code-block:: plpgsql + + CREATE FUNCTION basic_auth.pbkdf2(salt bytea, pw text, count integer, desired_length integer, algorithm text) RETURNS bytea + LANGUAGE plpgsql IMMUTABLE + AS $$ + DECLARE + hash_length integer; + block_count integer; + output bytea; + the_last bytea; + xorsum bytea; + i_as_int32 bytea; + i integer; + j integer; + k integer; + BEGIN + algorithm := lower(algorithm); + CASE algorithm + WHEN 'md5' then + hash_length := 16; + WHEN 'sha1' then + hash_length = 20; + WHEN 'sha256' then + hash_length = 32; + WHEN 'sha512' then + hash_length = 64; + ELSE + RAISE EXCEPTION 'Unknown algorithm "%"', algorithm; + END CASE; + -- + block_count := ceil(desired_length::real / hash_length::real); + -- + FOR i in 1 .. 
block_count LOOP + i_as_int32 := E'\\000\\000\\000'::bytea || chr(i)::bytea; + i_as_int32 := substring(i_as_int32, length(i_as_int32) - 3); + -- + the_last := salt::bytea || i_as_int32; + -- + xorsum := ext_pgcrypto.HMAC(the_last, pw::bytea, algorithm); + the_last := xorsum; + -- + FOR j IN 2 .. count LOOP + the_last := ext_pgcrypto.HMAC(the_last, pw::bytea, algorithm); + + -- xor the two + FOR k IN 1 .. length(xorsum) LOOP + xorsum := set_byte(xorsum, k - 1, get_byte(xorsum, k - 1) # get_byte(the_last, k - 1)); + END LOOP; + END LOOP; + -- + IF output IS NULL THEN + output := xorsum; + ELSE + output := output || xorsum; + END IF; + END LOOP; + -- + RETURN substring(output FROM 1 FOR desired_length); + END $$; + + ALTER FUNCTION basic_auth.pbkdf2(salt bytea, pw text, count integer, desired_length integer, algorithm text) OWNER TO postgres; + + +Analogous to :ref:`sql_user_management` creates the function :code:`basic_auth.user_role`, we create a helper function to check the user's password, here with another name and signature (since we want the username, not an email address). +But contrary to :ref:`sql_user_management`, this function does not use a dedicated :code:`users` table with passwords, but instead utilizes the built-in table `pg_catalog.pg_authid `_: + +.. 
code-block:: plpgsql + + CREATE FUNCTION basic_auth.check_user_pass(username text, password text) RETURNS name + LANGUAGE sql + AS + $$ + SELECT rolname AS username + FROM pg_authid + -- regexp-split scram hash: + CROSS JOIN LATERAL regexp_match(rolpassword, '^SCRAM-SHA-256\$(.*):(.*)\$(.*):(.*)$') AS rm + -- identify regexp groups with sane names: + CROSS JOIN LATERAL (SELECT rm[1]::integer AS iteration_count, decode(rm[2], 'base64') as salt, decode(rm[3], 'base64') AS stored_key, decode(rm[4], 'base64') AS server_key, 32 AS digest_length) AS stored_password_part + -- calculate pbkdf2-digest: + CROSS JOIN LATERAL (SELECT basic_auth.pbkdf2(salt, check_user_pass.password, iteration_count, digest_length, 'sha256')) AS digest_key(digest_key) + -- based on that, calculate hashed passwort part: + CROSS JOIN LATERAL (SELECT ext_pgcrypto.digest(ext_pgcrypto.hmac('Client Key', digest_key, 'sha256'), 'sha256') AS stored_key, ext_pgcrypto.hmac('Server Key', digest_key, 'sha256') AS server_key) AS check_password_part + WHERE rolpassword IS NOT NULL + AND pg_authid.rolname = check_user_pass.username + -- verify password: + AND check_password_part.stored_key = stored_password_part.stored_key + AND check_password_part.server_key = stored_password_part.server_key; + $$; + + ALTER FUNCTION basic_auth.check_user_pass(username text, password text) OWNER TO postgres; + + + +Public User Interface +--------------------- + +Analogous to :ref:`sql_user_management`, we create a login function which takes a username and password and returns a JWT if the credentials match a user in the internal table. +Here we use the username instead of the email address to identify a user. + + +Logins +~~~~~~ + +As described in :ref:`client_auth`, we'll create a JWT token inside our login function. Note that you'll need to adjust the secret key which is hard-coded in this example to a secure (at least thirty-two character) secret of your choosing. + + +.. 
code-block:: plpgsql + + CREATE TYPE basic_auth.jwt_token AS ( + token text + ); + + -- if you are not using psql, you need to replace :DBNAME with the current database's name. + ALTER DATABASE :DBNAME SET "app.jwt_secret" to 'reallyreallyreallyreallyverysafe'; + + + CREATE FUNCTION public.login(username text, password text) RETURNS basic_auth.jwt_token + LANGUAGE plpgsql security definer + AS $$ + DECLARE + _role name; + result basic_auth.jwt_token; + BEGIN + -- check email and password + SELECT basic_auth.check_user_pass(username, password) INTO _role; + IF _role IS NULL THEN + RAISE invalid_password USING message = 'invalid user or password'; + END IF; + -- + SELECT ext_pgjwt.sign( + row_to_json(r), current_setting('app.jwt_secret') + ) AS token + FROM ( + SELECT login.username as role, + extract(epoch FROM now())::integer + 60*60 AS exp + ) r + INTO result; + RETURN result; + END; + $$; + + ALTER FUNCTION public.login(username text, password text) OWNER TO postgres; + + + +Permissions +~~~~~~~~~~~ + +Analogous to :ref:`sql_user_management`: +Your database roles need access to the schema, tables, views and functions in order to service HTTP requests. +Recall from the :ref:`roles` that PostgREST uses special roles to process requests, namely the authenticator and +anonymous roles. Below is an example of permissions that allow anonymous users to attempt to log in. + + +.. code-block:: postgres + + -- the names "anon" and "authenticator" are configurable and not + -- sacred, we simply choose them for clarity + CREATE ROLE anon NOINHERIT; + CREATE role authenticator NOINHERIT LOGIN PASSWORD 'secret'; + GRANT anon TO authenticator; + + GRANT EXECUTE ON FUNCTION public.login(username text, password text) TO anon; + + +Since the above :code:`login` function is defined as `security definer `_, +the anonymous user :code:`anon` doesn't need permission to access the table :code:`pg_catalog.pg_authid` . 
+:code:`grant execute on function` is included for clarity but it might not be needed, see :ref:`func_privs` for more details. + +Choose a secure password for role :code:`authenticator`. +Do not forget to configure PostgREST to use the :code:`authenticator` user to connect, and to use the :code:`anon` user as anonymous user. + + +Testing +------- + +Let us create a sample user: + +.. code-block:: postgres + + CREATE ROLE foo PASSWORD 'bar'; + + +Test at the SQL level +~~~~~~~~~~~~~~~~~~~~~ + +Execute: + +.. code-block:: postgres + + SELECT * FROM public.login('foo', 'bar'); + + +This should return a single scalar field like: + +:: + + token + ----------------------------------------------------------------------------------------------------------------------------- + eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiZm9vIiwiZXhwIjoxNjY4MTg4ODQ3fQ.idBBHuDiQuN_S7JJ2v3pBOr9QypCliYQtCgwYOzAqEk + (1 row) + + +Test at the REST level +~~~~~~~~~~~~~~~~~~~~~~ +An API request to call this function would look like: + +.. tabs:: + + .. code-tab:: http + + POST /rpc/login HTTP/1.1 + + { "username": "foo", "password": "bar" } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/login" \ + -X POST -H "Content-Type: application/json" \ + -d '{ "username": "foo", "password": "bar" }' + +The response would look like the snippet below. Try decoding the token at `jwt.io `_. (It was encoded with a secret of :code:`reallyreallyreallyreallyverysafe` as specified in the SQL code above. You'll want to change this secret in your app!) + +.. code:: json + + { + "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoic2VwcCIsImV4cCI6MTY2ODE4ODQzN30.WSytcouNMQe44ZzOQit2AQsqTKFD5mIvT3z2uHwdoYY" + } + + + +A more sophisticated test at the REST level +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Let's add a table, intended for the :code:`foo` user: + + +.. 
code-block:: postgres + + CREATE TABLE public.foobar(foo int, bar text, baz float); + ALTER TABLE public.foobar owner TO postgres; + + +Now try to get the table's contents with: + +.. tabs:: + + .. code-tab:: http + + GET /foobar HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/foobar" + + +This should fail --- of course, we haven't specified the user, thus PostgREST falls back to the :code:`anon` user and denies access. +Add an :code:`Authorization` header. Please use the token value from the login function call above instead of the one provided below. + +.. tabs:: + + .. code-tab:: http + + GET /foobar HTTP/1.1 + Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiZm9vIiwiZXhwIjoxNjY4MTkyMjAyfQ.zzdHCBjfkqDQLQ8D7CHO3cIALF6KBCsfPTWgwhCiHCY + + .. code-tab:: bash Curl + + curl "http://localhost:3000/foobar" \ + -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiZm9vIiwiZXhwIjoxNjY4MTkyMjAyfQ.zzdHCBjfkqDQLQ8D7CHO3cIALF6KBCsfPTWgwhCiHCY" + + +This will fail again --- we get :code:`Permission denied to set role`. We forgot to allow the authenticator role to switch into this user by executing: + +.. code-block:: postgres + + GRANT foo TO authenticator; + + +Re-execute the last REST request. We fail again --- we also forgot to grant permissions for :code:`foo` on the table. Execute: + +.. code-block:: postgres + + GRANT SELECT ON TABLE public.foobar TO foo; + +Now the REST request should succeed. An empty JSON array :code:`[]` is returned. diff --git a/docs/how-tos/working-with-postgresql-data-types.rst b/docs/how-tos/working-with-postgresql-data-types.rst new file mode 100644 index 0000000000..04e6562de1 --- /dev/null +++ b/docs/how-tos/working-with-postgresql-data-types.rst @@ -0,0 +1,763 @@ +.. _working_with_types: + +Working with PostgreSQL data types +================================== + +:author: `Laurence Isla `_ + +PostgREST makes use of PostgreSQL string representations to work with data types. 
Thanks to this, you can use special values, such as ``now`` for timestamps, ``yes`` for booleans or time values including the time zones. This page describes how you can take advantage of these string representations to perform operations on different PostgreSQL data types. + +.. contents:: + :local: + :depth: 1 + +Timestamps +---------- + +You can use the **time zone** to filter or send data if needed. + +.. code-block:: postgres + + create table reports ( + id int primary key + , due_date timestamptz + ); + +Suppose you are located in Sydney and want create a report with the date in the local time zone. Your request should look like this: + +.. tabs:: + + .. code-tab:: http + + POST /reports HTTP/1.1 + Content-Type: application/json + + [{ "id": 1, "due_date": "2022-02-24 11:10:15 Australia/Sydney" }, + { "id": 2, "due_date": "2022-02-27 22:00:00 Australia/Sydney" }] + + .. code-tab:: bash Curl + + curl "http://localhost:3000/reports" \ + -X POST -H "Content-Type: application/json" \ + -d '[{ "id": 1, "due_date": "2022-02-24 11:10:15 Australia/Sydney" },{ "id": 2, "due_date": "2022-02-27 22:00:00 Australia/Sydney" }]' + +Someone located in Cairo can retrieve the data using their local time, too: + +.. tabs:: + + .. code-tab:: http + + GET /reports?due_date=eq.2022-02-24+02:10:15+Africa/Cairo HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/reports?due_date=eq.2022-02-24+02:10:15+Africa/Cairo" + +.. code-block:: json + + [ + { + "id": 1, + "due_date": "2022-02-23T19:10:15-05:00" + } + ] + +The response has the date in the time zone configured by the server: ``UTC -05:00``. + +You can use other comparative filters and also all the `PostgreSQL special date/time input values `_ as illustrated in this example: + +.. tabs:: + + .. code-tab:: http + + GET /reports?or=(and(due_date.gte.today,due_date.lte.tomorrow),and(due_date.gt.-infinity,due_date.lte.epoch)) HTTP/1.1 + + .. 
code-tab:: bash Curl + + curl "http://localhost:3000/reports?or=(and(due_date.gte.today,due_date.lte.tomorrow),and(due_date.gt.-infinity,due_date.lte.epoch))" + +.. code-block:: json + + [ + { + "id": 2, + "due_date": "2022-02-27T06:00:00-05:00" + } + ] + +JSON +---- + +To work with a ``json`` type column, you can handle the value as a JSON object. + +.. code-block:: postgres + + create table products ( + id int primary key, + name text unique, + extra_info json + ); + +You can insert a new product using a JSON object for the ``extra_info`` column: + +.. tabs:: + + .. code-tab:: http + + POST /products HTTP/1.1 + Content-Type: application/json + + { + "id": 1, + "name": "Canned fish", + "extra_info": { + "expiry_date": "2025-12-31", + "exportable": true + } + } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/products" \ + -X POST -H "Content-Type: application/json" \ + -d @- << EOF + { + "id": 1, + "name": "Canned fish", + "extra_info": { + "expiry_date": "2025-12-31", + "exportable": true + } + } + EOF + +To query and filter the data see :ref:`json_columns` for a complete reference. + +Arrays +------ + +To handle `array types `_ you can use string representation or JSON array format. + +.. code-block:: postgres + + create table movies ( + id int primary key, + title text not null, + tags text[], + performance_times time[] + ); + +You can insert a new value using string representation. + +.. tabs:: + + .. code-tab:: http + + POST /movies HTTP/1.1 + Content-Type: application/json + + { + "id": 1, + "title": "Paddington", + "tags": "{family,comedy,not streamable}", + "performance_times": "{12:40,15:00,20:00}" + } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/movies" \ + -X POST -H "Content-Type: application/json" \ + -d @- << EOF + { + "id": 1, + "title": "Paddington", + "tags": "{family,comedy,not streamable}", + "performance_times": "{12:40,15:00,20:00}" + } + EOF + +Or you could send the same data using JSON array format: + +.. 
tabs:: + + .. code-tab:: http + + POST /movies HTTP/1.1 + Content-Type: application/json + + { + "id": 1, + "title": "Paddington", + "tags": ["family", "comedy", "not streamable"], + "performance_times": ["12:40", "15:00", "20:00"] + } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/movies" \ + -X POST -H "Content-Type: application/json" \ + -d @- << EOF + { + "id": 1, + "title": "Paddington", + "tags": ["family", "comedy", "not streamable"], + "performance_times": ["12:40", "15:00", "20:00"] + } + EOF + +To query the data you can use arrow operators. See :ref:`composite_array_columns`. + +Multidimensional Arrays +~~~~~~~~~~~~~~~~~~~~~~~ + +Similarly to one-dimensional arrays, both the string representation and JSON array format are allowed. + +.. code-block:: postgres + + -- This new column stores the cinema, floor and auditorium numbers in that order + alter table movies + add column cinema_floor_auditorium int[][][]; + +You can now update the item using JSON array format: + +.. tabs:: + + .. code-tab:: http + + PATCH /movies?id=eq.1 HTTP/1.1 + Content-Type: application/json + + { + "cinema_floor_auditorium": [ [ [1,2], [6,7] ], [ [3,5], [8,9] ] ] + } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/movies?id=eq.1" \ + -X PATCH -H "Content-Type: application/json" \ + -d @- << EOF + { + "cinema_floor_auditorium": [ [ [1,2], [6,7] ], [ [3,5], [8,9] ] ] + } + EOF + +Then, for example, to query the auditoriums that are located in the first cinema (position 0 in the array) and on the second floor (position 1 in the next inner array), we can use the arrow operators this way: + +.. tabs:: + + .. code-tab:: http + + GET /movies?select=title,auditorium:cinema_floor_auditorium->0->1&id=eq.1 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/movies?select=title,auditorium:cinema_floor_auditorium->0->1&id=eq.1" + +.. 
code-block:: json + + [ + { + "title": "Paddington", + "auditorium": [6,7] + } + ] + +Composite Types +--------------- + +With PostgREST, you have two options to handle `composite type columns `_. + +.. code-block:: postgres + + create type dimension as ( + length decimal(6,2), + width decimal (6,2), + height decimal (6,2), + unit text + ); + + create table products ( + id int primary key, + size dimension + ); + + insert into products (id, size) + values (1, '(5.0,5.0,10.0,"cm")'); + +On one hand you can insert values using string representation. + +.. tabs:: + + .. code-tab:: http + + POST /products HTTP/1.1 + Content-Type: application/json + + { "id": 2, "size": "(0.7,0.5,1.8,\"m\")" } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/products" \ + -X POST -H "Content-Type: application/json" \ + -d @- << EOF + { "id": 2, "size": "(0.7,0.5,1.8,\"m\")" } + EOF + +Or you could insert the same data in JSON format. + +.. tabs:: + + .. code-tab:: http + + POST /products HTTP/1.1 + Content-Type: application/json + + { + "id": 2, + "size": { + "length": 0.7, + "width": 0.5, + "height": 1.8, + "unit": "m" + } + } + + .. code-tab:: bash Curl + + curl "http://localhost:3000/products" \ + -X POST -H "Content-Type: application/json" \ + -d @- << EOF + { + "id": 2, + "size": { + "length": 0.7, + "width": 0.5, + "height": 1.8, + "unit": "m" + } + } + EOF + +You can also query the data using arrow operators. See :ref:`composite_array_columns`. + +Ranges +------ + +PostgREST allows you to handle `ranges `_. + +.. code-block:: postgres + + create table events ( + id int primary key, + name text unique, + duration tsrange + ); + +To insert a new event, specify the ``duration`` value as a string representation of the ``tsrange`` type: + +.. tabs:: + + .. code-tab:: http + + POST /events HTTP/1.1 + Content-Type: application/json + + { + "id": 1, + "name": "New Year's Party", + "duration": "['2022-12-31 11:00','2023-01-01 06:00']" + } + + .. 
code-tab:: bash Curl + + curl "http://localhost:3000/events" \ + -X POST -H "Content-Type: application/json" \ + -d @- << EOF + { + "id": 1, + "name": "New Year's Party", + "duration": "['2022-12-31 11:00','2023-01-01 06:00']" + } + EOF + +You can use range :ref:`operators ` to filter the data. But, in this case, requesting a filter like ``events?duration=cs.2023-01-01`` will return an error, because PostgreSQL needs an explicit cast from string to timestamp. A workaround is to use a range starting and ending in the same date: + +.. tabs:: + + .. code-tab:: http + + GET /events?duration=cs.[2023-01-01,2023-01-01] HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/events?duration=cs.\[2023-01-01,2023-01-01\]" + +.. code-block:: json + + [ + { + "id": 1, + "name": "New Year's Party", + "duration": "[\"2022-12-31 11:00:00\",\"2023-01-01 06:00:00\"]" + } + ] + +.. _casting_range_to_json: + +Casting a Range to a JSON Object +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +As you may have noticed, the ``tsrange`` value is returned as a string literal. To return it as a JSON value, first you need to create a function that will do the conversion from a ``tsrange`` type: + +.. code-block:: postgres + + create or replace function tsrange_to_json(tsrange) returns json as $$ + select json_build_object( + 'lower', lower($1) + , 'upper', upper($1) + , 'lower_inc', lower_inc($1) + , 'upper_inc', upper_inc($1) + ); + $$ language sql; + +Then, create the cast using this function: + +.. code-block:: postgres + + create cast (tsrange as json) with function tsrange_to_json(tsrange) as assignment; + +Finally, do the request :ref:`casting the range column `: + +.. tabs:: + + .. code-tab:: http + + GET /events?select=id,name,duration::json HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/events?select=id,name,duration::json" + +.. 
code-block:: json + + [ + { + "id": 1, + "name": "New Year's Party", + "duration": { + "lower": "2022-12-31T11:00:00", + "upper": "2023-01-01T06:00:00", + "lower_inc": true, + "upper_inc": true + } + } + ] + +.. note:: + + If you don't want to modify casts for built-in types, an option would be to `create a custom type `_ + for your own ``tsrange`` and add its own cast. + + .. code-block:: postgres + + create type mytsrange as range (subtype = timestamp, subtype_diff = tsrange_subdiff); + + -- define column types and casting function analogously to the above example + -- ... + + create cast (mytsrange as json) with function mytsrange_to_json(mytsrange) as assignment; + +Bytea +----- + +To send raw binary to PostgREST you need a function with a single unnamed parameter of `bytea type `_. + +.. code-block:: postgres + + create table files ( + id int primary key generated always as identity, + file bytea + ); + + create function upload_binary(bytea) returns void as $$ + insert into files (file) values ($1); + $$ language sql; + +Let's download the PostgREST logo for our test. + +.. code-block:: bash + + curl "https://postgrest.org/en/latest/_images/logo.png" -o postgrest-logo.png + +Now, to send the file ``postgrest-logo.png`` we need to set the ``Content-Type: application/octet-stream`` header in the request: + +.. tabs:: + + .. code-tab:: http + + POST /rpc/upload_binary HTTP/1.1 + Content-Type: application/octet-stream + + postgrest-logo.png + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/upload_binary" \ + -X POST -H "Content-Type: application/octet-stream" \ + --data-binary "@postgrest-logo.png" + +To get the image from the database, set the ``Accept: application/octet-stream`` header and select only the +``bytea`` type column. + +.. tabs:: + + .. code-tab:: http + + GET /files?select=file&id=eq.1 HTTP/1.1 + Accept: application/octet-stream + + .. 
code-tab:: bash Curl + + curl "http://localhost:3000/files?select=file&id=eq.1" \ + -H "Accept: application/octet-stream" + +Use more accurate headers according to the type of the files by using the :ref:`raw-media-types` configuration. For example, adding the ``raw-media-types="image/png"`` setting to the configuration file will allow you to use the ``Accept: image/png`` header: + +.. tabs:: + + .. code-tab:: http + + GET /files?select=file&id=eq.1 HTTP/1.1 + Accept: image/png + + .. code-tab:: bash Curl + + curl "http://localhost:3000/files?select=file&id=eq.1" \ + -H "Accept: image/png" + +See :ref:`providing_img` for a step-by-step example on how to handle images in HTML. + +.. warning:: + + Be careful when saving binaries in the database, having a separate storage service for these is preferable in most cases. See `Storing Binary files in the Database `_. + +hstore +------ + +You can work with data types belonging to additional supplied modules such as `hstore `_. + +.. code-block:: postgres + + -- Activate the hstore module in the current database + create extension if not exists hstore; + + create table countries ( + id int primary key, + name hstore unique + ); + +The ``name`` column will have the name of the country in different formats. You can insert values using the string representation for that data type: + +.. tabs:: + + .. code-tab:: http + + POST /countries HTTP/1.1 + Content-Type: application/json + + [ + { "id": 1, "name": "common => Egypt, official => \"Arab Republic of Egypt\", native => مصر" }, + { "id": 2, "name": "common => Germany, official => \"Federal Republic of Germany\", native => Deutschland" } + ] + + .. 
code-tab:: bash Curl + + curl "http://localhost:3000/countries" \ + -X POST -H "Content-Type: application/json" \ + -d @- << EOF + [ + { "id": 1, "name": "common => Egypt, official => \"Arab Republic of Egypt\", native => مصر" }, + { "id": 2, "name": "common => Germany, official => \"Federal Republic of Germany\", native => Deutschland" } + ] + EOF + +Notice that the use of ``"`` in the value of the ``name`` column needs to be escaped using a backslash ``\``. + +You can also query and filter the value of a ``hstore`` column using the arrow operators, as you would do for a :ref:`JSON column`. For example, if you want to get the native name of Egypt: + +.. tabs:: + + .. code-tab:: http + + GET /countries?select=name->>native&name->>common=like.Egypt HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/countries?select=name->>native&name->>common=like.Egypt" + +.. code-block:: json + + [{ "native": "مصر" }] + +.. _ww_postgis: + +PostGIS +------- + +You can use the string representation for `PostGIS `_ data types such as ``geometry`` or ``geography`` (you need to `install PostGIS `_ first). + +.. code-block:: postgres + + -- Activate the postgis module in the current database + create extension if not exists postgis; + + create table coverage ( + id int primary key, + name text unique, + area geometry + ); + +To add areas in polygon format, you can use string representation: + +.. tabs:: + + .. code-tab:: http + + POST /coverage HTTP/1.1 + Content-Type: application/json + + [ + { "id": 1, "name": "small", "area": "SRID=4326;POLYGON((0 0, 1 0, 1 1, 0 1, 0 0))" }, + { "id": 2, "name": "big", "area": "SRID=4326;POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))" } + ] + + .. 
code-tab:: bash Curl + + curl "http://localhost:3000/coverage" \ + -X POST -H "Content-Type: application/json" \ + -d @- << EOF + [ + { "id": 1, "name": "small", "area": "SRID=4326;POLYGON((0 0, 1 0, 1 1, 0 1, 0 0))" }, + { "id": 2, "name": "big", "area": "SRID=4326;POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))" } + ] + EOF + +Now, when you request the information, PostgREST will automatically cast the ``area`` column into a ``Polygon`` geometry type. Although this is useful, you may need the whole output to be in `GeoJSON `_ format out of the box, which can be done by including the ``Accept: application/geo+json`` in the request. This will work for PostGIS versions 3.0.0 and up and will return the output as a `FeatureCollection Object `_: + +.. tabs:: + + .. code-tab:: http + + GET /coverage HTTP/1.1 + Accept: application/geo+json + + .. code-tab:: bash Curl + + curl "http://localhost:3000/coverage" \ + -H "Accept: application/geo+json" + +.. code-block:: json + + { + "type": "FeatureCollection", + "features": [ + { + "type": "Feature", + "geometry": { + "type": "Polygon", + "coordinates": [ + [[0,0],[1,0],[1,1],[0,1],[0,0]] + ] + }, + "properties": { + "id": 1, + "name": "small" + } + }, + { + "type": "Feature", + "geometry": { + "type": "Polygon", + "coordinates": [ + [[0,0],[10,0],[10,10],[0,10],[0,0]] + ] + }, + "properties": { + "id": 2, + "name": "big" + } + } + ] + } + +If you need to add an extra property, like the area in square units by using ``st_area(area)``, you could add a generated column to the table and it will appear in the ``properties`` key of each ``Feature``. + +.. code-block:: postgres + + alter table coverage + add square_units double precision generated always as ( st_area(area) ) stored; + +In the case that you are using older PostGIS versions, then creating a function is your best option: + +.. 
code-block:: postgres + + create or replace function coverage_geo_collection() returns json as $$ + select + json_build_object( + 'type', 'FeatureCollection', + 'features', json_agg( + json_build_object( + 'type', 'Feature', + 'geometry', st_AsGeoJSON(c.area)::json, + 'properties', json_build_object('id', c.id, 'name', c.name) + ) + ) + ) + from coverage c; + $$ language sql; + +Now this query will return the same results: + +.. tabs:: + + .. code-tab:: http + + GET /rpc/coverage_geo_collection HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/coverage_geo_collection" + +.. code-block:: json + + { + "type": "FeatureCollection", + "features": [ + { + "type": "Feature", + "geometry": { + "type": "Polygon", + "coordinates": [ + [[0,0],[1,0],[1,1],[0,1],[0,0]] + ] + }, + "properties": { + "id": 1, + "name": "small" + } + }, + { + "type": "Feature", + "geometry": { + "type": "Polygon", + "coordinates": [ + [[0,0],[10,0],[10,10],[0,10],[0,0]] + ] + }, + "properties": { + "id": 2, + "name": "big" + } + } + ] + } diff --git a/docs/index.rst b/docs/index.rst new file mode 100644 index 0000000000..f138876d81 --- /dev/null +++ b/docs/index.rst @@ -0,0 +1,311 @@ +.. title:: PostgREST Documentation + +PostgREST Documentation +======================= + +.. container:: image-container + + .. figure:: _static/logo.png + +.. image:: https://img.shields.io/github/stars/postgrest/postgrest.svg?style=social + :target: https://github.com/PostgREST/postgrest + +.. image:: https://img.shields.io/github/v/release/PostgREST/postgrest.svg + :target: https://github.com/PostgREST/postgrest/releases + +.. image:: https://img.shields.io/docker/pulls/postgrest/postgrest.svg + :target: https://hub.docker.com/r/postgrest/postgrest/ + +.. image:: https://img.shields.io/badge/gitter-join%20chat%20%E2%86%92-brightgreen.svg + :target: https://gitter.im/begriffs/postgrest + +.. 
image:: https://img.shields.io/badge/Donate-Patreon-orange.svg?colorB=F96854 + :target: https://www.patreon.com/postgrest + +.. image:: https://img.shields.io/badge/Donate-PayPal-green.svg + :target: https://www.paypal.com/paypalme/postgrest + +| + +PostgREST is a standalone web server that turns your PostgreSQL database directly into a RESTful API. The structural constraints and permissions in the database determine the API endpoints and operations. + +Sponsors +-------- + +.. container:: image-container + + .. image:: _static/cybertec-new.png + :target: https://www.cybertec-postgresql.com/en/?utm_source=postgrest.org&utm_medium=referral&utm_campaign=postgrest + :width: 13em + + .. image:: _static/2ndquadrant.png + :target: https://www.2ndquadrant.com/en/?utm_campaign=External%20Websites&utm_source=PostgREST&utm_medium=Logo + :width: 13em + + .. image:: _static/retool.png + :target: https://retool.com/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: _static/gnuhost.png + :target: https://gnuhost.eu/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: _static/supabase.png + :target: https://supabase.com/?utm_source=postgrest%20backers&utm_medium=open%20source%20partner&utm_campaign=postgrest%20backers%20github&utm_term=homepage + :width: 13em + + .. image:: _static/oblivious.jpg + :target: https://oblivious.ai/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. The static/empty.png(created with `convert -size 320x95 xc:#fcfcfc empty.png`) is an ugly workaround + to create space and center the logos. It's not easy to layout with restructuredText. + + .. .. image:: _static/empty.png + :target: #sponsors + :width: 13em + +| + +Motivation +---------- + +Using PostgREST is an alternative to manual CRUD programming. Custom API servers suffer problems. Writing business logic often duplicates, ignores or hobbles database structure. Object-relational mapping is a leaky abstraction leading to slow imperative code. 
The PostgREST philosophy establishes a single declarative source of truth: the data itself. + +Declarative Programming +----------------------- + +It's easier to ask PostgreSQL to join data for you and let its query planner figure out the details than to loop through rows yourself. It's easier to assign permissions to db objects than to add guards in controllers. (This is especially true for cascading permissions in data dependencies.) It's easier to set constraints than to litter code with sanity checks. + +Leak-proof Abstraction +---------------------- + +There is no ORM involved. Creating new views happens in SQL with known performance implications. A database administrator can now create an API from scratch with no custom programming. + +One Thing Well +-------------- + +PostgREST has a focused scope. It works well with other tools like Nginx. This forces you to cleanly separate the data-centric CRUD operations from other concerns. Use a collection of sharp tools rather than building a big ball of mud. + +Getting Support +---------------- + +The project has a friendly and growing community. Join our `chat room `_ for discussion and help. You can also report or search for bugs/features on the Github `issues `_ page. + +.. toctree:: + :glob: + :caption: Release Notes + :titlesonly: + :hidden: + + v10.2.0 + v10.0.0 + v9.0.1 + v9.0.0 + releases/v8.0.0 + releases/v7.0.1 + releases/v7.0.0 + releases/v6.0.2 + releases/v5.2.0 + +Tutorials +--------- + +Are you new to PostgREST? This is the place to start! + +.. toctree:: + :glob: + :caption: Tutorials + :hidden: + + tutorials/* + +- :doc:`tutorials/tut0` +- :doc:`tutorials/tut1` + +Also have a look at :doc:`Installation ` and :ref:`community_tutorials`. + +Reference guides +---------------- + +Technical references for PostgREST's functionality. + +.. toctree:: + :caption: API + :hidden: + + api.rst + +.. toctree:: + :caption: Configuration + :hidden: + + configuration.rst + +.. 
toctree:: + :caption: Schema Cache + :hidden: + + schema_cache.rst + +.. toctree:: + :caption: Errors + :hidden: + + errors.rst + +- :doc:`API ` +- :doc:`configuration` +- :doc:`Schema Cache ` +- :doc:`Errors ` + +Topic guides +------------ + +Explanations of some key concepts in PostgREST. + +.. toctree:: + :caption: Authentication + :hidden: + + auth.rst + +.. toctree:: + :caption: Schema Structure + :hidden: + + schema_structure.rst + +.. toctree:: + :caption: Administration + :hidden: + + admin.rst + +.. toctree:: + :caption: Installation + :hidden: + + install.rst + +- :doc:`Authentication ` +- :doc:`Schema Structure ` +- :doc:`Administration ` +- :doc:`Installation ` + +.. _how_tos: + +How-to guides +------------- + +These are recipes that'll help you address specific use-cases. + +.. toctree:: + :glob: + :caption: How-to guides + :hidden: + + how-tos/working-with-postgresql-data-types + how-tos/providing-images-for-img + how-tos/create-soap-endpoint + how-tos/sql-user-management-using-postgres-users-and-passwords + +- :doc:`how-tos/providing-images-for-img` +- :doc:`how-tos/working-with-postgresql-data-types` +- :doc:`how-tos/create-soap-endpoint` +- :doc:`how-tos/sql-user-management-using-postgres-users-and-passwords` + +Ecosystem +--------- + +PostgREST has a growing ecosystem of examples, libraries, and experiments. Here is a selection. + +.. toctree:: + :caption: Ecosystem + :hidden: + + ecosystem.rst + +* :ref:`community_tutorials` +* :ref:`templates` +* :ref:`eco_example_apps` +* :ref:`devops` +* :ref:`eco_external_notification` +* :ref:`eco_extensions` +* :ref:`clientside_libraries` + + +Release Notes +------------- + +Changes among versions. + +- :doc:`releases/v9.0.0` +- :doc:`releases/v8.0.0` + +In Production +------------- + +Here are some companies that use PostgREST in production. 
+ +* `Catarse `_ +* `Datrium `_ +* `Drip Depot `_ +* `Image-charts `_ +* `Moat `_ +* `MotionDynamic - Fast highly dynamic video generation at scale `_ +* `Netwo `_ +* `Nimbus `_ + - See how Nimbus uses PostgREST in `Paul Copplestone's blog post `_. +* `OpenBooking `_ +* `Redsmin `_ +* `Sompani `_ +* `Supabase `_ + +.. Certs are failing + * `eGull `_ + +Testimonials +------------ + + "It's so fast to develop, it feels like cheating!" + + -- François-Guillaume Ribreau + + "I just have to say that, the CPU/Memory usage compared to our + Node.js/Waterline ORM based API is ridiculous. It's hard to even push + it over 60/70 MB while our current API constantly hits 1GB running on 6 + instances (dynos)." + + -- Louis Brauer + + "I really enjoyed the fact that all of a sudden I was writing + microservices in SQL DDL (and v8 JavaScript functions). I dodged so + much boilerplate. The next thing I knew, we pulled out a full rewrite + of a Spring+MySQL legacy app in 6 months. Literally 10x faster, and + code was super concise. The old one took 3 years and a team of 4 + people to develop." + + -- Simone Scarduzio + + "I like the fact that PostgREST does one thing, and one thing well. + While PostgREST takes care of bridging the gap between our HTTP server + and PostgreSQL database, we can focus on the development of our API in + a single language: SQL. This puts the database in the center of our + architecture, and pushed us to improve our skills in SQL programming + and database design." + + -- Eric Bréchemier, Data Engineer, eGull SAS + + "PostgREST is performant, stable, and transparent. It allows us to + bootstrap projects really fast, and to focus on our data and application + instead of building out the ORM layer. In our k8s cluster, we run a few + pods per schema we want exposed, and we scale up/down depending on demand. + Couldn't be happier." + + -- Anupam Garg, Datrium, Inc. 
+ +Contributing +------------ + +Please see the `Contributing guidelines `_ in the main PostgREST repository. diff --git a/docs/install.rst b/docs/install.rst new file mode 100644 index 0000000000..8ba9f56d4d --- /dev/null +++ b/docs/install.rst @@ -0,0 +1,381 @@ +.. _install: + +Installation +============ + +The release page has `pre-compiled binaries for Mac OS X, Windows, Linux and FreeBSD `_ . +The Linux binary is a static executable that can be run on any Linux distribution. + +You can also use your OS package manager. + +.. tabs:: + + .. group-tab:: Mac OSX + + You can install PostgREST from the `Homebrew official repo `_. + + .. code:: bash + + brew install postgrest + + .. group-tab:: FreeBSD + + You can install PostgREST from the `official ports `_. + + .. code:: bash + + pkg install hs-postgrest + + .. group-tab:: Linux + + .. tabs:: + + .. tab:: Arch Linux + + You can install PostgREST from the `community repo `_. + + .. code:: bash + + pacman -S postgrest + + .. tab:: Nix + + You can install PostgREST from nixpkgs. + + .. code:: bash + + nix-env -i haskellPackages.postgrest + + .. group-tab:: Windows + + You can install PostgREST using `Chocolatey `_ or `Scoop `_. + + .. code:: bash + + choco install postgrest + scoop install postgrest + +Running PostgREST +================= + +If you downloaded PostgREST from the release page, first extract the compressed file to obtain the executable. + +.. code-block:: bash + + # For UNIX platforms + tar Jxf postgrest-[version]-[platform].tar.xz + + # On Windows you should unzip the file + +Now you can run PostgREST with the :code:`--help` flag to see usage instructions: + +.. code-block:: bash + + # Running postgrest binary + ./postgrest --help + + # Running postgrest installed from a package manager + postgrest --help + + # You should see a usage help message + +The PostgREST server reads a configuration file as its only argument: + +.. 
code:: bash + + postgrest /path/to/postgrest.conf + + # You can also generate a sample config file with + # postgrest -e > postgrest.conf + # You'll need to edit this file and remove the usage parts for postgrest to read it + +For a complete reference of the configuration file, see :ref:`configuration`. + +.. note:: + + If you see a dialog box like this on Windows, it may be that the :code:`pg_config` program is not in your system path. + + .. image:: _static/win-err-dialog.png + + It usually lives in :code:`C:\Program Files\PostgreSQL\\bin`. See this `article `_ about how to modify the system path. + + To test that the system path is set correctly, run ``pg_config`` from the command line. You should see it output a list of paths. + +.. _pg-dependency: + +PostgreSQL dependency +--------------------- + +To use PostgREST you will need an underlying database. We require PostgreSQL 9.6 or greater. You can use something like `Amazon RDS `_ but installing your own locally is cheaper and more convenient for development. You can also run PostgreSQL in a :ref:`docker container`. + +Docker +====== + +You can get the `official PostgREST Docker image `_ with: + +.. code-block:: bash + + docker pull postgrest/postgrest + +To configure the container image, use :ref:`env_variables_config`. + +There are two ways to run the PostgREST container: with an existing external database, or through docker-compose. + +Containerized PostgREST with native PostgreSQL +---------------------------------------------- + +The first way to run PostgREST in Docker is to connect it to an existing native database on the host. + +.. code-block:: bash + + # Run the server + docker run --rm --net=host \ + -e PGRST_DB_URI="postgres://app_user:password@localhost/postgres" \ + postgrest/postgrest + +The database connection string above is just an example. Adjust the role and password as necessary. You may need to edit PostgreSQL's :code:`pg_hba.conf` to grant the user local login access. + +.. 
note:: + + Docker on Mac does not support the :code:`--net=host` flag. Instead you'll need to create an IP address alias to the host. Requests for the IP address from inside the container are unable to resolve and fall back to resolution by the host. + + .. code-block:: bash + + sudo ifconfig lo0 10.0.0.10 alias + + You should then use 10.0.0.10 as the host in your database connection string. Also remember to include the IP address in the :code:`listen_address` within postgresql.conf. For instance: + + .. code-block:: bash + + listen_addresses = 'localhost,10.0.0.10' + + You might also need to add a new IPv4 local connection within pg_hba.conf. For instance: + + .. code-block:: bash + + host all all 10.0.0.10/32 trust + + The docker command will then look like this: + + .. code-block:: bash + + # Run the server + docker run --rm -p 3000:3000 \ + -e PGRST_DB_URI="postgres://app_user:password@10.0.0.10/postgres" \ + postgrest/postgrest + +.. _pg-in-docker: + +Containerized PostgREST *and* db with docker-compose +---------------------------------------------------- + +To avoid having to install the database at all, you can run both it and the server in containers and link them together with docker-compose. Use this configuration: + +.. code-block:: yaml + + # docker-compose.yml + + version: '3' + services: + server: + image: postgrest/postgrest + ports: + - "3000:3000" + environment: + PGRST_DB_URI: postgres://app_user:password@db:5432/app_db + PGRST_OPENAPI_SERVER_PROXY_URI: http://127.0.0.1:3000 + depends_on: + - db + db: + image: postgres + ports: + - "5432:5432" + environment: + POSTGRES_DB: app_db + POSTGRES_USER: app_user + POSTGRES_PASSWORD: password + # Uncomment this if you want to persist the data. + # volumes: + # - "./pgdata:/var/lib/postgresql/data" + +Go into the directory where you saved this file and run :code:`docker-compose up`. You will see the logs of both the database and PostgREST, and be able to access the latter on port 3000. 
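
Once the containers are running, you can do a quick sanity check from another terminal. This is a minimal sketch assuming the ``docker-compose.yml`` above with its default ``3000:3000`` port mapping; the ``todos`` endpoint shown in the comment is hypothetical and only exists if you have created such a table in an exposed schema:

```shell
# The root endpoint returns the auto-generated OpenAPI description,
# so it responds even before you create any tables:
curl "http://localhost:3000/"

# Any table or view in an exposed schema becomes an endpoint, e.g.:
# curl "http://localhost:3000/todos"
```

If the first request fails with a connection error, the database container may still be starting up; wait a moment and retry, or check the logs printed by :code:`docker-compose up`.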
+ +If you want to have a visual overview of your API in your browser you can add swagger-ui to your :code:`docker-compose.yml`: + +.. code-block:: yaml + + swagger: + image: swaggerapi/swagger-ui + ports: + - "8080:8080" + expose: + - "8080" + environment: + API_URL: http://localhost:3000/ + +With this you can see the swagger-ui in your browser on port 8080. + +.. _build_source: + +Building from Source +==================== + +When a pre-built binary does not exist for your system you can build the project from source. + +.. note:: + + We discourage building and using PostgREST on **Alpine Linux** because of a reported GHC memory leak on that platform. + +You can build PostgREST from source with `Stack `_. It will install any necessary Haskell dependencies on your system. + +* `Install Stack `_ for your platform +* Install Library Dependencies + + ===================== ======================================= + Operating System Dependencies + ===================== ======================================= + Ubuntu/Debian libpq-dev, libgmp-dev, zlib1g-dev + CentOS/Fedora/Red Hat postgresql-devel, zlib-devel, gmp-devel + BSD postgresql12-client + OS X libpq, gmp + ===================== ======================================= + +* Build and install binary + + .. code-block:: bash + + git clone https://github.com/PostgREST/postgrest.git + cd postgrest + + # adjust local-bin-path to taste + stack build --install-ghc --copy-bins --local-bin-path /usr/local/bin + +.. note:: + + - If building fails and your system has less than 1GB of memory, try adding a swap file. + - `--install-ghc` flag is only needed for the first build and can be omitted in the subsequent builds. + +* Check that the server is installed: :code:`postgrest --help`. + +.. _deploy_heroku: + +Deploying to Heroku +=================== + +1. Log into Heroku using the `Heroku CLI `_: + + .. 
code-block:: bash + + # If you have multiple Heroku accounts, use flag '--interactive' to switch between them + heroku login --interactive + + +2. Create a new Heroku app using the PostgREST buildpack: + + .. code-block:: bash + + mkdir ${YOUR_APP_NAME} + cd ${YOUR_APP_NAME} + git init . + + heroku apps:create ${YOUR_APP_NAME} --buildpack https://github.com/PostgREST/postgrest-heroku.git + heroku git:remote -a ${YOUR_APP_NAME} + +3. Create a new Heroku PostgreSQL add-on attached to the app and keep notes of the assigned add-on name (e.g. :code:`postgresql-curly-58902`) referred later as ${HEROKU_PG_DB_NAME} + + .. code-block:: bash + + heroku addons:create heroku-postgresql:standard-0 -a ${YOUR_APP_NAME} + # wait until the add-on is available + heroku pg:wait -a ${YOUR_APP_NAME} + +4. Create the necessary user roles according to the + `PostgREST documentation `_: + + .. code-block:: bash + + heroku pg:credentials:create --name api_user -a ${YOUR_APP_NAME} + # use the following command to ensure the new credential state is active before attaching it + heroku pg:credentials -a ${YOUR_APP_NAME} + + heroku addons:attach ${HEROKU_PG_DB_NAME} --credential api_user -a ${YOUR_APP_NAME} + +5. Connect to the PostgreSQL database and create some sample data: + + .. code-block:: bash + + heroku psql -a ${YOUR_APP_NAME} + + .. code-block:: postgres + + # from the psql command prompt execute the following commands: + create schema api; + + create table api.todos ( + id serial primary key, + done boolean not null default false, + task text not null, + due timestamptz + ); + + insert into api.todos (task) values + ('finish tutorial 0'), ('pat self on back'); + + grant usage on schema api to api_user; + grant select on api.todos to api_user; + +6. Create the :code:`Procfile`: + + .. code-block:: bash + + web: PGRST_SERVER_HOST=0.0.0.0 PGRST_SERVER_PORT=${PORT} PGRST_DB_URI=${PGRST_DB_URI:-${DATABASE_URL}} ./postgrest-${POSTGREST_VER} + .. 
+ + Set the following environment variables on Heroku: + + .. code-block:: bash + + heroku config:set POSTGREST_VER=10.0.0 + heroku config:set PGRST_DB_SCHEMA=api + heroku config:set PGRST_DB_ANON_ROLE=api_user + .. + + PGRST_DB_URI can be set if an external database is used or if it's different from the default Heroku DATABASE_URL. The latter is used if nothing is provided. + POSTGREST_VER is mandatory to select and build the required PostgREST release. + + See https://postgrest.org/en/stable/configuration.html#environment-variables for the full list of environment variables. + +7. Build and deploy your app: + + .. code-block:: bash + + git add Procfile + git commit -m "PostgREST on Heroku" + git push heroku master + .. + + Your Heroku app should be live at :code:`${YOUR_APP_NAME}.herokuapp.com`. + +8. Test your app + + From a terminal, display the application logs: + + .. code-block:: bash + + heroku logs -t + .. + + From a different terminal, retrieve the previously created records with curl: + + .. code-block:: bash + + curl https://${YOUR_APP_NAME}.herokuapp.com/todos + .. + + Then test that any attempt to modify the table via the read-only user is rejected: + + .. code-block:: bash + + curl https://${YOUR_APP_NAME}.herokuapp.com/todos -X POST \ + -H "Content-Type: application/json" \ + -d '{"task": "do bad thing"}' diff --git a/docs/livereload_docs.py b/docs/livereload_docs.py new file mode 100755 index 0000000000..67a2165d66 --- /dev/null +++ b/docs/livereload_docs.py @@ -0,0 +1,11 @@ +#!/usr/bin/env python +from livereload import Server, shell +from subprocess import call + +## Build docs at startup +call(["sphinx-build", "-b", "html", "-a", "-n", ".", "_build"]) +server = Server() +server.watch("**/*.rst", shell("sphinx-build -b html -a -n .
_build")) +# For custom port and host +# server.serve(root='_build/', host='192.168.1.2') +server.serve(root="_build/") diff --git a/docs/postgrest.dict b/docs/postgrest.dict new file mode 100644 index 0000000000..789cdcdf4c --- /dev/null +++ b/docs/postgrest.dict @@ -0,0 +1,208 @@ +personal_ws-1.1 en 0 utf-8 +Adossi +AMQP +api +API's +Archlinux +aud +Auth +auth +authenticator +backoff +balancer +Beles +booleans +Bouscal +buildpack +Bytea +Cardano +cd +centric +changelog +ClojureScript +cloudfared +config +CORS +CPUs +cryptographically +CSV +Daemonizing +DDL +DevOps +DiBiase +dockerize +DoS +eq +ETH +Ethereum +EveryLayout +Fenko +Fernandes +filename +FreeBSD +fts +GC +GeoJSON +GHC +Github +Google +grantor +GraphQL +gte +GUC +gucs +Gumbs +Haskell +Heroku +HMAC +Homebrew +hstore +HTTP +HTTPS +HV +Ibarluzea +ilike +imatch +io +IP +JS +js +JSON +JWK +JWT +jwt +JWTs +Kinesis +Kofi +Kubernetes +localhost +login +Logins +logins +lon +lt +lte +middleware +misprediction +Mithril +multi +MVCC +namespace +namespaced +neq +nginx +ngrep +nixpkgs +npm +nxl +nxr +OAuth +onwards +OpenAPI +openapi +ORM +ov +passphrase +Pawel +PBKDF +Pelletier +Petr +PgBouncer +pgcrypto +pgjwt +pgrst +pgrstX +PGRSTX +pgSQL +authid +phfts +phraseto +plainto +plfts +poolers +POSIX +PostGIS +PostgreSQL +PostgreSQL's +PostgREST +postgres +postgrest +PostgREST's +pre +preflight +psql +Qin +RabbitMQ +Rafaj +RDS +reallyreallyreallyreallyverysafe +Rechkemmer +reconnection +Redux +refactor +Reloadable +Remo +requester's +RESTful +RestSharp +RLS +RPC +RSA +Saleeba +savepoint +schemas +Sencha +Serverless +Severin +SHA +signup +SIGUSR +sl +SNS +sqitch +SQL +sql +sr +SSL +stateful +stdout +Stolarz +subselect +SuperAgent +SvelteKit +SwaggerUI +syslog +systemd +Tcl +tmuxp +todo +todos +Tsingson +tsquery +tx +Tyll +TypeScript +UI +ui +unicode +unix +updatable +UPSERT +Upsert +upsert +uri +url +urls +variadic +Vercel +verifier +versioning +Vondra +Vue +WAI +webhooks +websearch +Websockets +webuser +wfts +ZeroMQ diff 
--git a/docs/releases/v10.0.0.rst new file mode 100644 index 0000000000..7dc46fedae --- /dev/null +++ b/docs/releases/v10.0.0.rst @@ -0,0 +1,227 @@ + +PostgREST 10.0.0 +================ + +Features +-------- + +XML/SOAP support for RPC +~~~~~~~~~~~~~~~~~~~~~~~~ + +RPC now understands the ``text/xml`` media type, allowing SQL functions to send XML output (``Accept: text/xml``) and receive XML input (``Content-Type: text/xml``). This makes SOAP endpoints possible; check the :ref:`create_soap_endpoint` how-to and the :ref:`scalar_return_formats` reference for more details. + +GeoJSON support +~~~~~~~~~~~~~~~ + +GeoJSON is supported across the board (reads, writes, RPC) with the ``Accept: application/geo+json`` header; this requires PostGIS 3.0.0 or later. The :ref:`working with PostGIS section ` has an example to get you started. + +Execution Plan +~~~~~~~~~~~~~~ + +The :ref:`execution plan ` of a request is now obtainable with the ``Accept: application/vnd.pgrst.plan`` header. The result can be in ``text`` or ``json`` format and is compatible with EXPLAIN visualizers like `explain.depesz.com `_ or `explain.dalibo.com `_. + +Resource Embedding +~~~~~~~~~~~~~~~~~~ + +- A :ref:`one-to-one relationship ` is now detected when a foreign key is unique. + +- Using :ref:`computed_relationships`, you can add custom relationships or override automatically detected ones. This makes :ref:`resource_embedding` possible on Foreign Data Wrappers and complex SQL views. + +Horizontal/Vertical Filtering +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +- :ref:`Accessing fields of a Composite type or elements of an Array type ` is now possible with the arrow operators (``->``, ``->>``), in the same way you would access JSON fields. + +- :ref:`pattern_matching` operators for `POSIX regular expressions `_ are now available: ``match`` and ``imatch``, equivalent in PostgreSQL to ``~`` and ``~*`` respectively.
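As a quick sketch of these filtering features, assume a hypothetical ``people`` table with a composite ``address`` column and a text ``name`` column (these table and column names are illustrative, not from the release itself):

.. code-block:: http

   GET /people?select=id,address->>city HTTP/1.1

.. code-block:: http

   GET /people?name=imatch.john HTTP/1.1

The first request reads the ``city`` field of the composite ``address`` column with the arrow operator; the second filters ``name`` with a case-insensitive POSIX regular expression.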
+ +Insertions/Updates +~~~~~~~~~~~~~~~~~~ + +- ``limit`` can now affect the number of updated/deleted rows. See :ref:`limited_update_delete`. + +OpenAPI +~~~~~~~ + +You can now activate the "Authorize" button in SwaggerUI by enabling the :ref:`openapi-security-active` configuration. Add your JWT token prepending :code:`Bearer` to it and you'll be able to request protected resources. + +Administration +~~~~~~~~~~~~~~ + +- Two :ref:`health check endpoints ` are now exposed in a secondary port. + +- :ref:`pgrst_logging` now shows the database user. + +- It is now possible to execute PostgREST without specifying any configuration variable. The three that were mandatory on the previous versions, are no longer so. + + - If :ref:`db-uri` is not set, PostgREST will use the `libpq environment variables `_ for the database connection. + - If :ref:`db-schemas` is not set, it will use the database ``public`` schema. + - If :ref:`db-anon-role` is not set, it will not allow anonymous requests. + +Error messages +~~~~~~~~~~~~~~ + +- To increase consistency, all the errors messages are now normalized. The ``hint``, ``details``, ``code`` and ``message`` fields will always be present in the body, each one defaulting to a ``null`` value. In the same way, the :ref:`errors that were raised ` with ``SQLSTATE`` now include the ``message`` and ``code`` in the body. + +- To further clarify the source of an error, we now add a ``PGRST`` prefix to the error code of all the errors that are PostgREST-specific and don't come from the database. These errors have unique codes that identify them and are documented in the :ref:`pgrst_errors` section. + +Documentation improvements +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* Added a :doc:`/how-tos/working-with-postgresql-data-types` how-to, which contains explanations and examples on how to work with different PostgreSQL data types such as timestamps, ranges or PostGIS types, among others. 
+ +* Added in-database and environment variable settings for each :ref:`configuration variable `. + +* Added the :ref:`file_descriptors` subsection. + +* Added a reference page for :doc:`Error documentation `. + +* Moved the :ref:`error_source` and the :ref:`status_codes` sections to the :doc:`errors reference page `. + +* Moved the *Casting type to custom JSON* how-to to the :ref:`casting_range_to_json` subsection. + +* Removed direct links for PostgREST versions older than 8.0 from the versions menu. + +* Removed the *Embedding table from another schema* how-to. + +* Restructured the :ref:`resource_embedding` section: + + - Added a :ref:`one-to-many` and :ref:`many-to-one` subsections. + + - Renamed the *Embedding through join tables* subsection to :ref:`many-to-many`. + +* Split up the *Insertions/Updates* section into :ref:`insert` and :ref:`update`. + +Breaking changes +---------------- + +* Many-to-many relationships now require that foreign key columns be part of the join table composite key + + - This was needed to reduce :ref:`embed_disamb` errors in complex schemas(`#2070 `_). + + - For migrating to this version, the less invasive method is to use :ref:`computed_relationships` to replace the previous many-to-many relationships. + + - Otherwise you can change your join table primary key. For example with ``alter table permission_user drop constraint permission_user_pkey, add primary key (id, user_id, permission_id);`` + +* Views now are not detected when embedding using :ref:`target_disamb`. + + - This embedding form was easily made ambiguous whenever a new view was added(`#2277 `_). + + - For migrating to this version, you can use :ref:`computed_relationships` to replace the previous view relationships. + + - :ref:`hint_disamb` works as usual on views. 
+ +* ``limit/offset`` now limits the affected rows on ``UPDATE``/``DELETE`` + + - Previously, ``limit``/``offset`` only limited the returned rows but not the actual updated rows(`#2156 `_) + +* ``max-rows`` is no longer applied on ``POST``, ``PATCH``, ``PUT`` and ``DELETE`` returned rows + + - This was misleading because the affected rows were not really affected by ``max-rows``, only the returned rows were limited(`#2155 `_) + +* Return ``204 No Content`` without ``Content-Type`` for RPCs returning ``VOID`` + + - Previously, those RPCs would return ``null`` as a body with ``Content-Type: application/json`` (`#2001 `_). + +* Using ``Prefer: return=representation`` no longer returns a ``Location`` header + + - This reduces unnecessary computing for all insertions (`#2312 `_) + +Bug fixes +--------- + +* Return ``204 No Content`` without ``Content-Type`` for ``PUT`` (`#2058 `_) + +* Clarify error for failed schema cache load. (`#2107 `_) + + - From ``Database connection lost. Retrying the connection`` to ``Could not query the database for the schema cache. Retrying.`` + +* Fix silently ignoring filter on a non-existent embedded resource (`#1771 `_) + +* Remove functions, which are not callable due to unnamed arguments, from schema cache and OpenAPI output. (`#2152 `_) + +* Fix accessing JSON array fields with ``->`` and ``->>`` in ``?select=`` and ``?order=``. (`#2145 `_) + +* Ignore ``max-rows`` on ``POST``, ``PATCH``, ``PUT`` and ``DELETE`` (`#2155 `_) + +* Fix inferring a foreign key column as a primary key column on views (`#2254 `_) + +* Restrict generated many-to-many relationships (`#2070 `_) + + - Only adds many-to-many relationships when a table has foreign keys to two other tables and these foreign key columns are part of the table's primary key columns. + +* Allow casting to types with underscores and numbers (e.g. 
``select=oid_array::_int4``) (`#2278 `_) + +* Prevent views from breaking one-to-many/many-to-one embeds when using column or foreign key as target (`#2277 `_, `#2238 `_, `#1643 `_) + + - When using a column or foreign key as target for embedding (``/tbl?select=*,col-or-fk(*)``), only tables are now detected and views are not. + + - You can still use a column or an inferred foreign key on a view to embed a table (``/view?select=*,col-or-fk(*)``) + +* Increase the ``db-pool-timeout`` to 1 hour to prevent frequent high connection latency (`#2317 `_) + +* The search path now correctly identifies schemas with uppercase and special characters in their names (regression) (`#2341 `_) + +* "404 Not Found" on nested routes and "405 Method Not Allowed" errors no longer start an empty database transaction (`#2364 `_) + +* Fix inaccurate result count when an inner embed was selected after a normal embed in the query string (`#2342 `_) + +* ``OPTIONS`` requests no longer start an empty database transaction (`#2376 `_) + +* Allow using columns with dollar sign ($) without double quoting in filters and ``select`` (`#2395 `_) + +* Fix loop crash error on startup in PostgreSQL 15 beta 3. ``Log: "UNION types \"char\" and text cannot be matched."`` (`#2410 `_) + +* Fix race conditions managing database connection helper (`#2397 `_) + +* Allow ``limit=0`` in the request query to return an empty array (`#2269 `_) + +Thanks +------ + +Big thanks from the `PostgREST team `_ to our sponsors! + +.. container:: image-container + + .. image:: ../_static/cybertec-new.png + :target: https://www.cybertec-postgresql.com/en/?utm_source=postgrest.org&utm_medium=referral&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/2ndquadrant.png + :target: https://www.2ndquadrant.com/en/?utm_campaign=External%20Websites&utm_source=PostgREST&utm_medium=Logo + :width: 13em + + .. 
image:: ../_static/retool.png + :target: https://retool.com/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/gnuhost.png + :target: https://gnuhost.eu/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/supabase.png + :target: https://supabase.com/?utm_source=postgrest%20backers&utm_medium=open%20source%20partner&utm_campaign=postgrest%20backers%20github&utm_term=homepage + :width: 13em + + .. image:: ../_static/oblivious.jpg + :target: https://oblivious.ai/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + +* Evans Fernandes +* `Jan Sommer `_ +* `Franz Gusenbauer `_ +* `Daniel Babiak `_ +* Tsingson Qin +* Michel Pelletier +* Jay Hannah +* Robert Stolarz +* Nicholas DiBiase +* Christopher Reid +* Nathan Bouscal +* Daniel Rafaj +* David Fenko +* Remo Rechkemmer +* Severin Ibarluzea +* Tom Saleeba +* Pawel Tyll + +If you like to join them please consider `supporting PostgREST development `_. diff --git a/docs/releases/v10.2.0.rst b/docs/releases/v10.2.0.rst new file mode 100644 index 0000000000..3011e470e2 --- /dev/null +++ b/docs/releases/v10.2.0.rst @@ -0,0 +1,153 @@ + +PostgREST 10.2.0 +================ + +This minor version adds bug fixes and some features that provide stability to v10.0.0. These release notes include the changes added in versions `10.1.0 `_, `10.1.1 `_ and `10.1.2 `_. You can look at the detailed changelog and download the pre-compiled binaries on the `GitHub release page `_. + +Features +-------- + +Pool Connection Lifetime +~~~~~~~~~~~~~~~~~~~~~~~~ + +To prevent memory leaks caused by long-lived connections, PostgREST limits their lifetime in the pool through :ref:`db-pool-max-lifetime`. + +Pool Connection Acquisition Timeout +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +There is now a time limit to wait for pool connections to be acquired. 
If a new request cannot get a connection within the time specified in :ref:`db-pool-acquisition-timeout`, a response with a ``504`` status is returned. + +Documentation improvements +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* Added HTTP status codes to the :ref:`pgrst_errors`. + +* Added a how-to on :ref:`sql-user-management-using-postgres-users-and-passwords`. + +* Updated the :ref:`Heroku installation page `. + +Changes +------- + +* Removed the ``db-pool-timeout`` option because it was removed in the ``hasql-pool`` library that PostgREST uses for SQL connections. (`#2444 `_) + +Deprecated +---------- + +* Deprecated bulk calls made by including the ``Prefer: params=multiple-objects`` header in the request. It is preferable to use a function with an :ref:`array ` or JSON parameter for better performance. (`#1385 `_) + +Bug fixes +--------- + +* Reduce allocations in communication with PostgreSQL, particularly for request bodies. (`#2261 `_, `#2349 `_, `#2467 `_) + +* Fix ``SIGUSR1`` to fully flush the connection pool. (`#2401 `_, `#2444 `_) + +* Fix opening an empty transaction on failed resource embedding. (`#2428 `_) + +* Fix embedding the same table multiple times.
(`#2455 `_) + +* Fix a regression when embedding views where base tables have a different column order for foreign key columns (`#2518 `_) + +* Fix a regression with the ``Location`` header when :ref:`inserting ` into views with primary keys from multiple tables (`#2458 `_) + +* Fix a regression in OpenAPI output with mode ``follow-privileges`` (`#2356 `_) + +* Fix infinite recursion when loading schema cache with self-referencing view (`#2283 `_) + +* Return status code ``200`` instead of ``404`` for ``PATCH`` requests which don't affect any rows (`#2343 `_) + +* Treat the :ref:`computed relationships ` that do not return ``SETOF`` as M2O/O2O relationship (`#2481 `_) + +* Fix embedding a computed relationship with a normal relationship (`#2534 `_) + +* Fix error message when ``[]`` is used inside ``select`` (`#2362 `_) + +* Disallow ``!inner`` on computed columns (`#2475 `_) + +* Ignore leading and trailing spaces in column names when parsing the query string (`#2285 `_) + +* Fix ``UPSERT`` with PostgreSQL 15 (`#2545 `_) + +* Fix embedding views with multiple references to the same base column (`#2459 `_) + +* Fix regression when embedding views with partial references to multi column foreign keys (`#2548 `_) + +* Fix regression when requesting ``limit=0`` and ``db-max-row`` is set (`#2558 `_) + +* Return a clear error without hitting the database when trying to update or insert an unknown column with ``?columns`` (`#2542 `_) + +* Fix bad M2M embedding on RPC (`#2565 `_) + +* Replace misleading error message when no function is found with a hint containing functions/parameters names suggestions (`#2575 `_) + +* Move explanation about "single parameters" from the ``message`` to the ``details`` in the error output (`#2582 `_) + +* Replace misleading error message when no relationship is found with a hint containing parent/child names suggestions (`#2569 `_) + +* Add the required OpenAPI items object when the parameter is an array (`#1405 `_) + +* Add upsert headers 
for ``POST`` requests to the OpenAPI output (`#2592 `_) + +* Fix foreign keys pointing to ``VIEW`` instead of ``TABLE`` in OpenAPI output (`#2623 `_) + +* Consider any PostgreSQL authentication failure as fatal and exit immediately (`#2622 `_) + +* Fix ``NOTIFY pgrst`` not reloading the db connections catalog cache (`#2620 `_) + +* Fix ``db-pool-acquisition-timeout`` not logging to stderr when the timeout is reached (`#2667 `_) + +* Fix PostgreSQL resource leak with long-lived connections through the :ref:`db-pool-max-lifetime` configuration (`#2638 `_) + +* There is now a stricter parsing of the query string. Instead of silently ignoring, the parser now returns a :ref:`PostgREST error ` on invalid syntax. (`#2537 `_) + +Thanks +------ + +Big thanks from the `PostgREST team `_ to our sponsors! + +.. container:: image-container + + .. image:: ../_static/cybertec-new.png + :target: https://www.cybertec-postgresql.com/en/?utm_source=postgrest.org&utm_medium=referral&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/2ndquadrant.png + :target: https://www.2ndquadrant.com/en/?utm_campaign=External%20Websites&utm_source=PostgREST&utm_medium=Logo + :width: 13em + + .. image:: ../_static/retool.png + :target: https://retool.com/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/gnuhost.png + :target: https://gnuhost.eu/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/supabase.png + :target: https://supabase.com/?utm_source=postgrest%20backers&utm_medium=open%20source%20partner&utm_campaign=postgrest%20backers%20github&utm_term=homepage + :width: 13em + + .. 
image:: ../_static/oblivious.jpg + :target: https://oblivious.ai/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + +* Evans Fernandes +* `Jan Sommer `_ +* `Franz Gusenbauer `_ +* `Daniel Babiak `_ +* Tsingson Qin +* Michel Pelletier +* Jay Hannah +* Robert Stolarz +* Nicholas DiBiase +* Christopher Reid +* Nathan Bouscal +* Daniel Rafaj +* David Fenko +* Remo Rechkemmer +* Severin Ibarluzea +* Tom Saleeba +* Pawel Tyll + +If you'd like to join them, please consider `supporting PostgREST development `_. diff --git a/docs/releases/v5.2.0.rst b/docs/releases/v5.2.0.rst new file mode 100644 index 0000000000..d32044526e --- /dev/null +++ b/docs/releases/v5.2.0.rst @@ -0,0 +1,26 @@ +v5.2.0 +====== + +* Explicit qualification introduced in ``v5.0`` is no longer necessary; this section will not be included from this version onwards. A :ref:`db-extra-search-path` configuration parameter was introduced to avoid the need to explicitly qualify database objects. If you install PostgreSQL extensions on the ``public`` schema, they'll work normally from now on. + +* Now you can filter :ref:`tabs-cols-w-spaces`. + +* Included the ability to quote columns that have :ref:`reserved-chars`. + +* Thanks to `Zhou Feng `_, it is now possible to reference an external file in :ref:`db-uri`. + +* Thanks to `Russell Davies `_, JSON Web Key Sets are now accepted by :ref:`jwt-secret`. + +Thanks +------ + +This release was made possible thanks to: + +* `Daniel Babiak `_ +* `Michel Pelletier `_ +* Tsingson Qin +* Jay Hannah +* Victor Adossi +* Petr Beles + +If you'd like to join them, please consider `supporting PostgREST development `_. diff --git a/docs/releases/v6.0.2.rst b/docs/releases/v6.0.2.rst new file mode 100644 index 0000000000..47f870e443 --- /dev/null +++ b/docs/releases/v6.0.2.rst @@ -0,0 +1,79 @@ +.. |br| raw:: html +
+ +v6.0.2 +====== + +Full changelog is available at `PostgREST releases page `_. + +Added +----- + +* Ignoring payload keys for insert/update can now be done with the ``?columns`` query parameter. See :ref:`specify_columns`. + |br| -- `@steve-chavez `_ + +* `websearch_to_tsquery `_ can now be used + through the ``wfts`` operator. See :ref:`fts`. + |br| -- `@herulume `_ + +* Resource Embedding on materialized views is now possible. See :ref:`embedding_views`. + |br| -- `@vitorbaptista `_ + +* Bulk calling an RPC is now allowed. See :ref:`bulk_call`. + |br| -- `@steve-chavez `_ + +* It's now possible to request ``text/plain`` output. See :ref:`scalar_return_formats`. + |br| -- `@steve-chavez `_ + +* Config option for specifying the PostgREST database pool timeout: ``db-pool-timeout``. + |br| -- `@Qu4tro `_ + +* Config option for binding the PostgREST web server to a Unix socket. See :ref:`server-unix-socket`. + |br| -- `@Dansvidania `_ + +* Config option for extending the supported media types. See :ref:`raw-media-types`. + |br| -- `@Dansvidania `_ + +* We now offer a statically linked binary for Linux. Look for **postgrest--linux-x64-static.tar.xz** on the + `releases page `_. + |br| -- `@clojurians-org `_ + +* A :ref:`how_tos` section was added to the documentation. + +Changed +------- + +* ``SIGHUP`` support was removed. You should use ``SIGUSR1`` instead. See :ref:`schema_reloading`. + +* The ``server-host`` default of ``127.0.0.1`` was changed to ``!4``. See :ref:`server-host`. + +Thanks +------ + +This release is sponsored by: + +.. image:: ../_static/cybertec.png + :target: https://www.cybertec-postgresql.com/en/ + :width: 13em + +.. image:: ../_static/2ndquadrant.png + :target: https://www.2ndquadrant.com/en/?utm_campaign=External%20Websites&utm_source=PostgREST&utm_medium=Logo + :width: 13em + +..
image:: ../_static/retool.png + :target: https://retool.com/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + +* `Daniel Babiak `_ +* Evans Fernandes +* Tsingson Qin +* Michel Pelletier +* Jay Hannah +* Robert Stolarz +* Kofi Gumbs +* Nicholas DiBiase +* Christopher Reid +* Nathan Bouscal + +If you like to join them please consider `supporting PostgREST development `_. diff --git a/docs/releases/v7.0.0.rst b/docs/releases/v7.0.0.rst new file mode 100644 index 0000000000..241f0e9010 --- /dev/null +++ b/docs/releases/v7.0.0.rst @@ -0,0 +1,106 @@ +.. |br| raw:: html + +
+ +v7.0.0 +====== + +You can download this release at the `PostgREST v7.0.0 release page `_. + +Added +----- + +* Support for :ref:`Switching to a schema ` defined in :ref:`db-schemas`. + |br| -- `@steve-chavez `_, `@mahmoudkassem `_ + +* Support for :ref:`planned_count` and :ref:`estimated_count`. + |br| -- `@steve-chavez `_, `@LorenzHenk `_ + +* Support for the :ref:`on_conflict ` query parameter to UPSERT based on a unique constraint. + |br| -- `@ykst `_ + +* Support for :ref:`Resource Embedding Disambiguation `. + |br| -- `@steve-chavez `_ + +* Support for user defined socket permission via :ref:`server-unix-socket-mode` config option + |br| -- `@Dansvidania `_ + +* HTTP logic improvements -- `@steve-chavez `_ + + + Support for HTTP HEAD requests. + + GUCs for :ref:`guc_req_path_method`. + + Support for :ref:`pre_req_headers`. + + Allow overriding provided headers(Content-Type, Location, etc) by :ref:`guc_resp_hdrs` + + Access to the ``Authorization`` header value through ``request.header.authorization`` + +* Documentation improvements + + + Explanation for :doc:`Schema Structure <../schema_structure>`. + + Reference for :ref:`s_proc_embed`. + + Reference for :ref:`mutation_embed`. + + Reference for filters on :ref:`json_columns`. + + How-to for :ref:`providing_img`. + + Added :ref:`community_tutorials` section. 
+ +Fixed +----- + +* Allow embedding a view when its source table foreign key is UNIQUE + |br| -- `@bwbroersma `_ + +* ``Accept: application/vnd.pgrst.object+json`` behavior is now enforced for POST/PATCH/DELETE regardless of ``Prefer: return=minimal`` + |br| -- `@dwagin `_ + +* Fix self join resource embedding on PATCH + |br| -- `@herulume `_, `@steve-chavez `_ + +* Allow PATCH/DELETE without ``Prefer: return=minimal`` on tables with no SELECT privileges + |br| -- `@steve-chavez `_ + +* Fix many to many resource embedding for RPC/PATCH + |br| -- `@steve-chavez `_ + +Changed +------- + +* :ref:`bulk_call` should now be done by specifying a ``Prefer: params=multiple-objects`` header. This fixes a performance regression when calling stored procedures. + +* Resource Embedding now outputs an error when multiple relationships between two tables are found, see :ref:`embed_disamb`. + +* ``server-proxy-uri`` config option has been renamed to :ref:`openapi-server-proxy-uri`. + +* Default Unix Socket file mode from 755 to 660 + +Thanks +------ + +This release was made possible thanks to: + +.. image:: ../_static/cybertec.png + :target: https://www.cybertec-postgresql.com/en/ + :width: 13em + +.. image:: ../_static/2ndquadrant.png + :target: https://www.2ndquadrant.com/en/?utm_campaign=External%20Websites&utm_source=PostgREST&utm_medium=Logo + :width: 13em + +.. image:: ../_static/retool.png + :target: https://retool.com/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + +* `Daniel Babiak `_ +* Evans Fernandes +* `Jan Sommer `_ +* Tsingson Qin +* Michel Pelletier +* Jay Hannah +* Robert Stolarz +* Kofi Gumbs +* Nicholas DiBiase +* Christopher Reid +* Nathan Bouscal +* Daniel Rafaj +* David Fenko + + +If you like to join them please consider `supporting PostgREST development `_. diff --git a/docs/releases/v7.0.1.rst b/docs/releases/v7.0.1.rst new file mode 100644 index 0000000000..5186216e5d --- /dev/null +++ b/docs/releases/v7.0.1.rst @@ -0,0 +1,69 @@ +.. 
|br| raw:: html + +
+ +v7.0.1 +====== + +You can see the full changelog at `PostgREST v7.0.1 release page `_. + +Fixed +----- + +* Fix overloaded computed columns on RPC + |br| -- `@wolfgangwalther `_ + +* Fix POST, PATCH, DELETE with ``?select=`` and ``Prefer: return=minimal`` and PATCH with empty body + |br| -- `@wolfgangwalther `_ + +* Fix missing ``openapi-server-proxy-uri`` config option + |br| -- `@steve-chavez `_ + +* Fix ``Content-Profile`` not working for POST RPC + |br| -- `@steve-chavez `_ + +* Fix PUT restriction for including all columns in payload + |br| -- `@steve-chavez `_ + +* Documentation improvements + + + Added package managers to :ref:`install`. + +Changed +------- + +* From this version onwards, the release page will include a single Linux static executable that can be run on any Linux distribution. + +Thanks +------ + +This release was made possible thanks to: + +.. image:: ../_static/cybertec.png + :target: https://www.cybertec-postgresql.com/en/ + :width: 13em + +.. image:: ../_static/2ndquadrant.png + :target: https://www.2ndquadrant.com/en/?utm_campaign=External%20Websites&utm_source=PostgREST&utm_medium=Logo + :width: 13em + +.. image:: ../_static/retool.png + :target: https://retool.com/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + +* `Daniel Babiak `_ +* Evans Fernandes +* `Jan Sommer `_ +* Tsingson Qin +* Michel Pelletier +* Jay Hannah +* Robert Stolarz +* Kofi Gumbs +* Nicholas DiBiase +* Christopher Reid +* Nathan Bouscal +* Daniel Rafaj +* David Fenko + + +If you'd like to join them, consider `supporting PostgREST development `_. diff --git a/docs/releases/v8.0.0.rst b/docs/releases/v8.0.0.rst new file mode 100644 index 0000000000..bdaea49162 --- /dev/null +++ b/docs/releases/v8.0.0.rst @@ -0,0 +1,191 @@ +.. |br| raw:: html + +
+ +v8.0.0 +====== + +You can download this release at the `PostgREST v8.0.0 release page `_. + +Added +----- + +* Allow HTTP status override through the :ref:`response.status ` GUC. + |br| -- `@steve-chavez `_ + +* Allow :ref:`s_procs_variadic`. + |br| -- `@wolfgangwalther `_ + +* Allow :ref:`embedding_view_chains` recursively to any depth. + |br| -- `@wolfgangwalther `_ + +* No downtime when reloading the schema cache. See :ref:`schema_reloading`. + |br| -- `@steve-chavez `_ + +* Allow schema cache reloading using PostgreSQL :ref:`NOTIFY ` command. This enables :ref:`auto_schema_reloading`. + |br| -- `@steve-chavez `_ + +* Allow sending the header ``Prefer: headers-only`` to get a response with a ``Location`` header. See :ref:`insert`. + |br| -- `@laurenceisla `_ + +* Allow :ref:`external_connection_poolers` such as PgBouncer in transaction pooling mode. + |br| -- `@laurenceisla `_ + +* Allow :ref:`config_reloading` by sending a SIGUSR2 signal. + |br| -- `@steve-chavez `_ + +* Allow ``Bearer`` with and without capitalization as authentication schema. See :ref:`client_auth`. + |br| -- `@wolfgangwalther `_ + +* :ref:`in_db_config` that can be :ref:`reloaded with NOTIFY `. + |br| -- `@steve-chavez `_ + +* Allow OPTIONS to generate HTTP methods based on views triggers. See :ref:`OPTIONS requests `. + |br| -- `@laurenceisla `_ + +* Show timestamps for server diagnostic information. See :ref:`pgrst_logging`. + |br| -- `@steve-chavez `_ + +* Config options for showing a full OpenAPI output regardless of the JWT role privileges and for disabling it altogether. See :ref:`openapi-mode`. + |br| -- `@steve-chavez `_ + +* Config option for logging level. See :ref:`log-level`. + |br| -- `@steve-chavez `_ + +* Config option for enabling or disabling prepared statements. See :ref:`db-prepared-statements`. + |br| -- `@steve-chavez `_ + +* Config option for specifying how to terminate the transactions (allowing rollbacks, useful for testing). See :ref:`db-tx-end`. 
+ |br| -- `@wolfgangwalther `_ + +* Documentation improvements + + + Added the :doc:`../schema_cache` page. + + Moved the :ref:`schema_reloading` reference from :doc:`../admin` to :doc:`../schema_cache` + +Changed +------- + +* Docker images are now optimized to be built from the scratch image. This reduces the compressed image size from over 30 MB to about 4 MB. + For more details, see `Docker image built with Nix `_. + |br| -- `@monacoremo `_ + +* The Docker image no longer has an internal ``/etc/postgrest.conf`` file, you must use :ref:`env_variables_config` to configure it. + |br| -- `@wolfgangwalther `_ + +* The ``pg_listen`` `utility `_ is no longer needed to automatically reload the schema cache + and it's replaced entirely by database notifications. See :ref:`auto_schema_reloading`. + |br| -- `@steve-chavez `_ + +* POST requests for insertions no longer include a ``Location`` header in the response by default and behave the same way as having a + ``Prefer: return=minimal`` header in the request. This prevents permissions errors when having a write-only table. See :ref:`insert`. + |br| -- `@laurenceisla `_ + +* Modified the default logging level from ``info`` to ``error``. See :ref:`log-level`. + |br| -- `@steve-chavez `_ + +* Changed the error message for a not found RPC on a stale schema (see :ref:`stale_function_signature`) and for the unsupported case of + overloaded functions with the same argument names but different types. + |br| -- `@laurenceisla `_ + +* Changed the error message for the no relationship found error. See :ref:`stale_fk_relationships`. + |br| -- `@laurenceisla `_ + +Fixed +----- + +* Fix showing UNKNOWN on ``postgrest --help`` invocation. + |br| -- `@monacoremo `_ + +* Removed single column restriction to allow composite foreign keys in join tables. + |br| -- `@goteguru `_ + +* Fix expired JWTs starting an empty transaction on the db. + |br| -- `@steve-chavez `_ + +* Fix location header for POST request with ``select=`` without PK. 
|br| -- `@wolfgangwalther `_ + +* Fix error messages on connection failure for localized PostgreSQL on Windows. + |br| -- `@wolfgangwalther `_ + +* Fix ``application/octet-stream`` appending ``charset=utf-8``. + |br| -- `@steve-chavez `_ + +* Fix overloading of functions with unnamed arguments. + |br| -- `@wolfgangwalther `_ + +* Return ``405 Method Not Allowed`` for GET of volatile RPC instead of 500. + |br| -- `@wolfgangwalther `_ + +* Fix RPC return type handling and embedding for domains with composite base type. + |br| -- `@wolfgangwalther `_ + +* Fix embedding through views that have COALESCE with subselect. + |br| -- `@wolfgangwalther `_ + +* Fix parsing of boolean config values for Docker environment variables; it now accepts double-quoted truth values ``("true", "false")`` and numbers ``("1", "0")``. + |br| -- `@wolfgangwalther `_ + +* Fix using ``app.settings.xxx`` config options in Docker; they can now be used as ``PGRST_APP_SETTINGS_xxx``. + |br| -- `@wolfgangwalther `_ + +* Fix panic when attempting to run with a unix socket on a non-unix host and properly close the unix domain socket on exit. + |br| -- `@monacoremo `_ + +* Disregard internal junction (in non-exposed schema) when embedding. + |br| -- `@steve-chavez `_ + +* Fix requests for overloaded functions from HTML forms to no longer hang. + |br| -- `@laurenceisla `_ + +Thanks +------ + +Big thanks from the `PostgREST team `_ to our sponsors! + +.. container:: image-container + + .. image:: ../_static/cybertec-new.png + :target: https://www.cybertec-postgresql.com/en/?utm_source=postgrest.org&utm_medium=referral&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/2ndquadrant.png + :target: https://www.2ndquadrant.com/en/?utm_campaign=External%20Websites&utm_source=PostgREST&utm_medium=Logo + :width: 13em + + .. image:: ../_static/retool.png + :target: https://retool.com/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + ..
image:: ../_static/gnuhost.png + :target: https://gnuhost.eu/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/supabase.png + :target: https://supabase.com/?utm_source=postgrest%20backers&utm_medium=open%20source%20partner&utm_campaign=postgrest%20backers%20github&utm_term=homepage + :width: 13em + + .. image:: ../_static/oblivious.jpg + :target: https://oblivious.ai/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + +* Evans Fernandes +* `Jan Sommer `_ +* `Franz Gusenbauer `_ +* `Daniel Babiak `_ +* Tsingson Qin +* Michel Pelletier +* Jay Hannah +* Robert Stolarz +* Nicholas DiBiase +* Christopher Reid +* Nathan Bouscal +* Daniel Rafaj +* David Fenko +* Remo Rechkemmer +* Severin Ibarluzea +* Tom Saleeba +* Pawel Tyll + +If you'd like to join them, please consider `supporting PostgREST development `_. diff --git a/docs/releases/v9.0.0.rst b/docs/releases/v9.0.0.rst new file mode 100644 index 0000000000..d451bba7eb --- /dev/null +++ b/docs/releases/v9.0.0.rst @@ -0,0 +1,130 @@ + +PostgREST 9.0.0 +=============== + +This major version is released with PostgreSQL 14 compatibility and is accompanied by new features and bug fixes. You can look at the detailed changelog and download the pre-compiled binaries on the `GitHub release page `_. + +Features +-------- + +PostgreSQL 14 compatibility +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +PostgreSQL 14 Beta 1 tightened its GUC naming scheme, making it impossible to use multiple dots (``.``) and dashes (``-``) in custom GUC parameters; this caused our `old HTTP Context `_ to fail across all requests. Thankfully, `@robertsosinski `_ got the PostgreSQL team to reconsider allowing multiple dots in the GUC name, allowing us to avoid a major breaking change. You can see the full discussion `here `_. + +Still, dashes cannot be used in PostgreSQL 14 custom GUC parameters, so we changed our HTTP Context :ref:`to namespace using a mix of dots and JSON `.
On older PostgreSQL versions we still use the :ref:`guc_legacy_names`. If you wish to use the new JSON GUCs on these versions, set the :ref:`db-use-legacy-gucs` config option to false. + +Resource Embedding with Top-level Filtering +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Historically, Resource Embedding was always done with a query that included the equivalent of a ``LEFT JOIN``, which meant you could not +exclude any of the top-level resource rows. You can now use :ref:`embedding_top_level_filter` to do the equivalent of an ``INNER JOIN``, so you can filter the top-level resource rows with any of the available operators. + +Partitioned Tables +~~~~~~~~~~~~~~~~~~ + +Partitioned tables are now integrated with the full feature set. You can :ref:`embed partitioned tables `, UPSERT, INSERT (with a correctly generated ``Location`` header) and make OPTIONS requests on them. They're also included in the generated OpenAPI. + +Functions (RPC) +~~~~~~~~~~~~~~~ + +* Functions with a :ref:`single unnamed parameter ` can now be used to POST raw ``bytea``, ``text`` or ``json/jsonb``. + +Horizontal Filtering +~~~~~~~~~~~~~~~~~~~~ + +* The ``unknown`` value for three-valued logic can now be used on the ``is`` :ref:`operator `. + +* Escaping double quotes (``"``) in strings surrounded by double quotes is now possible by using backslashes, e.g. ``?col=in.("Double\"Quote")``. Backslashes can be escaped with a preceding backslash, e.g. ``?col=in.("Back\\slash")``. See :ref:`reserved-chars`. + +Administration +~~~~~~~~~~~~~~ + +* A ``Retry-After`` header is now added when PostgREST is doing :ref:`automatic_recovery`. + +Error messages +~~~~~~~~~~~~~~ + +* :ref:`embed_disamb` now shows an improved error message that includes relevant hints for clearing out the ambiguous embedding. + +Documentation improvements +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* Added ``curl`` snippets to the :doc:`API <../api>` page. + +* Added the :ref:`automatic_recovery` section.
+ +* Added the :ref:`nested_embedding` section. + +* Added the :ref:`logical_operators` section. + +* Added the :ref:`templates` and :ref:`devops` sections to the :doc:`Ecosystem `. + +Bug fixes +--------- + +* Correct RPC return type handling for RETURNS TABLE with a single column (`#1930 `_). + +* Schema Cache query failing with ``standard_conforming_strings = off`` (`#1992 `_). + +* OpenAPI missing default values for String types (`#1871 `_). + +Breaking changes +---------------- + +* Dropped support for PostgreSQL 9.5 as it has already reached its end-of-life according to the `PostgreSQL versioning policy `_. + +* Partitions of a `partitioned table `_ are no longer included in the :doc:`../schema_cache`. This is so errors are not generated when doing resource embedding on partitioned tables. + +* Dropped support for doing :ref:`hint_disamb` using dots instead of exclamation marks, e.g. doing ``select=*,projects.client_id(*)`` instead of ``select=*,projects!client_id(*)``. Using dots was undocumented and deprecated back in `v6.0.2 `_. + +Thanks +------ + +Big thanks from the `PostgREST team `_ to our sponsors! + +.. container:: image-container + + .. image:: ../_static/cybertec-new.png + :target: https://www.cybertec-postgresql.com/en/?utm_source=postgrest.org&utm_medium=referral&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/2ndquadrant.png + :target: https://www.2ndquadrant.com/en/?utm_campaign=External%20Websites&utm_source=PostgREST&utm_medium=Logo + :width: 13em + + .. image:: ../_static/retool.png + :target: https://retool.com/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/gnuhost.png + :target: https://gnuhost.eu/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/supabase.png + :target: https://supabase.com/?utm_source=postgrest%20backers&utm_medium=open%20source%20partner&utm_campaign=postgrest%20backers%20github&utm_term=homepage + :width: 13em + + ..
image:: ../_static/oblivious.jpg + :target: https://oblivious.ai/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + +* Evans Fernandes +* `Jan Sommer `_ +* `Franz Gusenbauer `_ +* `Daniel Babiak `_ +* Tsingson Qin +* Michel Pelletier +* Jay Hannah +* Robert Stolarz +* Nicholas DiBiase +* Christopher Reid +* Nathan Bouscal +* Daniel Rafaj +* David Fenko +* Remo Rechkemmer +* Severin Ibarluzea +* Tom Saleeba +* Pawel Tyll + +If you'd like to join them, please consider `supporting PostgREST development `_. diff --git a/docs/releases/v9.0.1.rst b/docs/releases/v9.0.1.rst new file mode 100644 index 0000000000..ffec1ee2ad --- /dev/null +++ b/docs/releases/v9.0.1.rst @@ -0,0 +1,89 @@ + +PostgREST 9.0.1 +=============== + +This version includes important fixes for production environments and other miscellaneous fixes. You can download the pre-compiled binaries on the `GitHub release page `_. + +Bug Fixes +--------- + +* Keep working when ``EMFILE (Too many open files)`` is reached. (`#2042 `_) + +* Disable parallel GC for better performance on higher core CPUs (`#2294 `_). Thanks to `NoRedInk for their blog post `_ that led us to this fix. + +* Fix using CPU while idle. (`#1076 `_) + +* Fix reading database configuration properly when ``=`` is present in the value. (`#2120 `_) + +* Fix ``is`` not working with upper or mixed case values like ``NULL``, ``TrUe``, ``FaLsE``. (`#2077 `_) + +* Execute deferred constraint triggers when using ``Prefer: tx=rollback``. (`#2020 `_) + +* Ignore ``Content-Type`` headers for ``GET`` requests when calling RPCs. (`#2147 `_) + + * Previously, ``GET`` without parameters, but with ``Content-Type: text/plain`` or ``Content-Type: application/octet-stream`` would fail with ``404 Not Found``, even if a function without arguments was available. + +* Fix wrong CORS header from ``Authentication`` to ``Authorization``. (`#1724 `_) + +* Fix ``json`` and ``jsonb`` columns showing a type in OpenAPI spec.
(`#2165 `_) + +* Remove trigger functions from the schema cache and OpenAPI output, because they can't be called directly anyway. (`#2135 `_) + +* Remove aggregates, procedures and window functions from the schema cache and OpenAPI output. (`#2101 `_) + +* Fix schema cache loading when views with ``XMLTABLE`` and ``DEFAULT`` are present. (`#2024 `_) + +* Fix ``--dump-schema`` running with a wrong PG version. (`#2153 `_) + +* Fix misleading disambiguation error where the content of the ``relationship`` key looks like valid syntax. (`#2239 `_) + +Thanks +------ + +Big thanks from the `PostgREST team `_ to our sponsors! + +.. container:: image-container + + .. image:: ../_static/cybertec-new.png + :target: https://www.cybertec-postgresql.com/en/?utm_source=postgrest.org&utm_medium=referral&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/2ndquadrant.png + :target: https://www.2ndquadrant.com/en/?utm_campaign=External%20Websites&utm_source=PostgREST&utm_medium=Logo + :width: 13em + + .. image:: ../_static/retool.png + :target: https://retool.com/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/gnuhost.png + :target: https://gnuhost.eu/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + + .. image:: ../_static/supabase.png + :target: https://supabase.com/?utm_source=postgrest%20backers&utm_medium=open%20source%20partner&utm_campaign=postgrest%20backers%20github&utm_term=homepage + :width: 13em + + .. image:: ../_static/oblivious.jpg + :target: https://oblivious.ai/?utm_source=sponsor&utm_campaign=postgrest + :width: 13em + +* Evans Fernandes +* `Jan Sommer `_ +* `Franz Gusenbauer `_ +* `Daniel Babiak `_ +* Tsingson Qin +* Michel Pelletier +* Jay Hannah +* Robert Stolarz +* Nicholas DiBiase +* Christopher Reid +* Nathan Bouscal +* Daniel Rafaj +* David Fenko +* Remo Rechkemmer +* Severin Ibarluzea +* Tom Saleeba +* Pawel Tyll + +If you'd like to join them, please consider `supporting PostgREST development `_.
diff --git a/docs/requirements.txt b/docs/requirements.txt new file mode 100644 index 0000000000..57ed0b59fe --- /dev/null +++ b/docs/requirements.txt @@ -0,0 +1,5 @@ +docutils==0.16 +sphinx>=4.3.0 +sphinx-copybutton +sphinx-rtd-theme>=0.5.1 +sphinx-tabs \ No newline at end of file diff --git a/docs/schema_cache.rst b/docs/schema_cache.rst new file mode 100644 index 0000000000..c966a39907 --- /dev/null +++ b/docs/schema_cache.rst @@ -0,0 +1,237 @@ +.. _schema_cache: + +Schema Cache +============ + +Certain PostgREST features require metadata from the database schema. Getting this metadata requires executing expensive queries, so +in order to avoid repeating this work, PostgREST uses a schema cache. + ++--------------------------------------------+-------------------------------------------------------------------------------+ +| Feature | Required Metadata | ++============================================+===============================================================================+ +| :ref:`resource_embedding` | Foreign key constraints | ++--------------------------------------------+-------------------------------------------------------------------------------+ +| :ref:`Stored Functions ` | Function signature (parameters, return type, volatility and | +| | `overloading `_) | ++--------------------------------------------+-------------------------------------------------------------------------------+ +| :ref:`Upserts ` | Primary keys | ++--------------------------------------------+-------------------------------------------------------------------------------+ +| :ref:`Insertions ` | Primary keys (optional: only if the Location header is requested) | ++--------------------------------------------+-------------------------------------------------------------------------------+ +| :ref:`OPTIONS requests ` | View INSTEAD OF TRIGGERS and primary keys | 
++--------------------------------------------+-------------------------------------------------------------------------------+ +| :ref:`open-api` | Table columns, primary keys and foreign keys | ++ +-------------------------------------------------------------------------------+ +| | View columns and INSTEAD OF TRIGGERS | ++ +-------------------------------------------------------------------------------+ +| | Function signature | ++--------------------------------------------+-------------------------------------------------------------------------------+ + +.. _stale_schema: + +The Stale Schema Cache +---------------------- + +When you make changes to the metadata mentioned above, the schema cache will become stale in a running PostgREST. Future requests that use the above features will need the :ref:`schema cache to be reloaded `; otherwise, you'll get an error instead of the expected result. + +For instance, let's see what would happen if you have a stale schema cache for foreign key relationships and function signatures. + +.. _stale_fk_relationships: + +Stale Foreign Key Relationships +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Suppose you add a ``cities`` table to your database and define a foreign key that references an existing ``countries`` table. Then, you make a request to get the ``cities`` and their corresponding ``countries``. + +.. tabs:: + + .. code-tab:: http + + GET /cities?select=name,country:countries(id,name) HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/cities?select=name,country:countries(id,name)" + +The result will be an error: + +.. code-block:: json + + { + "hint": "Verify that 'cities' and 'countries' exist in the schema 'api' and that there is a foreign key relationship between them.
If a new relationship was created, try reloading the schema cache.", + "details": null, + "code": "PGRST200", + "message": "Could not find a relationship between 'cities' and 'countries' in the schema cache" + } + +As you can see, PostgREST couldn't find the newly created foreign key in the schema cache. See :ref:`schema_reloading` and :ref:`auto_schema_reloading` to solve this issue. + +.. _stale_function_signature: + +Stale Function Signature +~~~~~~~~~~~~~~~~~~~~~~~~ + +The same issue will occur with newly created functions on a running PostgREST. + +.. code-block:: plpgsql + + CREATE FUNCTION plus_one(num integer) + RETURNS integer AS $$ + SELECT num + 1; + $$ LANGUAGE SQL IMMUTABLE; + +.. tabs:: + + .. code-tab:: http + + GET /rpc/plus_one?num=1 HTTP/1.1 + + .. code-tab:: bash Curl + + curl "http://localhost:3000/rpc/plus_one?num=1" + +.. code-block:: json + + { + "hint": "If a new function was created in the database with this name and parameters, try reloading the schema cache.", + "details": null, + "code": "PGRST202", + "message": "Could not find the api.plus_one(num) function in the schema cache" + } + +Here, PostgREST tries to find the function on the stale schema to no avail. See :ref:`schema_reloading` and :ref:`auto_schema_reloading` to solve this issue. + +.. _schema_reloading: + +Schema Cache Reloading +---------------------- + +To reload the cache without restarting the PostgREST server, send a SIGUSR1 signal to the server process. + +.. code:: bash + + killall -SIGUSR1 postgrest + + +For Docker, you can do: + +.. code:: bash + + docker kill -s SIGUSR1 + + # or in docker-compose + docker-compose kill -s SIGUSR1 + +There's no downtime when reloading the schema cache. The reloading will happen on a background thread while requests keep being served. + +..
_schema_reloading_notify: + +Reloading with NOTIFY +~~~~~~~~~~~~~~~~~~~~~ + +There are environments where you can't send the SIGUSR1 Unix signal (like on managed containers in cloud services or on Windows systems). For this reason, PostgREST also allows you to reload its schema cache through PostgreSQL `NOTIFY `_ as follows: + +.. code-block:: postgresql + + NOTIFY pgrst, 'reload schema' + +The ``"pgrst"`` notification channel is enabled by default. For configuring the channel, see :ref:`db-channel` and :ref:`db-channel-enabled`. + +.. _auto_schema_reloading: + +Automatic Schema Cache Reloading +-------------------------------- + +You can do automatic schema cache reloading in a pure SQL way and forget about stale schema cache errors with an `event trigger `_ and ``NOTIFY``. + +.. code-block:: postgresql + + -- Create an event trigger function + CREATE OR REPLACE FUNCTION pgrst_watch() RETURNS event_trigger + LANGUAGE plpgsql + AS $$ + BEGIN + NOTIFY pgrst, 'reload schema'; + END; + $$; + + -- This event trigger will fire after every ddl_command_end event + CREATE EVENT TRIGGER pgrst_watch + ON ddl_command_end + EXECUTE PROCEDURE pgrst_watch(); + +Now, whenever the ``pgrst_watch`` trigger is fired in the database, PostgREST will automatically reload the schema cache. + +To disable auto reloading, drop the trigger: + +.. code-block:: postgresql + + DROP EVENT TRIGGER pgrst_watch; + +Finer-Grained Event Trigger +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +You can refine the previous event trigger to only react to the events relevant to the schema cache. This also prevents unnecessary +reloading when creating temporary tables (``CREATE TEMP TABLE``) inside functions. + +..
code-block:: postgresql + + -- watch create and alter + CREATE OR REPLACE FUNCTION pgrst_ddl_watch() RETURNS event_trigger AS $$ + DECLARE + cmd record; + BEGIN + FOR cmd IN SELECT * FROM pg_event_trigger_ddl_commands() + LOOP + IF cmd.command_tag IN ( + 'CREATE SCHEMA', 'ALTER SCHEMA' + , 'CREATE TABLE', 'CREATE TABLE AS', 'SELECT INTO', 'ALTER TABLE' + , 'CREATE FOREIGN TABLE', 'ALTER FOREIGN TABLE' + , 'CREATE VIEW', 'ALTER VIEW' + , 'CREATE MATERIALIZED VIEW', 'ALTER MATERIALIZED VIEW' + , 'CREATE FUNCTION', 'ALTER FUNCTION' + , 'CREATE TRIGGER' + , 'CREATE TYPE', 'ALTER TYPE' + , 'CREATE RULE' + , 'COMMENT' + ) + -- don't notify in case of CREATE TEMP table or other objects created on pg_temp + AND cmd.schema_name is distinct from 'pg_temp' + THEN + NOTIFY pgrst, 'reload schema'; + END IF; + END LOOP; + END; $$ LANGUAGE plpgsql; + + -- watch drop + CREATE OR REPLACE FUNCTION pgrst_drop_watch() RETURNS event_trigger AS $$ + DECLARE + obj record; + BEGIN + FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects() + LOOP + IF obj.object_type IN ( + 'schema' + , 'table' + , 'foreign table' + , 'view' + , 'materialized view' + , 'function' + , 'trigger' + , 'type' + , 'rule' + ) + AND obj.is_temporary IS false -- no pg_temp objects + THEN + NOTIFY pgrst, 'reload schema'; + END IF; + END LOOP; + END; $$ LANGUAGE plpgsql; + + CREATE EVENT TRIGGER pgrst_ddl_watch + ON ddl_command_end + EXECUTE PROCEDURE pgrst_ddl_watch(); + + CREATE EVENT TRIGGER pgrst_drop_watch + ON sql_drop + EXECUTE PROCEDURE pgrst_drop_watch(); diff --git a/docs/schema_structure.rst b/docs/schema_structure.rst new file mode 100644 index 0000000000..d21b9332ef --- /dev/null +++ b/docs/schema_structure.rst @@ -0,0 +1,96 @@ + +.. note:: + + This page is a work in progress. + +.. _schema_isolation: + +Schema Isolation +================ + +A PostgREST instance exposes all the tables, views, and stored procedures of a single `PostgreSQL schema `_ (a namespace of database objects). 
This means private data or implementation details can go inside different private schemas and be invisible to HTTP clients. + +It is recommended that you don't expose tables on your API schema. Instead, expose views and stored procedures which insulate the internal details from the outside world. +This allows you to change the internals of your schema and maintain backwards compatibility. It also keeps your code easier to refactor, and provides a natural way to do API versioning. + +.. image:: _static/db.png + +.. _func_privs: + +Functions +========= + +By default, when a function is created, the privilege to execute it is not restricted by role. The function access is ``PUBLIC`` — executable by all roles (more details at the `PostgreSQL Privileges page `_). This is not ideal for an API schema. To disable this behavior, you can run the following SQL statement: + +.. code-block:: postgres + + ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC; + +This will change the privileges for all functions created in the future in all schemas. Currently there is no way to limit it to a single schema. In our opinion it's a good practice anyway. + +.. note:: + + It is, however, possible to limit the effect of this clause only to functions you define. You can put the above statement at the beginning of the API schema definition, and then at the end reverse it with: + + .. code-block:: postgres + + ALTER DEFAULT PRIVILEGES GRANT EXECUTE ON FUNCTIONS TO PUBLIC; + + This will work because the :code:`alter default privileges` statement has effect on functions created *after* it is executed. See `PostgreSQL alter default privileges `_ for more details. + +After that, you'll need to grant EXECUTE privileges on functions explicitly: + +.. code-block:: postgres + + GRANT EXECUTE ON FUNCTION login TO anonymous; + GRANT EXECUTE ON FUNCTION signup TO anonymous; + +You can also grant execute on all functions in a schema to a higher privileged role: + +..
code-block:: postgres + + GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA api TO web_user; + +Security definer +---------------- + +A function is executed with the privileges of the user who calls it. This means that the user has to have all permissions to do the operations the procedure performs. +If the function accesses private database objects, your :ref:`API roles ` won't be able to successfully execute the function. + +Another option is to define the function with the :code:`SECURITY DEFINER` option. Then only one permission check will take place, the permission to call the function, and the operations in the function will have the authority of the user who owns the function itself. + +.. code-block:: postgres + + -- login as a user which has privileges on the private schemas + + -- create a sample function + create or replace function login(email text, pass text) returns jwt_token as $$ + declare + _role name; + begin + -- access to a private schema called 'auth' + select auth.user_role(email, pass) into _role; + -- other operations + -- ... + end; + $$ language plpgsql security definer; + +Note the ``SECURITY DEFINER`` keywords at the end of the function. See `PostgreSQL documentation `_ for more details. + +Views +===== + +Views are invoked with the privileges of the view owner, much like stored procedures with the ``SECURITY DEFINER`` option. When created by a SUPERUSER role, all `row-level security `_ will be bypassed unless a different, non-SUPERUSER owner is specified. + +To change this, we can create a non-SUPERUSER role and make this role the view's owner. + +.. code-block:: postgres + + CREATE ROLE api_views_owner NOINHERIT; + ALTER VIEW sample_view OWNER TO api_views_owner; + +Rules +----- + +Insertion on views with complex `rules `_ might not work out of the box with PostgREST. +It's recommended that you `use triggers instead of rules `_.
+If you want to keep using rules, a workaround is to wrap the view insertion in a stored procedure and call it through the :ref:`s_procs` interface. +For more details, see this `GitHub issue `_. diff --git a/docs/shell.nix b/docs/shell.nix new file mode 100644 index 0000000000..7c048643f6 --- /dev/null +++ b/docs/shell.nix @@ -0,0 +1,17 @@ +let + docs = + import ./default.nix; + + inherit (docs) pkgs; +in +pkgs.mkShell { + name = "postgrest-docs"; + + buildInputs = [ + docs.build + docs.serve + docs.spellcheck + docs.dictcheck + docs.linkcheck + ]; +} diff --git a/docs/tutorials/tut0.rst b/docs/tutorials/tut0.rst new file mode 100644 index 0000000000..7b2ce647c3 --- /dev/null +++ b/docs/tutorials/tut0.rst @@ -0,0 +1,231 @@ +.. _tut0: + +Tutorial 0 - Get it Running +=========================== + +:author: `begriffs `_ + +Welcome to PostgREST! In this pre-tutorial we're going to get things running so you can create your first simple API. + +PostgREST is a standalone web server which turns a PostgreSQL database into a RESTful API. It serves an API that is customized based on the structure of the underlying database. + +.. image:: ../_static/tuts/tut0-request-flow.png + +To make an API we'll simply be building a database. All the endpoints and permissions come from database objects like tables, views, roles, and stored procedures. These tutorials will cover a number of common scenarios and how to model them in the database. + +By the end of this tutorial you'll have a working database, PostgREST server, and a simple single-user todo list API. + +Step 1. Relax, we'll help +------------------------- + +As you begin the tutorial, pop open the project `chat room `_ in another tab. There's a nice group of people active in the project, and we'll help you out if you get stuck. + +Step 2.
Install PostgreSQL +-------------------------- + +If you're already familiar with using PostgreSQL and have it installed on your system, you can use the existing installation (see :ref:`pg-dependency` for minimum requirements). For this tutorial we'll describe how to use the database in Docker because database configuration is otherwise too complicated for a simple tutorial. + +If Docker is not installed, you can get it `here `_. Next, let's pull and start the database image: + +.. code-block:: bash + + sudo docker run --name tutorial -p 5433:5432 \ + -e POSTGRES_PASSWORD=mysecretpassword \ + -d postgres + +This will run the Docker instance as a daemon and expose port 5433 to the host system so that it looks like an ordinary PostgreSQL server to the rest of the system. + +Step 3. Install PostgREST +------------------------- + +PostgREST is distributed as a single binary, with versions compiled for major distributions of Linux/BSD/Windows. Visit the `latest release `_ for a list of downloads. In the event that your platform is not among those already pre-built, see :ref:`build_source` for instructions on how to build it yourself. Also, let us know so we can add your platform in the next release. + +The pre-built binaries for download are :code:`.tar.xz` compressed files (except for Windows, which is a zip file). To extract the binary, go into the terminal and run + +.. code-block:: bash + + # download from https://github.com/PostgREST/postgrest/releases/latest + + tar xJf postgrest--.tar.xz + +The result will be a file named simply :code:`postgrest` (or :code:`postgrest.exe` on Windows). At this point try running it with + +.. code-block:: bash + + ./postgrest -h + +If everything is working correctly it will print out its version and the available options. You can continue to run this binary from where you downloaded it, or copy it to a system directory like :code:`/usr/local/bin` on Linux so that you will be able to run it from any directory. + +..
note:: + + PostgREST requires libpq, the PostgreSQL C library, to be installed on your system. Without the library you'll get an error like "error while loading shared libraries: libpq.so.5." Here's how to fix it: + + .. raw:: html + +

+

+ Ubuntu or Debian +
+
sudo apt-get install libpq-dev
+
+
+
+ Fedora, CentOS, or Red Hat +
+
sudo yum install postgresql-libs
+
+
+
+ OS X +
+
brew install postgresql
+
+
+
+ Windows +

All of the DLL files that are required to run PostgREST are available in the Windows installation of PostgreSQL server. + Once installed, they are found in the BIN folder, e.g.: C:\Program Files\PostgreSQL\10\bin. Add this directory to your PATH + variable. Run the following from an administrative command prompt (adjusting the actual BIN path as necessary, of course)

setx /m PATH "%PATH%;C:\Program Files\PostgreSQL\10\bin"
+

+
+

+ +Step 4. Create Database for API +------------------------------- + +Connect to the SQL console (psql) inside the container. To do so, run this from your command line: + +.. code-block:: bash + + sudo docker exec -it tutorial psql -U postgres + +You should see the psql command prompt: + +:: + + psql (9.6.3) + Type "help" for help. + + postgres=# + +The first thing we'll do is create a `named schema `_ for the database objects which will be exposed in the API. We can choose any name we like, so how about "api." Execute this and the other SQL statements inside the psql prompt you started. + +.. code-block:: postgres + + create schema api; + +Our API will have one endpoint, :code:`/todos`, which will come from a table. + +.. code-block:: postgres + + create table api.todos ( + id serial primary key, + done boolean not null default false, + task text not null, + due timestamptz + ); + + insert into api.todos (task) values + ('finish tutorial 0'), ('pat self on back'); + +Next, make a role to use for anonymous web requests. When a request comes in, PostgREST will switch into this role in the database to run queries. + +.. code-block:: postgres + + create role web_anon nologin; + + grant usage on schema api to web_anon; + grant select on api.todos to web_anon; + +The :code:`web_anon` role has permission to access things in the :code:`api` schema, and to read rows in the :code:`todos` table. + +It's a good practice to create a dedicated role for connecting to the database, instead of using the highly privileged ``postgres`` role. So we'll do that: name the role ``authenticator`` and also grant it the ability to switch to the ``web_anon`` role: + +.. code-block:: postgres + + create role authenticator noinherit login password 'mysecretpassword'; + grant web_anon to authenticator; + + +Now quit out of psql; it's time to start the API! + +.. code-block:: psql + + \q + +Step 5.
Run PostgREST +--------------------- + +PostgREST can use a configuration file to tell it how to connect to the database. Create a file :code:`tutorial.conf` with this inside: + +.. code-block:: ini + + db-uri = "postgres://authenticator:mysecretpassword@localhost:5433/postgres" + db-schemas = "api" + db-anon-role = "web_anon" + +The configuration file has other :doc:`options <../configuration>`, but this is all we need. +If you are not using Docker, make sure that your port number is correct and replace ``postgres`` with the name of the database where you added the ``todos`` table. + +Now run the server: + +.. code-block:: bash + + ./postgrest tutorial.conf + +You should see + +.. code-block:: text + + Listening on port 3000 + Attempting to connect to the database... + Connection successful + +It's now ready to serve web requests. There are many nice graphical API exploration tools you can use, but for this tutorial we'll use :code:`curl` because it's likely to be installed on your system already. Open a new terminal (leaving the one open that PostgREST is running inside). Try doing an HTTP request for the todos. + +.. code-block:: bash + + curl http://localhost:3000/todos + +The API replies: + +.. code-block:: json + + [ + { + "id": 1, + "done": false, + "task": "finish tutorial 0", + "due": null + }, + { + "id": 2, + "done": false, + "task": "pat self on back", + "due": null + } + ] + +With the current role permissions, anonymous requests have read-only access to the :code:`todos` table. If we try to add a new todo, we won't be able to. + +.. code-block:: bash + + curl http://localhost:3000/todos -X POST \ + -H "Content-Type: application/json" \ + -d '{"task": "do bad thing"}' + +Response is 401 Unauthorized: + +.. code-block:: json + + { + "hint": null, + "details": null, + "code": "42501", + "message": "permission denied for table todos" + } + +There we have it, a basic API on top of the database!
In the next tutorials we will see how to extend the example with more sophisticated user access controls, and more tables and queries.

Now that you have PostgREST running, try the next tutorial, :ref:`tut1`.

diff --git a/docs/tutorials/tut1.rst b/docs/tutorials/tut1.rst
new file mode 100644
index 0000000000..33ef764e73
--- /dev/null
+++ b/docs/tutorials/tut1.rst
@@ -0,0 +1,263 @@
.. _tut1:

Tutorial 1 - The Golden Key
===========================

:author: begriffs

In :ref:`tut0` we created a read-only API with a single endpoint to list todos. There are many directions we can go to make this API more interesting, but one good place to start would be allowing some users to change data in addition to reading it.

Step 1. Add a Trusted User
--------------------------

The previous tutorial created a :code:`web_anon` role in the database with which to execute anonymous web requests. Let's make a role called :code:`todo_user` for users who authenticate with the API. This role will have the authority to do anything to the todo list.

.. code-block:: postgres

   -- run this in psql using the database created
   -- in the previous tutorial

   create role todo_user nologin;
   grant todo_user to authenticator;

   grant usage on schema api to todo_user;
   grant all on api.todos to todo_user;
   grant usage, select on sequence api.todos_id_seq to todo_user;

Step 2. Make a Secret
---------------------

Clients authenticate with the API using JSON Web Tokens. These are JSON objects which are cryptographically signed using a password known only to us and the server. Because clients do not know the password, they cannot tamper with the contents of their tokens. PostgREST will detect counterfeit tokens and will reject them.

Let's create a password and provide it to PostgREST. Think of a nice long one, or use a tool to generate it. **Your password must be at least 32 characters long.**

.. note::

   Unix tools can generate a nice password for you:

   .. code-block:: bash

      # Allow "tr" to process non-utf8 byte sequences
      export LC_CTYPE=C

      # read random bytes and keep only alphanumerics
      < /dev/urandom tr -dc A-Za-z0-9 | head -c32

Open :code:`tutorial.conf` (created in the previous tutorial) and add a line with the password:

.. code-block:: ini

   # PASSWORD MUST BE AT LEAST 32 CHARS LONG
   # add this line to tutorial.conf:

   jwt-secret = ""

If the PostgREST server is still running from the previous tutorial, restart it to load the updated configuration file.

Step 3. Sign a Token
--------------------

Ordinarily your own code in the database or in another server will create and sign authentication tokens, but for this tutorial we will make one "by hand." Go to `jwt.io <https://jwt.io>`_ and fill in the fields like this:

.. figure:: ../_static/tuts/tut1-jwt-io.png
   :alt: jwt.io interface

   How to create a token at https://jwt.io

**Remember to fill in the password you generated rather than the word "secret".** After you have filled in the password and payload, the encoded data on the left will update. Copy the encoded token.

.. note::

   While the token may look well obscured, it's easy to reverse engineer the payload. The token is merely signed, not encrypted, so don't put things inside that you don't want a determined client to see.

Step 4. Make a Request
----------------------

Back in the terminal, let's use :code:`curl` to add a todo. The request will include an HTTP header containing the authentication token.

.. code-block:: bash

   export TOKEN=""

   curl http://localhost:3000/todos -X POST \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"task": "learn how to auth"}'

And now we have completed all three items on our todo list, so let's set :code:`done` to true for them all with a :code:`PATCH` request.

.. code-block:: bash

   curl http://localhost:3000/todos -X PATCH \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"done": true}'

A request for the todos now shows all three of them completed.

.. code-block:: bash

   curl http://localhost:3000/todos

.. code-block:: json

   [
     {
       "id": 1,
       "done": true,
       "task": "finish tutorial 0",
       "due": null
     },
     {
       "id": 2,
       "done": true,
       "task": "pat self on back",
       "due": null
     },
     {
       "id": 3,
       "done": true,
       "task": "learn how to auth",
       "due": null
     }
   ]

Step 5. Add Expiration
----------------------

Currently our authentication token is valid for all eternity: as long as the server keeps using the same JWT password, it will honor the token.

It's better policy to include an expiration timestamp for tokens using the :code:`exp` claim. This is one of two JWT claims that PostgREST treats specially.

+--------------+----------------------------------------------------------------+
| Claim        | Interpretation                                                 |
+==============+================================================================+
| :code:`role` | The database role under which to execute SQL for API request   |
+--------------+----------------------------------------------------------------+
| :code:`exp`  | Expiration timestamp for token, expressed in "Unix epoch time" |
+--------------+----------------------------------------------------------------+

.. note::

   Epoch time is defined as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), January 1st 1970, minus the number of leap seconds that have taken place since then.

To observe expiration in action, we'll add an :code:`exp` claim of five minutes in the future to our previous token. First find the epoch value of five minutes from now. In psql run this:

.. code-block:: postgres

   select extract(epoch from now() + '5 minutes'::interval) :: integer;

Go back to jwt.io and change the payload to

.. code-block:: json

   {
     "role": "todo_user",
     "exp": 123456789
   }

**NOTE**: Don't forget to change the dummy epoch value :code:`123456789` in the snippet above to the epoch value returned by the psql command.

Copy the updated token as before, and save it as a new environment variable.

.. code-block:: bash

   export NEW_TOKEN=""

Try issuing this request in curl before and after the expiration time:

.. code-block:: bash

   curl http://localhost:3000/todos \
     -H "Authorization: Bearer $NEW_TOKEN"

After expiration, the API returns HTTP 401 Unauthorized:

.. code-block:: json

   {
     "hint": null,
     "details": null,
     "code": "PGRST301",
     "message": "JWT expired"
   }

Bonus Topic: Immediate Revocation
---------------------------------

Even with token expiration there are times when you may want to immediately revoke access for a specific token. For instance, suppose you learn that a disgruntled employee is up to no good and their token is still valid.

To revoke a specific token we need a way to tell it apart from others. Let's add a custom :code:`email` claim that matches the email of the client to whom the token was issued.

Go ahead and make a new token with the payload

.. code-block:: json

   {
     "role": "todo_user",
     "email": "disgruntled@mycompany.com"
   }

Save it to an environment variable:

.. code-block:: bash

   export WAYWARD_TOKEN=""

PostgREST allows us to specify a stored procedure to run during attempted authentication. The function can do whatever it likes, including raising an exception to terminate the request.

First make a new schema and add the function:

.. code-block:: plpgsql

   create schema auth;
   grant usage on schema auth to web_anon, todo_user;

   create or replace function auth.check_token() returns void
     language plpgsql
     as $$
   begin
     if current_setting('request.jwt.claims', true)::json->>'email' =
        'disgruntled@mycompany.com' then
       raise insufficient_privilege
         using hint = 'Nope, we are on to you';
     end if;
   end
   $$;

Next update :code:`tutorial.conf` and specify the new function:

.. code-block:: ini

   # add this line to tutorial.conf

   db-pre-request = "auth.check_token"

Restart PostgREST for the change to take effect. Next try making a request with our original token and then with the revoked one.

.. code-block:: bash

   # this request still works

   curl http://localhost:3000/todos -X PATCH \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"done": true}'

   # this one is rejected

   curl http://localhost:3000/todos -X PATCH \
     -H "Authorization: Bearer $WAYWARD_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"task": "AAAHHHH!", "done": false}'

The server responds with 403 Forbidden:

.. code-block:: json

   {
     "hint": "Nope, we are on to you",
     "details": null,
     "code": "42501",
     "message": "insufficient_privilege"
   }
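As Step 3 noted, ordinarily your own code signs tokens rather than jwt.io. To close the loop, here is a minimal sketch of HS256 signing in Python, using only the standard library. The secret and claims below are placeholders for the values used in this tutorial; in real code, prefer a maintained JWT library:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(claims, separators=(",", ":")).encode())
    )
    # HS256 = HMAC-SHA256 over "header.payload" with the shared secret
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

# placeholder secret: must be at least 32 chars, and must match
# the jwt-secret value in tutorial.conf
secret = "reallyreallyreallyreallyverysafe"
token = sign_jwt(
    {
        "role": "todo_user",
        "email": "disgruntled@mycompany.com",
        "exp": int(time.time()) + 5 * 60,  # expires in five minutes
    },
    secret,
)
print(token.count("."))  # a JWT has three dot-separated segments; prints 2
```

The resulting token can be exported and sent in the ``Authorization: Bearer`` header exactly like the ones created at jwt.io above.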