Promises/A+ interface for PostgreSQL.
- Supporting Promise, Bluebird, When, Q, etc.
- Transactions, functions, flexible query formatting;
- Automatic database connections;
- Strict query result filters.
- About
- Installing
- Getting Started
- Testing
- Usage
- Advanced
- History
- License
Built on top of node-postgres and its connection pool, this library translates their callback interface into one based on Promises/A+, while extending the protocol to a higher level, with automated connections and transactions management.
In addition, the library provides:
- its own, more flexible query formatting;
- event reporting for connectivity, errors, queries and transactions;
- support for all popular promise libraries + ES6 generators;
- declarative approach to controlling query results.
$ npm install pg-promise
Loading and initializing the library with Initialization Options:
var pgp = require('pg-promise')({
// Initialization Options
});
− or without Initialization Options:
var pgp = require('pg-promise')();
Create your Database object from the connection details:
var db = pgp(connection);
The `connection` parameter can be either a configuration object or a connection string.

Object `db` represents the Database protocol with lazy connection, i.e. only the actual query methods acquire and release the connection. You should create only one global/shared `db` object per connection details.
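For illustration, below is a minimal sketch of what the connection details might look like; the host, port, database and credentials are placeholders to be replaced with your own:
// A configuration object (placeholder values):
var connection = {
    host: 'localhost',
    port: 5432,
    database: 'my_db',
    user: 'postgres',
    password: 'secret'
};

// or the same details as a connection string:
// var connection = 'postgres://postgres:secret@localhost:5432/my_db';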
- Learn by Example - the quickest way to get started with this library
- Protocol API - all the latest protocol documentation
- Wiki Pages - all the documentation references
- TypeScript declarations for the library
- Clone the repository (or download, if you prefer):
$ git clone https://github.com/vitaly-t/pg-promise
- Install the library's DEV dependencies:
$ npm install
- Make sure all tests can connect to your local test database, using the connection details in test/db/header.js. Either set up your test database accordingly or change the connection details in that file.
- Initialize the database with some test data:
$ node test/db/init.js
- To run all tests:
$ npm test
- To run all tests with coverage:
$ npm run coverage
Every connection context of the library shares the same query protocol, starting with generic method `query`, defined as shown below:
function query(query, values, qrm);
- `query` (required) - a string with support for three types of formatting, depending on the `values` passed:
  - format `$1` (single variable), if `values` is of type `string`, `boolean`, `number`, `Date`, `function`, `null` or QueryFile;
  - format `$1, $2, etc..`, if `values` is an array;
  - format `$*propName*`, if `values` is an object (not `null` and not `Date`), where `*` is any of the supported open-close pairs: `{}`, `()`, `<>`, `[]`, `//`;
- `values` (optional) - value/array/object to replace the variables in the query;
- `qrm` - (optional) Query Result Mask, as explained below. When not passed, it defaults to `pgp.queryResult.any`.
When a value/property inside array/object is an array, it is treated as a PostgreSQL Array Type, converted into the array constructor format of `array[]`, the same as calling method `pgp.as.array()`.
When a value/property inside array/object is of type `object` (except for `null`, `Date` or `Buffer`), it is automatically serialized into JSON, the same as calling method `pgp.as.json()`, except the latter would convert anything to JSON.
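As an illustration of the above, here is a small sketch using method as.format directly; the table and column names are made up, and the exact output formatting may vary slightly between versions:
var query = pgp.as.format('INSERT INTO events(tags, payload) VALUES($1, $2)', [
    ['login', 'audit'],   // an array becomes a PostgreSQL array constructor
    {id: 123, ok: true}   // an object becomes a JSON string
]);
//=> INSERT INTO events(tags, payload) VALUES(array['login','audit'], '{"id":123,"ok":true}')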
For the most current SQL formatting support see method as.format
Raw-text values can be injected by ending the variable name with `^` or `:raw`: `$1^, $2^, etc...`, or `$*varName^*`, where `*` is any of the supported open-close pairs: `{}`, `()`, `<>`, `[]`, `//`.
Raw text is injected without any pre-processing, which means:

- No proper escaping (replacing each single-quote symbol `'` with two);
- No wrapping text into single quotes.
Unlike regular variables, values for raw-text variables cannot be `null` or `undefined`, because of the ambiguous meaning in this case. If such values are passed in, the formatter will throw error `Values null/undefined cannot be used as raw text.`
Special syntax `this^` within the Named Parameters refers to the formatting object itself, to be injected as a raw-text JSON-formatted string.
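Below is a small sketch of raw-text formatting via method as.format; the schema name used here is just a placeholder:
// raw-text injection - the value is inserted with no escaping or quoting:
pgp.as.format('SET search_path TO $1^', 'my_schema');
//=> SET search_path TO my_schema

// the same, using Named Parameters and the :raw filter:
pgp.as.format('SET search_path TO ${schema:raw}', {schema: 'my_schema'});
//=> SET search_path TO my_schema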
For the most current SQL formatting support see method as.format
Open values simplify concatenation of string values within a query, primarily for such special cases as `LIKE`/`ILIKE` filters.
Names for open-value variables end with either `:value` or symbol `#`, which means that such a value is to be properly formatted and escaped, but not wrapped in quotes when it is a text.

Similar to raw-text variables, open-value variables are also not allowed to be `null` or `undefined`, or they will throw error `Open values cannot be null or undefined.` The difference is that raw-text variables are not escaped, while open-value variables are properly escaped.
Below is an example of formatting a `LIKE` filter that matches names ending with a given last name:
// using $1# or $1:value syntax:
query("...WHERE name LIKE '%$1#'", "O'Connor");
query("...WHERE name LIKE '%$1:value'", "O'Connor");
//=> ...WHERE name LIKE '%O''Connor'
// using ${propName#} or ${propName:value} syntax:
query("...WHERE name LIKE '%${filter#}'", {filter: "O'Connor"});
query("...WHERE name LIKE '%${filter:value}'", {filter: "O'Connor"});
//=> ...WHERE name LIKE '%O''Connor'
See also: method as.value.
When a variable ends with `~` (tilde) or `:name`, it represents an SQL name or identifier, which must be a text string at least 1 character long. Such a name is then properly escaped and wrapped in double quotes.
query('INSERT INTO $1~($2~) VALUES(...)', ['Table Name', 'Column Name']);
//=> INSERT INTO "Table Name"("Column Name") VALUES(...)
// A mixed example for a dynamic column list:
var columns = ['id', 'message'];
query('SELECT ${columns^} FROM ${table~}', {
columns: columns.map(pgp.as.name).join(),
table: 'Table Name'
});
//=> SELECT "id","message" FROM "Table Name"
Version 5.2.1 and later supports extended syntax for `${this~}` and for method as.name:
var obj = {
one: 1,
two: 2
};
format("INSERT INTO table(${this~}) VALUES(${one}, ${two})", obj);
//=>INSERT INTO table("one","two") VALUES(1, 2)
Relying on this type of formatting for SQL names and identifiers, along with regular variable formatting, makes your application impervious to SQL injection.
See method as.name for the latest API.
In order to eliminate the chances of unexpected query results and thus make the code more robust, method `query` uses parameter `qrm` (Query Result Mask):
///////////////////////////////////////////////////////
// Query Result Mask flags;
//
// Any combination is supported, except for one + many.
var queryResult = {
/** Single row is expected. */
one: 1,
/** One or more rows expected. */
many: 2,
/** Expecting no rows. */
none: 4,
/** many|none - any result is expected. */
any: 6
};
In the following generic-query example we indicate that the call can return anything:
db.query('select * from users');
which is equivalent to making one of the following calls:
var qrm = pgp.queryResult;
db.query('select * from users', undefined, qrm.many | qrm.none);
db.query('select * from users', undefined, qrm.any);
db.manyOrNone('select * from users');
db.any('select * from users');
This usage pattern is facilitated through result-specific methods that can be used instead of the generic query:
db.many(query, values); // expects one or more rows
db.one(query, values); // expects a single row
db.none(query, values); // expects no rows
db.any(query, values); // expects anything, same as `manyOrNone`
db.oneOrNone(query, values); // expects 1 or 0 rows
db.manyOrNone(query, values); // expects anything, same as `any`
There is however one specific method `result(query, values)` to bypass any result verification, and instead resolve with the original Result object passed from the PG library.
You can also add your own methods and properties to this protocol via the extend event.
Each query function resolves its data according to the `qrm` that was used:

- `none` - data is `null`. If the query returns any kind of data, it is rejected.
- `one` - data is a single object. If the query returns no data or more than one row of data, it is rejected.
- `many` - data is an array of objects. If the query returns no rows, it is rejected.
- `one`|`none` - data is `null`, if no data was returned; or a single object, if one row was returned. If the query returns more than one row of data, the query is rejected.
- `many`|`none` - data is an array of objects. When no rows are returned, data is an empty array.
If you try to specify `one`|`many` in the same query, such a query will be rejected without executing it, telling you that such a mask is invalid.

If `qrm` is not specified when calling the generic `query` method, it is assumed to be `many`|`none` = `any`, i.e. any kind of data is expected.
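As a small illustration of these declarative checks, the sketch below assumes that table users contains more than one row, in which case method one rejects the request:
db.one('select * from users')
    .then(function (user) {
        // not reached - more than one row violates the `one` mask;
    })
    .catch(function (error) {
        // rejected, because the result contains more than one row;
    });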
This is all about writing robust code, when the client specifies what kind of data it is ready to handle on the declarative level, leaving the burden of all extra checks to the library.
The library supports named parameters in query formatting, with the syntax of `$*propName*`, where `*` is any of the following open-close pairs: `{}`, `()`, `<>`, `[]`, `//`.
db.query('select * from users where name=${name} and active=$/active/', {
name: 'John',
active: true
});
The same goes for all types of query methods as well as method as.format, where `values` can also be an object whose properties can be referred to by name from within the query.
A valid property name consists of any combination of letters, digits, underscores or `$`, and they are case-sensitive. Leading and trailing spaces around property names are ignored.
It is important to know that while property values `null` and `undefined` are both formatted as `null`, an error is thrown when the property doesn't exist at all (except for `partial` replacements - see below).
You can also use `partial` replacements within method as.format, to ignore variables that do not exist in the formatting object.
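A brief sketch of a partial replacement is shown below; it assumes that option `partial` is passed via the options parameter of as.format in your version of the library:
pgp.as.format('WHERE name = ${name} AND type = ${type}', {name: 'John'}, {partial: true});
//=> WHERE name = 'John' AND type = ${type}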
Property `this` is a reference to the formatting object itself, so it can be inserted as a JSON-formatted string, alongside its properties:

- `${this}` - inserts the object itself as a JSON-formatted string;
- `${this^}` - inserts the object itself as a raw-text JSON-formatted string.
Example:
var doc = {
id: 123,
body: "some text"
};
db.none("INSERT INTO documents(id, doc) VALUES(${id}, ${this})", doc)
.then(function () {
// success;
})
.catch(function (error) {
// error;
});
which will execute:
INSERT INTO documents(id, doc) VALUES(123, '{"id":123,"body":"some text"}')
Version 3.2.1 and later allows syntax `:json` as an alternative way of formatting the value as a JSON string.
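For instance, a small sketch of the `:json` filter, using a hypothetical documents table:
db.none('INSERT INTO documents(id, doc) VALUES(${id}, ${body:json})', {
    id: 456,
    body: {text: 'some text'}
});
// which will execute:
// INSERT INTO documents(id, doc) VALUES(456, '{"text":"some text"}')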
NOTE: Technically, it is possible in JavaScript, though not recommended, for an object to contain a property named `this`, in which case that property's value will be used instead.
In PostgreSQL, stored procedures are just functions that usually do not return anything.
Suppose we want to call function findAudit to find audit records by `user_id` and maximum timestamp. We can make such a call as shown below:
db.func('findAudit', [123, new Date()])
.then(function (data) {
console.log(data); // printing the data returned
})
.catch(function (error) {
console.log(error); // printing the error
});
We passed it `user_id = 123`, plus current Date/Time as the timestamp. We assume that the function signature matches the parameters that we passed. All values passed are serialized automatically to comply with PostgreSQL type formats.
Method `func` accepts an optional third parameter - `qrm` (Query Result Mask), the same as method `query`.

And when you are not expecting any return results, call `db.proc` instead. Both methods return a Promise object, but `db.proc` doesn't take a `qrm` parameter, always assuming it is `one`|`none`.
Summary for supporting procedures and functions:
- `func(query, values, qrm)` - expects the result according to `qrm`;
- `proc(query, values)` - calls `func(query, values, qrm.one | qrm.none)`
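A minimal sketch of calling a procedure-like function via db.proc is shown below; logActivity is a hypothetical function name used only for illustration:
db.proc('logActivity', [123, 'login'])
    .then(function () {
        // success, no data expected back;
    })
    .catch(function (error) {
        // error;
    });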
The library provides several helper functions to convert JavaScript types into their proper PostgreSQL presentation that can be passed directly into queries or functions as parameters. All such helper functions are located within namespace pgp.as, and each function returns a formatted string when successful or throws an error when it fails.
When we pass `values` as a single parameter or inside an array, it is verified to be an `object` that supports function `formatDBType`, as either its own or inherited. And if the function exists, its return result overrides both the actual value and the formatting syntax for parameter `query`.

This allows usage of your own custom types as formatting parameters for the queries, as well as overriding formatting for standard object types, such as `Date` and `Array`.
Example: your own type formatting
function Money(m) {
this.amount = m;
this.formatDBType = function () {
// return a string with 2 decimal points;
return this.amount.toFixed(2);
}
}
Example: overriding standard types
Date.prototype.formatDBType = function () {
// format Date as a local timestamp;
return this.getTime();
};
Function `formatDBType` is allowed to return absolutely anything, including:

- instance of another object that supports its own custom formatting;
- instance of another object that doesn't have its own custom formatting;
- another function, with recursion of any depth.

Please note that the return result from `formatDBType` may even affect the formatting syntax expected within parameter `query`, as explained below.
If you pass in `values` as an object that has function `formatDBType`, and that function returns an array, then your `query` is expected to use `$1, $2` as the formatting syntax. And if `formatDBType` in that case returns a custom-type object that doesn't support custom formatting, then `query` will be expected to use `$*propName*` as the formatting syntax.
This feature allows overriding the `raw` flag for values returned from custom types. Any custom type or standard type that implements function `formatDBType` can also set property `_rawDBType = true` to force raw variable formatting on the returned value.
This makes the custom type formatting ultimately flexible, as there is no limitation as to how a custom type can format its value.
For example, some special types, like UUID, do not have a natural presentation in JavaScript, so they have to be converted into text strings when passed into the query formatting. For an array of UUIDs, for instance, you would have to explicitly cast the formatted value, with `::uuid[]` appended at the end of the variable.
You can implement your own presentation for UUID that does not require extra casting:
function UUID(value) {
this.uuid = value;
this._rawDBType = true; // force raw format on output;
this.formatDBType = function () {
// alternatively, you can set flag
// _rawDBType during this call:
// this._rawDBType = true;
return this.uuid;
};
}
When you chain one custom-formatting type to return another one, please note that setting `_rawDBType` on any level will set the flag for the entire chain.
Use of external SQL files (via QueryFile) offers many advantages:
- Much cleaner JavaScript code, with all SQL kept in external files;
- Much easier to write large and well-formatted SQL, with comments and whole revisions;
- Changes in external SQL can be automatically re-loaded (option `debug`), without restarting the app;
- Pre-formatting SQL upon loading (option `params`), making a two-step SQL formatting very easy;
- Parsing and minifying SQL (options `minify`/`compress`), for early error detection and smaller queries.
Example:
// Helper for linking to external query files:
function sql(file) {
// consider using here: path.join(__dirname, file)
return new pgp.QueryFile(file, {minify: true});
}
// Create QueryFile globally, once per file:
var sqlFindUser = sql('./sql/findUser.sql');
db.one(sqlFindUser, {id: 123})
.then(user=> {
console.log(user);
})
.catch(error=> {
if (error instanceof pgp.errors.QueryFileError) {
// => the error is related to our QueryFile
}
});
File `findUser.sql`:
/*
multi-line comment
*/
SELECT name, dob -- single-line comment
FROM Users
WHERE id = ${id}
Every query method of the library can accept type QueryFile as its `query` parameter.
The type never throws any error, leaving it for query methods to reject with QueryFileError.
You should only create a single instance of QueryFile per file, and then reuse that instance throughout the application.
Notable features of QueryFile:
- `debug` mode, to make every query request check if the file has changed since it was last read, and if so - read it afresh. This way you can write SQL queries and see immediate updates without having to restart your application.
- Option `params` is for static SQL pre-formatting, to inject certain values only once, like a schema name or a configurable table name (see the sketch below).
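Below is a hedged sketch that combines these options; it assumes the SQL file references a ${schema~} variable that should be resolved once, on load:
var qfUsers = new pgp.QueryFile('./sql/findUser.sql', {
    minify: true,
    debug: true,           // re-read the file whenever it changes (development only);
    params: {
        schema: 'public'   // injected once, on load, e.g. for ${schema~} inside the file
    }
});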
In version 5.2.0, support for type QueryFile was also integrated into the query formatting engine. See method as.format.
The library supports promise-chained queries on shared and detached connections. Choosing which one to use depends on the situation and personal preferences.
Queries in a detached promise chain maintain their connection independently: each acquires a connection from the pool, executes the query and then releases the connection back to the pool.
db.one('select * from users where id = $1', 123) // find the user from id;
.then(function (data) {
// find 'login' records for the user found:
return db.query('select * from audit where event=$1 and userId=$2',
['login', data.id]);
})
.then(function (data) {
console.log(data); // display found audit records;
})
.catch(function (error) {
console.log(error); // display the error;
});
In a situation where a single request is to be made against the database, a detached chain is the only one that makes sense. And even if you intend to execute multiple queries in a chain, keep in mind that even though each query uses its own connection, those connections come from the connection pool, so effectively you end up with the same connection, without any performance penalty.
NOTE: With the addition of Tasks, use of shared connections directly is considered obsolete. It is recommended that you use Tasks instead, as they are much easier and safer to use.
A promise chain with a shared connection starts with `connect()`, which acquires a connection from the pool to be shared with all the queries down the promise chain. The connection must be released back to the pool when no longer needed.
var sco; // shared connection object;
db.connect()
.then(function (obj) {
sco = obj; // save the connection object;
// find active users created before today:
return sco.query('select * from users where active=$1 and created < $2',
[true, new Date()]);
})
.then(function (data) {
console.log(data); // display all the user details;
})
.catch(function (error) {
console.log(error); // display the error;
})
.finally(function () {
if (sco) {
sco.done(); // release the connection, if it was successful;
}
});
Shared-connection chaining is when you want absolute control over the connection, either because you want to execute lots of queries in one go, or because you like squeezing every bit of performance out of your code. Other than that, the author hasn't seen any performance difference from the detached-connection chaining. And besides, any long sequence of queries normally resides inside a task or transaction, which always uses shared-connection chaining automatically.
A task represents a shared connection to be used within a callback function. The callback can be either a regular function or an ES6 generator.
A transaction, for example, is just a special type of task, wrapped in `BEGIN -> COMMIT/ROLLBACK`.
db.task(function (t) {
// t = this;
// execute a chain of queries;
})
.then(function (data) {
// success;
})
.catch(function (error) {
// failed;
});
The purpose of tasks is simply to provide a shared connection context within the callback function to execute and return a promise chain, and then automatically release the connection.
In other words, it is to simplify the use of shared connections, so instead of calling `connect` in the beginning and `done` in the end (if it was connected successfully), one can call `db.task` instead, execute all queries within the callback and return the result.
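A slightly fuller sketch of a task is shown below; the tables and columns are illustrative only:
db.task(function (t) {
    // both queries share the same connection:
    return t.one('select * from users where id = $1', 123)
        .then(function (user) {
            return t.any('select * from events where userId = $1', user.id);
        });
})
    .then(function (events) {
        // the task resolved with the events, and the connection was released;
    })
    .catch(function (error) {
        // error;
    });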
Transactions can be executed within both shared and detached promise chains in the same way, performing the following actions:
- Acquires a new connection (detached chains only);
- Executes `BEGIN` command;
- Invokes your callback function (or generator) with the connection object;
- Executes `COMMIT`, if the callback resolves, or `ROLLBACK`, if the callback rejects;
- Releases the connection (detached chains only);
- Resolves with the callback result, if success; rejects with the reason, if failed.
db.tx(function (t) {
// t = this;
// creating a sequence of transaction queries:
var q1 = this.none('update users set active=$1 where id=$2', [true, 123]);
var q2 = this.one('insert into audit(entity, id) values($1, $2) returning id',
['users', 123]);
// returning a promise that determines a successful transaction:
return this.batch([q1, q2]); // all of the queries are to be resolved;
})
.then(function (data) {
console.log(data); // printing successful transaction output;
})
.catch(function (error) {
console.log(error); // printing the error;
});
A detached transaction acquires a connection and exposes object `t` = `this` to let all contained queries execute on the same connection.
NOTE: Use of shared-connection transactions is no longer necessary. When a transaction needs to use the connection from its container, you should execute it inside a task instead.
var sco; // shared connection object;
db.connect()
.then(function (obj) {
sco = obj;
return sco.oneOrNone('select * from users where active=$1 and id=$2', [true, 123]);
})
.then(function (data) {
return sco.tx(function (t) {
// t = this;
var q1 = this.none('update users set active=$1 where id=$2', [false, data.id]);
var q2 = this.one('insert into audit(entity, id) values($1, $2) returning id',
['users', 123]);
// returning a promise that determines a successful transaction:
return this.batch([q1, q2]); // all of the queries are to be resolved;
});
})
.catch(function (error) {
console.log(error); // printing the error;
})
.finally(function () {
if (sco) {
sco.done(); // release the connection, if it was successful;
}
});
If you need to execute just one transaction, the detached transaction pattern is all you need. But even if you need to combine it with other queries in a detached chain, it will work the same. As stated earlier, choosing a shared chain over a detached one is mostly a matter of special requirements and/or personal preference.
Similar to the shared-connection transactions, nested transactions automatically share the connection between all levels. This library sets no limitation as to the depth (nesting levels) of transactions supported.
Example:
db.tx(function (t) {
// t = this;
var queries = [
this.none('drop table users;'),
this.none('create table users(id serial not null, name text not null)')
];
for (var i = 1; i <= 100; i++) {
queries.push(this.none('insert into users(name) values($1)', 'name-' + i));
}
queries.push(
this.tx(function (t1) {
// t1 = this != t;
return this.tx(function (t2) {
// t2 = this != t1 != t;
return this.one('select count(*) from users');
});
}));
return this.batch(queries);
})
.then(function (data) {
console.log(data); // printing transaction result;
})
.catch(function (error) {
console.log(error); // printing the error;
});
It is important to know that PostgreSQL doesn't have proper support for nested transactions; it only supports partial rollbacks via savepoints inside transactions. The difference between the two techniques is huge, as explained further.
Proper support for nested transactions means that the result of a successful sub-transaction isn't rolled back when its parent transaction is rolled back. But with PostgreSQL save-points, if you roll-back the top-level transaction, the result of all inner save-points is also rolled back.
Save-points are only good for partial rollbacks, i.e. you can roll back the results of sub-transactions while still committing the top-level transaction successfully. Using promises, it is easy to construct your transaction so that it utilizes that logic. This library automatically provides a transaction on the top level, and save-points for all sub-transactions.
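Below is a hedged sketch of that logic: a failing sub-transaction (save-point) is caught and tolerated, so the top-level transaction can still commit; the tables used are illustrative only:
db.tx(function (t) {
    return t.none('insert into audit(entity, id) values($1, $2)', ['users', 123])
        .then(function () {
            return t.tx(function (t1) {
                // if this query fails, only the save-point is rolled back:
                return t1.none('insert into optional_log(note) values($1)', 'extra detail');
            })
                .catch(function () {
                    // tolerate the sub-transaction failure;
                });
        });
})
    .then(function () {
        // the top-level transaction committed regardless of the sub-transaction;
    })
    .catch(function (error) {
        // the top-level transaction failed;
    });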
A regular task/transaction with a set of independent queries relies on method batch to resolve all queries asynchronously.
However, when it comes to executing a significant number of queries during a bulk `INSERT` or `UPDATE`, such an approach is no longer practical. For one thing, it implies that all requests have been created as promise objects, which isn't possible when dealing with a huge number of queries, due to memory limitations imposed by NodeJS. And for another, when one query fails, the rest will continue trying to execute, due to their asynchronous, promise-based nature.
This is why within each task/transaction we have method sequence, to be able to execute a strict sequence of queries one by one, and if one fails - the rest won't try to execute.
function source(index, data, delay) {
// must create and return a promise object dynamically,
// based on the index of the sequence;
switch (index) {
case 0:
return this.query('select 0');
case 1:
return this.query('select 1');
case 2:
return this.query('select 2');
}
// returning or resolving with undefined ends the sequence;
// throwing an error will result in a reject;
}
db.tx(function (t) {
// t = this;
return this.sequence(source);
})
.then(function (data) {
console.log(data); // print result;
})
.catch(function (error) {
console.log(error); // print the error;
});
Sequence is based on implementation of spex.sequence.
In order to be able to fine-tune database requests in a highly asynchronous environment, PostgreSQL supports Transaction Snapshots, plus 3 ways of configuring a transaction:
- SET TRANSACTION - configures the current transaction; you can execute it as the very first query in your transaction function;
- SET SESSION CHARACTERISTICS AS TRANSACTION - sets default transaction properties for the entire session;
- BEGIN + Transaction Mode - initiates a pre-configured transaction.
The first method is quite usable, but it means you have to start every transaction with an initial query to configure it, which can be a bit awkward.

The second approach isn't very usable within a database framework such as this one, which relies on a connection pool, so you don't really know when a new connection is created.

The last method is not usable on its own, because transactions in this library are automatic, executing `BEGIN` without your control - or so it was until the Transaction Mode type was added (read further).
Transaction Mode extends the `BEGIN` command in your transaction with a complete set of configuration parameters.
var TransactionMode = pgp.txMode.TransactionMode;
var isolationLevel = pgp.txMode.isolationLevel;
// Create a reusable transaction mode (serializable + read-only + deferrable):
var tmSRD = new TransactionMode({
tiLevel: isolationLevel.serializable,
readOnly: true,
deferrable: true
});
function myTransaction() {
return this.query('SELECT * FROM table');
}
myTransaction.txMode = tmSRD; // assign transaction mode;
db.tx(myTransaction)
.then(function(){
// success;
});
Instead of the default `BEGIN`, such a transaction will initiate with the following command:
BEGIN ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE
Transaction Mode is set via property `txMode` on the transaction function.
This is the most efficient and best-performing way of configuring transactions. In combination with Transaction Snapshots you can make the most out of transactions in terms of performance and concurrency.
If you prefer writing asynchronous code in a synchronous manner, you can implement your tasks and transactions as generators.
function * getUser(t) {
// t = this;
let user = yield this.oneOrNone('select * from users where id = $1', 123);
return yield user || this.one('insert into users(name) values($1) returning *', 'John');
}
db.task(getUser)
.then(function (user) {
// success;
})
.catch(function (error) {
// error;
});
The library verifies whether the callback function is a generator, and executes it accordingly.
When initializing the library, you can pass object `options` with a set of global properties.
See API / options for complete list of supported options.
By default, pg-promise provides its own implementation of the query formatting, as explained in Queries and Parameters.
If, however, you want your queries formatted by the PG library, set parameter `pgFormatting` to `true` when initializing the library, and all query formatting will be redirected to the PG's implementation.

Although this has a huge implication for the library's functionality, it is not within the scope of this project to detail. For any further reference you should use the documentation of the PG library.
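A minimal initialization sketch for this override:
var pgp = require('pg-promise')({
    pgFormatting: true // hand all query formatting over to node-postgres
});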
Below are just some of the query-formatting features implemented by pg-promise that are not in node-postgres:
- Custom Type Formatting
- Single-value formatting: pg-promise doesn't require use of an array when passing a single value;
- Raw-Text support: injecting raw/pre-formatted text values into the query;
- Functions as formatting parameters, with the actual values returned from the callbacks;
- PostgreSQL Array Constructors are used when formatting arrays, not the old string syntax;
- Automatic conversion of numeric `NaN`, `+Infinity` and `-Infinity` into their string presentation;
- Support for this reference;
- Automatic QueryFile support
NOTE: Formatting parameters for calling functions (methods `func` and `proc`) is not affected by this override. When needed, use the generic `query` instead to invoke functions with redirected query formatting.
By default, pg-promise uses ES6 Promise. If your version of NodeJS doesn't support ES6 Promise, or you want a different promise library to be used, set this property to the library's instance.
Example of switching over to Bluebird:
var promise = require('bluebird');
var options = {
promiseLib: promise
};
var pgp = require('pg-promise')(options);
Promises/A+ libraries that implement a recognizable promise signature and work automatically:
- ES6 Promise - used by default, though it doesn't have `done()` or `finally()`;
- Bluebird - best alternative all around;
- Promise - very solid library;
- When - quite old, not the best support;
- Q - most widely used;
- RSVP - doesn't have `done()`, use `finally`/`catch` instead;
- Lie - doesn't have `done()`.
If you pass in a library that doesn't implement a recognizable promise signature, pg-promise will throw error `Invalid promise library specified.` during initialization.
For such libraries you can use Promise Adapter to make them compatible with pg-promise, mostly needed by smaller and simplified Conformant Implementations.
When exiting your application, you can optionally call pgp.end:
pgp.end(); // terminate the database connection pool
This will release the pg connection pool globally and make sure that the process terminates without any delay. If you do not call it, your process may keep waiting for up to 30 seconds (the default for poolIdleTimeout) for the connection to expire in the pool.

If, however, you normally exit your application by killing the NodeJS process, then you don't need to use it.
For the list of all changes see the history log.
Copyright (c) 2016 Vitaly Tomilov ([email protected])
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.