introduction: |-
[Google Cloud Logging](https://cloud.google.com/logging/docs) allows you to store, search, analyze,
monitor, and alert on log data and events from Google Cloud Platform and Amazon Web Services.
If you require lightweight dependencies, an experimental, minified version of
this library is available at [@google-cloud/logging-min](https://www.npmjs.com/package/@google-cloud/logging-min).
Note: `logging-min` is experimental, and its feature surface is subject to change.
To install the `@google-cloud/logging-min` library, run the following command:
```bash
npm install @google-cloud/logging-min
```
For an interactive tutorial on using the client library in a Node.js application, click Guide Me:
[![Guide Me](https://raw.githubusercontent.com/googleapis/nodejs-logging/main/_static/guide-me.svg)](https://console.cloud.google.com/?walkthrough_id=logging__logging-nodejs)
body: |-
## Batching Writes
High-throughput applications should avoid awaiting calls to the logger:
```js
await log.write(logEntry1);
await log.write(logEntry2);
```
Rather, applications should use a _fire and forget_ approach:
```js
log.write(logEntry1);
log.write(logEntry2);
```
The `@google-cloud/logging` library will handle batching and dispatching
these log lines to the API.
## Writing to Stdout
The `LogSync` class helps users easily write context-rich structured logs to
`stdout` or any custom transport. It extracts additional log properties like
trace context from HTTP headers and can be used as an on/off toggle between
writing to the API or to `stdout` during local development.
Logs written to `stdout` are then picked up, out-of-process, by a Logging
agent in the respective GCP environment. Logging agents can add more
properties to each entry before streaming it to the Logging API.
Read more about [Logging agents](https://cloud.google.com/logging/docs/agent/logging).
It is highly recommended that serverless applications such as Cloud Functions, Cloud Run, and App Engine
use the `LogSync` class, as asynchronous logs may be dropped due to lack of CPU.
Read more about [structured logging](https://cloud.google.com/logging/docs/structured-logging).
```js
const {Logging} = require('@google-cloud/logging');
// Optional: Create and configure a client
const logging = new Logging();
await logging.setProjectId();
await logging.setDetectedResource();
// Create a LogSync transport, defaulting to `process.stdout`
const log = logging.logSync('my-log');
const meta = {}; // optional field overrides here
const entry = log.entry(meta, 'Your log message');
log.write(entry);
// Syntax sugar for logging at a specific severity
log.alert(entry);
log.warning(entry);
```
## Populating HTTP request metadata
Metadata about the HTTP request is part of the [structured log info](https://cloud.google.com/logging/docs/structured-logging)
that can be captured within each log entry. It provides context for the application logs and
is used to group multiple log entries under the corresponding load balancer request log. See the [sample](https://github.com/googleapis/nodejs-logging/blob/master/samples/http-request.js)
for how to populate HTTP request metadata for log entries.
If you already have a "raw" HTTP `request` object, you can assign it to `entry.metadata.httpRequest` directly. More information about
how the `request` is interpreted as raw can be found in the [code](https://github.com/googleapis/nodejs-logging/blob/15849160116a814ab71113138cb211c2e0c2d4b4/src/entry.ts#L224-L238).
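For example, here is a minimal sketch of both approaches; the method, URL, status, and the `req` object are illustrative placeholders, and `log` is assumed to be a `Log` instance created with `logging.log(...)`:
```js
// Populate selected HttpRequest fields explicitly (values are placeholders)
const entry = log.entry(
  {
    httpRequest: {
      requestMethod: 'GET',
      requestUrl: 'https://example.com/some/path',
      status: 200,
      responseSize: 1234,
    },
  },
  'Handled request'
);
log.write(entry);
// Alternatively, assign a "raw" Node.js request object directly
// (assumes `req` is an http.IncomingMessage from your web framework)
const rawEntry = log.entry({}, 'Handled request');
rawEntry.metadata.httpRequest = req;
log.write(rawEntry);
```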
## Automatic Trace/Span ID Extraction
Cloud Logging libraries use [trace fields within LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#FIELDS.trace) to capture trace contexts, which enables the [correlation of logs and traces](https://cloud.google.com/logging/docs/view/correlate-logs) and troubleshooting with distributed tracing.
These tracing fields, including [trace](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#FIELDS.trace), [spanId](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#FIELDS.span_id), and [traceSampled](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#FIELDS.trace_sampled), define the trace context for a `LogEntry`.
If not provided explicitly in a LogEntry, the Cloud Logging library automatically populates `trace`, `span_id`, and `trace_sampled` fields from detected OpenTelemetry span contexts, or from HTTP request headers.
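For instance, a minimal sketch of providing these fields explicitly (the project, trace, and span IDs below are placeholders, and `log` is assumed to be an existing `Log` instance):
```js
// Explicitly set trace context on the entry metadata;
// when these fields are present, no automatic extraction is needed.
const entry = log.entry(
  {
    trace: 'projects/my-project-id/traces/06796866738c859f2f19b7cfb3214824',
    spanId: '000000000000004a',
    traceSampled: true,
  },
  'Log entry with explicit trace context'
);
log.write(entry);
```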
### Extracting Trace/Span ID from OpenTelemetry Context
If you are using OpenTelemetry and there is an active span in the OpenTelemetry Context, the `trace`, `span_id`, and `trace_sampled` fields in the log entry are automatically populated from the active span. More information about OpenTelemetry can be found [here](https://opentelemetry.io/docs/languages/js/).
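A minimal sketch of this behavior, assuming the `@opentelemetry/api` and `@opentelemetry/sdk-trace-node` packages are installed and `log` was created as in the earlier examples:
```js
const {NodeTracerProvider} = require('@opentelemetry/sdk-trace-node');
const {trace} = require('@opentelemetry/api');
// Register a tracer provider so there is an active OpenTelemetry context
const provider = new NodeTracerProvider();
provider.register();
const tracer = trace.getTracer('example-tracer');
tracer.startActiveSpan('handle-request', span => {
  // trace, spanId, and traceSampled are populated from the active span
  log.write(log.entry('Log written inside an active span'));
  span.end();
});
```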
### Extracting Trace/Span ID from HTTP Headers
If tracing fields are not provided explicitly and no OpenTelemetry context is detected, the `trace` / `span_id` fields are extracted automatically from HTTP headers.
Trace information can be automatically populated from either the [W3C Traceparent](https://www.w3.org/TR/trace-context) or [X-Cloud-Trace-Context](https://cloud.google.com/trace/docs/trace-context#legacy-http-header) headers.
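As a sketch, attaching the raw incoming request to the entry lets the library read these headers (assuming `req` is an `http.IncomingMessage` whose headers include `traceparent` or `X-Cloud-Trace-Context`):
```js
// The library inspects the raw request's headers and fills in the
// trace, spanId, and traceSampled fields of the entry automatically.
const entry = log.entry({httpRequest: req}, 'Request handled');
log.write(entry);
```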
## Error handling with logs written or deleted asynchronously
The `Log` class provides the ability to write and delete logs asynchronously. However, there are cases when log entries
cannot be written or deleted and an error is thrown; if the error is not handled properly, it could crash the application.
One way to catch the error is to `await` the log write/delete call and wrap it in a `try/catch` block, as in the example below:
```js
// Write the log entry and catch any errors
try {
  await log.write(entry);
} catch (err) {
  console.log('Error is: ' + err);
}
```
However, awaiting every `log.write` or `log.delete` call may introduce delays that can be avoided by
simply adding a callback, as in the example below. This way the log entry is queued for processing and code
execution continues without further delay. The callback is called once the operation completes:
```js
// Asynchronously write the log entry and handle the response or any errors in the provided callback
log.write(entry, err => {
  if (err) {
    // The log entry was not written.
    console.log(err.message);
  } else {
    console.log('No error in write callback!');
  }
});
```
Adding a callback to every `log.write` or `log.delete` call can be a burden, especially if the code
handling the error is always the same. For this purpose, the library provides a way to set a default callback
for the `Log` class through the `LogOptions` passed to the `Log` constructor, as in the example below. This
way you can define a global callback once for all `log.write` and `log.delete` calls and handle their errors:
```js
const {Logging} = require('@google-cloud/logging');
const logging = new Logging();
// Create options with default callback to be called on every write/delete response or error
const options = {
  defaultWriteDeleteCallback: function (err) {
    if (err) {
      console.log('Error is: ' + err);
    } else {
      console.log('No error, all is good!');
    }
  },
};
const log = logging.log('my-log', options);
```
See the full sample in the `writeLogWithCallback` function [here](https://github.com/googleapis/nodejs-logging/blob/master/samples/logs.js).