[Firestore] Error adding document: Error: 4 DEADLINE_EXCEEDED: Deadline exceeded after 269.549s,metadata filters: 0.002s,LB pick: 0.002s,remote_addr=142.251.16.95:443 #2655
Comments
I found a few problems with this issue:
Drive-by comment: If the problematic query contains an "or" clause, it could be related to googleapis/nodejs-firestore#2055, for which a backend fix will be rolling out in August 2024.
@dconeybe it is mostly happening in write queries. I am pasting a sample here:
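(The original sample did not survive this copy of the thread. As a purely hypothetical illustration of the kind of Admin SDK write being described, where the collection name, fields, and the small AB object are all assumptions rather than the reporter's actual code:)

// Hypothetical sketch only; names and document shape are assumptions.
const { getFirestore } = require('firebase-admin/firestore');

async function saveSample(app) {
  const db = getFirestore(app);
  try {
    const ref = await db.collection('events').add({
      userId: 'user-123',        // assumed field
      AB: { variant: 'B' },      // the small "AB" object mentioned below
      createdAt: new Date(),
    });
    console.log('Added document', ref.id);
  } catch (err) {
    // This is where the reported "4 DEADLINE_EXCEEDED" error surfaces.
    console.error('Error adding document:', err);
  }
}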
While the AB object is probably only a few bytes, I am not sure it is big enough to cause any issues in writes.
Hi @Drpepper78, thank you for reporting this issue. Could you please confirm:
This could be a similar issue to #2495, which was fixed in a newer SDK version. Please try upgrading the SDK and see if the issue persists.
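(As a minimal sketch of that suggestion, assuming npm is the package manager in use:)

npm install firebase-admin@latest
npm ls firebase-admin   # confirm which version actually resolved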
Additional data point if it helps: this db is in Native mode on Google Cloud, not Datastore mode. I am speculating here, but:
- Is there some sort of limit on operations per document?
- If a document is being written to, can it still be read at the same time on another connection?
- Are there collection-level limits on writes per second?
- Are there any limits on the number of connections a client can form? In our console, I can see no more than 10 connections were used at peak. If I am blocked on the number of connections, I can try to spawn more clients.
Based on the issue you linked, I will add these env variables.
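(The exact variables are not preserved in this copy of the thread. As an assumption, the grpc-js debugging variables commonly suggested in these threads look like this:)

GRPC_VERBOSITY=DEBUG
GRPC_TRACE=all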
Hi @milaGGL, another update: I just noticed that the stack trace attached to my read API error actually comes from a write request invoked elsewhere. This seems to be a Vercel issue where logs are attributed incorrectly. I have opened an issue with them, but until I get a response we can assume the problem mainly lies with writes, and we are well below the write limits.
Have you tried upgrading the SDK to the newest version? This could be due to the broken grpc library (see #2495 (comment)). Also, is it possible to share the stack trace of the error? It might tell us something.
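(One quick way to see which grpc build the Admin SDK actually resolved to, assuming npm:)

npm ls @grpc/grpc-js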
Hi @milaGGL, I am getting the same error. We might have the same issue.
EDIT: Another piece of information that might help solve this case: it does not happen 100% of the time. If I retry the same request, it goes through without error. Here's the trace, which might help Drpepper too:
[REQUIRED]
I am using the Firebase Admin SDK for this, but I have attached the version numbers for both:
"firebase": "10.11.1",
"firebase-admin": "12.1.0",
Step 3: I have a serverless web app with a mostly modest write rate, i.e. less than 3-4k writes per day. The hourly peak for writes is about 500 and for reads it is about 38k. Each of my Vercel functions is deployed separately, but I am not sure whether the DAO setup is shared between all of them or whether they each get their own instance.
In the DAO files, I initialize the admin app and use getFirestore to create a db client for queries, roughly as sketched below.
database = getFirestore(app);
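(For context, a minimal sketch of that DAO initialization with firebase-admin 12; the credential source here is an assumption:)

const { initializeApp, applicationDefault } = require('firebase-admin/app');
const { getFirestore } = require('firebase-admin/firestore');

// Initialized once per module; in a serverless function this only re-runs on a cold start.
const app = initializeApp({ credential: applicationDefault() });
const database = getFirestore(app);

module.exports = { database };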
It is weird that these errors report anywhere from 90 to 200+ seconds of delay, since our db is fairly small and the number of queries is low enough that they shouldn't be blocked for this long. Most of these are user-driven actions, so updates to individual documents or collections have a few seconds of gap between them. I am nowhere near the limits that would explain this.
I am also curious whether there is a limit on the number of connections a single database client created with getFirestore(app) can make to the Firestore db, since our connection usage peaks at 10.
Steps to reproduce:
There is no standard pattern to this. I get it regularly but only intermittently for the same queries.