One minor thing to worry about: implementing this check by having the server talk to S3 means that the server needs to be able to access the target S3 bucket (to verify its existence). In contrast, right now the server only needs permission to launch lambdas, and then the lambda role associated with the function being launched needs access to S3.
Implementing this check with a "trial balloon" worker is a possibility, but that's a big change given the way mu launches lambdas.
There are probably other ways to handle this, too.
One more thought: this doesn't need to be the coordinator's responsibility. Before launching a job, a separate "checking" stage can catch errors like this. This is probably the best approach because it cleanly separates two tasks that do not need to be connected.
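A minimal sketch of what that checking stage could look like. The names here (`check_bucket`, the fake client in the usage example) are illustrative, not mu's actual API; the only real dependency assumed is that the client exposes boto3's `head_bucket` call, which is cheap and needs only list/read permission on the bucket:

```python
def check_bucket(s3_client, bucket):
    """Return True iff `bucket` exists and is reachable.

    `s3_client` is anything exposing boto3's `head_bucket(Bucket=...)`
    call; head_bucket raises an exception on a missing or forbidden
    bucket, so we can probe without transferring any data.
    """
    try:
        s3_client.head_bucket(Bucket=bucket)
        return True
    except Exception:
        return False
```

The server would then refuse to start (or the checking stage would fail the job) when `check_bucket(boto3.client("s3"), args.bucket)` returns False, before any lambdas are invoked. Note this sketch still implies the caveat above: whatever runs the check needs S3 access, not just lambda-launch permission.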
Example:
If a `*_server.py` binary runs with a bucket that does not exist (`-b` option), errors are thrown *after* all the lambdas are configured:
```
SERVER HANDLING (1) FAIL(retrieving norun:sintel-1k-png16/00000007.png->/tmp/lambda_Sc4R9q/00000007.png from s3:
Traceback (most recent call last):
```
Might be better to catch bad configurations early on before we start invoking any lambdas.