I have a monorepo with many, many Lambda functions that I deploy to AWS with CircleCI, Terraform, and Serverless. My deployment pipeline calls `serverless webpack`, which webpacks all the functions and then uploads them to AWS.
In my `.circleci/config.yml`:

```yaml
- run:
    name: Test and build
    command: |
      source $HOME/circleci/lambdas_custom_image/env/nvm_init.sh
      export DEPLOY="true" # This makes webpack create zips with index.js in the root of the deployment package.
      yarn
      yarn build
      if echo $CIRCLE_TAG | grep -E "$PROD_TAG_PREFIX*" || echo $CIRCLE_BRANCH | grep -P "^$QA_BRANCH\$" > /dev/null; then
        echo "Uploading deployment packages to S3..."
        for FUNCTION in $FUNCTIONS; do
          aws s3 cp app/lambdas/$FUNCTION.zip s3://$LAMBDA_S3_BUCKET/$S3_APP_FOLDER/$FUNCTION.zip
        done
      fi
```
`yarn build` runs this script from my `package.json`:

```json
"build": "serverless webpack",
```
This works great. However, the more functions I add, the longer deployment takes. I have concurrency set to 3 so that my Docker container doesn't run out of memory, which means `serverless webpack` bundles 3 functions at a time. That's fine, but I have about 25 functions right now and the plan is to have hundreds, so as I add more functions the deployment time keeps growing. Hundreds of functions would mean hours of webpacking, then uploading.
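For context, that concurrency limit lives in the webpack section of my `serverless.yml`, roughly like this (trimmed; I believe the exact key can differ between serverless-webpack versions):

```yaml
# serverless.yml (excerpt, trimmed) -- where I cap how many functions get webpacked at once
custom:
  webpack:
    concurrency: 3   # bundle at most 3 functions at a time so the container doesn't run out of memory
```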
The solution my Amazing DevOps Expert has recommended is to webpack and upload the functions in batches, each batch in its own CircleCI Docker container, so many containers can do the grunt work simultaneously. That way the deployment time would be tied to how many functions we put in a batch, not how many functions we have in total.
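For what it's worth, here's roughly how I imagine the container side looking, using CircleCI's job-level `parallelism` key and the `CIRCLE_NODE_INDEX` each container gets. This is only a sketch, not something we run today: `BATCH_INDEX` is a name I'm inventing, the image is a placeholder, and it assumes the serverless config change I ask about below.

```yaml
# .circleci/config.yml (sketch) -- `parallelism` starts N identical containers,
# each with its own CIRCLE_NODE_INDEX (0..N-1) it can use to pick a batch of functions.
jobs:
  build_and_upload:
    parallelism: 4                      # 4 containers, each building and uploading one batch
    docker:
      - image: my-lambdas-build-image   # placeholder for our real build image
    steps:
      - checkout
      - run:
          name: Build and upload this container's batch
          command: |
            source $HOME/circleci/lambdas_custom_image/env/nvm_init.sh
            export DEPLOY="true"
            yarn
            # BATCH_INDEX is made up; the serverless config would use it to pick which functions to build
            BATCH_INDEX=$CIRCLE_NODE_INDEX yarn build
            # Only this batch's zips should exist in app/lambdas now, so upload whatever was produced
            # (prod-tag / QA-branch guard from our current config omitted for brevity)
            for ZIP in app/lambdas/*.zip; do
              aws s3 cp "$ZIP" s3://$LAMBDA_S3_BUCKET/$S3_APP_FOLDER/$(basename "$ZIP")
            done
```

My DevOps expert will own that part; I'm only including it so the serverless-webpack question below has some context.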
The problem is, I don't know how to accomplish this with `serverless-webpack`. I see configuration options around packaging functions separately (https://www.serverless.com/framework/docs/providers/aws/guide/packaging), but I'm not sure how to modify my serverless config, webpack config, and CircleCI deployment script to make it happen. My Amazing DevOps Expert will handle spinning all the containers up, but they have tasked me with figuring out how to webpack my Lambda function code in batches. Please help!
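Is something like this the right direction? It's a sketch under two assumptions: that `package: individually: true` is the setting the packaging docs describe, and that a nested `${env:...}` variable inside a `${file(...)}` reference resolves the way I think it does. The `functions/batch-*.yml` files and `BATCH_INDEX` are things I'd have to add; they don't exist in my repo today.

```yaml
# serverless.yml (sketch) -- package each function on its own, and pull this
# container's function definitions from a per-batch file chosen by an env var.
package:
  individually: true   # one artifact per function instead of one big zip

# BATCH_INDEX and the functions/batch-*.yml files are names I'm inventing for illustration.
functions: ${file(./functions/batch-${env:BATCH_INDEX}.yml)}

# functions/batch-0.yml would then just be an ordinary functions map, e.g.
#   createUser:
#     handler: app/lambdas/createUser/index.handler
#   sendEmail:
#     handler: app/lambdas/sendEmail/index.handler
# (handler paths made up -- mine come from the existing webpack setup)
```

If that works the way I hope, `serverless webpack` in each container should only compile the functions listed in its batch file, so wall-clock time scales with the batch size rather than the total function count. Is that the intended way to use separate packaging here, or is there a better hook in serverless-webpack for building a subset of functions?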