feat: AWS SQS Example #297
base: main
Conversation
michaelangrivera seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Have you signed the CLA already but the status is still pending? Let us recheck it.
```ts
  AWS_ACCOUNT_ID,
  QUEUE_NAME,
}
const result = envSchema.safeParse(envs)
```
I included this logic so the average user gets an error if they have not configured their AWS environment variables, which are needed to make this sample run.
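The fail-fast check described above can be sketched without any dependency. The actual sample appears to use a schema library (the diff calls `envSchema.safeParse`, which matches zod's API); the function below is only an illustration of the same idea, and its name and result shape are assumptions, not the sample's code.

```typescript
// Sketch of fail-fast env validation (assumed helper, not the sample's actual code).
// The real sample uses a schema's safeParse; this mirrors its success/error shape.
type ParseResult =
  | { success: true; data: Record<string, string> }
  | { success: false; error: string }

function safeParseEnv(
  required: string[],
  env: Record<string, string | undefined> = process.env,
): ParseResult {
  // Collect every required variable that is unset or empty.
  const missing = required.filter((key) => !env[key])
  if (missing.length > 0) {
    return { success: false, error: `Missing env variables: ${missing.join(', ')}` }
  }
  const data: Record<string, string> = {}
  for (const key of required) data[key] = env[key] as string
  return { success: true, data }
}
```

A caller would then do `const result = safeParseEnv(['AWS_ACCOUNT_ID', 'QUEUE_NAME'])` and throw `result.error` when `result.success` is false, so misconfiguration surfaces at startup rather than mid-run.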
Thank you very much for this sample project. I'll review next week, but in the meantime, it looks like a very interesting example. My only concern at this point is whether we really want to maintain this as part of this samples repo. We've recently been discussing restructuring the repo so that it is both easier to maintain and better fulfills its purpose. But let me check.
```ts
logger.info({ message }, 'Starting workflow')
await client.workflow.start(helloWorld, {
  args: [message],
  workflowId: nanoid(),
```
Can you deterministically generate a workflowId from the message data, so that when there are duplicate messages, multiple workflows aren't created?
Yes, you can! This example is FIFO-based though, which typically already de-duplicates messages. We can try using the message ID that comes with this data.
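The deterministic-workflowId idea discussed above can be sketched with a hash of a stable message field. Using the SQS `MessageId` is an assumption here; any field that is identical across re-deliveries of the same message would work. Temporal then treats a repeated workflowId for a running workflow as a duplicate start rather than creating a new workflow.

```typescript
import { createHash } from 'node:crypto'

// Derive a stable workflowId from the SQS message, so a re-delivered message
// maps to the same workflow instead of nanoid() minting a new id each time.
// (Hypothetical helper; the field used as input is an assumption.)
function workflowIdFromMessage(messageId: string): string {
  const digest = createHash('sha256').update(messageId).digest('hex')
  // Truncate for readability; 128 bits is still plenty of collision resistance.
  return `sqs-${digest.slice(0, 32)}`
}
```

The start call would then pass `workflowId: workflowIdFromMessage(message.MessageId)` in place of `nanoid()`.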
Co-authored-by: Loren☺️ <[email protected]>
@michaelangeloio amazing example, thanks! I've been wanting to implement something similar lately. What are your thoughts on using `worker_threads`?
Can you provide some context? You can theoretically use `worker_threads`; however, drawbacks include having to pass a file (compiled from TS to JS) or a code string into the worker thread.
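The drawback mentioned above can be made concrete: Node's `worker_threads` accepts either a path to a compiled `.js` file or a raw code string (with `eval: true`), never a TypeScript function directly. A minimal sketch of the string form (the helper name is hypothetical):

```typescript
import { Worker } from 'node:worker_threads'

// Runs a CommonJS code string in a worker thread and resolves with the first
// message it posts back. Illustrates why TS sources must be compiled (or
// stringified) before they can cross the thread boundary.
function runInWorker(code: string): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(code, { eval: true })
    worker.once('message', resolve)
    worker.once('error', reject)
  })
}
```

For example, `await runInWorker("const { parentPort } = require('node:worker_threads'); parentPort.postMessage(21 * 2)")` resolves with the worker's posted value; anything non-trivial quickly pushes you toward shipping a compiled file instead.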
I'm about to port an existing SQS worker to our already existing Temporal infrastructure. I had a couple of strategies in mind:
What was changed
This PR introduces a sample showcasing the integration of Temporal with AWS SQS, specifically focusing on FIFO queues.
Why?
Using Temporal alongside an AWS FIFO queue (or any AWS SQS queue) can be beneficial for several reasons:
Decoupling of Services: In environments where two services cannot communicate over the same local network due to being in different clusters or policy constraints, a queue acts as a mediator. AWS SQS can store messages until they're consumed, ensuring that messages aren't lost even if the consuming service isn't immediately available.
Integration with Existing Architecture: For organizations already invested in AWS and using SQS queues, integrating Temporal can enhance the processing capabilities without a complete overhaul. Temporal can be introduced to handle the business logic, retries, and workflows, while SQS continues to act as the message broker.
Ordered Processing: FIFO queues ensure that messages are processed in the order they are sent, which is crucial in scenarios like financial transactions or data synchronization.
Also, customers will be happy to see it's possible to integrate Temporal into their existing stack 😉
Checklist
Closes [Feature Request] AWS SQS Sample #296
How was this tested:
Follow the steps for spinning up the stack, and everything should work smoothly (should you provide the correct AWS inputs)!
I can write some unit tests; however, I doubt this would be beneficial as this is simply a sample.
Included a README as part of this PR.
I'm open to any feedback! Just let me know! 👍