Ingest Server-Sent Events (SSE) using Amazon Managed Service for Apache Flink (formerly Amazon Kinesis Data Analytics)
🚨 August 30, 2023: Amazon Kinesis Data Analytics has been renamed to Amazon Managed Service for Apache Flink.
When dealing with real-time data, it is often necessary to push that data over the internet to clients. Several technologies enable this, such as WebSockets and long polling. More recently, server-sent events (SSE) has become a popular technology for pushing updates to clients. Ingesting this type of data source into AWS requires a client that runs continuously to receive those events. This sample shows how to connect to an SSE endpoint from an Amazon Kinesis Data Analytics application using Apache Flink. As events arrive they are published to an Amazon Kinesis Data Streams stream; this sample then simply stores the event data in Amazon S3.
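In the SSE wire format, each event arrives as a block of `field: value` lines terminated by a blank line. As a rough illustration of what the application consumes (a hypothetical helper, not the sample's actual source code), parsing one SSE frame into its `event` and `data` fields might look like:

```java
import java.util.HashMap;
import java.util.Map;

public class SseFrameParser {
    // Parse a single SSE frame (the lines up to a blank line) into a
    // field -> value map. Repeated "data" lines are joined with newlines,
    // mirroring how the SSE specification accumulates multi-line payloads.
    public static Map<String, String> parse(String frame) {
        Map<String, String> fields = new HashMap<>();
        for (String line : frame.split("\n")) {
            if (line.isEmpty() || line.startsWith(":")) continue; // blank line or comment
            int idx = line.indexOf(':');
            String key = idx >= 0 ? line.substring(0, idx) : line;
            String value = idx >= 0 ? line.substring(idx + 1).trim() : "";
            fields.merge(key, value, (a, b) -> a + "\n" + b);
        }
        return fields;
    }

    public static void main(String[] args) {
        String frame = "event: message\ndata: {\"title\":\"Example\"}";
        Map<String, String> parsed = parse(frame);
        System.out.println(parsed.get("event") + " -> " + parsed.get("data"));
    }
}
```

The `data` field typically carries the JSON payload that is forwarded to the Kinesis data stream.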
- An Amazon Kinesis Data Analytics application using Apache Flink creates the connection to the server-sent events HTTP endpoint. The application must be placed in a private subnet of a virtual private cloud (VPC) so that outbound connections can be made.
- Each event that is ingested is published to an Amazon Kinesis Data Streams stream.
- An Amazon Kinesis Data Firehose delivery stream receives the event payloads from the data stream.
- An Amazon Simple Storage Service (Amazon S3) bucket stores the events for future analysis.
- https://developers.facebook.com/docs/graph-api/server-sent-events/
- Facebook uses SSE to send out updates for live video comments and reactions
- https://wikitech.wikimedia.org/wiki/Event_Platform/EventStreams
- Wikimedia uses SSE to send all changes to wiki sites
- https://iexcloud.io/docs/api/#sse-streaming
- IEX uses SSE to stream real-time stock quotes
- https://www.mbta.com/developers/v3-api/streaming
- MBTA uses SSE to stream real-time transportation predictions
- Apache Maven 3.5 or greater installed
- Java 11 or greater installed
- AWS Cloud Development Kit (CDK)
- Git clone this repository
- Run the Maven package command: `mvn package`
- Upload the compiled JAR file ('amazon-kinesis-data-analytics-apache-flink-server-sent-events-{version}.jar') to an S3 bucket in the account where you plan to run the application.
- The CDK will produce two different CloudFormation Templates
- amazon-kinesis-data-analytics-apache-flink-server-sent-events-create-vpc.template
- This template will create a VPC for you with private and public subnets
- amazon-kinesis-data-analytics-apache-flink-server-sent-events-use-existing-vpc.template
- This template allows you to select existing security group IDs and subnet IDs to use with the application
- The subnet IDs selected must belong to private subnets so that outbound connections can be made from within an Amazon Kinesis Data Analytics Apache Flink application.
- The security group IDs selected must allow outbound connections on the required HTTP port. In this sample an outbound connection is made on the standard HTTP port 80.
- Deploy one of the generated CloudFormation templates using the console or the AWS CLI
- Fill in the required parameters:
- S3Bucket - The S3 bucket from which the Amazon Kinesis Data Analytics application loads your application's JAR file
- S3StorageBucket - The S3 bucket name used to store the server-sent events data
- S3StorageBucketPrefix - The prefix used when storing server-sent events data into the S3 bucket
- S3StorageBucketErrorPrefix - The prefix used when storing error events into the S3 bucket
- FlinkApplication - The Apache Flink application JAR filename located in the S3 bucket
- *Subnets - The subnets used for the Amazon Kinesis Data Analytics application (When using an existing VPC template)
- *SecurityGroups - The security groups used for the Amazon Kinesis Data Analytics application (When using an existing VPC template)
- The included sample configures the Amazon Kinesis Data Analytics application to connect to the Wikimedia EventStreams recent changes SSE endpoint.
- Run the Kinesis Data Analytics application from the console or the CLI
- Once the application is running, navigate to the S3 bucket supplied in the CloudFormation parameters and view the event data records
To connect to a different endpoint you can edit the Amazon Kinesis Data Analytics Runtime Properties. The following properties are available:
- Property Group: ProducerConfigProperties
- The key/value pair properties in this group are passed directly to the FlinkKinesisProducer; refer to the FlinkKinesisProducer documentation for the supported properties
- The key AggregationEnabled should be set to false when sending data to consumers that cannot disaggregate the records. For example, if AggregationEnabled is true and the data stream is consumed by an AWS Lambda function, you must disaggregate the records yourself inside the function.
- Property Group: OutputStreamProperties
- DefaultStream - The value of this property should be the name of the Amazon Kinesis Data Streams stream to which this application outputs events. Be aware that the Amazon Kinesis Data Analytics application's security role requires permissions for this stream.
- Property Group: SSESourceProperties
- url (required) - The SSE endpoint to connect to
- headers - A pipe-delimited set of key/value pair headers. For example, "x-api-key|demo|Accept|text/html" is split into key x-api-key with value demo and key Accept with value text/html
- types - A pipe-delimited set of SSE event types to send to the data stream
- readTimeoutMS - The read timeout value in milliseconds. Note that in most cases setting this to anything other than zero will prevent the application from connecting.
- reportMessagesReceivedMS - The interval, in milliseconds, at which to log the number of messages received during that period
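As a rough sketch of how the pipe-delimited headers and types values described above break down (a hypothetical helper for illustration, not the sample's actual code):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PipeDelimitedProperties {
    // Split a "key|value|key|value" headers property into an ordered map,
    // e.g. "x-api-key|demo|Accept|text/html" -> {x-api-key=demo, Accept=text/html}.
    public static Map<String, String> parseHeaders(String property) {
        Map<String, String> headers = new LinkedHashMap<>();
        String[] parts = property.split("\\|");
        for (int i = 0; i + 1 < parts.length; i += 2) {
            headers.put(parts[i], parts[i + 1]);
        }
        return headers;
    }

    // Split a "type1|type2" types property into a list of event types.
    public static List<String> parseTypes(String property) {
        return Arrays.asList(property.split("\\|"));
    }

    public static void main(String[] args) {
        System.out.println(parseHeaders("x-api-key|demo|Accept|text/html"));
        System.out.println(parseTypes("message|update"));
    }
}
```

An odd number of pipe-delimited segments in the headers value would leave the trailing key without a value, so header properties should always contain key/value pairs.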
- Delete the CloudFormation stack
- Remove the JAR file from the S3 bucket
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.