In this hands-on example we send end-to-end encrypted messages through Confluent Cloud.
Ockam encrypts messages from a Producer all the way to a specific Consumer. Only that Consumer can decrypt these messages. This guarantees that your data cannot be observed or tampered with as it passes through Confluent Cloud or the network where it is hosted. The operators of Confluent Cloud can only see encrypted data in the network and in the services they operate. Thus, a compromise of the operator's infrastructure will not compromise the data stream's security, privacy, or integrity.
To learn how end-to-end trust is established, please read: “How does Ockam work?”
This example requires Bash, the Confluent CLI, jq, Git, curl, Docker, and Docker Compose. Please set up these tools for your operating system and log in to Confluent with the Confluent CLI so that clusters can be created and deleted, then run the following commands:
# Clone the Ockam repo from Github.
git clone --depth 1 https://github.com/build-trust/ockam && cd ockam
# Navigate to this example’s directory.
cd examples/command/portals/kafka/confluent/
# Run the example, use Ctrl-C to exit at any point.
./run.sh
If everything runs as expected, you'll see the message: The example run was successful 🥳
The run.sh script that you ran above, and its accompanying files, are full of comments and meant to be read. The example setup takes only a few simple steps, so please take some time to read and explore.
- The run.sh script calls the run function, which invokes the enroll command to create a new identity, sign in to Ockam Orchestrator, set up a new Ockam project, make you the administrator of that project, and get a project membership credential.
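The enrollment step boils down to a single command. A minimal sketch, assuming the ockam CLI is installed and on your PATH:

```shell
# Create a new identity, sign in to Ockam Orchestrator, and set up a
# new project with you as its administrator. This opens a browser
# window to complete authentication.
ockam enroll
```

run.sh wraps this in error handling; reading the script shows the exact invocation it uses.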
- The run function then creates a new Kafka cluster using the Confluent CLI.
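Cluster creation with the Confluent CLI looks roughly like the sketch below; the cluster name, cloud provider, and region here are illustrative placeholders, not the exact values run.sh uses:

```shell
# Create a basic Kafka cluster in Confluent Cloud.
# Name, cloud, and region are placeholders for illustration.
confluent kafka cluster create demo-cluster --cloud aws --region us-east-1
```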
- An Ockam relay is then started using the Ockam Confluent addon, which creates an encrypted relay that transmits Kafka messages over a secure portal.
- We then generate two new enrollment tickets, each valid for 10 minutes and redeemable only once. The tickets are meant for the Consumer and Producer nodes that will run in the Application Team’s network.
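Ticket generation can be sketched as follows. The flag names are assumptions based on the ockam CLI; run.sh is the source of truth for the exact invocation:

```shell
# One single-use ticket for the Consumer and one for the Producer,
# each expiring after 10 minutes (flag names are illustrative).
ockam project ticket --usage-count 1 --expires-in 10m > consumer.ticket
ockam project ticket --usage-count 1 --expires-in 10m > producer.ticket
```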
- In a typical production setup, an administrator or provisioning pipeline generates enrollment tickets and gives them to nodes that are being provisioned. In our example, the run function acts on your behalf as the administrator of the Ockam project: it creates the Kafka relay using the pre-baked Ockam Confluent addon and provisions the nodes in the Application Team’s network, passing them their tickets through environment variables.
- For the Application Team, the run function takes the enrollment tickets, sets them as values of environment variables, passes along the Confluent authentication variables, and invokes docker-compose to create the Application Team’s network.
# Create a dedicated and isolated virtual network for application_team.
networks:
  application_team:
    driver: bridge
- The Application Team’s docker-compose configuration is used when run.sh invokes docker-compose. It creates an isolated virtual network for the Application Team. In this network, docker-compose starts a Kafka Consumer container and a Kafka Producer container.
- The Kafka Consumer node container is created using this dockerfile and this entrypoint script. The Consumer's enrollment ticket from run.sh is passed to the container via an environment variable.
- When the Kafka Consumer node container starts in the Application Team’s network, it runs its entrypoint. The entrypoint enrolls with your project and then calls the Ockam kafka-consumer command, which starts a Kafka inlet that listens for traffic on localhost port 9092 and forwards it through the Ockam relay.
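The entrypoint's two Ockam steps can be sketched like this; the command shapes and variable name are assumptions, and the actual entrypoint script in the repo is authoritative:

```shell
# Redeem the single-use enrollment ticket to become a project member.
ockam project enroll "$ENROLLMENT_TICKET"

# Start a Kafka inlet on localhost:9092 that encrypts and decrypts
# traffic and forwards it through the Ockam relay.
ockam kafka-consumer create
```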
- At the end, the entrypoint executes the command specified in the docker-compose configuration, which launches a Kafka consumer that waits for messages on the demo topic. Received messages are printed out.
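That final command is a standard Kafka console consumer pointed at the local inlet. A minimal sketch using Kafka's stock tooling (the container image in the example may wrap this differently):

```shell
# Consume from the demo topic through the local Ockam inlet on port 9092.
kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic demo
```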
- In the Producer container, the process is analogous: once the Ockam kafka-producer inlet is set up, the command in the docker-compose configuration launches a Kafka producer that sends messages.
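The producer side, sketched with Kafka's stock console tooling (again, the example's image may invoke it differently):

```shell
# Produce a message to the demo topic through the Producer's local inlet.
echo "Hello from the producer" | \
  kafka-console-producer.sh --bootstrap-server 127.0.0.1:9092 --topic demo
```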
- You can view the topic in the Confluent Cloud console to see that the messages are encrypted as they are sent by the Producer.
We sent end-to-end encrypted messages through Confluent Cloud.
Messages are encrypted with strong forward secrecy as soon as they leave a Producer, and only the intended Consumer can decrypt those messages. Confluent Cloud and other Consumers can only see encrypted messages.
All communication is mutually authenticated and authorized. Keys and credentials are automatically rotated. Access can be easily revoked.
To delete all containers and images:
./run.sh cleanup