Building an Apache Kafka data processing Java application using the AWS CDK

Piotr Chotkowski, Cloud Application Development Consultant, AWS Professional Services

Using a Java application to process data queued in Apache Kafka is a common use case across many industries. Event-driven and microservices architectures, for example, often rely on Apache Kafka for data streaming and component decoupling. You can use it as a message queue or an event bus, as well as a way to improve the resilience and reproducibility of events occurring inside the application.

In this post, I walk you through the process of creating a simple end-to-end data processing application using AWS tools and services as well as other industry-standard techniques. We start with a brief architecture overview and an infrastructure definition. Then you see how, with just a few lines of code, you can set up an Apache Kafka cluster using Amazon Managed Streaming for Apache Kafka (Amazon MSK) and the AWS Cloud Development Kit (AWS CDK). Next, I show you how to shape your project structure and package your application for deployment. We also look at the implementation details: how to create Kafka topics in an Amazon MSK cluster, and how to send and receive messages from Apache Kafka using services such as AWS Lambda and AWS Fargate.

I use the AWS CDK to automate infrastructure creation and application deployment. The AWS CDK is an open-source software development framework for defining your cloud application resources using familiar programming languages. For more information, see the Developer Guide, the AWS CDK Intro Workshop, and the AWS CDK Examples GitHub repo.

All the code presented in this post is open sourced and available on GitHub.

The following diagram illustrates our overall architecture.

Triggering the TransactionHandler Lambda function publishes messages to an Apache Kafka topic. The consumer application is packaged in a container and deployed to ECS Fargate; it consumes messages from the Kafka topic, processes them, and stores the results in an Amazon DynamoDB table. The KafkaTopicHandler Lambda function is called once during deployment to create the Kafka topic. Both the Lambda functions and the consumer application publish their logs to Amazon CloudWatch.

To follow along with this post, you need the AWS Command Line Interface (AWS CLI) version 2 as a prerequisite.

Project structure and infrastructure definition

The project consists of three main parts: the infrastructure (including the Kafka cluster and Amazon DynamoDB), a Spring Boot Java consumer application, and Lambda producer code.

Let's start with exploring the infrastructure and deployment definition. It's implemented using a set of AWS CDK stacks and constructs. At the time of writing, the AWS CDK supports Python, TypeScript, and Java, and you can use it with other languages if you prefer. I've chosen TypeScript here mainly because of personal preference. For more information, see Working with the AWS CDK.

Let's look at the project directory structure. All AWS CDK stacks are located in the amazon-msk-java-app-cdk/lib directory. In amazon-msk-java-app-cdk/bin, you can find the main AWS CDK app where all of the stacks are instantiated. amazon-msk-java-app-cdk/lambda contains the code for TransactionHandler, which publishes messages to a Kafka topic, as well as the code for KafkaTopicHandler, which is responsible for creating the Kafka topic. The business logic for the Kafka consumer, which is a Java Maven project, is in the consumer directory.