💻 Implementing Event-Driven Microservices — Java & Kafka in Action
Now that we’ve covered the design principles of event-driven microservices, it’s time to implement them in Java using Spring Boot and Apache Kafka. This post walks through creating producers and consumers, handling retries, and ensuring messages are processed reliably.
1. 🚀 Setting Up Kafka with Spring Boot
Spring Boot provides excellent support for Kafka via spring-kafka. First, add the dependency:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.0.0</version>
</dependency>
Then configure the producer and consumer in application.yml:
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: orders-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
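One gotcha worth flagging with JsonDeserializer: by default it refuses to deserialize classes whose packages it does not trust, so the package containing your event classes must be whitelisted. A minimal addition to the consumer config (the package name below is an assumed example — use your own):

```yaml
spring:
  kafka:
    consumer:
      properties:
        # Package of OrderCreatedEvent; adjust to your project (assumed name)
        spring.json.trusted.packages: "com.example.orders.events"
```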
2. 📨 Creating a Kafka Producer
@Service
public class OrderProducer {

    private final KafkaTemplate<String, OrderCreatedEvent> kafkaTemplate;

    public OrderProducer(KafkaTemplate<String, OrderCreatedEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publishOrder(OrderCreatedEvent event) {
        // In spring-kafka 3.x, send() returns a CompletableFuture
        // (the old ListenableFuture.addCallback API was removed)
        kafkaTemplate.send("orders-topic", event.getOrderId(), event)
                .whenComplete((result, ex) -> {
                    if (ex == null) {
                        System.out.println("Event sent: " + event.getOrderId());
                    } else {
                        System.err.println("Send failed: " + ex.getMessage());
                    }
                });
    }
}
✅ Notes:
- Use the order ID as the message key so all events for one order land on the same partition and keep their ordering.
- Attach a completion callback so send success and failure are both observed.
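The callback pattern above can be exercised without a broker. In spring-kafka 3.x, KafkaTemplate.send() returns a CompletableFuture, so handlers attach via whenComplete exactly as in this plain-Java sketch (class and method names are hypothetical):

```java
import java.util.concurrent.CompletableFuture;

// Broker-free sketch of the whenComplete callback pattern used
// when handling the CompletableFuture returned by KafkaTemplate.send().
public class SendCallbackDemo {

    // Describes the outcome once the "send" future settles.
    public static String describe(CompletableFuture<String> sendResult) {
        StringBuilder outcome = new StringBuilder();
        sendResult.whenComplete((value, ex) -> {
            if (ex == null) {
                outcome.append("sent: ").append(value);
            } else {
                outcome.append("failed: ").append(ex.getMessage());
            }
        });
        return outcome.toString();
    }
}
```

Note the demo only returns a result synchronously because the futures passed in are already completed, so the callback runs on the calling thread; a real send completes asynchronously on a Kafka client thread.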
3. 📩 Creating a Kafka Consumer
@Component
public class OrderConsumer {
@KafkaListener(topics = "orders-topic", groupId = "orders-group")
public void consume(OrderCreatedEvent event) {
System.out.println("Received order: " + event.getOrderId());
// process event: update inventory, billing, etc.
}
}
✅ Best Practices:
- Consumers should be idempotent to handle retries.
- Use dead-letter topics for failed messages.
- Prefer async processing when possible for long-running tasks.
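The idempotency point deserves a concrete shape. A minimal sketch of deduplicating by event ID (the in-memory set is a stand-in for illustration; a real service would record processed IDs in a database or cache shared across consumer instances):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an idempotent handler: a redelivered event with an
// already-seen ID is skipped instead of being processed twice.
public class IdempotentHandler {

    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    /** Returns true if the event was processed, false if it was a duplicate. */
    public boolean handle(String eventId, Runnable businessLogic) {
        if (!processedIds.add(eventId)) {
            return false; // already seen: ignore the retry/redelivery
        }
        businessLogic.run();
        return true;
    }
}
```

The listener would call handle(event.getOrderId(), ...) so that Kafka's at-least-once delivery becomes effectively-once processing.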
4. 🔁 Handling Retries and Failures
Spring Kafka handles retries via an error handler on the listener container. In spring-kafka 3.x, DefaultErrorHandler with a back-off policy replaces the removed SeekToCurrentErrorHandler:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, OrderCreatedEvent>
        kafkaListenerContainerFactory(ConsumerFactory<String, OrderCreatedEvent> factory) {
    ConcurrentKafkaListenerContainerFactory<String, OrderCreatedEvent> factoryConfig =
            new ConcurrentKafkaListenerContainerFactory<>();
    factoryConfig.setConsumerFactory(factory);
    // Retry up to 3 times with a fixed 1-second interval, then recover
    factoryConfig.setCommonErrorHandler(new DefaultErrorHandler(
            (record, ex) -> System.err.println("Failed record: " + record),
            new FixedBackOff(1000L, 3)
    ));
    return factoryConfig;
}
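The dead-letter-topic practice from section 3 plugs into this same error handler: spring-kafka ships a DeadLetterPublishingRecoverer that republishes an exhausted record to a .DLT topic. A hedged sketch of the wiring, assuming the template and bean names from this post:

```java
@Bean
public DefaultErrorHandler errorHandler(KafkaTemplate<String, OrderCreatedEvent> template) {
    // After retries are exhausted, publish the failed record to
    // "orders-topic.DLT" (the recoverer's default naming convention)
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
    return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 3));
}
```

A separate consumer group can then drain the DLT for inspection or manual replay.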
5. 🧪 Testing Event-Driven Services
Use embedded Kafka for integration tests:
@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = { "orders-topic" })
class OrderServiceTest {
@Autowired
private OrderProducer producer;
@Autowired
private OrderConsumer consumer;
@Test
void testOrderFlow() {
OrderCreatedEvent event = new OrderCreatedEvent("123", "user-1", List.of());
producer.publishOrder(event);
// verify consumer processed the event
}
}
- Test both producer and consumer end-to-end.
- Use EmbeddedKafka for isolated, repeatable tests.
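To make the "verify consumer processed the event" step concrete, a common pattern is a CountDownLatch that the listener counts down and the test awaits, instead of sleeping for a fixed time. A broker-free sketch of the pattern (names are hypothetical; in the real service, handle() would be the @KafkaListener-annotated consume() method):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Test-friendly consumer: the test awaits the latch to know
// the message arrived, rather than polling or sleeping.
public class LatchingOrderConsumer {

    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile String lastOrderId;

    // Stands in for the @KafkaListener consume(event) method
    public void handle(String orderId) {
        lastOrderId = orderId;
        latch.countDown();
    }

    public boolean awaitMessage(long seconds) {
        try {
            return latch.await(seconds, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public String lastOrderId() {
        return lastOrderId;
    }
}
```

The test then asserts awaitMessage(...) returns true and checks the captured order ID, which fails fast with a clear timeout instead of a flaky sleep.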
6. 🎉 Putting It All Together
With a producer, consumer, retries, and proper testing, your event-driven microservices are ready to process events reliably. In the next post, we will explore observability and reliability in production systems, including metrics, logging, tracing, and fault injection.
Next in this series: Observability & Reliability in Event-Driven Microservices
Labels: Java, Kafka, Microservices, Event-Driven Architecture, Distributed Systems, Spring Boot, Implementation, Testing, Retry, Consumer, Producer