Event Driven Architecture
Event Driven Architecture (EDA)
Event Driven Architecture (EDA) is an architectural pattern that enables applications to respond to external events in real time. It builds on the traditional event-driven programming model, in which an application is triggered to take action when an event occurs. An event is any change in state or activity that matters to the application, such as an order being placed or a sensor reading crossing a threshold. In EDA, events are typically generated by an external source and broadcast, or published, to all interested components.
EDA provides a number of advantages over traditional request-driven architectures. It enables applications to react to events as they happen, allowing faster and more dynamic responses to changing conditions. It also supports scalability, because components process events as they arrive rather than being constantly polled or monitored, and it reduces the need for manual intervention, since applications can respond to events autonomously.

The key elements of EDA are event producers, event consumers, and event brokers. Event producers generate events and publish them to interested components. Event consumers receive and process those events. Event brokers are the middleware components that route events from producers to the appropriate consumers.
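To make the producer, consumer, and broker roles concrete, here is a minimal sketch in Python using a made-up in-memory broker. Real systems would use one of the brokers listed later in this article (Kafka, RabbitMQ, NATS, and so on); the class, topic, and handler names here are purely illustrative, but the division of responsibilities is the same.

```python
# Minimal in-memory sketch of the three EDA roles: producer, broker, consumer.
# Class and topic names are made up for the example.
from collections import defaultdict
from typing import Callable, Dict, List


class InMemoryBroker:
    """Routes published events to every consumer subscribed to the topic."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)


# Consumer: reacts to events it is interested in.
def send_confirmation_email(event: dict) -> None:
    print(f"Emailing confirmation for order {event['order_id']}")


broker = InMemoryBroker()
broker.subscribe("order.placed", send_confirmation_email)

# Producer: publishes an event when something happens,
# without knowing anything about who consumes it.
broker.publish("order.placed", {"order_id": 42, "total": 99.90})
```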
EDA is becoming increasingly popular as organizations move toward more distributed, service-oriented architectures, because loosely coupled components that communicate through events are easier to evolve and scale independently. As organizations continue to look for ways to respond to changing conditions faster and more efficiently, event driven architecture is likely to become an increasingly important part of their technology stack.
The pros of an event-driven architecture include real-time updates and notifications, loose coupling that makes it relatively easy to add new consumers, and flexibility. The cons include the difficulty of maintaining the event log, the potential for data inconsistency between consumers, and the operational complexity of scaling and debugging asynchronous event flows.
Event Driven Architecture Technologies
AWS Lambda: https://aws.amazon.com/lambda/
Azure Functions: https://azure.microsoft.com/en-us/services/functions/
Google Cloud Functions: https://cloud.google.com/functions/
Apache OpenWhisk: https://openwhisk.apache.org/
Apache Flink: https://flink.apache.org/
Apache Beam: https://beam.apache.org/
Apache Kafka Streams: https://kafka.apache.org/documentation/streams/
Confluent: https://www.confluent.io/apache-kafka-vs-confluent/
Akka: https://akka.io/
RabbitMQ: https://www.rabbitmq.com/
NATS: https://nats.io/
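As a concrete example of the publish/consume flow with one of the brokers listed above, the sketch below uses RabbitMQ through the pika client. It assumes a broker running on localhost with default credentials, and the queue name is made up for the example; treat it as an illustration of the pattern rather than a production setup.

```python
# Hypothetical RabbitMQ example using the pika client (assumes a local broker).
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders")  # example queue name

# Producer side: publish an event.
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=json.dumps({"order_id": 42, "total": 99.90}),
)

# Consumer side: process events as they arrive.
def handle_order(ch, method, properties, body):
    event = json.loads(body)
    print(f"Processing order {event['order_id']}")

channel.basic_consume(queue="orders", on_message_callback=handle_order, auto_ack=True)
channel.start_consuming()  # blocks; in practice run producer and consumer in separate processes
```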
Fully Managed vs. Self-Managed Message Broker Services
MSK (Amazon Managed Streaming for Apache Kafka) is a fully managed message broker service that runs Apache Kafka. It is designed to let developers build and run highly available, secure, and scalable streaming applications without having to manage the underlying infrastructure. Pricing is based on the number of brokers, storage, throughput, and other factors, and can range from around $0.20/hour for smaller configurations to $3.50/hour for larger configurations.
A self-managed message broker is one that the user hosts, configures, monitors, and maintains. This gives the user full control over the broker's configuration and operation, but it usually requires more maintenance, and often ends up more expensive overall, than a managed service like MSK. With a self-managed broker, the user pays directly for the underlying infrastructure, whose cost varies with their requirements, and must also factor in the ongoing cost of monitoring, managing, and maintaining the broker itself.
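From the application's point of view, the difference is mostly operational: the client code is largely the same whether the cluster is MSK or self-managed. Below is a minimal producer sketch using the kafka-python client; the broker addresses and topic name are placeholders, and the assumption that the MSK cluster is reached over TLS reflects a common (not universal) configuration.

```python
# Hypothetical Kafka producer using kafka-python; broker addresses are placeholders.
import json
from kafka import KafkaProducer

# Self-managed cluster: point at your own brokers (plaintext in this sketch).
# producer = KafkaProducer(bootstrap_servers=["broker1:9092", "broker2:9092"])

# Amazon MSK: point at the MSK bootstrap brokers, typically over TLS.
producer = KafkaProducer(
    bootstrap_servers=["b-1.example.kafka.us-east-1.amazonaws.com:9094"],
    security_protocol="SSL",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("orders", {"order_id": 42, "total": 99.90})
producer.flush()
```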
Observability
Datadog is a monitoring and analytics platform for cloud applications and infrastructure. It provides a single platform for collecting, processing, and visualizing metrics and logs from multiple sources, includes powerful alerting capabilities, and is used to monitor application and infrastructure performance in real time.
Splunk is a log analysis tool that can be used to collect, store and analyze log data. It has powerful search capabilities, making it easier to query and analyze data across multiple sources. Splunk is used to identify and troubleshoot system issues, monitor performance, and detect security threats.
Both Splunk and Datadog are popular observability platforms used to monitor and analyze the performance of applications and IT infrastructure. The choice between them ultimately depends on the needs of your organization: Splunk may be the better fit if you need to analyze large amounts of log data, while Datadog may be more suitable for monitoring cloud applications and infrastructure in real time.
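In an event-driven system, one common observability pattern is having each consumer emit custom metrics about how it is keeping up. The sketch below assumes the datadogpy client and a local DogStatsD agent on the default port; the metric and topic names are made up for illustration.

```python
# Hypothetical custom metrics from an event consumer, using the datadogpy client.
# Assumes a DogStatsD agent listening on 127.0.0.1:8125.
import time
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)

def handle_event(event: dict) -> None:
    start = time.time()
    # ... process the event here ...
    statsd.increment("orders.events.processed", tags=["topic:orders"])
    statsd.histogram("orders.events.processing_time_ms", (time.time() - start) * 1000)
```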