Businesses of all types are increasingly adopting automated systems to perform their functions effectively. With ever-growing and changing business requirements, there is a need for modern, robust applications that can constantly adapt to business needs. Building such applications successfully starts with choosing the right architecture, and one of the most effective modern options is event-driven architecture. In this article, we will explain in detail how it works, the ways to implement it, and its main advantages.
Why legacy architectures do not meet modern requirements
Modern applications are very different from those built ten years ago. Data can now be stored in the cloud, partitioned across multiple stores, and moved in real time between different parts of the globe. Modern applications need to run continuously and seamlessly and be elastic, global, and cloud-based.
Legacy architectures cannot efficiently perform the tasks that come with today’s business requirements. Today, ever-growing businesses use microservices, IoT, event hubs, cloud computing, machine learning, and more to meet their needs.
However, to build modern, robust, scalable applications that can manage large amounts of data in real-time, you need to start from the ground up and choose the right modern application development architecture. Event-driven architecture is the perfect choice for this.
Applications built on the event-driven architecture are more flexible, scalable, contextual, and responsive.
What is an event?
An event is any significant occurrence or change in the state of some system. Events exist everywhere and occur all the time. They can be triggered by the user (such as a mouse click or keystroke), an external source (such as the output of a sensor), or originate from within the system (such as when a program is loaded).
Examples of events are client requests, sensor readings, packages delivered to the destination, denial of unauthorized access attempts, sending an email to a user, blocking an account, etc.
An application built on an event-driven architecture sends information about an event to all interested people and systems as soon as the event occurs. This allows the business to react faster and benefit from the event: for example, reaching potential users before competitors do, adjusting production, or reallocating resources. This makes event-driven architecture a better fit than approaches in which the system only periodically polls for updates.
An event has two parts:
- The event header includes information such as the name of the event, its timestamp, and its type.
- The event body provides information about the detected state change.
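The two-part structure above can be sketched in code. This is an illustrative shape only; the field names and the "OrderPlaced" event are assumptions for the example, not a standard format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical event shape: a header carrying the name, timestamp, and type,
# plus a body describing the detected state change.
@dataclass
class EventHeader:
    name: str
    type: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class Event:
    header: EventHeader
    body: dict

evt = Event(
    header=EventHeader(name="OrderPlaced", type="order"),
    body={"order_id": 42, "status": "placed"},
)
print(evt.header.name)      # OrderPlaced
print(evt.body["status"])   # placed
```

Keeping the header and body separate lets routing infrastructure inspect only the header without deserializing the full payload.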
Event flow levels
An event-driven architecture can be built on four logical layers. Below we will describe in detail what actions occur on each layer.
Event Producer
The event producer captures the fact of an event and represents it as an event message. Event producers can be users, various programs, or physical sensors. An important task when designing and implementing an event producer is therefore to transform data collected from a diverse set of sources into a single, standardized event format.
Event Channel
The second logical layer is the event channel: a mechanism for propagating information from an event producer to an event handler or sink. An event channel could be a TCP/IP connection or any type of input file.
Multiple event channels can be open at the same time and read asynchronously. Events are then stored in a queue, waiting to be processed by the event handling mechanism.
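A minimal sketch of this queue-and-asynchronous-consumption idea, using an in-process queue as a stand-in for a real event channel (the event names are illustrative):

```python
import queue
import threading

# An in-memory "event channel": producers put events on a queue, and a
# consumer thread drains it asynchronously.
channel = queue.Queue()
processed = []

def consumer():
    while True:
        event = channel.get()
        if event is None:          # sentinel value: stop consuming
            break
        processed.append(f"handled:{event}")

t = threading.Thread(target=consumer)
t.start()

for name in ["sensor_reading", "mouse_click"]:
    channel.put(name)              # events wait in the queue until handled
channel.put(None)                  # signal the consumer to shut down
t.join()
print(processed)
```

In production the queue would be a durable broker rather than an in-process structure, but the decoupling is the same: the producer returns as soon as the event is enqueued.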
Event Handling Mechanism
The event handling mechanism performs event identification, selection, and execution of the appropriate reaction or a series of reactions.
Downstream Event-Driven Activity
This logical level shows the consequences of executing an event. This can happen in different ways, for example, an application can display some kind of notification on the screen.
Depending on the level of automation provided by the event handling mechanism, this level may not be required.
There are several different ways to handle events: simple, streaming, complex, and online. They can be used individually, but in a mature event-driven architecture they are often combined. Let's take a closer look at each of these event-handling methods.
Simple Event Handling
When an event occurs that changes a particular measurable state of the system, simple event handling is performed. The occurrence of an observable event triggers follow-up actions.
Simple event handling is typically used to control the flow of work in real-time. This reduces delay times and costs.
Event Stream Processing
When an event stream is processed, both ordinary and notable events occur. Ordinary events are screened for significance and streamed to information subscribers. Event stream processing is typically used to manage the flow of real-time information within an enterprise, which allows the company to make timely decisions. Event stream processing becomes possible when a streaming database is used. Including a streaming database enables scalable architectures because event producers don't need to know about consumers; both producers and consumers communicate only with the streaming database. Examples of streaming databases include Memphis.dev and Apache Kafka. Note that a streaming database can also act as an event sourcing database, since event sourcing only involves reading stored events while streaming additionally involves broadcasting them.
Complex Event Processing
Complex event processing is used to detect and respond to business anomalies, threats, and opportunities. It allows you to evaluate a combination of events and then make further decisions.
Complex event processing looks at patterns across notable and ordinary events to infer that a complex event has occurred. This way of handling events requires sophisticated event interpreters, event pattern definition and matching, and correlation techniques.
Online Event Processing
This method uses asynchronous distributed event logs to process complex events and manage persistent data. It allows you to compose related events of the same scenario in dissimilar systems. Online event processing provides high consistency and flexible distribution patterns with high scalability.
What is event-driven architecture?
Event-driven architecture is a modern approach to software application design. It uses events to trigger and communicate between decoupled services. The core structure of a solution in this architecture is the collection, transmission, processing, and storage of events.
An event-driven architecture is loosely coupled because event producers don’t know which event consumers are listening for the event, and the event doesn’t know what the consequences of raising it are. It provides minimal communication between services, making it a good choice for modern distributed application architectures. It is often used in modern applications built with microservices.
Unlike traditional architectures, which treat data as packets of information inserted and updated at intervals and which respond to user-initiated requests rather than to new information, event-driven architecture allows applications to respond to events as they occur.
Event-driven architecture is versatile and works well with unpredictable non-linear events. Therefore, it allows you to create and react to a large number of events in real-time.
This architecture is used by a lot of modern applications, such as customer interaction systems that need to use real-time customer data.
How does it work?
The event-driven architecture consists of three key components:
- Event producers detect events and publish event data.
- Event routers filter and send events to consumers.
- Event consumers receive an event and, if necessary, respond to it.
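The three components above can be sketched with a minimal in-process router. The class and event names are illustrative assumptions, not a real library's API.

```python
# Producers emit events, the router filters and dispatches them, and
# consumers react. The producer never learns who consumed the event.
class EventRouter:
    def __init__(self):
        self.subscribers = {}      # event type -> list of consumer callbacks

    def subscribe(self, event_type, consumer):
        self.subscribers.setdefault(event_type, []).append(consumer)

    def publish(self, event_type, payload):
        # Fan the event out to every consumer registered for this type.
        for consumer in self.subscribers.get(event_type, []):
            consumer(payload)

router = EventRouter()
log = []
router.subscribe("order_placed", lambda p: log.append(f"billing:{p}"))
router.subscribe("order_placed", lambda p: log.append(f"shipping:{p}"))

router.publish("order_placed", "order-1")   # delivered to both consumers
router.publish("user_signed_up", "u-9")     # no subscribers: dropped
print(log)
```

Note that adding a third consumer requires no change to the producer, which is exactly the loose coupling the architecture promises.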
Event producers do not know about the consumers or about the results of event processing. Once an event is detected, it is propagated from the event producer to event consumers via event channels, and the event router processes it asynchronously.
When an event has occurred, event consumers are informed about it. They may handle the event or may only be affected by it. The event router responds to the event and dispatches it to the appropriate consumers.
Event-driven architecture models
An event-driven architecture can use a publish/subscribe model or an event stream model.
Publish/Subscribe Model
When an event is published, it is sent to each subscriber by messaging infrastructure that keeps track of subscriptions. Once received, the event cannot be replayed, and new subscribers do not see it.
Event Streaming Model
In this model, events are logged. They are strictly ordered and durable. Clients do not subscribe to a stream but can read events from any part of the stream. Unlike the publish/subscribe model, the client can join at any time and can replay events.
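The streaming model can be sketched as an append-only log that clients read from arbitrary offsets. This is a deliberately simplified, in-memory illustration; real logs (e.g. Kafka topics) are durable and partitioned.

```python
# Event-streaming sketch: a strictly ordered, append-only log. Clients
# read from any offset, so late joiners can replay history.
class EventLog:
    def __init__(self):
        self._events = []                  # durable storage in a real system

    def append(self, event):
        self._events.append(event)
        return len(self._events) - 1       # offset of the new event

    def read_from(self, offset):
        return self._events[offset:]

log = EventLog()
for e in ["created", "updated", "deleted"]:
    log.append(e)

print(log.read_from(0))   # a late-joining client replays everything
print(log.read_from(2))   # another client resumes from offset 2
```

Because each consumer tracks its own offset, the same log serves both real-time readers and full replays, which the publish/subscribe model cannot do.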
There are several variations of event handling:
- Simple event handling. The event immediately triggers an action in the consumer.
- Complex event processing. The consumer processes a series of events, looks for patterns in them, and performs actions when certain conditions are met. Technologies such as Azure Stream Analytics or Apache Storm are used for this.
- Event stream processing. Stream processors process or transform a stream of events, using streaming platforms such as Memphis.dev, Azure IoT Hub, or Apache Kafka.
When implementing an application based on an event-driven architecture, you should use specific design patterns. Below, we will look at the main patterns associated with event-driven architecture.
Event sourcing
This pattern is used in applications that need to keep a history of business facts.
Traditional domain-specific implementations keep only the last committed state, replacing the previous one. Thus, if you need to know how data has changed over time, you have to add historical records; to do this, you create a log table.
An event source stores the state of an object as a sequence of state-changing events that are ordered in time. When the state of the system changes, the application fires a state change notification event. The state change event is stored in the event log in chronological order. The event log is often used by business analysts to gain insight into the operation of a business.
The event log stores three pieces of information:
- The type of event or collection.
- Event sequence number.
- Data as a serialized object.
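The defining property of event sourcing is that current state is not stored directly but rebuilt by replaying the ordered event log. A small sketch under assumed event names (a bank-account example invented for illustration):

```python
# Event-sourcing sketch: state is derived by folding over the event log
# in sequence order, never read from a mutable "current state" row.
event_log = [
    {"seq": 1, "type": "deposited", "data": {"amount": 100}},
    {"seq": 2, "type": "withdrawn", "data": {"amount": 30}},
    {"seq": 3, "type": "deposited", "data": {"amount": 5}},
]

def rebuild_balance(events):
    balance = 0
    for evt in sorted(events, key=lambda e: e["seq"]):
        if evt["type"] == "deposited":
            balance += evt["data"]["amount"]
        elif evt["type"] == "withdrawn":
            balance -= evt["data"]["amount"]
    return balance

print(rebuild_balance(event_log))  # 75
```

Because every state change survives as an event, analysts can answer "what was the balance last Tuesday?" by replaying only the events up to that point.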
Strangler
If you need to port a monolithic application to microservices, you should use this pattern. It allows you to gradually replace the feature set of the monolith with microservices without rewriting the application from scratch, while keeping the microservices and the application running in parallel.
One problem that arises when implementing this pattern is to determine where writes and reads occur and how data should be replicated between contexts.
Decompose by subdomain
When implementing an application, you will have a question about how to divide it into microservices. Therefore, a good option for identifying and classifying business functions and, accordingly, microservices is to use Domain-Driven Design subdomains.
DDD treats an application as a domain, which consists of several subdomains. Each subdomain corresponds to a separate part of the application.
Examples of subdomains are product catalog, inventory management, order management, delivery management, etc.
Database per service
The database-per-service pattern allows each service to store its data privately and expose it only through its API. The services are loosely coupled, which limits the impact on other services when a database schema changes. The database technology is selected based on business requirements.
There are several different ways to keep the persistent data of service private. As an example, if you are using a relational database then you can use one of the following options:
- Private-tables-per-service. Each service owns a set of tables that only that service should have access to.
- Schema-for-each-service. Each service has a database schema that is private to that service.
- Database-server-per-service. Each service has its own database server.
In addition, you can create additional barriers. For example, assign different database user IDs to each service and use a database access control mechanism such as grants.
Command Query Responsibility Segregation (CQRS)
A domain model encapsulates and structures domain data and maintains the correctness of that data as it changes. However, it can become overwhelmed by the management of complex aggregate objects, concurrent updates, and multiple end-to-end views, creating a need to reorganize the model. The CQRS pattern can be used to refactor a model and decouple different aspects of data usage.
The CQRS pattern allows you to separate data read operations from data update operations. An operation can read data or it can write data, but not both. This makes it easier to implement read and write operations and makes them independent.
The full CQRS template uses separate databases and APIs for reading and writing.
You can implement this pattern step by step:
- Stage 0. Typical application data access.
- Stage 1. Separate APIs for reading and writing.
- Stage 2. Separate reading and writing models.
- Stage 3. Separate databases for reading and writing.
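A compact sketch of the separated models (Stage 2 above). The class names and the synchronous projection are simplifying assumptions; in a full implementation the read model is usually updated asynchronously from published events.

```python
# CQRS sketch: commands go through a write model; queries hit a separate,
# denormalized read model that is updated when the write side changes.
class ReadModel:
    def __init__(self):
        self._summaries = {}

    def project(self, order_id, total):
        # Precompute exactly the view the query side needs.
        self._summaries[order_id] = f"order {order_id}: ${total}"

    def get_summary(self, order_id):          # query: reads only
        return self._summaries.get(order_id)

class WriteModel:
    def __init__(self, read_model):
        self._orders = {}
        self._read_model = read_model

    def place_order(self, order_id, total):   # command: writes only
        self._orders[order_id] = {"total": total, "status": "placed"}
        self._read_model.project(order_id, total)  # sync for brevity

reads = ReadModel()
writes = WriteModel(reads)
writes.place_order("o-1", 120)
print(reads.get_summary("o-1"))   # order o-1: $120
```

At Stage 3, `ReadModel` and `WriteModel` would each own their own database, and the projection step would become an event consumer.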
Saga
The Saga pattern allows you to split long-running transactions into sub-transactions that can be interleaved with other transactions. It addresses the problem that, with one database per microservice, a long-running transaction cannot span services atomically.
A saga is a sequence of local transactions in which each transaction updates data within a single service, and each next step is triggered by the completion of the previous one.
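Since there is no global rollback across services, a saga undoes completed steps with compensating actions when a later step fails. A sketch under assumed step names:

```python
# Saga sketch: run local transactions in order; on failure, execute the
# compensating actions of completed steps in reverse order.
def run_saga(steps):
    completed, log = [], []
    for name, action, compensate in steps:
        try:
            action()
            log.append(f"done:{name}")
            completed.append((name, compensate))
        except Exception:
            log.append(f"failed:{name}")
            for done_name, undo in reversed(completed):
                undo()                         # compensating transaction
                log.append(f"compensated:{done_name}")
            break
    return log

def charge_card():
    raise RuntimeError("payment declined")     # simulated failure

steps = [
    ("reserve_stock", lambda: None, lambda: None),
    ("charge_card",   charge_card,  lambda: None),
]
print(run_saga(steps))
# ['done:reserve_stock', 'failed:charge_card', 'compensated:reserve_stock']
```

In an event-driven setting, each "action" would be a local transaction in one service, triggered by the completion event of the previous step.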
Dead letter queue
When using event-driven microservices, it is sometimes necessary to call a service over HTTP or RPC, and the call may fail. In that case, you can use the dead-letter queue pattern to retry and handle failed calls.
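A minimal sketch of the retry-then-park behavior; the handler and message values are invented for illustration, and a real dead-letter queue would be a broker-managed queue rather than a Python list.

```python
# Dead-letter-queue sketch: retry a failing call a few times; if it still
# fails, move the message to a DLQ for later inspection instead of losing it.
def process_with_dlq(messages, handler, retries=3):
    dead_letters = []
    for msg in messages:
        for _attempt in range(retries):
            try:
                handler(msg)
                break                      # success: stop retrying
            except Exception:
                continue
        else:                              # all retries exhausted
            dead_letters.append(msg)
    return dead_letters

def flaky_handler(msg):
    if msg == "bad":
        raise ConnectionError("downstream service unavailable")

print(process_with_dlq(["ok", "bad", "ok"], flaky_handler))  # ['bad']
```

The key property is that failures never silently drop a message: the DLQ preserves it, along with the chance to fix the downstream issue and replay.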
Transactional outbox
A service often needs to update the database and publish a message at the same time. How can this be done reliably, avoiding data inconsistencies and errors? The transactional outbox pattern is suited to this.
To implement this pattern, the service that updates the database also records the message: for relational databases, it inserts the message into an outbox table as part of the local transaction; for non-relational ones, it adds the message to an attribute of the record being updated. A separate message relay process then publishes the messages to the message broker.
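The relational variant can be sketched with SQLite: the business row and the outbox row commit in one transaction, so either both exist or neither does. Table and column names here are illustrative assumptions.

```python
import sqlite3

# Transactional-outbox sketch: business write and outgoing message share
# one local transaction; a relay later drains the outbox and publishes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT)")

with conn:  # one atomic transaction: both inserts commit or neither does
    conn.execute("INSERT INTO orders VALUES (?, ?)", ("o-1", "placed"))
    conn.execute(
        "INSERT INTO outbox (payload) VALUES (?)", ("order_placed:o-1",)
    )

# The message relay (a separate process in practice) reads and clears the
# outbox, publishing each payload to the broker.
published = [row[0] for row in conn.execute("SELECT payload FROM outbox")]
conn.execute("DELETE FROM outbox")
print(published)  # ['order_placed:o-1']
```

If the service crashed before the transaction committed, neither the order nor the message would exist, so the broker and the database can never disagree.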
When should you use event-driven architecture?
Event-driven architecture is often used in modern applications built with microservices and in applications with decoupled components, as it increases flexibility and the speed of data movement.
To use this architecture effectively, your application must meet several important criteria.
- To ensure that each event is processed, you must have a reliable event source that can arrange for the delivery of each event.
- Since the number of incoming events can be very large, to improve the efficiency of the application, you need to be able to process them asynchronously.
- In order to restore the system state, your event stream must be deduplicated and ordered.
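The third criterion can be made concrete with a small sketch: duplicate deliveries are dropped by event id and the remainder is re-sorted by sequence number before replay. The field names are assumptions for the example.

```python
# Prepare an event stream for state reconstruction: deduplicate by event
# id, then restore the original order using each event's sequence number.
def prepare_for_replay(events):
    seen, unique = set(), []
    for evt in events:
        if evt["id"] not in seen:          # drop duplicate deliveries
            seen.add(evt["id"])
            unique.append(evt)
    return sorted(unique, key=lambda e: e["seq"])

raw = [
    {"id": "b", "seq": 2, "name": "updated"},
    {"id": "a", "seq": 1, "name": "created"},
    {"id": "b", "seq": 2, "name": "updated"},   # duplicate delivery
]
print([e["name"] for e in prepare_for_replay(raw)])  # ['created', 'updated']
```

Without both steps, replaying the log could apply the same change twice or apply changes out of order, producing a state that never actually existed.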
There are types of software where an event-driven architecture makes the most sense. Below we will list some of them.
- An event-driven architecture is effective for coordinating systems across teams working and deploying across regions and accounts. You can use an event router to transfer data between systems. This allows you to design, scale, and deploy services independently of other teams.
- You can use an event-driven architecture to monitor the status of resources and send notifications. It allows you to monitor and receive alerts about any anomalies, changes, and updates.
- This architecture can be used when a lot of systems need to operate in response to an event. It allows you to fork and process events in parallel without having to write your code to send to each consumer. The router will send the event to systems, each of which can process the event in parallel for different purposes.
- An event-driven architecture can be used to exchange information between systems that run on different stacks without coupling. The event router establishes indirection and communication between systems so they can exchange messages and data while remaining independent.
- This architecture is effective when multiple subsystems need to process the same events.
- Using an event-driven architecture allows you to create applications that can process real-time data with minimal latency.
- If your system must perform complex event handling, then you should choose this architecture. Examples of such event handling might be pattern matching or aggregation over time windows.
- The event-driven architecture allows the implementation of applications that transfer large amounts of data at high data rates, such as the Internet of Things.
Benefits of using event-driven architecture
Using an event-driven architecture in systems and applications allows organizations to improve the scalability and responsiveness of applications and access to the data and context needed to make effective decisions.
In this architecture, events are captured as they occur, allowing event producers and event consumers to exchange information in real-time.
You can create a flexible system that can adapt to change and make real-time decisions with an event-driven architecture. This allows you to make manual and automated decisions using all available data that reflects the current state of your systems.
There are also some other advantages of using event-driven architecture. Let’s list the main ones.
- You can decouple your services so that they communicate only through the event router. This lets you build interoperable services that don't depend on each other: if one service goes down, the rest keep running, and the event router acts as a buffer that can absorb workload spikes.
- The event router allows you to automatically filter and send events to consumers. It also eliminates the need for tight coordination between producer and consumer services. This reduces the amount of code that needs to be written to poll, filter, and route events, as well as speeds up the development process.
- The event router can be used to audit your application and define policies. These policies can determine which users and systems have permission to access your data and can publish and subscribe to the router. You can also encrypt events.
- Using an event-driven architecture allows you to reduce network bandwidth consumption, use less CPU and other resources since all events occur on demand, and you do not need to pay for continuous polling to check for an event. This allows you to cut costs.
- In this architecture, producers and consumers are not coupled. This also makes it easy to add new consumers to the system, since they simply subscribe to the router and require no point-to-point integration.
In this article, we discussed why traditional architectures do not meet the requirements of modern business applications, and how event-driven architecture has emerged to offer new opportunities for technology and business.
In this architecture, we drop the event-command pattern and simply emit events. This approach allows you to create flexible systems that are easily scalable, can work around the clock without interruption, and are easy to evolve.