We are all familiar with drift between in-app configuration and IaC. We start with a specific configuration, backed by IaC files. Soon after, we face a “drift”: a gap between what is actually configured in our infrastructure and what our files say. The same thing happens with data. A schema starts in a certain shape, but as data ingestion grows and scales across different sources, we get schema drift: a messy, unstructured database, UI crashes, and an analytical layer that keeps failing due to a bad schema.
In this article, we will learn how to deal with this scenario and how to work with dynamic schemas.
A schema defines the structure of a data format.
Keys, values, formats, and types, in combination, result in a defined structure, or simply a schema.
Developers and data engineers, have you ever needed to recreate a NoSQL collection, or an object layout on a bucket, because different documents arrived with different keys or structures?
You probably have.
An unaligned record structure across your data ingestion will crash your visualization, analysis jobs, and backend, and fixing it is a never-ending chase.
Schema drift is the case where your sources change their metadata over time: fields, keys, columns, and types can be added, removed, or altered on the fly.
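To make this concrete, here is a small, self-contained sketch of drift between two records from the same source. It uses plain Python with no external libraries, and the field names and values are invented for illustration:

```python
# Two records from the same upstream source, captured at different times.
order_v1 = {"id": 1, "amount": 99.9, "currency": "USD"}
order_v2 = {"id": "2", "amount": 14.5, "region": "EU"}  # "currency" dropped, "region" added, "id" is now a string

added = set(order_v2) - set(order_v1)
removed = set(order_v1) - set(order_v2)
retyped = {k for k in set(order_v1) & set(order_v2)
           if type(order_v1[k]) is not type(order_v2[k])}

print(added)    # {'region'}
print(removed)  # {'currency'}
print(retyped)  # {'id'}
```

Every downstream consumer that assumed the v1 shape (a numeric `id`, a `currency` column) now has to cope with the v2 shape, which is exactly the chase described above.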
Without handling schema drift, your data flow becomes vulnerable to upstream data source changes. Typical ETL patterns fail when incoming columns and fields change, because they tend to be tightly coupled to those sources. Streaming requires a different toolset.
The following article will explain the existing solutions and strategies for mitigating the challenge and avoiding schema drift, including data versioning using lakeFS.
Confluent Schema Registry provides a serving layer for your metadata. It provides a RESTful interface for storing and retrieving your Avro®, JSON Schema, and Protobuf schemas. It stores a versioned history of all schemas based on a specified subject name strategy, provides multiple compatibility settings, and allows the evolution of schemas according to the configured compatibility settings and expanded support for these schema types. It provides serializers that plug into Apache Kafka® clients. They handle schema storage and retrieval for Kafka messages that are sent in any of the supported formats.
Schema registry overview. Confluent Documentation
Memphis Schemaverse provides a robust schema store and schema management layer on top of the Memphis broker, without standalone compute or dedicated resources. With a unique, modern UI and a programmatic approach, technical and non-technical users can create and define different schemas, attach a schema to multiple stations, and choose whether the schema should be enforced.
Memphis’ low-code approach removes the serialization part as it is embedded within the producer library.
Schemaverse supports versioning, GitOps methodologies, and schema evolution.
Avoiding schema drift means enforcing a particular schema on a topic or station and validating each produced record. This tutorial assumes that you are using Confluent Cloud with a schema registry configured. If not, here is a Confluent article on how to install the self-managed version (without the enhanced features).
To make sure you are not drifting, and that you maintain a single standard schema structure in your topic, you will need to:
1. Copy the ccloud-stack config file to $HOME/.confluent/java.config.
2. Create a topic.
3. Define a schema for the topic’s subject.
4. Enable schema validation over the newly created topic.
5. Configure Avro/Protobuf both in the app and with the registry.
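For step 3, the schema is an Avro definition along these lines. This is the Payment schema used in Confluent’s getting-started examples (the same one that generates the Payment class mentioned below); treat the exact namespace as an assumption:

```json
{
  "namespace": "io.confluent.examples.clients.basicavro",
  "type": "record",
  "name": "Payment",
  "fields": [
    { "name": "id", "type": "string" },
    { "name": "amount", "type": "double" }
  ]
}
```

Registering this schema against the topic’s subject pins every produced message to exactly these two fields and types.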
Take Confluent’s example producer code (a Maven project): because its pom.xml includes the avro-maven-plugin, the Payment class is automatically generated from the Avro schema during compilation.
If a producer tries to produce a message that is not structured according to the defined schema, the message will not be ingested.
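The effect of that validation can be illustrated with a toy validator in plain Python. This is not the actual Confluent implementation, just a sketch of the accept/reject behavior, with record shapes assumed to mirror the Payment schema (string id, double amount):

```python
# Toy illustration of broker-side schema validation: a message whose
# fields or types don't match the declared schema is rejected, not ingested.
SCHEMA = {"id": str, "amount": float}  # stand-in for the registered schema

def validate(record: dict) -> bool:
    """True only if the record has exactly the declared fields with the declared types."""
    return (set(record) == set(SCHEMA)
            and all(isinstance(record[k], t) for k, t in SCHEMA.items()))

ingested = []
for msg in [{"id": "p1", "amount": 9.99},    # valid -> ingested
            {"id": "p2", "amount": "free"},  # wrong type -> rejected
            {"id": "p3"}]:                   # missing field -> rejected
    if validate(msg):
        ingested.append(msg)

print(ingested)  # [{'id': 'p1', 'amount': 9.99}]
```

In the real setup, the rejection happens at serialization or broker-side validation time, so malformed messages never reach the topic.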
Head to your station and, in the top-left corner, click “+ Attach schema”.
Memphis abstracts the need for external serialization functions and embeds them within the SDK.
With Protobuf, the producer sends plain objects, while the consumer requires the .proto file to decode messages.
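As an illustration, the schema attached to the station could be a Protobuf definition like the following; the message name and fields here are hypothetical:

```protobuf
syntax = "proto3";

// Hypothetical station schema: producers must emit messages matching
// this shape, and consumers use the same file to decode them.
message Test {
  string field1 = 1;
  string field2 = 2;
  int32  field3 = 3;
}
```

Because Memphis embeds serialization in the SDK, producers work with plain objects against this schema, while consumers load the .proto file to decode what they receive.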
Now that we know the many ways to enforce schema using Confluent or Memphis, let us understand how a versioning tool like lakeFS can seal the deal for you.
What is lakeFS?
lakeFS is a data versioning engine that allows you to manage data like code. Through Git-like branching, committing, merging, and reverting operations, managing the data, and in turn the schema, over the entire data life cycle becomes simpler.
How to achieve schema enforcement with lakeFS?
By leveraging the lakeFS branching feature and webhooks, you can implement a robust schema enforcement mechanism on your data lake.
lakeFS hooks allow you to automate and ensure that a given set of checks and validations runs before important life-cycle events. They are conceptually similar to Git hooks but, unlike Git hooks, lakeFS hooks trigger a remote server that runs the checks, so their execution is guaranteed.
You can configure lakeFS hooks to check for a specific table schema when merging data from development or test branches into production. That is, a pre-merge hook can be configured on the production branch for schema validation. If the hook run fails, lakeFS blocks the merge operation.
This extremely powerful guarantee can help implement schema enforcement and automate the rules and practices that all data sources and producers should adhere to.
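The check such a pre-merge hook performs can be sketched as a plain Python function. This is a simplified stand-in for a real webhook server, and representing schemas as column-name-to-type dicts is an assumption made for illustration:

```python
# Simplified stand-in for the schema check a lakeFS pre-merge webhook would run:
# compare the incoming branch's schema against production and block on mismatch.
def pre_merge_schema_check(incoming: dict, production: dict) -> tuple[bool, str]:
    if incoming == production:
        return True, "schemas match, merge allowed"
    drift = sorted(set(incoming.items()) ^ set(production.items()))
    return False, f"schema drift detected, merge blocked: {drift}"

prod = {"id": "string", "amount": "double"}
ok, _ = pre_merge_schema_check({"id": "string", "amount": "double"}, prod)
bad, reason = pre_merge_schema_check(
    {"id": "string", "amount": "string", "pii_ssn": "string"}, prod)

print(ok, bad)  # True False
```

A real hook would read the schemas from the branch metadata (for example, from Parquet footers or a catalog) instead of receiving them as dicts, and would return an HTTP failure status to make lakeFS abort the merge.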
Here is a sample actions.yaml file for schema enforcement on the master branch:
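A sketch of such a file, following the hook configuration format in the lakeFS documentation; the action name, hook id, and webhook URL are placeholders:

```yaml
name: pre-merge schema check on master
description: Block merges to master when the incoming schema drifts from production
on:
  pre-merge:
    branches:
      - master
hooks:
  - id: schema_validator
    type: webhook
    description: Validate table schema before merging to master
    properties:
      url: "http://<host>:<port>/webhooks/schema"  # placeholder: your validation server
```

The file lives under `_lakefs_actions/` in the repository, and the webhook at the configured URL performs the actual schema comparison.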
If the schema of the incoming data differs from that of the production data, the configured lakeFS hooks will be triggered on the pre-merge condition and will fail the merge operation to the master branch.
This way, you can use lakeFS hooks to enforce the schema, validate your data to avoid PII column leakage into your environment, and add data quality checks as required.
To learn more about configuring hooks and using them effectively, refer to the lakeFS (“Git for data”) documentation.