Apache Kafka is a well-known open-source event store and stream processing platform and has grown to become the de facto standard for data streaming. In this article, developer Michael Burgess provides an insight into the concept of schemas and schema management as a way to add value to your event-driven applications on the fully managed Kafka service, IBM Event Streams on IBM Cloud®.
What is a schema?
A schema describes the structure of data.
For example:
A simple Java class modelling an order of some product from an online store might start with fields like:
public class Order {
    private String productName;
    private String productCode;
    private int quantity;
    // […]
}
If order objects were created using this class and sent to a topic in Kafka, we could describe the structure of those records using an Avro schema such as this:
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "productName", "type": "string"},
    {"name": "productCode", "type": "string"},
    {"name": "quantity", "type": "int"}
  ]
}
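To make this concrete, here is a minimal, illustrative producer sketch that sends Order records to a topic using the schema above. The broker address, topic name, Avro serializer class and schema.registry.url property shown here are placeholder assumptions based on a Confluent-compatible Avro serde; the exact serdes and connection details depend on your cluster and registry.

import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderProducer {
    // The Avro schema shown earlier, as a string.
    private static final String ORDER_SCHEMA =
        "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
      + "{\"name\":\"productName\",\"type\":\"string\"},"
      + "{\"name\":\"productCode\",\"type\":\"string\"},"
      + "{\"name\":\"quantity\",\"type\":\"int\"}]}";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");            // placeholder broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "io.confluent.kafka.serializers.KafkaAvroSerializer"); // assumed Avro serde
        props.put("schema.registry.url", "http://localhost:8081");   // assumed registry endpoint

        // Build an Order record that conforms to the schema.
        Schema schema = new Schema.Parser().parse(ORDER_SCHEMA);
        GenericRecord order = new GenericData.Record(schema);
        order.put("productName", "Widget");
        order.put("productCode", "W-100");
        order.put("quantity", 3);

        // Send the record to a topic; the serializer encodes it using the schema.
        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "W-100", order));
        }
    }
}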
Why should you use a schema?
Apache Kafka transfers data without validating the information in the messages. It has no visibility of what kind of data is being sent and received, or of the data types the messages might contain, and it does not examine the metadata of your messages.
One of the functions of Kafka is to decouple consuming and producing applications, so that they communicate via a Kafka topic rather than directly. This allows them to each work at their own speed, but they still need to agree upon the same data structure; otherwise, the consuming applications have no way to deserialize the data they receive back into something with meaning. The applications all need to share the same assumptions about the structure of the data.
In the context of Kafka, a schema describes the structure of the data in a message. It defines the fields that need to be present in each message and the type of each field.
This means a schema forms a well-defined contract between a producing application and a consuming application, allowing consuming applications to correctly parse and interpret the data in the messages they receive.
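The following consumer sketch shows the other side of that contract: because it shares the Order schema with the producer, it can read fields by name and rely on their types. As in the producer sketch above, the deserializer class, registry URL and topic name are placeholder assumptions.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OrderConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");             // placeholder broker address
        props.put("group.id", "order-processors");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "io.confluent.kafka.serializers.KafkaAvroDeserializer"); // assumed Avro serde
        props.put("schema.registry.url", "http://localhost:8081");    // assumed registry endpoint

        try (KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            ConsumerRecords<String, GenericRecord> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, GenericRecord> record : records) {
                GenericRecord order = record.value();
                // Field names and types are guaranteed by the shared schema.
                System.out.printf("%s x%d%n", order.get("productName"), (int) order.get("quantity"));
            }
        }
    }
}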
What is a schema registry?
A schema registry supports your Kafka cluster by providing a repository for managing and validating schemas within that cluster. It acts as a database for storing your schemas and provides an interface for managing the schema lifecycle and retrieving schemas. A schema registry also validates evolution of schemas.
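As an illustration, registering the Order schema might look like the sketch below. It assumes a Confluent-compatible REST API (POST /subjects/{subject}/versions); the base URL, subject naming convention and any authentication are placeholders, so check your registry's documentation for the exact endpoints it exposes.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterOrderSchema {
    public static void main(String[] args) throws Exception {
        String orderSchema =
            "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
          + "{\"name\":\"productName\",\"type\":\"string\"},"
          + "{\"name\":\"productCode\",\"type\":\"string\"},"
          + "{\"name\":\"quantity\",\"type\":\"int\"}]}";

        // The registry expects the schema as an escaped JSON string inside a JSON object.
        String body = "{\"schema\": \"" + orderSchema.replace("\"", "\\\"") + "\"}";

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8081/subjects/orders-value/versions")) // placeholder URL
            .header("Content-Type", "application/vnd.schemaregistry.v1+json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. {"id":1} on success
    }
}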
Optimize your Kafka environment by using a schema registry.
A schema registry is essentially an agreement on the structure of your data within your Kafka environment. By keeping a consistent store of the data formats used by your applications, you avoid common mistakes that can occur when building applications, such as poor data quality and inconsistencies between your producing and consuming applications that may eventually lead to data corruption. A well-managed schema registry is not just a technical necessity; it also contributes to the strategic goal of treating data as a valuable product and helps tremendously on your data-as-a-product journey.
Using a schema registry increases the quality of your data and ensures it remains consistent by enforcing rules for schema evolution. As well as ensuring data consistency between produced and consumed messages, a schema registry ensures that your messages remain compatible as schema versions change over time. Over the lifetime of a business, it is very likely that the format of the messages exchanged by the applications supporting the business will need to change. For example, the Order class in the example schema we used earlier might gain a new status field, or the product code field might be replaced by a combination of department number and product number, or similar changes might be needed. The result is that the schema of the objects in our business domain is continually evolving, so you need to be able to ensure agreement on the schema of messages in any particular topic at any given time.
There are various patterns for schema evolution:
Forward Compatibility: where the producing applications can be updated to a new version of the schema, and all consuming applications will be able to continue to consume messages while waiting to be migrated to the new version.
Backward Compatibility: where consuming applications can be migrated to a new version of the schema first, and are able to continue to consume messages produced in the old format while producing applications are migrated.
Full Compatibility: when schemas are both forward and backward compatible.
A schema registry is able to enforce rules for schema evolution, allowing you to guarantee either forward, backward or full compatibility of new schema versions and preventing incompatible schema versions from being introduced.
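For example, adding the status field mentioned earlier with a default value would be a backward-compatible change in Avro, because consumers that have moved to the new schema can still read older messages that lack the field (the default fills the gap). This evolved schema is purely illustrative:

{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "productName", "type": "string"},
    {"name": "productCode", "type": "string"},
    {"name": "quantity", "type": "int"},
    {"name": "status", "type": "string", "default": "created"}
  ]
}

A registry configured to enforce backward compatibility would accept this version, but would reject a change that added a new field without a default value, since consumers on the new schema could not read older messages that lack it.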
By providing a repository of versions of schemas used within a Kafka cluster, past and present, a schema registry simplifies adherence to data governance and data quality policies, since it provides a convenient way to track and audit changes to your topic data formats.
What’s next?
In summary, a schema registry plays a crucial role in managing schema evolution, versioning and the consistency of data in distributed systems, ultimately supporting interoperability between different components. Event Streams on IBM Cloud provides a Schema Registry as part of its Enterprise plan. Ensure your environment is optimized by utilizing this feature on the fully managed Kafka offering on IBM Cloud to build intelligent and responsive applications that react to events in real time.
Provision an instance of Event Streams on IBM Cloud here.
Learn how to use the Event Streams Schema Registry here.
Learn more about Kafka and its use cases here.
For any challenges in set up, see our Getting Started Guide and FAQs.