
    schema registry

    Explore "schema registry" with insightful episodes like "How to use Data Contracts for Long-Term Schema Management" and "Schema Registry Made Simple by Confluent Cloud ft. Magesh Nandakumar" from podcasts like ""Streaming Audio: Apache Kafka® & Real-Time Data" and "Streaming Audio: Apache Kafka® & Real-Time Data"" and more!

    Episodes (2)

    How to use Data Contracts for Long-Term Schema Management

    Have you ever struggled with managing data long term, especially as its schema changes over time? To manage and leverage data across an organization, it's essential to have well-defined guidelines and standards in place around data quality, enforcement, and data transfer. To get started, Abraham Leal (Customer Success Technical Architect, Confluent) suggests that organizations associate their Apache Kafka® data with a data contract (schema). A data contract is an agreement between a service provider and data consumers: it defines the management and intended usage of data within an organization. In this episode, Abraham talks to Kris about how to use data contracts and schema enforcement to ensure long-term data management.
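
    As a rough sketch of the idea (the registry URL, subject name, and field layout here are illustrative, not from the episode), a data contract can be written down as an Avro schema and registered through Confluent Schema Registry's REST API:

    ```python
    import json

    import requests  # third-party HTTP client, assumed installed

    # Illustrative data contract: an Avro schema describing an "order" event.
    ORDER_SCHEMA = {
        "type": "record",
        "name": "Order",
        "namespace": "com.example.orders",
        "fields": [
            {"name": "order_id", "type": "string"},
            {"name": "amount_cents", "type": "long"},
            {"name": "currency", "type": "string"},
        ],
    }

    REGISTRY_URL = "http://localhost:8081"  # hypothetical registry endpoint
    SUBJECT = "orders-value"                # hypothetical subject for the topic's values

    # Registering the schema under a subject makes it the contract that
    # producers and consumers of the "orders" topic agree on.
    resp = requests.post(
        f"{REGISTRY_URL}/subjects/{SUBJECT}/versions",
        headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
        json={"schema": json.dumps(ORDER_SCHEMA), "schemaType": "AVRO"},
    )
    resp.raise_for_status()
    print("registered schema id:", resp.json()["id"])
    ```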

    When an organization sends and stores critical, valuable data in Kafka, it usually wants to leverage that data in various ways across multiple business units. Kafka is particularly suited to this use case, but it can become problematic later on if governance rules aren't established up front.

    With a schema registry, evolution is manageable thanks to its compatibility guarantees. When managing data pipelines, you can also use GitOps automation for an extra layer of control. It lets you handle topic versioning, upcast/downcast the data you collect, and add quality-assurance steps at the end of each run to keep the project reliable.
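
    As a minimal sketch of such a gate (reusing the illustrative registry and subject from above), a CI step in a GitOps pipeline could ask the registry whether a proposed schema change is compatible before anything ships; adding an optional field with a default is a classic backward-compatible evolution:

    ```python
    import json

    import requests

    REGISTRY_URL = "http://localhost:8081"  # hypothetical registry endpoint
    SUBJECT = "orders-value"

    # Proposed evolution of the contract: a new optional field with a default,
    # so consumers reading with the old schema can still handle new records.
    ORDER_SCHEMA_V2 = {
        "type": "record",
        "name": "Order",
        "namespace": "com.example.orders",
        "fields": [
            {"name": "order_id", "type": "string"},
            {"name": "amount_cents", "type": "long"},
            {"name": "currency", "type": "string"},
            {"name": "coupon_code", "type": ["null", "string"], "default": None},
        ],
    }

    # Ask the registry to test the change against the latest registered version.
    resp = requests.post(
        f"{REGISTRY_URL}/compatibility/subjects/{SUBJECT}/versions/latest",
        headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
        json={"schema": json.dumps(ORDER_SCHEMA_V2), "schemaType": "AVRO"},
    )
    resp.raise_for_status()
    if not resp.json()["is_compatible"]:
        raise SystemExit("incompatible schema change; failing the pipeline")
    ```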

    Abraham explains that Protobuf and Avro are better formats to use than XML or JSON because they are built to handle schema evolution. They also carry much lower per-record overhead, so adopting them can reduce bandwidth and storage costs.
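
    A quick way to see the per-record saving (a toy measurement, assuming the third-party fastavro package and the illustrative Order schema above) is to serialize the same record both ways: Avro's binary encoding keeps field names in the schema, while JSON repeats them in every record.

    ```python
    import io
    import json

    import fastavro  # third-party Avro library, assumed installed

    schema = fastavro.parse_schema({
        "type": "record",
        "name": "Order",
        "namespace": "com.example.orders",
        "fields": [
            {"name": "order_id", "type": "string"},
            {"name": "amount_cents", "type": "long"},
            {"name": "currency", "type": "string"},
        ],
    })

    record = {"order_id": "A-1001", "amount_cents": 4599, "currency": "USD"}

    # Avro binary encoding: no field names per record, just the values.
    buf = io.BytesIO()
    fastavro.schemaless_writer(buf, schema, record)
    avro_bytes = buf.getvalue()

    # JSON encoding: every record repeats every field name.
    json_bytes = json.dumps(record).encode("utf-8")

    print(f"Avro: {len(avro_bytes)} bytes, JSON: {len(json_bytes)} bytes")
    # On this toy record, Avro comes out at roughly a fifth of the JSON size.
    ```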

    There’s so much more to consider, but if you are thinking about implementing data contracts or integrating with your data quality team, Abraham suggests leaning on a schema registry heavily from the beginning.

    If you have more questions, Kris invites you to join the conversation. You can also watch the KOR Financial Current talk Abraham mentions or take Danica Fine’s free course on how to use schema registry on Confluent Developer.

    Schema Registry Made Simple by Confluent Cloud ft. Magesh Nandakumar

    Tim Berglund and Magesh Nandakumar (Software Engineer, Confluent) discuss why schemas matter for building systems on Apache Kafka®, and how Confluent Schema Registry helps with the problem. They talk about how Schema Registry works, how you can collaborate around schema change through `avsc` files, and what it means for this to be available in Confluent Cloud today.
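
    As a rough sketch of that workflow (the file path, subject, and registry URL are illustrative), a team can keep the `avsc` file under version control, pin the subject's compatibility mode, and register each reviewed revision through the registry's REST API:

    ```python
    import requests

    REGISTRY_URL = "http://localhost:8081"  # hypothetical registry endpoint
    SUBJECT = "orders-value"

    # Pin the compatibility mode so every future revision of the checked-in
    # .avsc file must stay backward compatible with what's already registered.
    requests.put(
        f"{REGISTRY_URL}/config/{SUBJECT}",
        headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
        json={"compatibility": "BACKWARD"},
    ).raise_for_status()

    # Register the current revision of the schema file from the repository.
    with open("schemas/order.avsc") as f:  # illustrative path
        schema_str = f.read()

    resp = requests.post(
        f"{REGISTRY_URL}/subjects/{SUBJECT}/versions",
        headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
        json={"schema": schema_str},
    )
    resp.raise_for_status()
    print("registered new version, schema id:", resp.json()["id"])
    ```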
