Overcome the challenges of providing transactional guarantees over rapidly changing data by using Apache Hudi. With this practical guide, data engineers, data architects, and software architects will discover how to seamlessly build an interoperable lakehouse from disparate data sources and deliver faster insights using their query engine of choice. Authors Shiyan Xu, Prashant Wason, Sudha Saktheeswaran, and Rebecca Bilbro provide practical examples and insights to help you unlock the full potential of data lakehouses for different levels of analytics, from batch to interactive to streaming. You'll also learn how to evaluate storage choices and leverage built-in automated table optimizations to build, maintain, and operate production data applications. This book helps you:
- Understand the need for transactional data lakehouses and the challenges associated with building them
- Get up to speed with Apache Hudi and learn how it makes building data lakehouses easy
- Explore data ecosystem support provided by Apache Hudi for popular data sources and query engines
- Perform different write and read operations on Apache Hudi tables and effectively use them for various use cases, including batch and stream applications
- Implement data engineering techniques to operate and manage Apache Hudi tables
- Apply different storage techniques and considerations, such as indexing and clustering, to maximize your lakehouse performance
- Build end-to-end incremental data pipelines using Apache Hudi for faster ingestion and fresher analytics
About the author
Shiyan Xu is a Founding Engineer at Onehouse, where he currently works as an Open Source Engineer. He has been an active contributor to Apache Hudi since 2019 and has served as a PMC member of the project since 2021. Prior to joining Onehouse, Shiyan worked as a tech lead manager at Zendesk, leading the development of a large-scale data lake platform using Apache Hudi. He is passionate about open source development and engaging with community users.