Fr. 215.00

Big Memory Systems

English · Hardback

Will be released 11.02.2026

Description


This book presents groundbreaking research schemes that are shaping the future of big memory systems. Large-scale persistent and byte-addressable memory eliminates traditional storage bottlenecks by unifying the memory and storage hierarchies. For engineers and architects building next-generation databases and distributed systems, these innovations deliver transformative performance: faster persistent writes through optimized flush mechanisms, reduced contention via lock-free designs, and high throughput using GPU acceleration. Each chapter presents fully implemented architectures in real-world environments, from learned indexes to RDMA-powered transactions, together with deployable code patterns and performance benchmarks. This book shows how new designs leverage big memory to achieve low-latency persistence, hardware-accelerated data processing, and linear scalability in distributed environments.
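To give a flavor of the learned-index idea mentioned above: a model (here, a single least-squares line) predicts a key's position in a sorted array, and a short bounded search corrects the prediction. This is an illustrative toy sketch, not code from the book; real learned indexes use staged models and tighter error bounds.

```python
import bisect

class LearnedIndex:
    """Toy learned index: linear model position ~ a*key + b over sorted keys."""

    def __init__(self, sorted_keys):
        self.keys = sorted_keys
        n = len(sorted_keys)
        # Closed-form least-squares fit of position against key.
        mean_k = sum(sorted_keys) / n
        mean_p = (n - 1) / 2
        var = sum((k - mean_k) ** 2 for k in sorted_keys)
        cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(sorted_keys))
        self.a = cov / var if var else 0.0
        self.b = mean_p - self.a * mean_k
        # Record the worst prediction error so lookups can bound their search.
        self.err = max(abs(self._predict(k) - i) for i, k in enumerate(sorted_keys))

    def _predict(self, key):
        return min(max(int(self.a * key + self.b), 0), len(self.keys) - 1)

    def lookup(self, key):
        # Search only [pred - err, pred + err] instead of the whole array.
        pred = self._predict(key)
        lo = max(0, pred - self.err)
        hi = min(len(self.keys), pred + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else None
```

On near-linear key distributions the bounded search touches only a handful of slots, which is what makes the approach attractive over remote (disaggregated) memory, where every extra probe is a network round trip.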

Designed for professionals with operating systems fundamentals, this book bridges cutting-edge research with practical implementation, where big memory's unique characteristics (persistence at DRAM speeds, massive capacity, and fine-grained access) demand fundamentally new architectural approaches. Learn how to achieve faster queries with learned indexes in disaggregated memory, how to optimize cuckoo hashing for persistent memory's asymmetric costs, and why the latest GPUs incorporate these persistence techniques. This book also provides efficient and practical toolkits: RDMA protocols that have been adopted in storage tiers, and lock-free designs that improve real-time recommendation systems. Whether building cloud-native databases, low-latency recommendation systems, or memory-driven AI services, these solutions will help exploit the full potential of big memory.
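The asymmetric-cost problem in cuckoo hashing mentioned above can be seen in a few lines: lookups probe at most two buckets, but inserts may trigger a chain of evictions, and on persistent memory each displaced entry is an extra (expensive) write. A minimal two-choice sketch, not taken from the book:

```python
class CuckooHash:
    """Toy two-choice cuckoo hash table (no resizing, no key updates)."""

    def __init__(self, size=16):
        self.size = size
        self.slots = [None] * size  # each slot: (key, value) or None

    def _h1(self, key):
        return hash(key) % self.size

    def _h2(self, key):
        # Second hash; real designs use two independent hash functions.
        return hash((key, 1)) % self.size

    def get(self, key):
        # Read cost is bounded: at most two probes.
        for idx in (self._h1(key), self._h2(key)):
            entry = self.slots[idx]
            if entry is not None and entry[0] == key:
                return entry[1]
        return None

    def put(self, key, value, max_kicks=32):
        # Write cost is unbounded up to max_kicks: each eviction is one more
        # slot write -- the asymmetry that persistent-memory designs mitigate.
        entry = (key, value)
        idx = self._h1(key)
        writes = 0
        for _ in range(max_kicks):
            if self.slots[idx] is None:
                self.slots[idx] = entry
                return writes + 1  # total slot writes performed
            # Evict the occupant and relocate it to its alternate bucket.
            self.slots[idx], entry = entry, self.slots[idx]
            writes += 1
            h1 = self._h1(entry[0])
            idx = self._h2(entry[0]) if idx == h1 else h1
        raise RuntimeError("eviction chain too long; a real design would resize")
```

The return value of `put` (number of slot writes) makes the read/write asymmetry measurable: a lookup never costs more than two probes, while a single insert into a loaded table can cost many writes.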

List of contents

"1.Write Optimized and High Performance Persistence in Big Memory".- "2.Lock free Concurrent Level Design for Persistent Memory".- "3.GPU enabled Byte Granularity Persistence for Big Memory".- "4.Scalable Learned Key Value Store for Disaggregated Memory".- "5.Fast One sided RDMA based Transactions for Disaggregated Memory".- "6.Multi Versioning Design for Distributed Transactions on Big Memory".- "7.Fast and Cost Efficient Hashing Index Schemes for Cloud Systems".- "8.Mitigating Asymmetric Read and Write Costs in Cuckoo based Designs".

About the author

Dr. Yu Hua is a Professor at Huazhong University of Science and Technology. His research interests include cloud storage systems, file systems, non-volatile memory architectures, and big memory. His papers have been published in major conferences and journals, including OSDI, FAST, MICRO, ASPLOS, VLDB, USENIX ATC, and HPCA. He is an Associate Editor of ACM Transactions on Storage (TOS) (2023-). He has served as PC (Vice) Chair of ICDCS 2021, ACM APSys 2019, and ICPADS 2016, as well as a PC member of OSDI, SIGCOMM, FAST, NSDI, ASPLOS, and MICRO. He received Best Paper Awards at FAST 2023, IEEE/ACM IWQoS 2023, and IEEE HPCC 2021. He is a Distinguished Member of CCF and a Senior Member of ACM and IEEE. He has been selected as a Distinguished Speaker of ACM and CCF.

