This book is a practical guide to harnessing Hugging Face's powerful transformers library, unlocking access to the largest collection of open-source LLMs. By simplifying complex NLP concepts and emphasizing practical application, it empowers data scientists, machine learning engineers, and NLP practitioners to build robust solutions without delving into theoretical complexities.
The book is structured into three parts to facilitate a step-by-step learning journey. Part One, on building production-ready LLM solutions, introduces the Hugging Face library and equips readers to solve most common NLP challenges without requiring deep knowledge of transformer internals. Part Two, on empowering LLMs with RAG and intelligent agents, explores Retrieval-Augmented Generation (RAG) models and demonstrates how to enhance answer quality and develop intelligent agents. Part Three covers LLM advances, focusing on expert topics such as model training, the principles of the transformer architecture, and other cutting-edge techniques related to the practical application of language models.
Each chapter includes practical examples, code snippets, and hands-on projects to ensure applicability to real-world scenarios. This book bridges the gap between theory and practice, providing professionals with the tools and insights to develop practical and efficient LLM solutions.
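To give a sense of that hands-on style, here is a minimal sketch of the kind of pipeline-based workflow the transformers library enables; the task and the model name (distilbert-base-uncased-finetuned-sst-2-english) are illustrative choices, not examples taken from the book.

```python
# Illustrative sketch of the Hugging Face transformers "pipeline" API.
# The task and model below are assumed defaults, not drawn from the book.
from transformers import pipeline

# Load a small pre-trained sentiment-analysis model from the Hugging Face Hub.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Run inference on a couple of example sentences.
results = classifier([
    "This book bridges the gap between theory and practice.",
    "Debugging distributed training jobs can be frustrating.",
])

for result in results:
    print(result)  # e.g. {'label': 'POSITIVE', 'score': 0.99...}
```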
What you will learn:
- The different types of tasks modern LLMs can solve
- How to select the most suitable pre-trained LLM for a specific task
- How to enrich an LLM with a custom knowledge base and build intelligent systems
- The core principles of language models, and how to tune them
- How to build robust LLM-based AI applications
List of contents
Part I: LLM Basics
Chapter 1. Discovering Transformers
Chapter 2. LLM Basics: Internals, Deployment and Evaluation
Chapter 3. Improving Chat Model Responses
Part II: Empowering LLM Applications with RAG and Intelligent Agents
Chapter 4. Enriching the Model’s Knowledge with Retrieval Augmented Generation
Chapter 5. Building Agent Systems
Part III: LLM Advances
Chapter 6. Mastering Model Training
Chapter 7. Unpacking the Transformers Architecture
About the author
Ivan Gridin is a machine-learning expert, researcher, and author with extensive experience in building high-performance distributed systems and applying advanced machine-learning techniques in real-world scenarios. His expertise includes natural language processing (NLP), predictive time series modeling, automated machine learning (AutoML), reinforcement learning, and neural architecture search. He also has a strong foundation in mathematics, including stochastic processes, probability theory, and optimization, as well as in deep learning. He works extensively with Hugging Face tools and large language models (LLMs) and is passionate about developing intelligent, real-world applications powered by natural language processing.