Fr. 84.00

Essentials of Generative AI

English · Hardback

Shipping usually within 6 to 7 weeks

Description


This book provides a concise yet comprehensive introduction to generative artificial intelligence.
The first part explains the foundational technologies and architectures that underpin generative models. It covers elements that have evolved and deepened over time: word embeddings as a representative example of representation learning, and the Transformer as a network foundation, together with its underlying attention mechanism. Reinforcement learning, which became essential for elevating large-scale language models to language generation models, is also discussed, with a focus on its essential aspects.
The second part deals with language generation. It begins by elucidating language models, introduces large-scale language models with broad applications as the foundational architecture of language processing, and then discusses language generation models as their evolution. Although it is not common terminology, this book refers to models such as ChatGPT and Llama 2, which are large-scale language models fine-tuned using reinforcement learning, as language generation models.
The third part addresses image generation, discussing variational autoencoders and the remarkably successful diffusion models. It also explains Generative Adversarial Networks (GANs). Although GANs pose challenges due to unstable training, their conceptual framework is widely applicable; in particular, the Wasserstein GAN is well suited for introducing the optimal transport distance, which is utilized in various scenarios.
This book primarily serves as a companion for researchers and graduate students in machine learning, aiming to help them understand the essence of generative AI and lay the groundwork for advancing their own research.

List of contents

Introduction.- Basics.- Language generation model.- Image generation model.

About the author

Takeshi Okadome is a Japanese computer scientist. He is a Professor of Artificial Intelligence and Mechanical Engineering at Kwansei Gakuin University and a director of the Artificial Intelligence Center of Kwansei Gakuin University.

He obtained Bachelor's and Master's degrees in Computer Science, and later a PhD in Computer Science from the University of Tokyo under the supervision of Hisao Yamada. After obtaining his PhD in 1988, he became a researcher at NTT. He conducts research in the field of machine learning. He is a member of the ACM.

Product details

Authors Takeshi Okadome
Publisher Springer, Berlin
Languages English
Product format Hardback
Released 13.01.2025
EAN 9789819600281
ISBN 978-981-9600-28-1
No. of pages 232
Illustrations XII, 232 p. 123 illus., 10 illus. in color.
Subjects Natural sciences, medicine, IT, technology > IT, data processing > IT

Artificial Intelligence, Deep Learning, Large Language Model, Generative AI, Generative Adversarial Network
