Fr. 235.00

Challenges and Applications of Generative Large Language Models

English · Paperback / Softback

Will be released 01.01.2026

Description

Large Language Models (LLMs) are a form of generative AI based on Deep Learning. They rely on very large textual datasets and are composed of hundreds of millions, or even billions, of parameters. LLMs can be trained and then fine-tuned to perform a variety of Natural Language Processing (NLP) tasks, such as text generation, summarization, translation, prediction, and more.

Challenges and Applications of Generative Large Language Models helps readers understand LLMs, their applications in various sectors, the challenges encountered while developing them, open issues, and ethical concerns. LLMs are just one approach within the vast set of methodologies provided by AI. By describing the strengths and weaknesses of such models, the book enables researchers and software developers to decide whether an LLM is the right choice for the problem they are trying to solve.

AI is the new buzzword, in particular generative AI for human language (LLMs), and an overwhelming amount of hype is obscuring and distorting the view of AI in general, and of LLMs in particular. An objective description of LLMs is therefore useful to anyone (researcher, professional, or student) who is starting to work with human language. The risk, otherwise, is to overlook the whole set of methodologies developed by AI over recent decades and to stick with a single model which, although very powerful, has known weaknesses and risks. Given the high level of hype around such models, Challenges and Applications of Generative Large Language Models enables readers to clarify and understand their scope and limitations.
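To make the "hundreds of millions of parameters" claim concrete, the sketch below estimates the parameter count of a GPT-style transformer from its shape. The function and the configuration values are illustrative assumptions (roughly the published GPT-2 "small" shape), not figures from the book; the standard approximation is that each decoder block contributes about 12 · d_model² weights (attention plus feed-forward), with the token-embedding matrix adding vocab_size · d_model more.

```python
# Rough parameter count for a GPT-style decoder-only transformer.
# Hypothetical helper for illustration; ignores biases, layer norms,
# and positional embeddings, which contribute comparatively little.
def approx_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    blocks = 12 * n_layers * d_model ** 2   # attention + MLP weights per layer
    embeddings = vocab_size * d_model       # token-embedding matrix
    return blocks + embeddings

# A GPT-2-small-like shape: 12 layers, width 768, ~50k-token vocabulary.
print(approx_params(12, 768, 50257))  # ~1.2e8, i.e. hundreds of millions
```

Even this modest configuration lands above 100 million parameters; scaling the width and depth a few-fold pushes the count into the billions, which is where current LLMs sit.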

List of contents

1. Generative AI and its application to human language
2. Anatomy of a Large Language Model (LLM)
3. LLM-based chatbots
4. Application of LLMs: zero-shot learning
5. Use of LLMs in education and healthcare
6. LLMs in interpreting legal documents
7. Trustworthiness of LLMs: hallucinations
8. Privacy and security in LLM models and data
9. Scaling down LLMs
10. Ethical issues with LLMs
11. LLMs: future directions

About the author

Dr. Anitha S. Pillai is a Professor in the School of Computing Sciences, Hindustan University, Chennai, India. She has 26 years of teaching and research experience. Her main areas of research are Artificial Intelligence, Machine Learning, Natural Language Processing, and Healthcare Analytics. She has authored or co-authored more than 90 papers in international journals and book chapters. She is the founder of AtINeu http://atineu.org/ Research Labs, which focusses on the use of Machine Learning/Deep Learning, Virtual Reality, and Augmented Reality in Healthcare. Dr. Pillai is also the co-editor of the books Virtual and Augmented Reality in Education, Art and Museums, published by IGI Global, USA, and Extended Reality Usage during COVID-19 Pandemic, published by Springer Nature, Switzerland.

Dr. Roberto Tedesco currently works as a researcher at the University of Applied Sciences and Arts of Southern Switzerland (SUPSI). His research interests include Natural Language Processing and Accessibility.
Dr. Vincenzo Scotti studied Computer Science and Engineering at the Politecnico di Milano, Italy, where he earned his B.Sc. and M.Sc. in Computer Science and Engineering and his Ph.D. in Information Technology (Computer Science and Engineering area). He is currently a post-doc researcher in the Department of Electronics, Information, and Bioengineering (DEIB) of Politecnico di Milano. His research interests include Natural Language Processing (NLP), Deep Learning, and Artificial Intelligence (AI).
