Fr. 63.00

Generative AI in Cybersecurity

English · Paperback / Softback

Will be released 14.01.2026

Description


This book explores the most common generative AI (GenAI) tools and techniques used by malicious actors for hacking and cyber-deception, along with the security risks of large language models (LLMs). It also covers how LLM deployment and use can be secured, and how generative AI can be utilized in SOC automation.

The rapid advancement and growing variety of publicly available generative AI tools enable cybersecurity use cases such as threat modeling, security awareness support, web application scanning, actionable insights, and alert fatigue prevention. However, they have also brought a steep rise in the number of offensive, rogue, and malicious generative AI applications. With large language models, social engineering tactics can reach new heights in the efficiency of phishing campaigns and in cyber-deception via synthetic media generation (misleading deepfake images and videos, face swapping, morphs, and voice clones). The result is a new era of cybersecurity that necessitates innovative approaches to detecting and mitigating sophisticated cyberattacks and to preventing hyper-realistic cyber-deception.

This work provides a starting point for researchers and students studying malicious chatbot use, system administrators hardening the security of GenAI deployments, and organizations prone to sensitive data leaks through shadow AI. It also benefits SOC analysts considering generative AI for partially automating incident detection and response, and GenAI vendors working on security guardrails against malicious prompting.

List of contents

- Defensive Generative AI.
- Offensive Generative AI: From Criminal LLMs to Deepfake-Based Deception.
- Emerging Countermeasures Against Offensive Generative AI.
- Securing GenAI Deployments and Preventing Misuse.
- Case Studies.

About the author

Leslie F. Sikos, Ph.D., is a computer scientist specializing in cybersecurity applications powered by artificial intelligence and data science. He holds two Ph.D. degrees and more than 20 industry certificates, coupled with industry experience in enterprise ICT infrastructures. He is an active member of the research community as an author, editor, reviewer, conference organizer, and speaker; a senior member of the IEEE; and a certified professional of the Australian Computer Society. He is an invited reviewer for major academic publishers such as Springer and Taylor & Francis, as well as for EU research proposals, and has been interviewed as a subject matter expert by the United Nations and by media outlets such as ABC News and 7NEWS. Dr. Sikos is a prolific author who, beyond numerous journal and conference papers, has published more than 20 books, including textbooks, monographs, and edited volumes.


