Fr. 198.00

Intelligent and Efficient Video Moment Localization

English, German · Hardback

Will be released 25.06.2025

Description


This book provides a comprehensive exploration of video moment localization, a rapidly emerging research field focused on enabling precise retrieval of specific moments within untrimmed, unsegmented videos. With the rapid growth of digital content and the rise of video-sharing platforms, users face significant challenges when searching for particular content across vast video archives. The book shows how video moment localization uses natural language queries to bridge the gap between video content and semantic understanding, offering an intuitive way to locate specific moments in diverse domains such as surveillance, education, and entertainment.
It explores the latest advances in video moment localization, addressing key issues such as accuracy, efficiency, and scalability. It presents techniques for contextual understanding and cross-modal semantic alignment, including attention mechanisms and dynamic query decomposition, and discusses solutions for improving computational efficiency and scalability, such as semantic pruning and efficient hashing, along with frameworks for tighter integration of visual and textual data. It also examines weakly-supervised learning approaches that reduce annotation costs without sacrificing performance. Finally, the book covers real-world applications and offers insights into future research directions.
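
To give a flavor of the cross-modal semantic alignment mentioned above, the short Python sketch below lets each video clip attend to the words of a natural language query and returns a candidate moment span. It is an illustrative toy under stated assumptions, not the book's implementation: the feature dimensions, the scaled dot-product attention, and the thresholding rule for picking the span are all assumptions.

# Minimal, illustrative sketch of cross-modal attention for video moment
# localization (not the book's method). Clip features attend over query word
# features, and per-clip relevance scores suggest a candidate moment span.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(clip_feats, word_feats):
    """Scaled dot-product attention: each video clip attends to query words.

    clip_feats: (num_clips, d) visual features for uniformly sampled clips
    word_feats: (num_words, d) embeddings of the natural language query
    Returns query-aware clip features of shape (num_clips, d).
    """
    d = clip_feats.shape[-1]
    scores = clip_feats @ word_feats.T / np.sqrt(d)   # (num_clips, num_words)
    attn = softmax(scores, axis=-1)                   # attention over words
    return attn @ word_feats                          # aggregate textual context

def localize_moment(clip_feats, word_feats, clip_duration=1.0):
    """Crude span heuristic (an assumption for illustration): score each clip by
    similarity to its query-aware context and return the span from the first to
    the last above-average clip as (start_sec, end_sec)."""
    fused = cross_modal_attention(clip_feats, word_feats)
    relevance = (clip_feats * fused).sum(axis=-1)     # per-clip relevance
    idx = np.flatnonzero(relevance > relevance.mean())
    start, end = idx.min(), idx.max() + 1
    return start * clip_duration, end * clip_duration

# Toy usage with random features standing in for real video/text encoders.
rng = np.random.default_rng(0)
clips = rng.normal(size=(20, 64))    # 20 one-second clips, 64-d features
words = rng.normal(size=(6, 64))     # 6-word query in the same feature space
print(localize_moment(clips, words)) # prints an assumed (start, end) in seconds

Exhaustively scoring every clip of every video in this way quickly becomes expensive; the semantic pruning and efficient hashing techniques covered in later chapters target exactly that scalability cost.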

List of contents

Chapter 1: Introduction
Chapter 2: Semantic Enhanced Video Moment Localization
Chapter 3: Semantic Alignment Video Moment Localization
Chapter 4: Semantic Pruning Video Moment Localization
Chapter 5: Semantic Collaborative Video Moment Localization
Chapter 6: Weakly-Supervised Video Moment Localization
Chapter 7: Efficient Hashing based Video Moment Localization
Chapter 8: Research Frontiers

About the author

Meng Liu is a Professor in the School of Computer Science and Technology at Shandong Jianzhu University. She received her PhD from Shandong University, China, in 2019. Her research interests include multimedia computing and information retrieval. She has co-authored over 70 papers in leading conferences and journals, including ICML, CVPR, IEEE TPAMI, and IEEE TIP. She has also served as a reviewer and area chair for conferences such as ICLR, CVPR, AAAI, ICME, and ACM MM, as well as a reviewer for journals such as IEEE TIP and IEEE TMM.
Yupeng Hu is currently an Associate Professor in the School of Software at Shandong University. He received his Ph.D. in Software Engineering from Shandong University, Jinan, China, in 2018. His research interests include information retrieval, data mining, and explainable AI. He has published his work in leading journals and conferences, such as IEEE TIP and ACM MM. He has also served as a reviewer for conferences such as ACM MM, ACL, and AAAI, and for journals such as IEEE TKDE and IEEE TMM.
Weili Guan is currently a Professor in the School of Electronics and Information Engineering at Harbin Institute of Technology (Shenzhen), China. She received her Master’s degree from the National University of Singapore and her Ph.D. from Monash University. She has approximately six years of experience working in industry. Her research interests include multimedia computing and information retrieval. She has published over 40 papers in top-tier conferences and journals, including ACM MM, SIGIR, IEEE TPAMI, and IEEE TIP.
Liqiang Nie is a Professor in the School of Computer Science and Technology at Harbin Institute of Technology (Shenzhen). He received his B.Eng. from Xi’an Jiaotong University and his Ph.D. from the National University of Singapore. His research interests focus primarily on multimedia computing and information retrieval. Dr. Nie has co-authored over 100 papers and four books, amassing more than 15,000 citations on Google Scholar. He serves as an Associate Editor for IEEE TKDE, IEEE TMM, IEEE TCSVT, and ACM ToMM, and regularly acts as an Area Chair for ACM MM, NeurIPS, IJCAI, and AAAI. He has received numerous honors, including an Honorable Mention for Best Paper at both ACM MM and SIGIR in 2019, the SIGMM Rising Star award in 2020, TR35 China 2020, DAMO Academy Young Fellow in 2020, and the SIGIR Best Student Paper award in 2021.

