Fr. 112.00

Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part LXXX

English · Paperback / Softback

Shipping usually within 1 to 2 weeks (title will be printed to order)

Description


The multi-volume set of LNCS volumes 15059 to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29-October 4, 2024.
The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. They deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.

List of contents

Ex2Eg-MAE: A Framework for Adaptation of Exocentric Video Masked Autoencoders for Egocentric Social Role Understanding
Self-Supervised Audio-Visual Soundscape Stylization
SAVE: Protagonist Diversification with Structure Agnostic Video Editing
VideoAgent: Long-form Video Understanding with Large Language Model as Agent
Meta-optimized Angular Margin Contrastive Framework for Video-Language Representation Learning
Source-Free Domain-Invariant Performance Prediction
Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures
Constructing Concept-based Models to Mitigate Spurious Correlations with Minimal Human Effort
Direct Distillation between Different Domains
Contrastive ground-level image and remote sensing pre-training improves representation learning for natural world imagery
V-Trans4Style: Visual Transition Recommendation for Video Production Style Adaptation
GRiT: A Generative Region-to-text Transformer for Object Understanding
LRSLAM: Low-rank Representation of Signed Distance Fields in Dense Visual SLAM System
Learning Representation for Multitask Learning through Self-Supervised Auxiliary Learning
Neural Poisson Solver: A Universal and Continuous Framework for Natural Signal Blending
Geometry Fidelity for Spherical Images
BAGS: Blur Agnostic Gaussian Splatting through Multi-Scale Kernel Modeling
CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning
WoVoGen: World Volume-aware Diffusion for Controllable Multi-camera Driving Scene Generation
Benchmarking Spurious Bias in Few-Shot Image Classifiers
TurboEdit: Real-time text-based disentangled real image editing
Soft Shadow Diffusion (SSD): Physics-inspired Learning for 3D Computational Periscopy
Augmented Neural Fine-tuning for Efficient Backdoor Purification
REDIR: Refocus-free Event-based De-occlusion Image Reconstruction
Free-Editor: Zero-shot Text-driven 3D Scene Editing
DPA-Net: Structured 3D Abstraction from Sparse Views via Differentiable Primitive Assembly
An Empirical Study and Analysis of Text-to-Image Generation Using Large Language Model-Powered Textual Representation

Product details

Assisted by Aleš Leonardis (Editor), Elisa Ricci (Editor), Stefan Roth (Editor), Olga Russakovsky (Editor), Torsten Sattler (Editor), Gül Varol (Editor)
Publisher Springer, Berlin
 
Languages English
Product format Paperback / Softback
Released 26.10.2024
 
EAN 9783031729881
ISBN 978-3-031-72988-1
No. of pages 493
Dimensions 155 mm x 31 mm x 235 mm
Weight 867 g
Illustrations LXXXV, 493 p. 166 illus., 164 illus. in color.
Series Lecture Notes in Computer Science
Subject Natural sciences, medicine, IT, technology > IT, data processing > Application software

Customer reviews

No reviews have been written for this item yet.
