Fr. 102.00

Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XLIX

English · Paperback / Softback

Shipping usually within 1 to 2 weeks (title will be printed to order)

Description

The multi-volume set of LNCS books with volume numbers 15059 up to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29-October 4, 2024.
The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; motion estimation.

List of contents

Real-time Holistic Robot Pose Estimation with Unknown States
CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning
A Simple Baseline for Spoken Language to Sign Language Translation with 3D Avatars
An accurate detection is not all you need to combat label noise in web-noisy datasets
Online Vectorized HD Map Construction using Geometry
Image-adaptive 3D Lookup Tables for Real-time Image Enhancement with Bilateral Grids
Learned HDR Image Compression for Perceptually Optimal Storage and Display
Sparse Beats Dense: Rethinking Supervision in Radar-Camera Depth Completion
Non-Exemplar Domain Incremental Learning via Cross-Domain Concept Integration
Free-VSC: Free Semantics from Visual Foundation Models for Unsupervised Video Semantic Compression
Improving Virtual Try-On with Garment-focused Diffusion Models
Ray Denoising: Depth-aware Hard Negative Sampling for Multi-view 3D Object Detection
Disentangled Generation and Aggregation for Robust Radiance Fields
UNIKD: UNcertainty-Filtered Incremental Knowledge Distillation for Neural Implicit Representation
Subspace Prototype Guidance for Mitigating Class Imbalance in Point Cloud Semantic Segmentation
MoAI: Mixture of All Intelligence for Large Language and Vision Models
Semantic-guided Robustness Tuning for Few-Shot Transfer Across Extreme Domain Shift
Revisit Event Generation Model: Self-Supervised Learning of Event-to-Video Reconstruction with Implicit Neural Representations
SDPT: Synchronous Dual Prompt Tuning for Fusion-based Visual-Language Pre-trained Models
Open-World Dynamic Prompt and Continual Visual Representation Learning
Learning Video Context as Interleaved Multimodal Sequences
Learning Unsigned Distance Functions from Multi-view Images with Volume Rendering Priors
Dense Multimodal Alignment for Open-Vocabulary 3D Scene Understanding
Deep Feature Surgery: Towards Accurate and Efficient Multi-Exit Networks
Multi-scale Cross Distillation for Object Detection in Aerial Images
Progressive Proxy Anchor Propagation for Unsupervised Semantic Segmentation
Within the Dynamic Context: Inertia-aware 3D Human Modeling with Pose Sequence

Product details

Assisted by Aleš Leonardis (Editor), Elisa Ricci (Editor), Stefan Roth (Editor), Olga Russakovsky (Editor), Torsten Sattler (Editor), Gül Varol (Editor)
Publisher Springer, Berlin
 
Languages English
Product format Paperback / Softback
Released 03.11.2024
 
EAN 9783031729669
ISBN 978-3-031-72966-9
No. of pages 511
Dimensions 155 mm x 32 mm x 235 mm
Weight 897 g
Illustrations LXXXV, 511 p. 199 illus., 193 illus. in color.
Series Lecture Notes in Computer Science
Subject Natural sciences, medicine, IT, technology > IT, data processing > Application software

Customer reviews

No reviews have been written for this item yet.
