Fr. 112.00

Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part IV

English · Paperback

Usually ships within 1 to 2 weeks (the title is printed on demand)

Description


The multi-volume set of LNCS books with volume numbers 15059 up to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29-October 4, 2024.
The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; motion estimation.

Table of Contents

LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation
Mahalanobis Distance-based Multi-view Optimal Transport for Multi-view Crowd Localization
RAW-Adapter: Adapting Pretrained Visual Model to Camera RAW Images
SLEDGE: Synthesizing Driving Environments with Generative Models and Rule-Based Traffic
AFreeCA: Annotation-Free Counting for All
Adversarially Robust Distillation by Reducing the Student-Teacher Variance Gap
LN3Diff: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation
Hierarchical Temporal Context Learning for Camera-based Semantic Scene Completion
Equi-GSPR: Equivariant SE(3) Graph Network Model for Sparse Point Cloud Registration
GTP-4o: Modality-prompted Heterogeneous Graph Learning for Omni-modal Biomedical Representation
PromptCCD: Learning Gaussian Mixture Prompt Pool for Continual Category Discovery
Sapiens: Foundation for Human Vision Models
Linearly Controllable GAN: Unsupervised Feature Categorization and Decomposition for Image Generation and Manipulation
Generating Human Interaction Motions in Scenes with Text Control
NOVUM: Neural Object Volumes for Robust Object Classification
Align before Collaborate: Mitigating Feature Misalignment for Robust Multi-Agent Perception
HIMO: A New Benchmark for Full-Body Human Interacting with Multiple Objects
SAIR: Learning Semantic-aware Implicit Representation
ColorMNet: A Memory-based Deep Spatial-Temporal Feature Propagation Network for Video Colorization
UNIC: Universal Classification Models via Multi-teacher Distillation
Instance-dependent Noisy-label Learning with Graphical Model Based Noise-rate Estimation
Eliminating Warping Shakes for Unsupervised Online Video Stitching
Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models
Merlin: Empowering Multimodal LLMs with Foresight Minds
ViC-MAE: Self-Supervised Representation Learning from Images and Video with Contrastive Masked Autoencoders
E.T. the Exceptional Trajectory: Text-to-camera-trajectory generation with character awareness
OphNet: A Large-Scale Video Benchmark for Ophthalmic Surgical Workflow Understanding

Product Details

Edited by Aleš Leonardis (Editor), Elisa Ricci (Editor), Stefan Roth (Editor), Olga Russakovsky (Editor), Torsten Sattler (Editor), Gül Varol (Editor)
Publisher Springer, Berlin
 
Language English
Format Paperback
Published 30.09.2024
 
EAN 9783031732348
ISBN 978-3-031-73234-8
Pages 502
Dimensions 155 mm × 31 mm × 235 mm
Weight 879 g
Illustrations LXXXV, 502 p., 298 illus., 166 illus. in color.
Series Lecture Notes in Computer Science
Category Natural sciences, medicine, computer science, technology > Computer science, EDP > Application software
