Fr. 102.00

Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XLII

English · Paperback

Usually ships within 1 to 2 weeks (this title is printed to order)

Description


The multi-volume set of LNCS books with volume numbers 15059 up to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29-October 4, 2024.
The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3d reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; motion estimation.
 

Table of Contents

Open-Set Recognition in the Age of Vision-Language Models
Unsqueeze [CLS] Bottleneck to Learn Rich Representations
Robust Multimodal Learning via Representation Decoupling
Object-Conditioned Energy-Based Attention Map Alignment in Text-to-Image Diffusion Models
WiMANS: A Benchmark Dataset for WiFi-based Multi-user Activity Sensing
Embedding-Free Transformer with Inference Spatial Reduction for Efficient Semantic Segmentation
VeCLIP: Improving CLIP Training via Visual-enriched Captions
Three Things We Need to Know About Transferring Stable Diffusion to Visual Dense Prediction Tasks
Learning Representations from Foundation Models for Domain Generalized Stereo Matching
Spike-Temporal Latent Representation for Energy-Efficient Event-to-Video Reconstruction
Effective Lymph Nodes Detection in CT Scans Using Location Debiased Query Selection and Contrastive Query Representation in Transformer
Chat-Edit-3D: Interactive 3D Scene Editing via Text Prompts
Event-Adapted Video Super-Resolution
Look Hear: Gaze Prediction for Speech-directed Human Attention
Raising the Ceiling: Conflict-Free Local Feature Matching with Dynamic View Switching
Q&A Prompts: Discovering Rich Visual Clues through Mining Question-Answer Prompts for VQA requiring Diverse World Knowledge
Catastrophic Overfitting: A Potential Blessing in Disguise
Long-range Turbulence Mitigation: A Large-scale Dataset and A Coarse-to-fine Framework
SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models
Visual Alignment Pre-training for Sign Language Translation
Parrot Captions Teach CLIP to Spot Text
Solving Motion Planning Tasks with a Scalable Generative Model
Griffon: Spelling out All Object Locations at Any Granularity with Large Language Models
Vision-Language Action Knowledge Learning for Semantic-Aware Action Quality Assessment
Knowledge Transfer with Simulated Inter-Image Erasing for Weakly Supervised Semantic Segmentation
BurstM: Deep Burst Multi-scale SR using Fourier Space with Optical Flow
Diffusion Reward: Learning Rewards via Conditional Video Diffusion

Product details

In collaboration with Aleš Leonardis (Editor), Elisa Ricci (Editor), Stefan Roth (Editor), Olga Russakovsky (Editor), Torsten Sattler (Editor), Gül Varol (Editor)
Publisher Springer, Berlin
 
Languages English
Format Paperback
Publication date 02.10.2024
 
EAN 9783031729454
ISBN 978-3-031-72945-4
Pages 499
Dimensions 155 mm x 31 mm x 235 mm
Weight 879 g
Illustrations LXXXV, 499 p. 171 illus., 166 illus. in color.
Series Lecture Notes in Computer Science
Category Natural sciences, medicine, computer science, technology > Computer science, EDP > Application software
