---
title: "<span class='header-marking'>4th Visual Inductive Priors for Data-Efficient Deep Learning Workshop</span>"
layout: splash
header:
excerpt: "<span class='header-marking'>ICCV 2023 @ Room E03 (Poster room W02)</span><br/><span class='header-marking'>Monday October 2nd 2023, 8:45 - 13:00</span>"
intro:
feature_row:
organizers_row:
speakers_row:
---
{% include feature_row id="intro" type="center" %}
{% include feature_row %}
Data is fueling deep learning, yet it is costly to gather and to annotate. Training on massive datasets consumes enormous amounts of energy, adding to our carbon footprint. In addition, only a select few deep learning behemoths have billions of data points and thousands of expensive GPUs at their disposal. This workshop focuses on how to pre-wire deep networks with generic visual inductive innate knowledge structures, which allows hard-won existing generic knowledge to be incorporated directly. Visual inductive priors are data efficient: what is built in no longer has to be learned, saving valuable training data.
Excellent recent research investigates data efficiency in deep networks by exploiting other data sources through unsupervised learning, re-using existing datasets, or synthesizing artificial training data. However, not enough attention is given to how to overcome the data dependency by adding prior knowledge to deep nets. As a consequence, all knowledge has to be (re-)learned implicitly from data, making deep networks hard-to-understand black boxes that are susceptible to dataset bias and require huge datasets and compute resources. This workshop aims to remedy this gap by investigating how to flexibly pre-wire deep networks with generic visual innate knowledge structures, which allows hard-won existing knowledge from physics, such as light reflection or geometry, to be incorporated.
The great power of deep neural networks is their incredible flexibility to learn. The direct consequence of such power is that small datasets can simply be memorized and the network will likely not generalize to unseen data. Regularization aims to prevent such over-fitting by adding constraints to the learning process. Much work has been done on regularization of internal network properties and architectures. In this workshop we focus on regularization methods based on innate priors. There is strong evidence that an innate prior benefits deep nets: adding convolution to deep networks yields the convolutional neural network (CNN), which is hugely successful and has permeated the entire field. While convolution was initially applied to images, it has since been generalized to graph networks, speech, language, 3D data, video, etc. Convolution models translation invariance in images: an object may occur anywhere in the image, so instead of learning parameters at each location, convolution considers only local relations while sharing parameters over all image locations. This yields a strong reduction in both the number of parameters and the number of examples to learn from, as illustrated by the sketch below. This workshop aims to build on the great success of convolution by exploiting innate regularizing structures that yield a significant reduction of training data.
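To make the parameter saving concrete, here is a minimal sketch (assuming PyTorch; the image size and channel counts are illustrative, not tied to any specific model from the workshop) that contrasts the parameter count of a convolutional layer with that of a fully connected layer operating on the same input:

```python
# Sketch: how the translation-invariance prior of convolution shrinks the
# number of parameters that must be learned from data (assumes PyTorch).
import torch.nn as nn

H, W, C_in, C_out = 32, 32, 3, 16  # illustrative image size and channel counts

# Convolution: one 3x3 kernel per output channel, shared over all spatial locations.
conv = nn.Conv2d(C_in, C_out, kernel_size=3, padding=1)

# Fully connected layer mapping the flattened image to an output of the same size,
# i.e. separate parameters for every pair of input and output locations.
fc = nn.Linear(C_in * H * W, C_out * H * W)

def num_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

print(f"conv parameters:  {num_params(conv):,}")  # 3*3*3*16 + 16 = 448
print(f"dense parameters: {num_params(fc):,}")    # 3072*16384 + 16384 ~ 50.3M
```

Under these assumptions the convolutional layer needs roughly five orders of magnitude fewer parameters than the dense layer, which is exactly the kind of saving an innate prior buys in terms of training data.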
Location: ICCV 2023 @ Room E03 (Poster room W02)
| CEST | | |
| -- | -- | -- |
| 8:45 | Opening | Announcing challenge winners |
| 9:00 | Invited talk: Erik Bekkers | Grounded representation learning through equivariant deep learning |
| 9:45 | Oral presentation #1: Jayaraman J. Thiagarajan | InterAug: A Tuning-Free Augmentation Policy for Data-Efficient and Robust Object Detection |
| 9:55 | Oral presentation #2: Yeskendir Koishekenov | Geometric Contrastive Learning |
| 10:05 | Oral presentation #3: Ombretta Strafforello | Video BagNet: Short Temporal Receptive Fields Increase Robustness in Long-Term Action Recognition |
| 10:15 | Oral presentation #4: Pranjay Shyam | Adversarial Auto-Augmentation for Data-Efficient Single Image Dehazing |
| 10:25 | Coffee break | |
| 10:35 | Poster session | Accepted posters |
| 11:30 | Invited talk: Subhransu Maji | Learning representations by convex decompositions |
| 12:15 | Invited talk: Stephan Alaniz | Seeking simple explanations through shape priors |
| 13:00 | Closing remarks | |
{% include feature_row id="speakers_row" %}
{% include feature_row id="organizers_row" %}
Email us at vipriors-ewi AT tudelft DOT nl