Domain adaptation addresses the problem that in many learning scenarios the training and test distributions differ from each other. For example, learning a classifier for standard office objects from internet images is difficult, because images uploaded to the web have very different characteristics than images captured in an office environment. Furthermore, the human ability to learn difficult object categories from just a few views is often explained by an extensive use of knowledge from related classes. Our research in this area therefore aims at exploiting this additional information source by developing new techniques for transfer learning.


Instance-weighted Transfer Learning of Active Appearance Models.
Daniel Haase and Erik Rodner and Joachim Denzler.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 1426-1433. 2014.

Abstract: There has been a lot of work on face modeling, analysis, and landmark detection, with Active Appearance Models being one of the most successful techniques. A major drawback of these models is the large number of detailed annotated training examples needed for learning. Therefore, we present a transfer learning method that is able to learn from related training data using an instance-weighted transfer technique. Our method is derived using a generalization of importance sampling and in contrast to previous work we explicitly try to tackle the transfer already during learning instead of adapting the fitting process. In our studied application of face landmark detection, we efficiently transfer facial expressions from other human individuals and are thus able to learn a precise face Active Appearance Model only from neutral faces of a single individual. Our approach is evaluated on two common face datasets and outperforms previous transfer methods.
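The core idea of instance-weighted transfer can be sketched in a few lines: source examples from related individuals are re-weighted by an estimated density ratio so that they mimic the target distribution before a model is fitted. The Gaussian density-ratio estimate and the toy data below are illustrative stand-ins, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: few "target" feature vectors from one individual,
# many "source" vectors from related individuals (a different distribution).
target = rng.normal(loc=0.0, scale=1.0, size=(5, 4))
source = rng.normal(loc=0.5, scale=1.2, size=(200, 4))

def gaussian_logpdf(x, mean, var):
    # Log-density of a diagonal Gaussian, evaluated per sample.
    return -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var), axis=1)

# Importance weights as a target/source density ratio, with simple
# per-domain Gaussian fits standing in for a proper estimator.
t_mean, t_var = target.mean(0), target.var(0) + 1e-6
s_mean, s_var = source.mean(0), source.var(0) + 1e-6
log_w = gaussian_logpdf(source, t_mean, t_var) - gaussian_logpdf(source, s_mean, s_var)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Instance-weighted model building: statistics of the source data
# re-weighted to resemble the target distribution.
weighted_mean = w @ source
```

Any model whose training reduces to weighted sample statistics (means, covariances, least squares) can absorb such weights directly, which is what makes tackling the transfer during learning, rather than during fitting, attractive.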
Part Detector Discovery in Deep Convolutional Neural Networks.
Marcel Simon and Erik Rodner and Joachim Denzler.
Asian Conference on Computer Vision (ACCV). 162-177. 2014.

Abstract: Current fine-grained classification approaches often rely on a robust localization of object parts to extract localized feature representations suitable for discrimination. However, part localization is a challenging task due to the large variation of appearance and pose. In this paper, we show how pre-trained convolutional neural networks can be used for robust and efficient object part discovery and localization without the necessity to actually train the network on the current dataset. Our approach called part detector discovery (PDD) is based on analyzing the gradient maps of the network outputs and finding activation centers spatially related to annotated semantic parts or bounding boxes. This allows us not just to obtain excellent performance on the CUB200-2011 dataset, but in contrast to previous approaches also to perform detection and bird classification jointly without requiring a given bounding box annotation during testing and ground-truth parts during training.
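The gradient-map idea behind part detector discovery can be illustrated without a trained network: for a (here deliberately linear) toy "channel", the gradient of the output with respect to the input image is a spatial map, and its activation center localizes the part. The template and image below are made up for illustration only:

```python
import numpy as np

# Toy stand-in for one CNN channel: output is the inner product of the
# image with a Gaussian template centered at (row=20, col=12).
H, W = 32, 32
yy, xx = np.mgrid[0:H, 0:W]
template = np.exp(-(((yy - 20) ** 2 + (xx - 12) ** 2) / (2 * 3.0 ** 2)))

def channel_output(image):
    return float(np.sum(template * image))

image = np.ones((H, W))
score = channel_output(image)

# For a linear output, the gradient map w.r.t. the input is the template
# itself; for a real CNN it would be computed by backpropagation.
grad_map = template

# Part localization: the activation center is the position of the
# largest gradient magnitude.
center = np.unravel_index(np.argmax(np.abs(grad_map)), grad_map.shape)
```

In the paper's setting the channel is a deep network output and the discovered centers are matched against annotated semantic parts; the toy above only shows the localization step.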
Asymmetric and Category Invariant Feature Transformations for Domain Adaptation.
Judy Hoffman and Erik Rodner and Jeff Donahue and Brian Kulis and Kate Saenko.
International Journal of Computer Vision (IJCV). 109(1-2): 28-41. 2014.

Abstract: We address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce a unified flexible model for both supervised and semi-supervised learning that allows us to learn transformations between domains. Additionally, we present two instantiations of the model, one for general feature adaptation/alignment, and one specifically designed for classification. First, we show how to extend metric learning methods for domain adaptation, allowing for learning metrics independent of the domain shift and the final classifier used. Furthermore, we go beyond classical metric learning by extending the method to asymmetric, category independent transformations. Our framework can adapt features even when the target domain does not have any labeled examples for some categories, and when the target and source features have different dimensions. Finally, we develop a joint learning framework for adaptive classifiers, which outperforms competing methods in terms of multi-class accuracy and scalability. We demonstrate the ability of our approach to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types, and codebooks. The experiments show its strong performance compared to previous approaches and its applicability to large-scale scenarios.
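A minimal sketch of an asymmetric feature transformation between domains with different dimensionalities, assuming paired source/target examples and using ridge regression as a stand-in for the paper's learning formulation (all data below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired examples: source features (6-dim) and target
# features (4-dim) of the same objects; dimensions deliberately differ.
W_true = rng.normal(size=(4, 6))
Xs = rng.normal(size=(50, 6))
Xt = Xs @ W_true.T + 0.01 * rng.normal(size=(50, 4))

# Learn an asymmetric transform W: source -> target by regularized
# least squares (closed-form ridge solution).
lam = 1e-3
W = Xt.T @ Xs @ np.linalg.inv(Xs.T @ Xs + lam * np.eye(6))

# Adapted source features now live in the target feature space and can
# be pooled with target data to train a single classifier.
Xs_adapted = Xs @ W.T
err = np.linalg.norm(Xs_adapted - Xt) / np.linalg.norm(Xt)
```

Because W is a general rectangular matrix rather than a symmetric metric, nothing forces source and target features to share a dimension or a codebook, which is the asymmetry the abstract refers to.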
ARTOS -- Adaptive Real-Time Object Detection System.
Björn Barz and Erik Rodner and Joachim Denzler.
arXiv preprint arXiv:1407.2721. 2014.

Abstract: ARTOS is all about creating, tuning, and applying object detection models with just a few clicks. In particular, ARTOS facilitates learning of models for visual object detection by eliminating the burden of having to collect and annotate a large set of positive and negative samples manually and in addition it implements a fast learning technique to reduce the time needed for the learning step. A clean and friendly GUI guides the user through the process of model creation, adaptation of learned models to different domains using in-situ images, and object detection on both offline images and images from a video stream. A library written in C++ provides the main functionality of ARTOS with a C-style procedural interface, so that it can be easily integrated with any other project.
Part Localization by Exploiting Deep Convolutional Networks.
Marcel Simon and Erik Rodner and Joachim Denzler.
ECCV Workshop on Parts and Attributes (ECCV-WS). 2014.
Interactive Adaptation of Real-Time Object Detectors.
Daniel Göhring and Judy Hoffman and Erik Rodner and Kate Saenko and Trevor Darrell.
International Conference on Robotics and Automation (ICRA). 1282-1289. 2014.

Abstract: In the following paper, we present a framework for quickly training 2D object detectors for robotic perception. Our method can be used by robotics practitioners to quickly (under 30 seconds per object) build a large-scale real-time perception system. In particular, we show how to create new detectors on the fly using large-scale internet image databases, thus allowing a user to choose among thousands of available categories to build a detection system suitable for the particular robotic application. Furthermore, we show how to adapt these models to the current environment with just a few in-situ images. Experiments on existing 2D benchmarks evaluate the speed, accuracy, and flexibility of our system.


Semi-Supervised Domain Adaptation with Instance Constraints.
Jeff Donahue and Judy Hoffman and Erik Rodner and Kate Saenko and Trevor Darrell.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 668-675. 2013.
Efficient Learning of Domain-invariant Image Representations.
Judy Hoffman and Erik Rodner and Jeff Donahue and Trevor Darrell and Kate Saenko.
International Conference on Learning Representations (ICLR). 2013.
Scalable Transform-based Domain Adaptation.
Erik Rodner and Judy Hoffman and Jeff Donahue and Trevor Darrell and Kate Saenko.
ICCV Workshop on Visual Domain Adaptation (ICCV-WS). 2013.
Transform-based Domain Adaptation for Big Data.
Erik Rodner and Judy Hoffman and Jeff Donahue and Trevor Darrell and Kate Saenko.
NIPS Workshop on New Directions in Transfer and Multi-Task Learning (NIPS-WS). 2013. Abstract version of arXiv:1308.4200.

Abstract: Images seen during test time are often not from the same distribution as images used for learning. This problem, known as domain shift, occurs when training classifiers from object-centric internet image databases and trying to apply them directly to scene understanding tasks. The consequence is often severe performance degradation and is one of the major barriers for the application of classifiers in real-world systems. In this paper, we show how to learn transform-based domain adaptation classifiers in a scalable manner. The key idea is to exploit an implicit rank constraint, originated from a max-margin domain adaptation formulation, to make optimization tractable. Experiments show that the transformation between domains can be very efficiently learned from data and easily applied to new categories.
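The scalability argument rests on the domain transform having low rank: a rank-r factorization of a d x d transform needs only O(d * r) storage and multiplication cost. The sketch below imposes the rank constraint explicitly by SVD truncation of an ordinary least-squares transform, which is only a stand-in for the paper's implicit max-margin formulation (all data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r, n = 100, 5, 300

# Synthetic data whose domain shift W_true is genuinely rank r.
A = rng.normal(size=(d, r))
B = rng.normal(size=(r, d))
W_true = A @ B / r
Xs = rng.normal(size=(n, d))
Xt = Xs @ W_true.T

# Full least-squares transform, then an explicit rank-r factorization
# via truncated SVD.
W_ls, *_ = np.linalg.lstsq(Xs, Xt, rcond=None)
W_ls = W_ls.T
U, S, Vt = np.linalg.svd(W_ls)
W_lowrank = (U[:, :r] * S[:r]) @ Vt[:r]  # best rank-r approximation

# Storing U[:, :r] and S[:r] * Vt[:r] costs O(d * r) instead of O(d^2),
# and applying the transform to a new category is two thin matmuls.
rank = np.linalg.matrix_rank(W_lowrank)
recovery_error = np.linalg.norm(W_lowrank - W_true)
```

Because the transform is learned once per domain pair and not per category, applying it to previously unseen categories requires no retraining, matching the last claim of the abstract.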
Towards Adapting ImageNet to Reality: Scalable Domain Adaptation with Implicit Low-rank Transformations.
Erik Rodner and Judy Hoffman and Jeff Donahue and Trevor Darrell and Kate Saenko.
arXiv preprint arXiv:1308.4200. 2013.


Lernen mit wenigen Beispielen für die visuelle Objekterkennung (Learning with Few Examples for Visual Object Recognition).
Erik Rodner.
Ausgezeichnete Informatikdissertationen 2011. 2012. In German.


Learning with Few Examples for Binary and Multiclass Classification Using Regularization of Randomized Trees.
Erik Rodner and Joachim Denzler.
Pattern Recognition Letters. 32(2): 244-251. 2011.
Learning from Few Examples for Visual Recognition Problems.
Erik Rodner.
2011.


One-Shot Learning of Object Categories using Dependent Gaussian Processes.
Erik Rodner and Joachim Denzler.
Annual Symposium of the German Association for Pattern Recognition (DAGM). 232-241. 2010.


Learning with Few Examples by Transferring Feature Relevance.
Erik Rodner and Joachim Denzler.
Annual Symposium of the German Association for Pattern Recognition (DAGM). 252-261. 2009.


Learning with Few Examples using a Constrained Gaussian Prior on Randomized Trees.
Erik Rodner and Joachim Denzler.
Vision, Modelling, and Visualization Workshop (VMV). 159-168. 2008.