
ArXiv cs.CV --Tue, 23 Jul 2019

1.Automatic detection of multiple pathologies in fundus photographs using spin-off learning ⬇️

In recent decades, large datasets of fundus photographs have been collected in diabetic retinopathy (DR) screening networks. Through deep learning, these datasets were used to train automatic detectors for DR and a few other frequent pathologies, with the goal of automating screening. One challenge has limited the adoption of such systems so far: automatic detectors ignore rare conditions that ophthalmologists currently detect. To address this limitation, we propose a new machine learning (ML) framework, called spin-off learning, for the automatic detection of rare conditions. This framework extends convolutional neural networks (CNNs), trained for frequent conditions, with an unsupervised probabilistic model for rare condition detection. Spin-off learning is based on the observation that CNNs often perceive photographs containing the same anomalies as similar, even though these CNNs were trained to detect unrelated conditions. This observation, made with the t-SNE visualization tool, led us to include t-SNE in our probabilistic model. Spin-off learning supports heatmap generation, so the detected anomalies can be highlighted in images for decision support. Experiments on a dataset of more than 160,000 screening examinations from the OPHDIAT screening network show that spin-off learning can detect 37 conditions, out of 41, with an area under the ROC curve (AUC) greater than 0.8 (average AUC: 0.938). In particular, spin-off learning significantly outperforms other candidate ML frameworks for detecting rare conditions: multitask learning, transfer learning and one-shot learning. We expect these richer predictions to trigger the adoption of automated eye pathology screening, which will revolutionize clinical practice in ophthalmology.
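The abstract leaves the probabilistic model unspecified; as a loose illustration only (not the paper's implementation), the sketch below fits one kernel-density model per condition over stand-in CNN feature vectors and scores a query image by its log-density under each condition, with t-SNE used purely to visualize the feature space:

```python
# Hypothetical sketch: random vectors stand in for CNN features; the density-scoring
# idea is an assumption inspired by the abstract, not the published method.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
features = {c: rng.normal(loc=i, size=(200, 64))        # per-condition feature sets
            for i, c in enumerate(["condition_A", "condition_B"])}

# One density model per condition, fitted in CNN-feature space.
models = {c: KernelDensity(bandwidth=1.0).fit(x) for c, x in features.items()}

query = rng.normal(loc=0.0, size=(1, 64))               # feature of a query photograph
scores = {c: m.score_samples(query)[0] for c, m in models.items()}
print(scores)                                           # higher log-density = more typical

# t-SNE (used by the paper to observe feature similarity) gives a 2D view of the
# features; sklearn's TSNE has no out-of-sample transform, so it is for inspection only.
embedding = TSNE(n_components=2, random_state=0).fit_transform(
    np.vstack(list(features.values())))
```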

2.Multi-Class Lane Semantic Segmentation using Efficient Convolutional Networks ⬇️

Lane detection plays an important role in self-driving vehicles. Several studies leverage a semantic segmentation network to extract robust lane features, but few of them can distinguish different types of lanes. In this paper, we focus on the problem of multi-class lane semantic segmentation. Based on the observation that a lane is a small, narrow object in a road scene image, we propose two techniques, Feature Size Selection (FSS) and Degressive Dilation Block (DD Block). FSS allows a network to extract thin lane features using appropriate feature sizes. To acquire fine-grained spatial information, the DD Block is made of a series of dilated convolutions with degressive dilation rates. Experimental results show that the proposed techniques provide a clear improvement in accuracy, while achieving the same or faster inference speed than the baseline system and running in real time on high-resolution images.

3.k-t NEXT: Dynamic MR Image Reconstruction Exploiting Spatio-temporal Correlations ⬇️

Dynamic magnetic resonance imaging (MRI) exhibits high correlations in k-space and time. In order to accelerate the dynamic MR imaging and to exploit k-t correlations from highly undersampled data, here we propose a novel deep learning based approach for dynamic MR image reconstruction, termed k-t NEXT (k-t NEtwork with X-f Transform). In particular, inspired by traditional methods such as k-t BLAST and k-t FOCUSS, we propose to reconstruct the true signals from aliased signals in x-f domain to exploit the spatio-temporal redundancies. Building on that, the proposed method then learns to recover the signals by alternating the reconstruction process between the x-f space and image space in an iterative fashion. This enables the network to effectively capture useful information and jointly exploit spatio-temporal correlations from both complementary domains. Experiments conducted on highly undersampled short-axis cardiac cine MRI scans demonstrate that our proposed method outperforms the current state-of-the-art dynamic MR reconstruction approaches both quantitatively and qualitatively.

4.Satellite-Net: Automatic Extraction of Land Cover Indicators from Satellite Imagery by Deep Learning ⬇️

In this paper we address the challenge of land cover classification for satellite images via Deep Learning (DL). Land cover classification aims to detect the physical characteristics of the territory and estimate the percentage of land occupied by a certain category of entities: vegetation, residential buildings, industrial areas, forest areas, rivers, lakes, etc. DL is a new paradigm for Big Data analytics and in particular for Computer Vision. The application of DL to image classification for land cover purposes has great potential owing to its high degree of automation and computing performance. In particular, the invention of Convolutional Neural Networks (CNNs) was fundamental to the advances in this field. In [1], the Satellite Task Team of the UN Global Working Group describes the results achieved so far with respect to the use of earth observation for Official Statistics. However, in that study, CNNs had not yet been explored for automatic classification of imagery. This work investigates the use of CNNs for the estimation of land cover indicators, providing evidence of first promising results. In particular, the paper proposes a customized model, called Satellite-Net, able to reach an accuracy level of up to 98% on test sets.

5.A Survey of Deep Learning-based Object Detection ⬇️

Object detection is one of the most important and challenging branches of computer vision. It has been widely applied in people's lives, in areas such as security monitoring and autonomous driving, with the purpose of locating instances of semantic objects of a certain class. With the rapid development of deep learning networks for detection tasks, the performance of object detectors has been greatly improved. To understand the main development status of the object detection pipeline thoroughly and deeply, in this survey we first analyze the methods of existing typical detection models and describe the benchmark datasets. Afterwards, and primarily, we provide a comprehensive overview of a variety of object detection methods in a systematic manner, covering one-stage and two-stage detectors. Moreover, we list traditional and new applications. Some representative branches of object detection are analyzed as well. Finally, we discuss the architecture of exploiting these object detection methods to build an effective and efficient system and point out a set of development trends to better follow the state-of-the-art algorithms and further research.

6.Deep Learning Approaches for Image Retrieval and Pattern Spotting in Ancient Documents ⬇️

This paper describes two approaches for content-based image retrieval and pattern spotting in document images using deep learning. The first approach uses a pre-trained CNN model to cope with the lack of training data, which is fine-tuned to achieve a compact yet discriminant representation of queries and image candidates. The second approach uses a Siamese Convolutional Neural Network trained on a previously prepared subset of image pairs from the ImageNet dataset to provide the similarity-based feature maps. In both methods, the learned representation scheme considers feature maps of different sizes, which are evaluated in terms of retrieval performance. A robust experimental protocol using two public datasets (Tobacco-800 and DocExplore) has shown that the proposed methods compare favorably against state-of-the-art document image retrieval and pattern spotting methods.

7.Markerless Augmented Advertising for Sports Videos ⬇️

Markerless augmented reality can be a challenging computer vision task, especially in live broadcast settings and in the absence of information related to the video capture such as the intrinsic camera parameters. This typically requires the assistance of a skilled artist, along with the use of advanced video editing tools in a post-production environment. We present an automated video augmentation pipeline that identifies textures of interest and overlays an advertisement onto these regions. We constrain the advertisement to be placed in a way that is aesthetic and natural. The aim is to augment the scene such that there is no longer a need for commercial breaks. In order to achieve seamless integration of the advertisement with the original video we build a 3D representation of the scene, place the advertisement in 3D, and then project it back onto the image plane. After successful placement in a single frame, we use homography-based, shape-preserving tracking such that the advertisement appears perspective correct for the duration of a video clip. The tracker is designed to handle smooth camera motion and shot boundaries.

8.Domain-Specific Priors and Meta Learning for Low-shot First-Person Action Recognition ⬇️

The lack of large-scale real datasets with annotations makes transfer learning a necessity for video activity understanding. Within this scope, we aim at developing an effective method for low-shot transfer learning for first-person action classification. We leverage independently trained local visual cues to learn representations that can be transferred from a source domain providing primitive action labels to a target domain with only a handful of examples. Such visual cues include object-object interactions, hand grasps and motion within regions that are a function of hand locations. We suggest a framework based on meta-learning to appropriately extract the distinctive and domain-invariant components of the deployed visual cues, so as to be able to transfer action classification models across public datasets captured with different scene configurations. We thoroughly evaluate our methodology and report promising results over state-of-the-art action classification approaches for both inter-class and inter-dataset transfer.

9.Visualizing the Invisible: Occluded Vehicle Segmentation and Recovery ⬇️

In this paper, we propose a novel iterative multi-task framework to complete the segmentation mask of an occluded vehicle and recover the appearance of its invisible parts. In particular, to improve the quality of the segmentation completion, we present two coupled discriminators and introduce an auxiliary 3D model pool for sampling authentic silhouettes as adversarial samples. In addition, we propose a two-path structure with a shared network to enhance the appearance recovery capability. By iteratively performing the segmentation completion and the appearance recovery, the results will be progressively refined. To evaluate our method, we present a dataset, the Occluded Vehicle dataset, containing synthetic and real-world occluded vehicle images. We conduct comparison experiments on this dataset and demonstrate that our model outperforms the state-of-the-art in tasks of recovering segmentation mask and appearance for occluded vehicles. Moreover, we also demonstrate that our appearance recovery approach can benefit the occluded vehicle tracking in real-world videos.

10.DeepIris: Iris Recognition Using A Deep Learning Approach ⬇️

Iris recognition has been an active research area during the last few decades, because of its wide applications in security, from airports to homeland security border control. Different features and algorithms have been proposed for iris recognition in the past. In this paper, we propose an end-to-end deep learning framework for iris recognition based on a residual convolutional neural network (CNN), which can jointly learn the feature representation and perform recognition. We train our model on a well-known iris recognition dataset using only a few training images from each class, and show promising results and improvements over previous approaches. We also present a visualization technique which is able to detect the important areas in iris images that most impact the recognition results. We believe this framework can be widely used for other biometric recognition tasks, helping to build more scalable and accurate systems.

11.Trends in Integration of Vision and Language Research: A Survey of Tasks, Datasets, and Methods ⬇️

Integration of vision and language tasks has seen significant growth in recent times due to a surge of interest from multi-disciplinary communities such as deep learning, computer vision, and natural language processing. In this survey, we focus on ten different vision and language integration tasks in terms of their problem formulation, methods, existing datasets, evaluation measures, and comparison of results achieved with the corresponding state-of-the-art methods. This goes beyond earlier surveys, which are either task-specific or concentrate only on one type of visual content, i.e., image or video. We then conclude the survey by discussing some possible future directions for integration of vision and language research.

12.An Efficient Method of Detection and Recognition in Remote Sensing Image Based on multi-angle Region of Interests ⬇️

Deep learning technology is now widely used in the field of image recognition, but mainly for the recognition and detection of ordinary pictures and common scenes. As special images, remote sensing images differ from ordinary ones in shooting angles and shooting methods, which lets them play an irreplaceable role in some areas. In this paper, a new model for object detection and recognition in remote sensing images is proposed, based on a deep convolutional neural network that provides multi-level image information, combined with an RPN (Region Proposal Network) that generates multi-angle ROIs (Regions of Interest). In the experiments, it achieves better results than traditional methods, demonstrating that the proposed model has large potential for application in remote sensing image recognition.

13.A-Phase classification using convolutional neural networks ⬇️

A series of short events, called A-phases, can be observed in the human electroencephalogram (EEG) during NREM sleep. These events can be classified into three groups (A1, A2 and A3) according to their spectral contents, and are thought to play a role in the transitions between the different sleep stages. A-phase detection and classification is usually performed manually by a trained expert, but it is a tedious and time-consuming task. In the past two decades, various researchers have designed algorithms to automatically detect and classify the A-phases with varying degrees of success, but the problem remains open. In this paper, a different approach is proposed: instead of attempting to design a general classifier for all subjects, we propose to train ad-hoc classifiers for each subject using as little data as possible, in order to drastically reduce the amount of time required from the expert. The proposed classifiers are based on deep convolutional neural networks using the log-spectrogram of the EEG signal as input data. Results are encouraging, achieving average accuracies of 80.31% when discriminating between A-phases and non-A-phases, and 71.87% when classifying among A-phase sub-types, with only 25% of the total A-phases used for training. When additional expert-validated data is considered, the sub-type classification accuracy increases to 78.92%. These results show that a semi-automatic annotation system with assistance from an expert could provide a better alternative to fully automatic classifiers.
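For readers unfamiliar with the input representation, the sketch below computes a log-spectrogram of a synthetic EEG segment with SciPy; the sampling rate and window parameters are illustrative assumptions, not the paper's settings:

```python
# Minimal sketch of the CNN input described above. The sampling rate, segment length,
# and STFT window sizes are assumptions for illustration.
import numpy as np
from scipy.signal import spectrogram

fs = 128                                    # assumed EEG sampling rate (Hz)
eeg = np.random.randn(fs * 30)              # placeholder 30-second EEG segment

f, t, Sxx = spectrogram(eeg, fs=fs, nperseg=fs, noverlap=fs // 2)
log_spec = np.log(Sxx + 1e-10)              # small epsilon avoids log(0)
print(log_spec.shape)                       # (freq_bins, time_frames), fed to the CNN
```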

14.Probabilistic Point Cloud Reconstructions for Vertebral Shape Analysis ⬇️

We propose an auto-encoding network architecture for point clouds (PC) capable of extracting shape signatures without supervision. Building on this, we (i) design a loss function capable of modelling data variance on PCs, which are unstructured, and (ii) regularise the latent space as in a variational auto-encoder, both of which increase the auto-encoders' descriptive capacity while making them probabilistic. Evaluating the reconstruction quality of our architectures, we employ them for detecting vertebral fractures without any supervision. By learning to efficiently reconstruct only healthy vertebrae, fractures are detected as anomalous reconstructions. Evaluating on a dataset containing $\sim$1500 vertebrae, we achieve an area under the ROC curve of $>$75%, without using intensity-based features.
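The anomaly signal here is reconstruction quality; as a hedged sketch (the paper's actual loss is not given in the abstract), a symmetric Chamfer distance between the input cloud and its reconstruction is one standard way to score this:

```python
# Illustrative anomaly score for point-cloud reconstructions; random arrays stand in
# for a vertebra cloud and the auto-encoder's output. Not the authors' loss function.
import numpy as np

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise dists
    return d.min(axis=1).mean() + d.min(axis=0).mean()

cloud = np.random.rand(512, 3)                      # hypothetical vertebra point cloud
recon = cloud + 0.01 * np.random.randn(512, 3)      # stand-in for a reconstruction
print(chamfer(cloud, recon))                        # a high score would flag a fracture
```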

15.Quadruplet Selection Methods for Deep Embedding Learning ⬇️

Recognition of objects with subtle differences has been used in many practical applications, such as car model recognition and maritime vessel identification. For discrimination of the objects in fine-grained detail, we focus on deep embedding learning by using a multi-task learning framework, in which the hierarchical labels (coarse and fine labels) of the samples are utilized both for classification and a quadruplet-based loss function. In order to improve the recognition strength of the learned features, we present a novel feature selection method specifically designed for four training samples of a quadruplet. By experiments, it is observed that the selection of very hard negative samples with relatively easy positive ones from the same coarse and fine classes significantly increases some performance metrics in a fine-grained dataset when compared to selecting the quadruplet samples randomly. The feature embedding learned by the proposed method achieves favorable performance against its state-of-the-art counterparts.

16.RGB-D image-based Object Detection: from Traditional Methods to Deep Learning Techniques ⬇️

Object detection from RGB images is a long-standing problem in image processing and computer vision. It has applications in various domains including robotics, surveillance, human-computer interaction, and medical diagnosis. With the availability of low cost 3D scanners, a large number of RGB-D object detection approaches have been proposed in the past years. This chapter provides a comprehensive survey of the recent developments in this field. We structure the chapter into two parts; the focus of the first part is on techniques that are based on hand-crafted features combined with machine learning algorithms. The focus of the second part is on the more recent work, which is based on deep learning. Deep learning techniques, coupled with the availability of large training datasets, have now revolutionized the field of computer vision, including RGB-D object detection, achieving an unprecedented level of performance. We survey the key contributions, summarize the most commonly used pipelines, discuss their benefits and limitations, and highlight some important directions for future research.

17.Adapting Computer Vision Algorithms for Omnidirectional Video ⬇️

Omnidirectional (360°) video has become quite popular because it provides a highly immersive viewing experience. For computer vision algorithms, it poses several challenges, like the special (equirectangular) projection commonly employed and the huge image size. In this work, we give a high-level overview of these challenges and outline strategies for adapting computer vision algorithms to the specifics of omnidirectional video.

18.A Single Image based Head Pose Estimation Method with Spherical Parameterization ⬇️

Head pose estimation plays a vital role in various applications, e.g., driver-assistance systems, human-computer interaction, virtual reality technology, and so on. We propose a novel geometry-based algorithm for accurately estimating the head pose from a single 2D face image at a very low computational cost. Specifically, the rectangular coordinates of only four non-coplanar feature points from a predefined 3D facial model, as well as the corresponding ones automatically/manually extracted from a 2D face image, are first normalized to exclude the effect of external factors (i.e., scale factor and translation parameters). Then, the four normalized 3D feature points are represented in spherical coordinates with reference to the sphere they uniquely determine. Due to the spherical parameterization, the coordinates of the feature points can then be morphed along all three directions in the rectangular coordinates effectively. Finally, the rotation matrix indicating the head pose is obtained by minimizing the Euclidean distance between the normalized 2D feature points and the 2D re-projections of the morphed 3D feature points. Comprehensive experimental results on two popular databases, i.e., Pointing'04 and Biwi Kinect, demonstrate that the proposed algorithm can estimate head poses with higher accuracy and lower run time than state-of-the-art geometry-based methods. Even compared with state-of-the-art learning-based methods or geometry-based methods with additional depth information, our algorithm still produces comparable performance.

19.Image-and-Spatial Transformer Networks for Structure-Guided Image Registration ⬇️

Image registration with deep neural networks has become an active field of research and an exciting avenue for a long-standing problem in medical imaging. The goal is to learn a complex function that maps the appearance of input image pairs to parameters of a spatial transformation in order to align corresponding anatomical structures. We argue and show that the current direct, non-iterative approaches are sub-optimal, in particular if we seek accurate alignment of Structures-of-Interest (SoI). Information about SoI is often available at training time, for example, in the form of segmentations or landmarks. We introduce a novel, generic framework, Image-and-Spatial Transformer Networks (ISTNs), to leverage SoI information, allowing us to learn new image representations that are optimised for the downstream registration task. Thanks to these representations we can employ a test-specific, iterative refinement over the transformation parameters which yields highly accurate registration even with very limited training data. Performance is demonstrated on pairwise 3D brain registration and illustrative synthetic data.

20.FD-FCN: 3D Fully Dense and Fully Convolutional Network for Semantic Segmentation of Brain Anatomy ⬇️

In this paper, a 3D patch-based fully dense and fully convolutional network (FD-FCN) is proposed for fast and accurate segmentation of subcortical structures in T1-weighted magnetic resonance images. Developed from the seminal FCN with an end-to-end learning-based approach and constructed with newly designed dense blocks including a dense fully-connected layer, the proposed FD-FCN differs from other FCN-based methods and outperforms them in both efficiency and accuracy. Compared with the U-shaped architecture, FD-FCN discards the upsampling path for model fitness. To alleviate the problem of parameter explosion, the inputs of dense blocks are no longer directly passed to subsequent layers. This architecture greatly reduces both memory and time consumption during training. Although FD-FCN is slimmed down, it achieves better dense-inference capability than other conventional networks. This benefits from the construction of the network architecture and the incorporation of the redesigned dense blocks. The multi-scale FD-FCN models both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. In addition, dense blocks are rebuilt to enlarge the receptive fields without significantly increasing parameters, and spectral coordinates are exploited for spatial context of the original input patch. The experiments were performed on the IBSR dataset, and FD-FCN produced an accurate segmentation result with an overall Dice overlap value of 89.81% for 11 brain structures in 53 seconds, at least a 3.66% absolute improvement in Dice accuracy over state-of-the-art 3D FCN-based methods.

21.Polyp Detection and Segmentation using Mask R-CNN: Does a Deeper Feature Extractor CNN Always Perform Better? ⬇️

Automatic polyp detection and segmentation are highly desirable for colon screening due to the roughly 25% polyp miss rate of physicians during colonoscopy. However, this computerization is still an unsolved problem due to various polyp-like structures in the colon and high interclass polyp variations in terms of size, color, shape, and texture. In this paper, we adapt Mask R-CNN and evaluate its performance with different modern convolutional neural networks (CNNs) as its feature extractor for polyp detection and segmentation. We investigate the performance improvement of each feature extractor by adding extra polyp images to the training dataset, to answer whether we need deeper and more complex CNNs or a better dataset for training in automatic polyp detection and segmentation. Finally, we propose an ensemble method for further performance improvement. We evaluate the performance on the 2015 MICCAI polyp detection dataset. The best results achieved are 72.59% recall, 80% precision, 70.42% Dice, and 61.24% Jaccard. The model achieved state-of-the-art segmentation performance.

22.Sensor Aware Lidar Odometry ⬇️

A lidar odometry method is proposed that integrates knowledge about the physics of the sensor into the computation. A model of measurement error enables higher precision in estimating the point normal covariance. Adjacent laser beams are used in an outlier correspondence rejection scheme. The method ranks on the KITTI leaderboard with 1.37% positioning error; 3.67% is achieved in comparison with the LOAM method on the internal dataset.

23.Extended Local Binary Patterns for Efficient and Robust Spontaneous Facial Micro-Expression Recognition ⬇️

Facial Micro-Expressions (MEs) are spontaneous, involuntary facial movements that occur when a person experiences an emotion but deliberately or unconsciously attempts to conceal his or her genuine emotions. Recently, ME recognition has attracted increasing attention due to its potential applications such as clinical diagnosis, business negotiation, interrogations and security. However, it is expensive to build large-scale ME datasets, mainly due to the difficulty of naturally inducing spontaneous MEs. This limits the application of deep learning techniques, which require lots of training data. In this paper, we propose a simple, efficient yet robust descriptor called Extended Local Binary Patterns on Three Orthogonal Planes (ELBPTOP) for ME recognition. ELBPTOP consists of three complementary binary descriptors: LBPTOP and two novel ones, Radial Difference LBPTOP (RDLBPTOP) and Angular Difference LBPTOP (ADLBPTOP), which explore the local second-order information along the radial and angular directions contained in ME video sequences. ELBPTOP is a novel ME descriptor inspired by the unique and subtle facial movements. It is computationally efficient and only marginally increases the cost of computing LBPTOP, yet is extremely effective for ME recognition. In addition, by first introducing Whitened Principal Component Analysis (WPCA) to ME recognition, we can further obtain more compact and discriminative feature representations and achieve significant computational savings. Extensive experimental evaluation on three popular spontaneous ME datasets, SMIC, CASMEII and SAMM, shows that our proposed ELBPTOP approach significantly outperforms the previous state of the art on all three evaluated datasets. Our proposed ELBPTOP achieves 73.94% on CASMEII, which is 6.6% higher than the state of the art on this dataset. More impressively, ELBPTOP increases recognition accuracy from 44.7% to 63.44% on the SAMM dataset.

24.Multi-scale Cell Instance Segmentation with Keypoint Graph based Bounding Boxes ⬇️

Most existing methods handle cell instance segmentation problems directly, without relying on additional detection boxes. These methods generally fail to separate touching cells due to the lack of global understanding of the objects. In contrast, box-based instance segmentation solves this problem by combining object detection with segmentation. However, existing methods typically utilize anchor box-based detectors, which leads to inferior instance segmentation performance due to the class imbalance issue. In this paper, we propose a new box-based cell instance segmentation method. In particular, we first detect the five pre-defined points of a cell via keypoint detection. Then we group these points according to a keypoint graph and subsequently extract the bounding box for each cell. Finally, cell segmentation is performed on feature maps within the bounding boxes. We validate our method on two cell datasets with distinct object shapes, and empirically demonstrate the superiority of our method compared to other instance segmentation techniques. Code is available at: this https URL.
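The final grouping step reduces to simple geometry: once a cell's five keypoints are grouped, its box is their extent. A minimal sketch (the margin is an assumed parameter, not from the paper):

```python
# Hypothetical illustration of deriving a per-cell bounding box from grouped keypoints.
import numpy as np

def box_from_keypoints(kpts: np.ndarray, margin: float = 2.0):
    """kpts: (5, 2) array of (x, y) keypoints for one cell -> (x0, y0, x1, y1)."""
    x0, y0 = kpts.min(axis=0) - margin
    x1, y1 = kpts.max(axis=0) + margin
    return float(x0), float(y0), float(x1), float(y1)

cell_kpts = np.array([[10, 12], [30, 11], [20, 25], [12, 24], [28, 26]], dtype=float)
print(box_from_keypoints(cell_kpts))   # box handed to the segmentation stage
```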

25.Feature Graph Learning for 3D Point Cloud Denoising ⬇️

Identifying an appropriate underlying graph kernel that reflects pairwise similarities is critical in many recent graph spectral signal restoration schemes, including image denoising, dequantization, and contrast enhancement. Existing graph learning algorithms compute the most likely entries of a properly defined graph Laplacian matrix $\mathbf{L}$, but require a large number of signal observations $\mathbf{z}$'s for a stable estimate. In this work, we assume instead the availability of a relevant feature vector $\mathbf{f}_i$ per node $i$, from which we compute an optimal feature graph via optimization of a feature metric. Specifically, we alternately optimize the diagonal and off-diagonal entries of a Mahalanobis distance matrix $\mathbf{M}$ by minimizing the graph Laplacian regularizer (GLR) $\mathbf{z}^{\top} \mathbf{L} \mathbf{z}$, where edge weight is $w_{i,j} = \exp\{-(\mathbf{f}_i - \mathbf{f}_j)^{\top} \mathbf{M} (\mathbf{f}_i - \mathbf{f}_j)\}$, given a single observation $\mathbf{z}$. We optimize diagonal entries via proximal gradient (PG), where we constrain $\mathbf{M}$ to be positive definite (PD) via linear inequalities derived from the Gershgorin circle theorem. To optimize off-diagonal entries, we design a block descent algorithm that iteratively optimizes one row and column of $\mathbf{M}$. To keep $\mathbf{M}$ PD, we constrain the Schur complement of sub-matrix $\mathbf{M}_{2,2}$ of $\mathbf{M}$ to be PD when optimizing via PG. Our algorithm mitigates full eigen-decomposition of $\mathbf{M}$, thus ensuring fast computation speed even when feature vector $\mathbf{f}_i$ has high dimension. To validate its usefulness, we apply our feature graph learning algorithm to the problem of 3D point cloud denoising, resulting in state-of-the-art performance compared to competing schemes in extensive experiments.
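To make the abstract's quantities concrete, this numeric sketch (with random stand-ins for the features, $\mathbf{M}$, and $\mathbf{z}$) evaluates the Mahalanobis edge weights, the graph Laplacian, and the GLR objective:

```python
# Worked example of w_ij = exp(-(f_i - f_j)^T M (f_i - f_j)), L, and z^T L z.
# All inputs are random placeholders; the optimization of M is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
F = rng.normal(size=(n, d))            # one feature vector f_i per node
A = rng.normal(size=(d, d))
M = A @ A.T + np.eye(d)                # a positive-definite Mahalanobis matrix
z = rng.normal(size=n)                 # a single signal observation

W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            diff = F[i] - F[j]
            W[i, j] = np.exp(-diff @ M @ diff)

L = np.diag(W.sum(axis=1)) - W         # combinatorial graph Laplacian
print(z @ L @ z)                       # the GLR the paper minimizes over M
```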

26.Real-time Background-aware 3D Textureless Object Pose Estimation ⬇️

In this work, we present a modified fuzzy decision forest for real-time 3D object pose estimation based on typical template representation. We employ an extra preemptive background rejector node in the decision forest framework to terminate the examination of background locations as early as possible, resulting in a significant improvement in efficiency. Our approach is also scalable to large datasets, since the tree structure naturally provides a logarithmic time complexity in the number of objects. Finally, we further reduce the validation stage with a fast breadth-first scheme. The results show that our approach outperforms the state of the art in efficiency while maintaining comparable accuracy.

27.DetectFusion: Detecting and Segmenting Both Known and Unknown Dynamic Objects in Real-time SLAM ⬇️

We present DetectFusion, an RGB-D SLAM system that runs in real-time and can robustly handle semantically known and unknown objects that can move dynamically in the scene. Our system detects, segments and assigns semantic class labels to known objects in the scene, while tracking and reconstructing them even when they move independently in front of the monocular camera. In contrast to related work, we achieve real-time computational performance on semantic instance segmentation with a novel method combining 2D object detection and 3D geometric segmentation. In addition, we propose a method for detecting and segmenting the motion of semantically unknown objects, thus further improving the accuracy of camera tracking and map reconstruction. We show that our method performs on par or better than previous work in terms of localization and object reconstruction accuracy, while achieving about 20 FPS even if the objects are segmented in each frame.

28.Class-specific Anchoring Proposal for 3D Object Recognition in LIDAR and RGB Images ⬇️

Detecting objects in a two-dimensional setting is often insufficient in the context of real-life applications where the surrounding environment needs to be accurately recognized and oriented in three dimensions (3D), such as in the case of autonomous driving vehicles. Therefore, accurately and efficiently detecting objects in the three-dimensional setting is becoming increasingly relevant to a wide range of industrial applications, and thus is progressively attracting the attention of researchers. Building systems to detect objects in 3D is a challenging task, though, because it relies on the multi-modal fusion of data derived from different sources. In this paper, we study the effects of anchoring using the current state-of-the-art 3D object detector and propose a Class-specific Anchoring Proposal (CAP) strategy based on clustering anchors by object sizes and aspect ratios. The proposed anchoring strategy significantly increases detection accuracy: by 7.19%, 8.13% and 8.8% on the Easy, Moderate and Hard settings of the pedestrian class; by 2.19%, 2.17% and 1.27% on the Easy, Moderate and Hard settings of the car class; and by 12.1% on the Easy setting of the cyclist class. We also show that the clustering in the anchoring process significantly enhances the performance of the region proposal network in proposing regions of interest. Finally, we propose the best cluster numbers for each class of objects in the KITTI dataset, which significantly improve the performance of the detection model.

29.Shallow Unorganized Neural Networks using Smart Neuron Model for Visual Perception ⬇️

The recent success of Deep Neural Networks (DNNs) has revealed the significant capability of neuromorphic computing in many challenging applications. Although DNNs are derived from emulating biological neurons, there still exist doubts over whether or not DNNs are the final and best model to emulate the mechanism of human intelligence. In particular, there are two discrepancies between computational DNN models and the observed facts of biological neurons. First, human neurons are interconnected randomly, while DNNs need carefully-designed architectures to work properly. Second, human neurons usually have a long spiking latency (~100ms) which implies that not many layers can be involved in making a decision, while DNNs could have hundreds of layers to guarantee high accuracy. In this paper, we propose a new computational neuromorphic model, namely shallow unorganized neural networks (SUNNs), in contrast to DNNs. The proposed SUNNs differ from standard ANNs or DNNs in three fundamental aspects: 1) SUNNs are based on an adaptive neuron cell model, Smart Neurons, that allows each neuron to adaptively respond to its inputs rather than carrying out a fixed weighted-sum operation like the neuron model in ANNs/DNNs; 2) SUNNs cope with computational tasks using only shallow architectures; 3) SUNNs have a natural topology with random interconnections, as the human brain does, and as proposed by Turing's B-type unorganized machines. We implemented the proposed SUNN architecture and tested it on a number of unsupervised early stage visual perception tasks. Surprisingly, such shallow architectures achieved very good results in our experiments. The success of our new computational model makes it a working example of Turing's B-Type machine that can achieve comparable or better performance against the state-of-the-art algorithms.

30.TARN: Temporal Attentive Relation Network for Few-Shot and Zero-Shot Action Recognition ⬇️

In this paper we propose a novel Temporal Attentive Relation Network (TARN) for the problems of few-shot and zero-shot action recognition. At the heart of our network is a meta-learning approach that learns to compare representations of variable temporal length, that is, either two videos of different length (in the case of few-shot action recognition) or a video and a semantic representation such as a word vector (in the case of zero-shot action recognition). In contrast to other works in few-shot and zero-shot action recognition, we a) utilise attention mechanisms so as to perform temporal alignment, and b) learn a deep-distance measure on the aligned representations at video segment level. We adopt an episode-based training scheme and train our network in an end-to-end manner. The proposed method does not require any fine-tuning in the target domain or maintaining additional representations as is the case with memory networks. Experimental results show that the proposed architecture outperforms the state of the art in few-shot action recognition, and achieves competitive results in zero-shot action recognition.

31.ImageNet-trained deep neural network exhibits illusion-like response to the Scintillating Grid ⬇️

Deep neural network (DNN) models for computer vision are now capable of human-level object recognition. Consequently, similarities in the performance and vulnerabilities of DNN and human vision are of great interest. Here we characterize the response of the VGG-19 DNN to images of the Scintillating Grid visual illusion, in which white dots are perceived to be partially black. We observed a significant deviation from the expected monotonic relation between VGG-19 representational dissimilarity and dot whiteness in the Scintillating Grid. That is, a linear increase in dot whiteness leads to a non-linear increase and then, remarkably, a decrease (non-monotonicity) in representational dissimilarity. In control images, mostly monotonic relations between representational dissimilarity and dot whiteness were observed. Furthermore, the dot whiteness level corresponding to the maximal representational dissimilarity (i.e. onset of non-monotonic dissimilarity) matched closely with that corresponding to the onset of illusion perception in human observers. As such, the non-monotonic response in the DNN is a potential model correlate for human illusion perception.
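The analysis pipeline described above can be sketched as follows; the layer choice, the dissimilarity measure, and the synthetic stimulus are all assumptions here (pretrained VGG-19 weights are downloaded on first run):

```python
# Hedged sketch: compare VGG-19 feature dissimilarity between a base image and versions
# with varying "dot" whiteness, mirroring the curve the paper analyzes. A random image
# stands in for the Scintillating Grid stimulus.
import torch
from torchvision.models import vgg19

model = vgg19(pretrained=True).features.eval()   # convolutional part as feature extractor

def feats(img: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return model(img).flatten()

base = torch.rand(1, 3, 224, 224)                # stand-in for the grid stimulus
f0 = feats(base)
for whiteness in [0.2, 0.4, 0.6, 0.8, 1.0]:
    img = base.clone()
    img[:, :, 100:110, 100:110] = whiteness      # hypothetical dot region
    d = 1 - torch.nn.functional.cosine_similarity(f0, feats(img), dim=0)
    print(whiteness, float(d))                   # dissimilarity vs. dot whiteness
```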

32.signADAM: Learning Confidences for Deep Neural Networks ⬇️

In this paper, we propose a new first-order gradient-based algorithm to train deep neural networks. We first introduce the sign operation of stochastic gradients (as in sign-based methods, e.g., SIGN-SGD) into ADAM, yielding an algorithm we call signADAM. Moreover, in order to make the rates at which different features are fitted closer to one another, we define a confidence function to distinguish different components of gradients and apply it to our algorithm. It can generate sparser gradients than existing algorithms do. We call this new algorithm signADAM++. In particular, both our algorithms are easy to implement and can speed up the training of various deep neural networks. The motivation of signADAM++ is to preferably learn features from the most different samples by updating large and useful gradients, regardless of useless information in stochastic gradients. We also establish theoretical convergence guarantees for our algorithms. Empirical results on various datasets and models show that our algorithms yield much better performance than many state-of-the-art algorithms, including SIGN-SGD, SIGNUM and ADAM. We also analyze the performance from multiple perspectives, including the loss landscape, and develop an adaptive method to further improve generalization. The source code is available at this https URL.
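As the abstract states it, signADAM amounts to feeding the sign of the stochastic gradient into an otherwise standard Adam update; a minimal sketch of that reading (hyperparameters are Adam defaults rather than the paper's, and the confidence function of signADAM++ is omitted):

```python
# Sketch of a sign-then-Adam step on a toy quadratic; not the authors' released code.
import numpy as np

def signadam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    g = np.sign(grad)                       # sign operation on the stochastic gradient
    m = b1 * m + (1 - b1) * g               # first-moment estimate
    v = b2 * v + (1 - b2) * g * g           # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias corrections, as in Adam
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

target = np.array([1.0, -2.0, 0.5, 3.0])
w = np.zeros(4); m = np.zeros(4); v = np.zeros(4)
for t in range(1, 1001):
    grad = 2 * (w - target)                 # gradient of ||w - target||^2
    w, m, v = signadam_step(w, grad, m, v, t)
print(w)                                    # moves steadily toward the toy optimum
```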

33.Image Classification with Hierarchical Multigraph Networks ⬇️

Graph Convolutional Networks (GCNs) are a class of general models that can learn from graph structured data. Despite being general, GCNs are admittedly inferior to convolutional neural networks (CNNs) when applied to vision tasks, mainly due to the lack of domain knowledge that is hardcoded into CNNs, such as spatially oriented translation invariant filters. However, a great advantage of GCNs is the ability to work on irregular inputs, such as superpixels of images. This could significantly reduce the computational cost of image reasoning tasks. Another key advantage inherent to GCNs is the natural ability to model multirelational data. Building upon these two promising properties, in this work, we show best practices for designing GCNs for image classification; in some cases even outperforming CNNs on the MNIST, CIFAR-10 and PASCAL image datasets.

34.An Interpretable Compression and Classification System: Theory and Applications ⬇️

This study proposes a low-complexity interpretable classification system. The proposed system contains three main modules: feature extraction, feature reduction, and classification. All of them are linear. Thanks to the linear property, the extracted and reduced features can be inverted back to the original data, like a linear transform such as the Fourier transform, so that one can quantify and visualize the contribution of individual features to the original data. Also, the reduced features and reversibility naturally endow the proposed system with data compression ability. This system can significantly compress data with a small percentage deviation between the compressed and the original data. At the same time, when the compressed data is used for classification, it still achieves high testing accuracy. Furthermore, we observe that the extracted features of the proposed system can be approximated by uncorrelated Gaussian random variables. Hence, classical theory in estimation and detection can be applied for classification. This motivates us to propose a MAP (maximum a posteriori) based classification method. As a result, the extracted features and the corresponding performance have statistical meaning and are mathematically interpretable. Simulation results show that the proposed classification system not only enjoys significantly reduced training and testing time but also high testing accuracy compared to conventional schemes.

35.Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalization of a Musculoskeletal Model ⬇️

We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891 +/- 0.016 (mean +/- std) and an average symmetric surface distance (ASD) of 0.994 +/- 0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method which resulted in 0.845 +/- 0.031 DC and 1.556 +/- 0.444 mm ASD. We evaluated validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active-learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
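The uncertainty mechanism named above, Monte Carlo dropout, is standard enough to sketch: dropout stays active at test time, several stochastic forward passes are averaged, and per-pixel disagreement becomes the uncertainty map. A toy stand-in for the Bayesian U-Net (the architecture and pass count are assumptions):

```python
# Hedged sketch of MC-dropout inference; a tiny conv net stands in for the U-Net.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),                   # kept active at inference time
    nn.Conv2d(8, 4, 1),                    # 4 hypothetical muscle classes
)

def mc_dropout_predict(model, x, passes=20):
    model.train()                          # train mode keeps dropout stochastic
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=1) for _ in range(passes)])
    mean = probs.mean(dim=0)               # averaged class probabilities
    entropy = -(mean * (mean + 1e-10).log()).sum(dim=1)  # per-pixel uncertainty map
    return mean, entropy

x = torch.randn(1, 1, 64, 64)              # placeholder CT patch
mean, unc = mc_dropout_predict(net, x)
print(mean.shape, unc.shape)               # (1, 4, 64, 64), (1, 64, 64)
```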

36.An Efficient 3D CNN for Action/Object Segmentation in Video ⬇️

Convolutional Neural Network (CNN) based image segmentation has made great progress in recent years. However, video object segmentation remains a challenging task due to its high computational complexity. Most of the previous methods employ a two-stream CNN framework to handle spatial and motion features separately. In this paper, we propose an end-to-end encoder-decoder style 3D CNN to aggregate spatial and temporal information simultaneously for video object segmentation. To efficiently process video, we propose 3D separable convolution for the pyramid pooling module and decoder, which dramatically reduces the number of operations while maintaining the performance. Moreover, we also extend our framework to video action segmentation by adding an extra classifier to predict the action label for actors in videos. Extensive experiments on several video datasets demonstrate the superior performance of the proposed approach for action and object segmentation compared to the state-of-the-art.
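The abstract does not spell out the factorization; one common reading splits each 3×3×3 convolution into a 1×3×3 spatial convolution followed by a 3×1×1 temporal one, cutting a 27-tap kernel down to 9 + 3 taps per channel pair. A sketch under that assumption:

```python
# Assumed (spatial + temporal) factorization of a 3D convolution; not the paper's code.
import torch
import torch.nn as nn

class Separable3d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):                  # x: (batch, channels, time, height, width)
        return self.temporal(torch.relu(self.spatial(x)))

clip = torch.randn(1, 3, 8, 64, 64)        # placeholder 8-frame video clip
print(Separable3d(3, 16)(clip).shape)      # torch.Size([1, 16, 8, 64, 64])
```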

37.Human Extraction and Scene Transition utilizing Mask R-CNN ⬇️

Object detection is a trendy branch of computer vision, especially for human recognition and pedestrian detection. Recognizing the complete body of a person has always been a difficult problem. Over the years, researchers proposed various methods, and recently a breakthrough came to light in the form of Mask R-CNN. Based on Faster R-CNN, Mask R-CNN is able to generate a segmentation mask for each instance. We propose an application that extracts multiple persons and puts them into a new background image utilizing Mask R-CNN. Mask R-CNN detects masks for all types of objects in an image. Our algorithm then considers only the target person and extracts that person without obstacles, such as a dog in front of the person; the user can also select multiple persons as desired. Our algorithm is effective for both images and videos, irrespective of length, extracting the selected persons and placing them into the new background. It does not add any overhead to Mask R-CNN, running at 5 fps. We show examples of a yoga person in an image and a dancer in a dance-video frame. We hope our simple and effective approach will serve as a baseline for replacing image backgrounds and help ease future research.

38.Construct Dynamic Graphs for Hand Gesture Recognition via Spatial-Temporal Attention ⬇️

We propose a Dynamic Graph-Based Spatial-Temporal Attention (DG-STA) method for hand gesture recognition. The key idea is to first construct a fully-connected graph from a hand skeleton, where the node features and edges are then automatically learned via a self-attention mechanism that performs in both spatial and temporal domains. We further propose to leverage the spatial-temporal cues of joint positions to guarantee robust recognition in challenging conditions. In addition, a novel spatial-temporal mask is applied to significantly cut down the computational cost by 99%. We carry out extensive experiments on benchmarks (DHG-14/28 and SHREC'17) and prove the superior performance of our method compared with the state-of-the-art methods. The source code can be found at this https URL.

39.Unsupervised Segmentation of Hyperspectral Images Using 3D Convolutional Autoencoders ⬇️

Hyperspectral image analysis has become an important topic widely researched by the remote sensing community. Classification and segmentation of such imagery help understand the underlying materials within a scanned scene, since hyperspectral images convey detailed information captured in a number of spectral bands. Although deep learning has established the state of the art in the field, it still remains challenging to train well-generalizing models due to the lack of ground-truth data. In this letter, we tackle this problem and propose an end-to-end approach to segment hyperspectral images in a fully unsupervised way. We introduce a new deep architecture which couples 3D convolutional autoencoders with clustering. Our multi-faceted experimental study, performed over benchmark and real-life data, revealed that our approach delivers high-quality segmentation without any prior class labels.

40.Order Matters: Shuffling Sequence Generation for Video Prediction ⬇️

Predicting future frames in natural video sequences is a new challenge that is receiving increasing attention in the computer vision community. However, existing models suffer from severe loss of temporal information when the predicted sequence is long. Compared to previous methods focusing on generating more realistic contents, this paper extensively studies the importance of sequential order information for video generation. A novel Shuffling sEquence gEneration network (SEE-Net) is proposed that can learn to discriminate unnatural sequential orders by shuffling the video frames and comparing them to the real video sequence. Systematic experiments on three datasets with both synthetic and real-world videos manifest the effectiveness of shuffling sequence generation for video prediction in our proposed model and demonstrate state-of-the-art performance by both qualitative and quantitative evaluations. The source code is available at this https URL.
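The supervision signal described above is easy to construct: real frame orders are positives, shuffled ones negatives. A minimal sketch of that pair construction (data shapes are placeholders, and the discriminator itself is omitted):

```python
# Illustrative construction of (clip, label) pairs for an order discriminator.
import numpy as np

rng = np.random.default_rng(0)

def make_order_pair(frames: np.ndarray):
    """frames: (T, H, W, C) clip -> (real_clip, 1) and (shuffled_clip, 0)."""
    order = rng.permutation(len(frames))
    while np.array_equal(order, np.arange(len(frames))):  # guarantee a real shuffle
        order = rng.permutation(len(frames))
    return (frames, 1), (frames[order], 0)

clip = rng.random((8, 32, 32, 3))          # placeholder 8-frame clip
(real, y_real), (shuffled, y_fake) = make_order_pair(clip)
print(y_real, y_fake, shuffled.shape)
```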

41.Recurrent Connections Aid Occluded Object Recognition by Discounting Occluders ⬇️

Recurrent connections in the visual cortex are thought to aid object recognition when part of the stimulus is occluded. Here we investigate if and how recurrent connections in artificial neural networks similarly aid object recognition. We systematically test and compare architectures comprised of bottom-up (B), lateral (L) and top-down (T) connections. Performance is evaluated on a novel stereoscopic occluded object recognition dataset. The task consists of recognizing one target digit occluded by multiple occluder digits in a pseudo-3D environment. We find that recurrent models perform significantly better than their feedforward counterparts, which were matched in parametric complexity. Furthermore, we analyze how the network's representation of the stimuli evolves over time due to recurrent connections. We show that the recurrent connections tend to move the network's representation of an occluded digit towards its un-occluded version. Our results suggest that both the brain and artificial neural networks can exploit recurrent connectivity to aid occluded object recognition.
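As a rough sketch of what a lateral (L) recurrent connection looks like in a convolutional network, the cell below feeds its own previous-timestep activation back through a lateral convolution while the stimulus stays fixed; this illustrates the connection types tested, not the paper's architecture:

```python
# Hypothetical bottom-up + lateral (BL-style) recurrent convolutional cell.
import torch
import torch.nn as nn

class BLCell(nn.Module):
    def __init__(self, in_ch: int, ch: int):
        super().__init__()
        self.bottom_up = nn.Conv2d(in_ch, ch, 3, padding=1)   # B connection
        self.lateral = nn.Conv2d(ch, ch, 3, padding=1)        # L (recurrent) connection

    def forward(self, x, h=None):
        z = self.bottom_up(x)
        if h is not None:
            z = z + self.lateral(h)        # inject the previous timestep's activation
        return torch.relu(z)

cell = BLCell(1, 16)
x = torch.randn(1, 1, 32, 32)              # a fixed (occluded) stimulus
h = None
for _ in range(4):                          # unroll the recurrence over 4 timesteps
    h = cell(x, h)
print(h.shape)                              # torch.Size([1, 16, 32, 32])
```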

42.Automated Surgical Activity Recognition with One Labeled Sequence ⬇️

Prior work has demonstrated the feasibility of automated activity recognition in robot-assisted surgery from motion data. However, these efforts have assumed the availability of a large number of densely-annotated sequences, which must be provided manually by experts. This process is tedious, expensive, and error-prone. In this paper, we present the first analysis under the assumption of scarce annotations, where as little as one annotated sequence is available for training. We demonstrate feasibility of automated recognition in this challenging setting, and we show that learning representations in an unsupervised fashion, before the recognition phase, leads to significant gains in performance. In addition, our paper poses a new challenge to the community: how much further can we push performance in this important yet relatively unexplored regime?

43.PH-GCN: Person Re-identification with Part-based Hierarchical Graph Convolutional Network ⬇️

The person re-identification (Re-ID) task requires robustly extracting feature representations for person images. Recently, part-based representation models have been widely studied for extracting more compact and robust feature representations for person images to improve person Re-ID results. However, existing part-based representation models mostly extract the features of different parts independently, ignoring the relationship information between different parts. To overcome this limitation, in this paper we propose a novel deep learning framework, named Part-based Hierarchical Graph Convolutional Network (PH-GCN), for the person Re-ID problem. Given a person image, PH-GCN first constructs a hierarchical graph to represent the pairwise relationships among different parts. Then, both local and global feature learning are performed by message passing in PH-GCN, which takes other nodes' information into account for part feature representation. Finally, a perceptron layer is adopted for the final person part label prediction and re-identification. The proposed framework provides a general solution that integrates local, global and structural feature learning simultaneously in a unified end-to-end network. Extensive experiments on several benchmark datasets demonstrate the effectiveness of the proposed PH-GCN based Re-ID approach.

44.Pan-tilt-zoom SLAM for Sports Videos ⬇️

We present an online SLAM system specifically designed to track pan-tilt-zoom (PTZ) cameras in highly dynamic sports such as basketball and soccer games. In these games, PTZ cameras rotate very fast and players cover large image areas. To overcome these challenges, we propose to use a novel camera model for tracking and to use rays as landmarks in mapping. Rays overcome the missing depth in pure-rotation cameras. We also develop an online pan-tilt forest for mapping and introduce moving objects (players) detection to mitigate negative impacts from foreground objects. We test our method on both synthetic and real datasets. The experimental results show the superior performance of our method over previous methods for online PTZ camera pose estimation.

45.Cross-Domain Car Detection Using Unsupervised Image-to-Image Translation: From Day to Night ⬇️

Deep learning techniques have enabled the emergence of state-of-the-art models to address object detection tasks. However, these techniques are data-driven, delegating the accuracy to the training dataset, which must resemble the images in the target task. The acquisition of a dataset involves annotating images, an arduous and expensive process, generally requiring time and manual effort. Thus, a challenging scenario arises when the target domain of application has no annotated dataset available, forcing tasks in such situations to lean on a training dataset from a different domain. Sharing this issue, object detection is a vital task for autonomous vehicles, where the large number of driving scenarios yields several domains of application requiring annotated data for the training process. In this work, a method is presented for training a car detection system with annotated data from a source domain (day images) without requiring the image annotations of the target domain (night images). For that, a model based on Generative Adversarial Networks (GANs) is explored to enable the generation of an artificial dataset with its respective annotations. The artificial dataset (fake dataset) is created by translating images from the day-time domain to the night-time domain. The fake dataset, which comprises annotated images of only the target domain (night images), is then used to train the car detector model. Experimental results showed that the proposed method achieved significant and consistent improvements, including an increase of more than 10% in detection performance when compared to training with only the available annotated data (i.e., day images).

46.VIFIDEL: Evaluating the Visual Fidelity of Image Descriptions ⬇️

We address the task of evaluating image description generation systems. We propose a novel image-aware metric for this task: VIFIDEL. It estimates the faithfulness of a generated caption with respect to the content of the actual image, based on the semantic similarity between labels of objects depicted in images and words in the description. The metric is also able to take into account the relative importance of objects mentioned in human reference descriptions during evaluation. Even if these human reference descriptions are not available, VIFIDEL can still reliably evaluate system descriptions. The metric achieves high correlation with human judgments on two well-known datasets and is competitive with metrics that depend on human references.
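The core idea, matching detected object labels against description words in an embedding space, can be sketched in a few lines; the tiny embedding table and the max-over-words aggregation are placeholder assumptions, not the released VIFIDEL formulation:

```python
# Rough, hypothetical sketch of label-to-description semantic matching.
import numpy as np

emb = {                                    # placeholder word vectors
    "dog": np.array([1.0, 0.1]), "puppy": np.array([0.9, 0.2]),
    "ball": np.array([0.1, 1.0]), "park": np.array([0.4, 0.6]),
}

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fidelity(object_labels, description):
    """Average, over detected objects, of the best similarity to any description word."""
    words = [w for w in description.lower().split() if w in emb]
    sims = [max(cos(emb[o], emb[w]) for w in words) for o in object_labels if o in emb]
    return sum(sims) / len(sims)

print(fidelity(["dog", "ball"], "a puppy plays in the park"))  # close to 1 = faithful
```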

47.Artificial Neural Network Algorithm based Skyrmion Material Design of Chiral Crystals ⬇️

The model presented in this research predicts ideal chiral crystals and proposes a new direction for designing chiral crystals. Skyrmions are topologically protected, structurally asymmetric materials with an exotic spin composition. This work presents a deep learning method for skyrmion material design of chiral crystals. We construct a probabilistic classifier and an Artificial Neural Network (ANN) from a true-or-false chirality dataset consisting of chiral and achiral compounds with 'A'- and 'B'-type elements. A quantitative predictor for the accuracy of forming chiral crystals is illustrated. The feasibility of the ANN method is tested comprehensively by comparison with the probabilistic classifier. Throughout this manuscript we present the deep learning algorithm design together with modelling and simulation of the materials. The work elucidated here paves the way toward a sophisticated software tool serving as an indicator for crystal design.
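
A minimal sketch of how such a chirality classifier might be trained, using scikit-learn's MLP on placeholder data; the four descriptors and the random labels are purely illustrative assumptions standing in for the paper's compound features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical descriptors for A/B-type compounds (e.g. atomic numbers,
# electronegativity difference, radius ratio); labels: 1 = chiral, 0 = achiral.
X = np.random.rand(200, 4)
y = np.random.randint(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # ~chance on random data
```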

48.Automatic Radiology Report Generation based on Multi-view Image Fusion and Medical Concept Enrichment ⬇️

Generating radiology reports is time-consuming and requires extensive expertise in practice. Therefore, reliable automatic radiology report generation is highly desired to alleviate the workload. Although deep learning techniques have been successfully applied to image classification and image captioning tasks, radiology report generation remains challenging with regard to understanding and linking complicated medical visual content with accurate natural language descriptions. In addition, the data scales of open-access datasets that contain paired medical images and reports remain very limited. To cope with these practical challenges, we propose a generative encoder-decoder model, focusing on chest x-ray images and reports, with the following improvements. First, we pretrain the encoder on a large number of chest x-ray images to accurately recognize 14 common radiographic observations, while taking advantage of multi-view images by enforcing cross-view consistency. Second, we synthesize multi-view visual features based on a sentence-level attention mechanism in a late fusion fashion. In addition, to enrich the decoder with descriptive semantics and enforce the correctness of deterministic medical content such as mentions of organs or diagnoses, we extract medical concepts from the radiology reports in the training data and fine-tune the encoder to extract the most frequent medical concepts from the x-ray images. These concepts are fused with each decoding step by a word-level attention model. Experimental results on the Indiana University Chest X-Ray dataset demonstrate that the proposed model achieves state-of-the-art performance compared with other baseline approaches.
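
A minimal sketch of sentence-level attention over multi-view features in a late-fusion style, assuming frontal and lateral view embeddings and a decoder hidden state; the shapes and the bilinear scoring function are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def fuse_views(view_feats, sentence_state, W):
    """Late fusion: weight each view by its relevance to the current sentence.
    view_feats: (num_views, feat_dim); sentence_state: (hid_dim,)."""
    scores = view_feats @ (W @ sentence_state)           # (num_views,)
    alpha = F.softmax(scores, dim=0)                     # attention weight per view
    return (alpha.unsqueeze(1) * view_feats).sum(dim=0)  # fused (feat_dim,) feature

views = torch.randn(2, 512)    # e.g. frontal + lateral chest x-ray encodings
h_sent = torch.randn(256)      # decoder's sentence-level hidden state
W = torch.randn(512, 256) * 0.01
fused = fuse_views(views, h_sent, W)
```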

49.Scene-and-Process-Dependent Spatial Image Quality Metrics ⬇️

Spatial image quality metrics designed for camera systems generally employ the Modulation Transfer Function (MTF), the Noise Power Spectrum (NPS), and a visual contrast detection model. Prior art indicates that the scene-dependent characteristics of non-linear, content-aware image processing are not accounted for by MTFs and NPSs measured using traditional methods. We present two novel metrics: the log Noise Equivalent Quanta (log NEQ) and the Visual log NEQ. Both employ scene-and-process-dependent MTF (SPD-MTF) and NPS (SPD-NPS) measures, which account for the scene-dependency of signal transfer and noise, respectively. We also investigate contrast detection and discrimination models that account for scene-dependent visual masking, and revise three leading camera metrics to use the above scene-dependent measures. All metrics are validated by examining correlations with the perceived quality of images produced by simulated camera pipelines. Metric accuracy improved consistently when the SPD-MTFs and SPD-NPSs were implemented. The novel metrics outperformed existing metrics of the same genre.
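
For reference, the classical noise-equivalent-quanta quantity that the log NEQ metric presumably builds on combines the MTF and NPS with the mean signal level; the paper's metrics would then substitute the SPD-MTF and SPD-NPS into this form (an assumption based on the abstract, not a statement of the paper's exact definition).

```latex
% Textbook NEQ definition at spatial frequency u; \bar{S} is the mean signal level.
\mathrm{NEQ}(u) \;=\; \frac{\bar{S}^{\,2}\,\mathrm{MTF}^{2}(u)}{\mathrm{NPS}(u)}
```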

50.Validation of Modulation Transfer Functions and Noise Power Spectra from Natural Scenes ⬇️

The Modulation Transfer Function (MTF) and the Noise Power Spectrum (NPS) characterize imaging system sharpness/resolution and noise, respectively. Both measures are based on linear system theory but are routinely applied to systems employing non-linear, content-aware image processing. For such systems, MTFs/NPSs are derived inaccurately from traditional test charts containing edges, sinusoids, noise or uniform-tone signals, which are unrepresentative of natural scene signals. The dead leaves test chart delivers improved measurements but still has limitations when describing the performance of scene-dependent systems. In this paper, we validate several novel scene-and-process-dependent MTF (SPD-MTF) and NPS (SPD-NPS) measures that characterize either: i) system performance with respect to one scene; ii) average real-world performance across many scenes; or iii) the level of system scene-dependency. We also derive novel SPD-NPS and SPD-MTF measures using the dead leaves chart. We demonstrate that all the proposed measures are robust and, for scene-dependent systems, preferable to current measures.
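
A minimal sketch of a dead-leaves-style MTF estimate, assuming captured and reference chart images of the same size: the MTF is taken as the square root of the ratio between their radially averaged power spectra. Production implementations also subtract a noise power spectrum before taking the ratio, which is omitted here for brevity.

```python
import numpy as np

def dead_leaves_mtf(captured, reference, eps=1e-12):
    """Estimate MTF(r) = sqrt(PSD_captured(r) / PSD_reference(r)), where r is
    the (integer-binned) radial spatial frequency. Both inputs are 2-D arrays
    of identical shape."""
    def radial_psd(img):
        F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        psd = np.abs(F) ** 2
        h, w = img.shape
        y, x = np.indices((h, w))
        r = np.hypot(y - h / 2, x - w / 2).astype(int)   # radial frequency bin
        return np.bincount(r.ravel(), psd.ravel()) / np.bincount(r.ravel())
    return np.sqrt(radial_psd(captured) / (radial_psd(reference) + eps))
```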

51.Inferring Occluded Geometry Improves Performance when Retrieving an Object from Dense Clutter ⬇️

Object search -- the problem of finding a target object in a cluttered scene -- is essential for many robotics applications in warehouse and household environments. However, in cluttered environments objects often occlude one another, making it difficult to segment objects and infer their shapes and properties. Instead of relying on the availability of CAD or other explicit models of scene objects, we augment a manipulation planner for cluttered environments with a state-of-the-art deep neural network for shape completion as well as a volumetric memory system, allowing the robot to reason about what may be contained in occluded areas. We test the system on a variety of tabletop manipulation scenes composed of household items, highlighting its applicability to realistic domains. Our results suggest that incorporating both components into a manipulation planning framework significantly reduces the number of actions needed to find a hidden object in dense clutter.
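
A minimal sketch of how a volumetric memory might fuse direct observations with shape-completion output: observed cells override everything, while completed geometry only fills cells that are still unknown. The three-state grid and mask-based update are illustrative assumptions, not the paper's actual data structure.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def update_memory(grid, observed_free, observed_occ, completed_occ):
    """Fuse one observation into the volumetric memory.
    All arguments after `grid` are boolean masks of the same shape as `grid`."""
    grid[observed_free] = FREE                        # sensed empty space wins
    grid[observed_occ] = OCCUPIED                     # sensed surfaces win
    grid[(grid == UNKNOWN) & completed_occ] = OCCUPIED  # predictions fill gaps only
    return grid

memory = np.zeros((32, 32, 32), dtype=np.int8)        # everything starts UNKNOWN
```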

52.A Retina-inspired Sampling Method for Visual Texture Reconstruction ⬇️

Conventional frame-based cameras cannot meet the demand for rapid reaction in real-time applications, whereas the emerging dynamic vision sensor (DVS) can capture moving objects at high speed. However, to achieve visual texture reconstruction, a DVS needs extra information besides its output spikes. This paper introduces a fovea-like sampling method, inspired by neural signal processing in the retina, that aims at visual texture reconstruction using only the properties of the spikes themselves. In the proposed method, pixels independently respond to luminance changes with temporally asynchronous spikes. Analyzing the arrival times of the spikes makes it possible to restore the luminance information and reconstruct the natural scene for visualization. Three methods for decoding the spike stream into texture are proposed, covering high-speed motion and stationary scenes. Compared to conventional frame-based cameras and DVS, our model achieves better image quality and higher flexibility, and is capable of changing the way that demanding machine vision applications are built.
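
One plausible spike-stream decoder along the lines the abstract suggests (an assumption for illustration, not necessarily one of the paper's three methods): estimate a pixel's luminance as inversely proportional to its most recent inter-spike interval, since brighter pixels accumulate charge and fire more often.

```python
import numpy as np

def decode_isi(spike_times, t, c=1.0):
    """Estimate a pixel's luminance at time t from its latest
    inter-spike interval (ISI); brighter pixels -> shorter ISIs."""
    past = spike_times[spike_times <= t]
    if len(past) < 2:
        return 0.0                 # not enough spikes yet to form an ISI
    isi = past[-1] - past[-2]
    return c / isi                 # luminance proportional to firing rate

spikes = np.cumsum(np.random.exponential(0.02, size=50))  # denser = brighter
print(decode_isi(spikes, t=0.5))
```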