Browsing by Author "Eicholtz, Matthew R."
Now showing 1 - 3 of 3
Item: A 2.5D YOLO-Based Fusion Algorithm for 3D Localization of Cells
(Institute of Electrical and Electronics Engineers (IEEE), 2019-11-06)
Ziabari, Amirkoushyar; Rose, Derek C.; Eicholtz, Matthew R.; Solecki, David J.; Shirinifard, Abbas

Advances in microscopy techniques such as lattice light-sheet, confocal, two-photon, and electron microscopy have enabled the visualization of 3D image volumes of tightly packed cells, extracellular structures in tissues, organelles, and subcellular components inside cells. These volumes, typically inspected as 2D projections, are often difficult to interpret accurately even for human experts. As a use case, we focus on 3D image volumes of tightly packed nuclei in brain tissue. Due to out-of-plane excitation and low resolution along the z-axis, non-overlapping cells appear as overlapping 3D volumes, making the detection of individual cells challenging. On the other hand, running fully 3D detection algorithms is computationally expensive and infeasible for large datasets. In addition, most existing 3D algorithms are designed to extract 3D objects by inferring depth from 2D images. In this work, we propose a YOLO-based 2.5D fusion algorithm for 3D localization of individual cells in densely packed volumes of nuclei. The proposed method fuses 2D detections of nuclei in the sagittal, coronal, and axial planes and predicts the six coordinates of the 3D bounding cubes around the detected cells. Promising results were obtained on multiple synthetic dense volumes of nuclei imitating confocal microscopy experimental datasets.

Item: Estimating Vehicle Fuel Economy from Overhead Camera Imagery and Application for Traffic Control
(OSTI.GOV, U.S. Department of Energy Office of Scientific and Technical Information, 2020-01-01)
Karnowski, Thomas; Tokola, Ryan; Oesch, T. Sean; Eicholtz, Matthew R.; Price, Jeff; Gee, Tim

In this work, we explore the ability to estimate vehicle fuel consumption using imagery from overhead fisheye-lens cameras deployed as traffic sensors. We use this information to simulate vision-based control of a traffic intersection, with the goal of improving fuel economy with minimal impact on mobility. We introduce the ORNL Overhead Vehicle Dataset (OOVD), consisting of paired, labeled vehicle images from a ground-based camera and an overhead fisheye-lens traffic camera. The dataset includes segmentation masks based on Gaussian mixture models for vehicle detection. We demonstrate the dataset's utility through three applications: estimation of fuel consumption from segmentation bounding boxes, vehicle discrimination for vehicles with the largest bounding boxes, and fine-grained classification of a limited number of vehicle makes and models using pre-trained convolutional neural network models. We compare these results with estimates derived from a large open-source dataset of web-scraped imagery. Finally, we demonstrate the utility of the approach by applying reinforcement learning in the open-source Simulation of Urban Mobility (SUMO) traffic simulator. Our results show the feasibility of controlling traffic lights for better fuel efficiency based solely on visual vehicle estimates from commercial fisheye-lens cameras.

Item: A Two-Tier Convolutional Neural Network for Combined Detection and Segmentation in Biological Imagery
(Institute of Electrical and Electronics Engineers (IEEE), 2019-11-14)
Ziabari, Amirkoushyar; Shirinifard, Abbas; Eicholtz, Matthew R.; Solecki, David J.; Rose, Derek C.

Deep learning has proven useful in modern microscopy imaging for further studying and analyzing biological structures and organs. Convolutional neural networks (CNNs) have improved 2D object detection, localization, and segmentation. For imagery containing biological structures with depth, it is especially desirable to perform these tasks in 3D; traditionally, however, performing them simultaneously in 3D has proven computationally expensive. Currently available methodologies thus largely work to segment 3D objects from 2D images, without context from captured 3D volumes. In this work, we present a novel approach for fast and accurate localization, detection, and segmentation of volumes containing cells. Specifically, we modify and tune two state-of-the-art CNNs, 2D YOLOv2 and 3D U-Net, and combine them with new fusion and image-processing algorithms. Because annotated volumes in this space are limited, we created synthetic data that mimics real structures for training and testing the proposed approach. Promising results on this test data demonstrate the value of the technique and offer a methodology for 3D cell analysis in real microscopy imagery.
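The 2.5D fusion step described in the first item (fusing per-plane 2D detections into the six coordinates of a 3D bounding cube) can be sketched as follows. Note that each spatial axis is observed by exactly two of the three planes, so each cube coordinate has two candidate values. The averaging rule and box layout here are illustrative assumptions, not the paper's exact fusion algorithm:

```python
def fuse_plane_boxes(xy_box, xz_box, yz_box):
    """Fuse 2D boxes detected in the axial (XY), coronal (XZ), and
    sagittal (YZ) planes into one 3D bounding cube.

    Each input box is (min1, min2, max1, max2) in its plane's own
    coordinates. Every axis appears in two planes, so each of the six
    cube coordinates is taken as the mean of its two estimates.
    """
    xmin = (xy_box[0] + xz_box[0]) / 2
    xmax = (xy_box[2] + xz_box[2]) / 2
    ymin = (xy_box[1] + yz_box[0]) / 2
    ymax = (xy_box[3] + yz_box[2]) / 2
    zmin = (xz_box[1] + yz_box[1]) / 2
    zmax = (xz_box[3] + yz_box[3]) / 2
    return (xmin, ymin, zmin, xmax, ymax, zmax)
```

With perfectly consistent per-plane detections the two estimates per axis agree and averaging is exact; with noisy detections it splits the disagreement.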
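The first OOVD application, estimating fuel consumption from segmentation bounding boxes, amounts to mapping an apparent vehicle size to a consumption estimate. A minimal sketch of that idea follows; the size classes, pixel thresholds, and consumption rates are hypothetical placeholders for illustration, not values from the paper:

```python
def fuel_rate_from_box(box_area_px,
                       size_thresholds=(2000, 6000),
                       rates_l_per_100km=(6.0, 9.0, 15.0)):
    """Map a vehicle's segmentation-box area (in pixels) to a coarse
    fuel-consumption estimate in L/100 km.

    All thresholds and rates here are hypothetical, chosen only to
    illustrate the box-size-to-fuel-class lookup.
    """
    small, large = size_thresholds
    if box_area_px < small:          # compact-car-sized box
        return rates_l_per_100km[0]
    if box_area_px < large:          # sedan/SUV-sized box
        return rates_l_per_100km[1]
    return rates_l_per_100km[2]      # truck/bus-sized box
```

A per-class estimate like this is the kind of signal a reinforcement-learning traffic controller could aggregate over waiting vehicles when scoring candidate signal phases.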
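The two-tier approach in the third item combines per-slice 2D detection (YOLOv2) with 3D segmentation (U-Net), which requires turning stacks of 2D boxes into 3D cell proposals. One plausible linking step is sketched below; the greedy IoU-based association is an assumption for illustration, not the paper's actual fusion algorithm:

```python
def iou_2d(a, b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def link_slices(per_slice_boxes, iou_thresh=0.5):
    """Greedily link per-slice 2D detections into 3D proposals.

    per_slice_boxes: list indexed by z, each element a list of 2D boxes.
    A box extends a track if the track ended on the previous slice and
    overlaps it with IoU >= iou_thresh; otherwise it starts a new track.
    Returns (last_box, z_start, z_end) tuples, one per proposal.
    """
    tracks = []  # each track: [last_box, z_start, z_last]
    for z, boxes in enumerate(per_slice_boxes):
        for box in boxes:
            for t in tracks:
                if t[2] == z - 1 and iou_2d(t[0], box) >= iou_thresh:
                    t[0], t[2] = box, z
                    break
            else:
                tracks.append([box, z, z])
    return [(t[0], t[1], t[2]) for t in tracks]
```

Each resulting z-extent, together with its 2D box, bounds a subvolume that a 3D segmentation network could then refine.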