Browsing by Author "Guo, Xinqing"
Item
Combining learning and computational imaging for 3D inference (University of Delaware, 2017) Guo, Xinqing

Acquiring the 3D geometry of a scene is a key task in computer vision. Applications are numerous, from classical object reconstruction and scene understanding to the more recent visual SLAM and autonomous driving. Recent advances in computational imaging have enabled many new solutions to the problem of 3D reconstruction: by modifying the camera's components, computational imaging optically encodes the scene and then decodes it with tailored algorithms.

This dissertation explores new computational imaging techniques, combined with recent advances in deep learning, to infer the 3D geometry of a scene. Our approaches can be categorized into active and passive 3D sensing.

For active illumination methods, we propose two solutions. First, we present a multi-flash (MF) system implemented on a mobile platform. From the sequence of images captured by the MF system, we extract the depth edges of the scene and further estimate a depth map on the mobile device. Next, we show a portable immersive system capable of acquiring and displaying high-fidelity 3D reconstructions using a set of RGB-D sensors. The system is based on the structured light technique and recovers the 3D geometry of the scene in real time. We have also developed a visualization system that allows users to dynamically view the event from new perspectives at arbitrary time instances in real time.

For passive sensing methods, we focus on light field based depth estimation. For depth inference from a single light field, we present an algorithm tailored for barcode images: it analyzes the statistics of raw light field images and performs real-time depth estimation for fast refocusing and decoding.
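The focus cue used throughout the passive-sensing work can be illustrated with a classic depth-from-focus baseline: score each pixel's sharpness in every slice of a focal stack and assign it the depth of the sharpest slice. The sketch below is a minimal illustration of that idea only, not the dissertation's algorithm; the Laplacian focus measure and the array layout are assumptions for the example.

```python
import numpy as np

def depth_from_focus(focal_stack, depths):
    """Assign each pixel the depth of the focal slice where it is sharpest.

    focal_stack: (S, H, W) grayscale slices, one per focus setting.
    depths: length-S array of the scene depth associated with each slice.
    """
    sharpness = []
    for img in focal_stack:
        # Discrete Laplacian as a crude local focus measure:
        # in-focus regions carry strong high-frequency content.
        lap = (-4 * img
               + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1))
        sharpness.append(np.abs(lap))
    sharpness = np.stack(sharpness)        # (S, H, W)
    best = np.argmax(sharpness, axis=0)    # index of sharpest slice per pixel
    return depths[best]                    # (H, W) depth map
```

Real pipelines smooth the focus measure over a window and interpolate between slices; the learning-based methods described above replace this hand-crafted measure entirely.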
To mimic the human vision system, we investigate dual light field input and propose a unified deep learning based framework to extract depth from both the disparity cue and the focus cue. To facilitate training, we have created a large dual focal stack database with ground truth disparity. While the above solution fuses depth from focus and stereo, we also explore combining depth from defocus and stereo, taking as input an all-focus stereo pair and a defocused image of one of the stereo views. We have adopted the hourglass network architecture to extract depth from the image triplets, and have studied multiple neural network architectures to further improve depth inference. We demonstrate that our deep learning based approaches preserve the strengths of the focus/defocus cue and the disparity cue while effectively suppressing their weaknesses.

Item
High frequency ultrasound transducer for real time ultrasound biomicroscopy with optoacoustic arrays (University of Delaware, 2011) Guo, Xinqing

Ultrasound biomicroscopy (UBM) is a high resolution biomedical imaging technique that uses high frequency ultrasound waves. Fabricating highly populated detector arrays represents a major technical challenge for real-time UBM systems. A potential solution is optoacoustic technology, in which high frequency ultrasound is detected optically. The advantages of optoacoustic detection are large bandwidth, good sensitivity, and the capability for large-scale parallel read-out. In this thesis, the receiving and transmitting parts of a UBM imaging array are investigated separately. Optoacoustic detection is explored with a thin-film etalon consisting of two gold films separated by a transparent layer. Simulations and experiments demonstrate that optoacoustic detection sensitivity is maximized with a gold layer thickness of 45 nm. Various transparent layer materials were investigated, including polystyrene microspheres, SU-8 2005 photoresist, and parylene.
Experiments demonstrate that parylene is the best material due to its precise thickness control and uniformity. Ideally, the ultrasound transmitter and optoacoustic etalon are integrated into a single device. Piezoelectric materials are the most efficient emitters of ultrasound, but optical transparency is required to facilitate integration with an etalon. Lithium niobate (LiNbO3) is chosen for its high piezoelectricity and excellent optical transparency. Initial efforts with LiNbO3 concentrated on fabricating a "conventional" transducer that is not optically transparent. An unfocused transducer was fabricated that produces 25 MHz ultrasound with a -6 dB bandwidth of 15 MHz and a two-way insertion loss of 27.6 dB. An optically transparent LiNbO3 transducer with indium tin oxide (ITO) electrodes is currently under development. An approach to combine the optically transparent LiNbO3 emitter with an optoacoustic etalon is proposed.
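As a small illustration of the quoted transducer figures (25 MHz center frequency, 15 MHz bandwidth at -6 dB, 27.6 dB two-way insertion loss), the sketch below computes the fractional bandwidth and converts the insertion loss to an amplitude ratio. It assumes the conventional voltage definition of insertion loss (20·log10 of the received/transmitted amplitude ratio), which the abstract does not state explicitly.

```python
def fractional_bandwidth(f_center_mhz, bw_mhz):
    # Fractional bandwidth (%) at the quoted level (-6 dB here).
    return 100.0 * bw_mhz / f_center_mhz

def insertion_loss_ratio(loss_db):
    # Two-way insertion loss in dB -> received/transmitted voltage
    # amplitude ratio, assuming loss_db = -20 * log10(ratio).
    return 10 ** (-loss_db / 20.0)

print(fractional_bandwidth(25.0, 15.0))  # 60.0 (% fractional bandwidth)
print(insertion_loss_ratio(27.6))        # ~0.0417 (received amplitude ~4% of transmitted)
```

A 60% fractional bandwidth at -6 dB is typical of a well-matched high-frequency piezoelectric transducer, consistent with the broadband requirement for UBM imaging.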