Browsing by Author "Yang, Yang"
Item: Seasonal Eddy Variability in the Northwestern Tropical Atlantic Ocean (Journal of Physical Oceanography, 2023-04-01)
Huang, Minghai; Yang, Yang; Liang, Xinfeng

Eddies in the northwestern tropical Atlantic Ocean play a crucial role in transporting South Atlantic Upper Ocean Water to the North Atlantic and in connecting the Atlantic with the Caribbean Sea. Although the surface characteristics of those eddies have been well studied, their vertical structures and governing mechanisms are much less known. Here, using a time-dependent energetics framework based on the multiscale window transform, we examine the seasonal variability of the eddy kinetic energy (EKE) in the northwestern tropical Atlantic. Both altimeter-based data and ocean reanalyses show a substantial EKE seasonal cycle in the North Brazil Current Retroflection (NBCR) region that is mostly trapped in the upper 200 m. In the most energetic NBCR region, the EKE reaches its minimum in April–June and its maximum in July–September. By analyzing six ocean reanalysis products, we find that barotropic instability is the controlling mechanism for the seasonal eddy variability in the NBCR region. Nonlocal processes, including advection and pressure work, play opposite roles in the EKE seasonal cycle. In the eastern part of the NBCR region, the EKE seasonal evolution is similar to that in the NBCR region; however, there it is the nonlocal processes that control the EKE seasonality. In the western part of the NBCR region, the EKE is one order of magnitude smaller than in the NBCR region and shows a different seasonal cycle, peaking in March and reaching its minimum in October–November.
Our results highlight the complex mechanisms governing eddy variability in the northwestern tropical Atlantic and provide insights into their potential changes with changing background conditions.

Item: Towards immersive VR experience (University of Delaware, 2017)
Yang, Yang

Virtual Reality (VR) has become increasingly popular, driven by strong consumer interest in more immersive experiences of visual content. Providing immersive VR content has therefore become a key research topic. However, generating such content requires tremendous effort.

This dissertation focuses on exploring new computer vision algorithms and computer graphics techniques, such as 3D fusion (reconstruction) of surgical environments, 3D reconstruction from 2D panoramas, stereoscopic conversion of panoramas, and virtual DoF synthesis, to produce high-quality and visually pleasant VR content.

We first develop a real-time immersive 3D fusion system based on active sensing. Our solution builds upon a multi-Kinect surgical training system and provides real-time streaming capability. Specifically, we develop a client-server model. On the server side, we efficiently fuse data from multiple Kinects acquired at different viewpoints, then compress and stream the data to the client side. On the client side, we build an interactive space-time navigator that allows remote users (e.g., trainees) to witness a live surgical procedure as if they were present at the scene.

We further present a novel, efficient technique to infer 3D structure from 2D panoramas by simultaneously estimating spatial layouts (floor, walls, and ceiling) and objects (e.g., furniture pieces). In particular, we first conduct saliency and object detection (semantic cues) on perspective sub-views to extract object masks, and apply line detection and normal estimation to extract geometric cues.
Next, we map the results back to the panorama and use the geometric cues to conduct ground-plane estimation and to fix line/plane breakages caused by occlusions. We then partition the image into superpixels connected by the estimated lines/planes and solve the corresponding constraint graph on non-object regions to infer the spatial layout. Finally, we use the layout as a basis for growing the objects via their normals and recover the complete panoramic depth map.

We also seek to develop a learning-based solution that automatically converts existing monoscopic panoramas to stereoscopic ones. More specifically, we train a stereo synthesis network using perspective stereo pairs and their disparity maps as inputs. Given a 2D panorama, we partition it into perspective sub-views. We show that directly synthesizing stereo views from individual sub-views cannot satisfy the epipolar constraint. We instead generate a sequence of left and right stereo view pairs and stitch them into concentric mosaics.

Finally, we exploit the depth-sensing capabilities of emerging mobile devices and develop a new depth-guided refocus synthesis technique tailored to mobile devices. Our technique takes coarse depth maps as input and applies novel depth-aware pseudo ray tracing. Our pseudo ray tracing scheme resembles light field synthesis but does not require the actual creation of a light field.
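The idea behind depth-guided refocus synthesis can be illustrated with a much simpler stand-in: blur each pixel by an amount proportional to its depth offset from a chosen focal plane. This is a minimal sketch, not the dissertation's pseudo-ray-tracing method; the function name `refocus`, the box-blur kernel, and the linear radius-vs-depth mapping are all illustrative assumptions.

```python
# Minimal depth-guided refocus sketch (illustrative stand-in, NOT the
# dissertation's depth-aware pseudo ray tracing).
# Assumptions: depth and focal_depth are normalized to [0, 1]; blur
# radius grows linearly with |depth - focal_depth|, mimicking thin-lens DoF.
import numpy as np

def refocus(image, depth, focal_depth, max_radius=4):
    """Blur each pixel with a box kernel whose radius scales with the
    pixel's depth offset from the focal plane (radius 0 = in focus)."""
    h, w = depth.shape
    out = np.zeros_like(image, dtype=float)
    # Integer blur radius per pixel, clipped to [0, max_radius].
    radius = np.clip(
        np.rint(max_radius * np.abs(depth - focal_depth)), 0, max_radius
    ).astype(int)
    for y in range(h):
        for x in range(w):
            r = radius[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

# Toy usage: a vertical edge with a left-to-right depth ramp.  Pixels whose
# depth matches the focal plane keep their sharp values; off-plane pixels
# are averaged with their neighbours, softening the edge there.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
depth = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
result = refocus(img, depth, focal_depth=0.5)
```

A real implementation would replace the per-pixel box blur with depth-aware ray gathering so that occlusion boundaries do not bleed, which is the problem the pseudo-ray-tracing scheme addresses.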