Digitizing Wildlife: The Case of a Reptile 3-D Virtual Museum

In this article, we design and develop a 3-D virtual museum with holistic metadata documentation and a variety of reptile behaviors and movements. First, we reconstruct the reptile's mesh in high resolution and then create its rigged/skinned digital counterpart. We acquire the movement of two subjects using an optical motion capture system, accelerometers, and RGB-vision cameras; these movements are then segmented and annotated into various behaviors. The 3-D environment, virtual reality (VR), and augmented reality (AR) functionalities of our online repository serve as tools for interactively educating the public about animals that are difficult to observe and study in their natural environment. The museum also reveals important information about animals' intangible characteristics (e.g., behavior) that is critical for the preservation of wildlife. Our museum is publicly accessible, enabling motion data reusability and facilitating learning applications through gamification. We conducted a user study that confirms the naturalness and realism of our reptiles, along with the ease of use and usefulness of our museum.
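As a rough illustration of how segmented and annotated motions might be organized for reuse in such a repository, the sketch below defines a minimal clip store queryable by behavior. The field names, labels, and sensor tags are hypothetical and not the museum's actual schema.

```python
# Minimal sketch (hypothetical schema) of a repository of motion-capture
# segments annotated with behaviors, so clips can be retrieved for playback
# or reuse. Field names and labels are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class MotionSegment:
    subject_id: str    # which reptile was captured
    behavior: str      # annotated label, e.g. "walking"
    start_frame: int
    end_frame: int
    source: str        # "optical_mocap", "accelerometer", or "rgb_video"

@dataclass
class MotionRepository:
    segments: list[MotionSegment] = field(default_factory=list)

    def add(self, segment: MotionSegment) -> None:
        self.segments.append(segment)

    def by_behavior(self, behavior: str) -> list[MotionSegment]:
        # Retrieve all annotated segments carrying a given behavior label.
        return [s for s in self.segments if s.behavior == behavior]

repo = MotionRepository()
repo.add(MotionSegment("subject_1", "walking", 0, 240, "optical_mocap"))
print(len(repo.by_behavior("walking")))
```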

GPU-Accelerated Collision Analysis of Vehicles in a Point Cloud Environment

We present a GPU-accelerated collision detection method for the navigation of vehicles in enclosed spaces represented by large point clouds. Our approach takes a CAD model of a vehicle, converts it to a volumetric (voxel) representation, and computes the collision of the voxels with a point cloud representing the environment to identify a suitable path for navigation. We perform adaptive and efficient collision detection between the voxels and the point cloud without the need for mesh generation. We have also developed a GPU-accelerated voxel Minkowski sum algorithm to perform a clearance analysis of the vehicle. Finally, we provide theoretical bounds on the accuracy of the collision and clearance analysis. Our GPU implementation is linked with Unreal Engine to provide flexibility in performing the analysis.
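The following Python/NumPy sketch illustrates the underlying idea on the CPU: the environment point cloud is binned into a voxel grid, vehicle voxels are tested against it directly (no mesh generation), and clearance is approximated by dilating the vehicle voxels with a cubic structuring element as a discrete stand-in for the voxel Minkowski sum. Grid layout and function names are assumptions; the paper's implementation runs on the GPU.

```python
# CPU sketch of the voxel-versus-point-cloud test; a stand-in for the
# GPU-accelerated method, with illustrative grid parameters.
import numpy as np
from scipy.ndimage import binary_dilation

def voxelize_points(points, origin, voxel_size, dims):
    """Mark grid cells occupied by environment points (points: N x 3)."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < dims), axis=1)
    grid = np.zeros(dims, dtype=bool)
    grid[tuple(idx[inside].T)] = True
    return grid

def collides(vehicle_voxels, environment_grid):
    """True if any vehicle voxel overlaps an occupied environment cell.
    Both arrays are boolean grids of the same shape (sketch assumption)."""
    return bool(np.any(vehicle_voxels & environment_grid))

def with_clearance(vehicle_voxels, margin_cells):
    """Approximate Minkowski sum: dilate the vehicle voxels by a cube of
    side (2 * margin_cells + 1) to enforce a clearance margin before the test."""
    structure = np.ones((2 * margin_cells + 1,) * 3, dtype=bool)
    return binary_dilation(vehicle_voxels, structure=structure)
```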

Keypoint-Based Disentangled Pose Network for Category-Level 6-D Object Pose Tracking

Category-level 6-D object pose tracking is a challenging problem in 3-D computer vision. Keypoint-based object pose estimation has proven effective for this task. However, current approaches first estimate the keypoints through a neural network and then compute the interframe pose change via least-squares optimization. They estimate rotation and translation in the same way, ignoring the differences between them. In this work, we propose a keypoint-based disentangled pose network, which disentangles the 6-D object pose change into a 3-D rotation and a 3-D translation. Specifically, the translation is estimated directly by the network, whereas the rotation is computed indirectly by singular value decomposition of the keypoints. Extensive experiments on the NOCS-REAL275 dataset demonstrate the superiority of our method.
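A minimal sketch of the disentangled update described above: translation would come from a dedicated network head (not shown), while the inter-frame rotation is recovered from corresponding keypoints with the standard SVD-based (Kabsch/Procrustes) solution. The function below is an illustrative stand-in, not the authors' implementation.

```python
# Rotation recovery from corresponding keypoints via SVD (Kabsch solution).
import numpy as np

def rotation_from_keypoints(kp_prev, kp_curr):
    """Estimate the inter-frame 3-D rotation aligning kp_prev to kp_curr (N x 3 each)."""
    p = kp_prev - kp_prev.mean(axis=0)          # centering removes translation
    q = kp_curr - kp_curr.mean(axis=0)
    h = p.T @ q                                  # 3 x 3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # correct for reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

# Hypothetical check with synthetic data: rotate keypoints about z and translate.
angle = 0.3
rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0,            0.0,           1.0]])
kp_prev = np.random.default_rng(0).normal(size=(8, 3))
kp_curr = kp_prev @ rz.T + np.array([0.1, -0.2, 0.05])
assert np.allclose(rotation_from_keypoints(kp_prev, kp_curr), rz, atol=1e-6)
```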

Predicting Surface Reflectance Properties of Outdoor Scenes Under Unknown Natural Illumination

Estimating and modeling the appearance of an object under outdoor illumination conditions is a complex process. This article addresses this problem and proposes a complete framework to predict the surface reflectance properties of outdoor scenes under unknown natural illumination. Uniquely, we recast the problem into its two constituent components involving the incoming light and outgoing view directions of the bidirectional reflectance distribution function: first, the radiance of surface points captured in the images and the corresponding outgoing view directions are aggregated and encoded into reflectance maps; second, a neural network trained on reflectance maps infers a low-parameter reflection model. Our model is based on phenomenological and physics-based scattering models. Experiments show that rendering with the predicted reflectance properties results in an appearance visually similar to rendering with textures, which cannot otherwise be disentangled from the reflectance properties.
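A minimal sketch of the first stage under stated assumptions: per-point radiance observations are binned by outgoing view direction (elevation and azimuth) into a reflectance map that a trained network would then consume. The bin resolution and local-frame convention are illustrative choices; the second-stage network is omitted.

```python
# Sketch of aggregating observed radiance over outgoing view directions into a
# reflectance map. Resolution and parameterization are illustrative assumptions.
import numpy as np

def encode_reflectance_map(radiance, view_dirs, resolution=32):
    """radiance: (N,) observations for one surface point;
    view_dirs: (N, 3) unit outgoing directions in the local surface frame."""
    theta = np.arccos(np.clip(view_dirs[:, 2], -1.0, 1.0))            # elevation
    phi = np.arctan2(view_dirs[:, 1], view_dirs[:, 0]) % (2 * np.pi)  # azimuth
    ti = np.clip((theta / np.pi * resolution).astype(int), 0, resolution - 1)
    pj = np.clip((phi / (2 * np.pi) * resolution).astype(int), 0, resolution - 1)
    acc = np.zeros((resolution, resolution))
    cnt = np.zeros((resolution, resolution))
    np.add.at(acc, (ti, pj), radiance)   # sum radiance per direction bin
    np.add.at(cnt, (ti, pj), 1.0)        # count observations per bin
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```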

Deep Synthesis of Cloud Lighting

Current appearance models for the sky represent clear-sky illumination to a high degree of accuracy. However, these models all lack a common feature of real skies: clouds. Clouds are an essential component of many applications that rely on realistic skies, such as image editing and synthesis. While clouds could be added to existing sky models through rendering, this is hard to achieve due to the difficulty of representing clouds and the complexity of volumetric light transport. In this work, an alternative approach is proposed whereby clouds are synthesized using a learned, data-driven representation. This approach leverages a captured collection of high-dynamic-range cloudy sky imagery and combines this dataset with clear-sky models to produce plausible cloud appearance from a coarse representation of cloud positions. The representation is artist controllable, allowing novel cloudscapes to be rapidly synthesized and used for lighting virtual environments.
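To make the pipeline concrete, the sketch below composites a predicted cloud layer over a clear-sky radiance map, driven by a coarse, artist-painted coverage map. The learned synthesis step is replaced here with a simple smoothing placeholder; the actual system uses a model trained on HDR cloudy-sky captures, and all function names and parameters below are assumptions.

```python
# Hedged sketch: combine a clear-sky radiance map with a cloud layer predicted
# from a coarse coverage map. The learned model is stubbed with a Gaussian blur.
import numpy as np
from scipy.ndimage import gaussian_filter

def predict_cloud_layer(coarse_coverage, base_radiance=1.2, softness=4.0):
    """Placeholder for the data-driven cloud synthesis step: softens the coarse
    coverage map and returns a cloud radiance map plus a blending alpha."""
    soft = gaussian_filter(coarse_coverage.astype(float), sigma=softness)
    return soft * base_radiance, np.clip(soft, 0.0, 1.0)

def composite_sky(clear_sky, coarse_coverage):
    """Blend the predicted cloud radiance over the clear-sky background."""
    cloud_radiance, alpha = predict_cloud_layer(coarse_coverage)
    return (1.0 - alpha) * clear_sky + alpha * cloud_radiance

# Hypothetical usage: a uniform clear sky and a painted rectangular cloud blob.
clear_sky = np.full((64, 128), 0.8)
coverage = np.zeros((64, 128))
coverage[20:35, 40:90] = 1.0
sky = composite_sky(clear_sky, coverage)
```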