KEYWORDS: Clouds, 3D modeling, 3D displays, Cameras, 3D image processing, Data modeling, Visual process modeling, Performance modeling, Stereoscopic cameras, Molecules
We present a point-based reconstruction and transmission pipeline for a collaborative tele-immersion system.
Two or more users in different locations collaborate with each other in a shared, simulated environment as
if they were in the same physical room. Each user perceives point-based models of the distant users along with
collaborative data such as molecular models. Disparity maps, computed by a commercial stereo solution, are filtered
and transformed into clouds of 3D points. The clouds are compressed and transmitted over the network to the distant
users. On the receiving side, the clouds are decompressed and incorporated into the 3D scene. The viewpoint used
to display the 3D scene depends on the position of the user's head. Collaborative data is manipulated
through natural hand gestures. We analyze the performance of the system in terms of computation time, latency,
and photorealistic quality of the reconstructed models.
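As a rough illustration of the disparity-to-point-cloud step described above, the following sketch back-projects a filtered disparity map into 3D points under a pinhole stereo model. The function name, camera parameters, and filtering threshold are hypothetical stand-ins, not the actual values or API of the system.

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, baseline, cx, cy, min_disp=1.0):
    """Convert a disparity map to a cloud of 3D points (pinhole stereo model).

    Pixels with disparity below min_disp are discarded, analogous to the
    filtering applied to the disparity maps before transmission.
    """
    h, w = disparity.shape
    v, u = np.mgrid[0:h, 0:w]            # pixel coordinates (row, column)
    valid = disparity >= min_disp        # reject unreliable / distant pixels
    d = disparity[valid]
    z = f * baseline / d                 # depth from disparity
    x = (u[valid] - cx) * z / f
    y = (v[valid] - cy) * z / f
    return np.column_stack((x, y, z))    # one 3D point per valid pixel

# toy example: 2x2 disparity map with hypothetical camera parameters
disp = np.array([[4.0, 0.0],
                 [2.0, 4.0]])
cloud = disparity_to_point_cloud(disp, f=500.0, baseline=0.1, cx=1.0, cy=1.0)
```

The invalid pixel (disparity 0.0) is dropped, so the toy map yields three points; in the real pipeline the resulting cloud would then be compressed and sent over the network.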
Nowadays, video conferencing is increasingly attractive because of the economic and
ecological costs of travel. Several platforms exist. The goal of the TIFANIS immersive platform is to let
users interact as if they were physically together. Unlike previous tele-immersion systems, TIFANIS uses
generic hardware to achieve an economically realistic implementation. The basic functions of the system are
to capture the scene, transmit it through digital networks to other partners, and then render it according to
each partner's viewing characteristics. The image-processing part should run in real time.
We propose to analyze the whole system. It can be split into different services such as central processing
unit (CPU) processing, graphical rendering, direct memory access (DMA), and communications through the network.
Most of the processing is performed by the CPU; it comprises the 3D reconstruction and the detection
and tracking of faces in the video stream. To run in real time, this processing needs to be parallelized
into several threads that have as few dependencies as possible. In this paper, we present these issues and
the way we deal with them.
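A minimal sketch of such a thread split, with each stage in its own thread and stages communicating only through queues so that inter-thread dependencies stay small. The stage functions here are toy placeholders, not the actual TIFANIS services.

```python
import queue
import threading

def stage(worker, inbox, outbox):
    """Run one pipeline stage: pull items, process, push results downstream."""
    while True:
        item = inbox.get()
        if item is None:          # poison pill: propagate shutdown downstream
            if outbox is not None:
                outbox.put(None)
            break
        result = worker(item)
        if outbox is not None:
            outbox.put(result)

frames, clouds, packets = queue.Queue(), queue.Queue(), queue.Queue()

reconstruct = lambda frame: frame * 2   # stand-in for 3D reconstruction
compress = lambda cloud: cloud + 1      # stand-in for point-cloud compression

threads = [
    threading.Thread(target=stage, args=(reconstruct, frames, clouds)),
    threading.Thread(target=stage, args=(compress, clouds, packets)),
]
for t in threads:
    t.start()
for frame in (1, 2, 3):                 # feed three "captured frames"
    frames.put(frame)
frames.put(None)                        # end of stream
for t in threads:
    t.join()
```

Because each queue is FIFO and each stage is a single thread, frame order is preserved end to end while the stages overlap in time.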
KEYWORDS: Visualization, Cameras, 3D modeling, Image fusion, Image segmentation, Reconstruction algorithms, Visual process modeling, Information fusion, Calibration, 3D image processing
A system reconstructing arbitrary shapes from images in real time and with sufficient accuracy would be paramount
for a huge number of applications. The difficulty lies in the trade-off between accuracy and computation
time. Furthermore, given the image resolution and our real-time needs, only a small number of cameras can be
connected to a standard computer. The system needs a cluster and a strategy to share information. We introduce
a framework for real-time voxel-based reconstruction from images on a cluster. From our point of view, the
volumetric framework has five major advantages: an equivalent tree representation, an adaptable voxel description,
an embedded multi-resolution capability, an easy fusion of shared information, and an easy exploitation of
inter-frame redundancy; and three minor disadvantages: its lack of precision with respect to methods working at
the point level, its lack of global constraints on the reconstruction, and the need for strongly calibrated cameras. Our
goal is to illustrate the advantages and disadvantages of the framework in a practical example: the computation
of the distributed volumetric inferred visual hull. The advantages and disadvantages are first discussed in general
terms and then illustrated in the case of our concrete example.
A growing number of mixed reality applications have to build 3D models of arbitrary shapes. However, modeling an arbitrary shape implies a trade-off between accuracy and computation time. Real-time methods based on the visual hull cannot model the holes of the shape inside the approximated silhouette. Carving methods can, but they are not real time. The aim of this paper is to improve their accuracy and computation time. It presents a novel multiresolution algorithm for 3D reconstruction of arbitrary 3D shapes from range data acquired at fixed viewpoints. The algorithm is split into two parts. The first part labels a voxel according to the current viewpoint, without taking previous labels into account. The second part updates the labels and grows the octree representing the voxelized space; it determines the number of calls made to the first part, which is time consuming. A novel set of labels, a study of the parallelepiped projections, and a front-to-back propagation of information allow us to improve accuracy in both parts, to reduce the computation cost of the voxel-labeling part, and to reduce the number of calls made to it by the multiresolution and voxel-updating part.
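The coarse-to-fine structure of the two parts can be sketched as follows. For brevity the sketch is in 2D (a quadtree rather than an octree) and labels voxels against a single binary silhouette under an orthographic projection; the label set, function names, and projection model are simplifications, not the paper's actual labels or calibrated-camera projections.

```python
import numpy as np

# Simplified label set: a voxel is fully inside the silhouette, fully outside,
# or ambiguous (it straddles the silhouette boundary and must be subdivided).
INSIDE, OUTSIDE, AMBIGUOUS = 0, 1, 2

def label_voxel(center, half, silhouette):
    """Part 1 (sketch): label one square voxel against one binary silhouette."""
    x0 = max(int(center[0] - half), 0)
    x1 = min(int(np.ceil(center[0] + half)), silhouette.shape[1])
    y0 = max(int(center[1] - half), 0)
    y1 = min(int(np.ceil(center[1] + half)), silhouette.shape[0])
    patch = silhouette[y0:y1, x0:x1]
    if patch.size == 0 or not patch.any():
        return OUTSIDE
    if patch.all():
        return INSIDE
    return AMBIGUOUS

def carve(center, half, silhouette, depth, leaves):
    """Part 2 (sketch): grow the tree, recursing only into ambiguous voxels
    so that label_voxel (the costly part) is called as rarely as possible."""
    label = label_voxel(center, half, silhouette)
    if label != AMBIGUOUS or depth == 0:
        if label != OUTSIDE:
            leaves.append((tuple(center), half, label))
        return
    h = half / 2.0
    for dx in (-h, h):
        for dy in (-h, h):
            carve((center[0] + dx, center[1] + dy), h,
                  silhouette, depth - 1, leaves)
```

A fully inside or fully outside voxel is resolved at coarse resolution in one labeling call; only boundary voxels pay for finer levels, which is the source of the multiresolution speed-up.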