The measurement accuracy of time-of-flight cameras is limited by scene properties and systematic errors. These errors can accumulate to several centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of range calibration. First, we propose a new checkerboard that is augmented with a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows a large number of distance measurements to be acquired for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features that are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model of the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
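The forest-based correction described above can be sketched as follows. The abstract does not list the exact feature set or forest parameters, so the features (measured distance, return amplitude, radial pixel position), the synthetic error model, and all parameter values below are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-pixel features standing in for the paper's feature vector.
rng = np.random.default_rng(0)
n = 5000
distance = rng.uniform(0.5, 5.0, n)    # measured distance in metres
amplitude = rng.uniform(0.1, 1.0, n)   # normalised IR return amplitude
radius = rng.uniform(0.0, 1.0, n)      # normalised distance from image centre

# Synthetic systematic error: a distance- and reflectivity-dependent bias,
# a stand-in for the real ToF "wiggling" and intensity-related errors.
error = 0.02 * np.sin(4.0 * distance) + 0.01 * (1.0 - amplitude)

# Train the regressor to predict the error from the feature vector.
X = np.column_stack([distance, amplitude, radius])
forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(X, error)

# At application time, subtract the predicted error from each measurement.
corrected = distance - forest.predict(X)
```

In the real calibration the training targets would come from the gradient-augmented checkerboard, where the true distance of each feature is known from the intrinsic calibration.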
KEYWORDS: Sensors, Time of flight cameras, Cameras, 3D-TOF imaging, 3D acquisition, Imaging systems, Image registration, Radiotherapy, 3D modeling, Image acquisition
In this paper we present a system that uses Time-of-Flight (ToF) technology to correct the position of a patient with respect to a previously acquired reference surface. A ToF sensor enables the acquisition of a 3-D surface model containing more than 25,000 points using a single sensor in real time. One advantage of this technology is that the high lateral resolution makes it possible to accurately compute the translation and rotation of the patient with respect to a reference surface. We use an Iterative Closest Point (ICP) algorithm to determine the 6 degrees of freedom (DOF) vector. Current results show that for rigid phantoms it is possible to obtain a translational accuracy of 2.88 mm and a rotational accuracy of 0.28°. Tests with human subjects validate the robustness and stability of the proposed system, with a mean registration error of 3.38 mm. Potential applications for this system can be found in radiotherapy or in multimodal image acquisition with different devices.
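The ICP-based rigid alignment can be illustrated with a minimal point-to-point variant. This is a textbook sketch on synthetic point sets, not the system's implementation on live ToF surface data; the nearest-neighbour matching and the Kabsch/SVD rotation update are standard choices rather than details taken from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=50):
    """Minimal point-to-point ICP: returns R, t with src @ R.T + t ~= dst."""
    tree = cKDTree(dst)                        # reference (e.g. planning) surface
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)               # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)  # cross-covariance of centred sets
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                     # optimal rotation (Kabsch)
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t                    # apply incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The accumulated (R, t) pair encodes the 6-DOF correction vector; in the described system it would drive the repositioning of the patient relative to the reference surface.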
KEYWORDS: 3D displays, Cameras, 3D image processing, 3D scanning, Coded apertures, Visualization, 3D visualizations, Infrared radiation, 3D imaging standards, Imaging systems
Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotating three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out on 2D projections of the original object. A major problem in 2D annotation is depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, improving the user's comfort and acceptance. A 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume or depth information computed from a 2D image to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for the manipulation and annotation of medical landmarks directly in the three-dimensional volume.
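The abstract names only "a standard pose estimation algorithm"; one common way to recover a 6-DOF pose from the four tracked LED image points is to minimise the reprojection error of the known 3D LED positions. The sketch below does this with a generic nonlinear least-squares solver; the camera intrinsics, LED geometry, and initial guess are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, obj, K):
    """Pinhole projection of 3D points obj under pose params = (rvec, t)."""
    rvec, t = params[:3], params[3:]
    cam = obj @ Rotation.from_rotvec(rvec).as_matrix().T + t  # to camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]                             # perspective divide

def estimate_pose(obj, img, K, x0=None):
    """Recover the 6-DOF pose by minimising the 2D reprojection error."""
    if x0 is None:
        x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])  # start ~1 m in front
    res = least_squares(lambda p: (project(p, obj, K) - img).ravel(), x0)
    return res.x[:3], res.x[3:]
```

In the actual system the 2D points would come from the Wii's built-in infrared tracking camera, and the non-coplanar LED arrangement guarantees that the pose is locally unique.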