We investigate how visual motion registered during one's own movement through a structured world can be used to gauge travel distance. Estimating absolute travel distance from the visual flow induced in the optic array of a moving observer is problematic because optic flow speeds co-vary with the dimensions of the environment and are thus subject to an environment-specific scale factor. Discriminating the distances of two simulated self-motions of different speed and duration is nevertheless reliably possible from optic flow if the visual environment is the same for both motions, because the scale factors then cancel. Here, we ask whether a distance estimate obtained from optic flow can be transformed into a spatial interval in the same visual environment. Subjects viewed a simulated self-motion sequence on a large (90 by 90 deg) projection screen or in a computer-animated virtual environment (CAVE) with completely immersive, stereographic, head-yoked projection that extended 180 deg horizontally and included the floor space in front of the observer. The sequence depicted self-motion over a ground plane covered with random dots. Simulated distances ranged from 1.5 to 13 meters, with variable speed and duration of the movement. After the movement stopped, the screen depicted a stationary view of the scene and two horizontal lines appeared on the ground in front of the observer. The subject had to adjust one of these lines so that the spatial interval between the lines matched the distance traveled during the movement simulation. Adjusted interval size was linearly related to simulated travel distance, suggesting that observers could obtain a measure of distance from the optic flow. The slope of the regression was 0.7; thus, subjects underestimated distance by 30%. This result was similar for stereoscopic and monoscopic conditions. We conclude that optic flow can be used to derive an estimate of travel distance, but this estimate is subject to scaling when compared to static intervals in the environment, irrespective of stereoscopic depth cues.
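The scale ambiguity and its cancellation can be illustrated with a minimal numerical sketch, assuming (purely for illustration) that a fixed eye height above the ground plane is the unknown environment-specific scale factor; the function and parameter names below are hypothetical, not part of the study:

```python
import numpy as np

def flow_distance_estimate(speed, duration, eye_height, dt=0.01):
    """Integrate simulated ground-plane flow speed over time.

    For an observer translating at `speed` above a ground plane, the
    angular flow of ground texture scales with speed / eye_height, so
    the integral recovers travel distance only up to the unknown scale
    factor 1 / eye_height (simplified, hypothetical model).
    """
    t = np.arange(0.0, duration, dt)
    flow = np.full_like(t, speed / eye_height)   # scaled flow speed per sample
    return float(np.sum(flow) * dt)              # ~ distance / eye_height

eye_height = 1.6                                  # unknown to the observer
d1 = flow_distance_estimate(speed=2.0, duration=3.0, eye_height=eye_height)
d2 = flow_distance_estimate(speed=1.0, duration=4.0, eye_height=eye_height)

# Absolute distances are ambiguous (both are divided by eye_height), but the
# ratio is scale-free, which is why discrimination within one environment works:
print(d1 / d2)   # -> 1.5, the true ratio (2 m/s * 3 s) / (1 m/s * 4 s)
```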
In this work we describe an application of the Support Vector Machine (SVM) classifier to the segmentation of wounds in color images. SVM-based segmentation naturally combines a high-dimensional space of image features into a single classification machine. Since the particular choice of image features is crucial for the performance of the SVM classifier, we investigate the efficiency of color- and texture-based features for differentiating between skin and wound tissue. We find that color features provide better separation between these two tissues; however, incorporating even a single textural feature improves the overall quality of the SVM classification. We test the impact of each color feature on the quality of wound segmentation and find the optimal combination of features that produces the best segmentation result. We suggest a Histogram Sampling technique, which gives a wider separation between wound and skin in the color space. Finally, we find a set of image features that is typical for most types of wounds. When these features are used as input to the SVM classifier, a fairly robust segmentation of different wound types is achieved. We evaluate the performance of the SVM-based segmentation against ground-truth segmentations carried out by clinicians.
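As an illustration of the general approach, the sketch below trains a per-pixel SVM on color features plus one simple texture feature. It assumes NumPy and scikit-learn; the specific features (raw RGB and a 3x3 local standard deviation) and the function names are illustrative choices, not the exact features evaluated in the paper:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def pixel_features(image):
    """Per-pixel features: raw RGB plus one crude texture measure,
    the local intensity standard deviation in a 3x3 neighborhood."""
    h, w, _ = image.shape
    gray = image.mean(axis=2)
    pad = np.pad(gray, 1, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    texture = windows.reshape(h, w, 9).std(axis=2)
    return np.dstack([image, texture[..., None]]).reshape(-1, 4)

def train_and_segment(image, labels):
    """Fit an RBF-kernel SVM on labeled pixels and classify the whole image.

    `labels`: 1 = wound, 0 = skin, -1 = unlabeled (e.g. from a clinician's
    ground-truth annotation); values are illustrative conventions.
    """
    X = pixel_features(image.astype(float))
    y = labels.ravel()
    mask = y >= 0                                  # train only on labeled pixels
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel='rbf', C=1.0, gamma='scale'))
    clf.fit(X[mask], y[mask])
    return clf.predict(X).reshape(labels.shape)    # per-pixel wound/skin map
```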
KEYWORDS: Human-machine interfaces, Virtual reality, Control systems, Navigation systems, 3D modeling, Visual process modeling, Cameras, 3D displays, Prototyping, Infrared radiation
In this paper we present our effort toward the goal of a perceptual user interface for major interaction tasks in virtual shopping, such as navigation/travel, selection/picking, and personal data access. A set of 3-D navigation devices, a vision-based pointing system, and a personal access system are discussed in detail. The motivation and design principles behind these interfaces are also described. A prototype integration solution, which brings these devices together in a virtual shopping environment, is given. These interfaces and interaction devices have been implemented and tested for evaluation.
This paper addresses the navigation problem for an autonomous robot designed to inspect sewerage. The robot, which is about half the size of the sewer diameter, must keep its orientation in the entirely dark sewerage. The only a priori information known is the geometrical shape of the sewer. This implies a strong geometrical constraint on the environment the robot can expect. In this paper we work out an active vision system to be used on board the sewer robot. The system has two components: (a) an optical camera and (b) a laser cross-hair projector generating an ideal cross. The print left by the laser cross-hair projector on the pipe surface is snapped with the camera. The image of the 'cross trace' on the camera plane consists of two intersecting quadratic curves, whose shape gives a clue about the direction in which the robot looks. We investigate the curves that are the images of the laser 'cross traces' as seen on the camera plane for a simulated environmental model. We show how the shape of the curves viewed by the camera depends upon the particular relative position and orientation of the camera and laser, assuming a cylindrical sewer pipe. We give a strategy for aligning the robot with the sewer axis on the basis of the curve images.
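A minimal geometric sketch of how such curves arise is given below: one laser sheet is intersected with a cylindrical pipe and the resulting wall trace is projected through a pinhole camera displaced from the projector. All dimensions, offsets, and parameter names are illustrative assumptions, not the robot's actual calibration:

```python
import numpy as np

def laser_trace_in_image(sheet_normal, cam_offset, pipe_radius=0.3,
                         focal=500.0, n=720):
    """Image of one laser sheet's trace on the pipe wall (sketch).

    Pipe: cylinder y^2 + z^2 = r^2 with its axis along x.  The laser
    projector sits at the origin and emits a plane through the origin
    with the given normal; the camera is a pinhole displaced by
    `cam_offset` from the projector, looking along +x.
    """
    r = pipe_radius
    nx, ny, nz = sheet_normal
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    y, z = r * np.cos(t), r * np.sin(t)
    # Sheet condition nx*x + ny*y + nz*z = 0 fixes x on the cylinder wall
    x = -(ny * y + nz * z) / nx
    p = np.stack([x, y, z], axis=1) - np.asarray(cam_offset)  # camera frame
    ahead = p[:, 0] > 1e-6                 # keep points in front of the camera
    u = focal * p[ahead, 1] / p[ahead, 0]  # pinhole projection
    v = focal * p[ahead, 2] / p[ahead, 0]
    return u, v

# Two nearly perpendicular sheets of the cross-hair projector; because the
# camera is offset from the projector, their wall traces image as two
# intersecting quadratic curves whose shape changes with misalignment.
u1, v1 = laser_trace_in_image((0.02, 1.0, 0.0), cam_offset=(-0.15, 0.0, 0.05))
u2, v2 = laser_trace_in_image((0.02, 0.0, 1.0), cam_offset=(-0.15, 0.0, 0.05))
```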
A vision-based navigation system is a basic tool for providing autonomous operation of unmanned vehicles. For off-road navigation this means that a vehicle equipped with a stereo vision system, and perhaps a laser ranging device, should be able to maintain a high level of autonomy under various illumination conditions and with little a priori information about the underlying scene. The task becomes particularly important for unmanned planetary exploration with the help of autonomous rovers. For example, in the LEDA Moon exploration project currently under study by the European Space Agency (ESA), the vehicle (rover) should perform the following operations during the autonomous mode: on-board absolute localization, digital elevation model (DEM) generation, obstacle detection and relative localization, and global path planning and execution. The focus of this article is a computational solution for fully autonomous path planning and path execution. An operational DEM generation method based on stereoscopy is introduced. Self-localization on the DEM and robust natural feature tracking are used as basic navigation steps, supported by inertial sensor systems. The following operations are performed on the basis of stereo image sequences: 3D scene reconstruction, risk map generation, local path planning, camera position update during the motion on the basis of landmark tracking, and obstacle avoidance. Experimental verification is done with the help of a laboratory terrain mockup and a high-precision camera mounting device. It is shown that standalone tracking using automatically identified landmarks is robust enough to provide navigation data for further stereoscopic reconstruction of the surrounding terrain. Iterative tracking and reconstruction leads to a complete description of the vehicle path and its surroundings with an accuracy high enough to meet the specifications for autonomous outdoor navigation.
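The stereoscopy-based DEM step can be sketched as converting a rectified disparity map to 3-D points (depth Z = f·B/d) and binning them into an elevation grid. The sketch below assumes NumPy; the grid parameters, binning scheme, and function name are hypothetical simplifications, not the pipeline described in the article:

```python
import numpy as np

def dem_from_disparity(disparity, focal_px, baseline_m, cam_height_m,
                       cell=0.1, grid=(50, 50)):
    """Rasterize a rectified stereo disparity map into a coarse elevation grid.

    Assumes horizontally looking, rectified cameras: depth Z = f*B/d,
    lateral offset X = (u - cx)*Z/f, and terrain height = camera height
    minus the downward image coordinate in metric units.
    """
    h, w = disparity.shape
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0.5                        # ignore unmatched pixels
    Z = focal_px * baseline_m / disparity[valid]   # depth along the optical axis
    X = (u[valid] - cx) * Z / focal_px             # lateral position
    Y = (v[valid] - cy) * Z / focal_px             # downward position
    height = cam_height_m - Y                      # terrain height above ground

    # Bin the 3-D points into an X/Z grid, keeping the maximum height per cell
    dem = np.full(grid, -np.inf)
    ix = np.clip((X / cell + grid[0] // 2).astype(int), 0, grid[0] - 1)
    iz = np.clip((Z / cell).astype(int), 0, grid[1] - 1)
    np.maximum.at(dem, (ix, iz), height)
    dem[np.isinf(dem)] = np.nan                    # mark empty cells
    return dem
```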