Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR
environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays.
Given the potential of VR systems, the question arises as to what benefits or detriments a military pilot
might incur by operating in such an environment. Immersive and compelling VR displays could be achieved with
an HMD (e.g., imagery on the visor), with large-area collimated displays, or by projecting the imagery onto an opaque canopy. But
what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20
visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43
megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which
presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required
to drive the displays to this resolution (and formidable network architectures required to relay this information), or
massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can
we presently implement such a system? What other visual requirements or engineering issues should be considered?
As this technology evolves, many engineering issues and human factors considerations need to be
addressed before a pilot is placed within a virtual cockpit.
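The megapixel figures above follow from a simple back-of-the-envelope calculation. A minimal sketch, assuming 20/20 acuity corresponds to one pixel per arcminute (60 pixels/degree) and an illustrative per-eye HMD field of 120 by 100 degrees (the field-of-view values are assumptions chosen to match the figure in the text, not values stated there):

```python
def pixels_for_acuity(h_fov_deg, v_fov_deg, px_per_deg=60):
    """Pixels needed to cover a field of view at a given angular resolution.

    20/20 acuity resolves 1 arcminute, so a flat-field approximation
    needs 60 pixels per degree in each direction.
    """
    return (h_fov_deg * px_per_deg) * (v_fov_deg * px_per_deg)

# Assumed 120 x 100 degree per-eye field: 7200 x 6000 pixels,
# i.e. about 43 megapixels, consistent with the "over 43 MP" estimate.
hmd_pixels = pixels_for_acuity(120, 100)
```

A full-surround CAVE covers a much larger solid angle, which is why its estimate climbs to roughly 150 MP under similar assumptions.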
Multi-spectral image portrayal using several sensors is a revolutionary way to increase the amount of useful visual
information to the end user. However, for maximum usability, the information from multiple sensors must be fused into
a single, readily interpretable image. Deciding which sensors deliver the most important information
for a given viewing situation, and how the acquired data should be manipulated, is complex. To better
examine this complexity, information was obtained from aviators about which visual tasks are deemed to be most
important. This information was gathered from discussions with pilots and other aircrew members as well as from
relevant publications. The important visual task information was then used to develop a matrix that included specific
visual aspects of the task (e.g., detection or identification). The matrix also included other parameters that could affect
or alter the ability to "see" the desired target or perform the task. These other parameters include ambient lighting,
environmental conditions (e.g., clear or hazy atmospheres), man-made impediments to vision (camouflage or smoke),
and which image enhancing algorithms should be applied (e.g., contrast enhancement or noise reduction). This top-down
evaluation was then used to determine which image enhancement algorithms are most important and which will be
employed most often for the identified visual tasks.
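The matrix described above can be pictured as a lookup from (visual task, viewing condition) pairs to recommended enhancement algorithms. A toy sketch whose entries are invented for illustration, not drawn from the study's actual matrix:

```python
# Illustrative entries only: each key pairs a visual task aspect with a
# condition affecting the ability to "see" the target; the value lists
# the enhancement algorithms judged most useful for that combination.
TASK_MATRIX = {
    ("detection", "hazy atmosphere"): ["contrast enhancement", "noise reduction"],
    ("detection", "smoke"): ["contrast enhancement"],
    ("identification", "clear atmosphere"): ["edge sharpening"],
}

def recommended_algorithms(task, condition):
    """Return the algorithms tabulated for a task/condition pair, if any."""
    return TASK_MATRIX.get((task, condition), [])
```

Tallying how often each algorithm appears across the filled-in matrix is what identifies the algorithms that would be employed most often.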
The increasing use of lasers on the modern battlefield may necessitate the wearing of laser eye protection devices (LEPDs) by warfighters. Unfortunately, LEPDs that protect against visible laser wavelengths often reduce overall light transmittance, and a wearer's vision can be degraded, especially in low-light conditions. Wearing night vision goggles (NVGs) provides laser eye protection behind the goggles, but NVGs do not block lasers that might enter the eye around them. Therefore, LEPDs will be worn under NVGs. People wearing NVGs look below the goggles to read displays and to perform other near-vision tasks. This effort determined the effects of wearing variable density filters on vision in low-light conditions, with and without the presence of a simulated head-down display (HDD). Each subject's visual acuity was measured under moonlight illumination levels while wearing neutral density filters and LEPDs. Similar measurements of the subjects' visual detection thresholds, both on- and off-axis, were made. Finally, the effects of wearing variable density filters on visual acuity on the HDD were determined. Wearing variable density filters in low-light conditions reduces visual acuity and detection. The presence of the HDD slightly reduced acuity through variable density filters, but the HDD had no effect on on-axis detection and actually improved off-axis detection. The reasons for this final finding are unclear.
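The light loss behind such filters follows the standard optical density relation. A minimal sketch, assuming the conventional definition of transmittance, T = 10^(-OD), with the densities of stacked filters adding:

```python
def transmittance(optical_density):
    """Fraction of incident light a filter passes: T = 10**(-OD)."""
    return 10 ** -optical_density

def stacked_od(*densities):
    """Optical densities of stacked filters simply add."""
    return sum(densities)

# An OD 1.0 neutral density filter passes 10% of incident light and
# OD 2.0 passes only 1%, which is why acuity and detection suffer
# under moonlight illumination levels.
```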
The use of lasers by both the military and civilian communities is rapidly expanding. Thus, the potential for and severity of laser eye injury and retinal damage are increasing. Sensitive and accurate methods to evaluate and follow laser retinal damage are needed. The multifocal electroretinogram (mfERG) has the potential to meet these criteria. In this study, the mfERG was used to evaluate changes to retinal function following laser exposure. Landolt C contrast acuity was also measured in six behaviorally trained Rhesus monkeys. The monkeys then received Nd:YAG laser lesions (1064 nm, 9 ns pulse width) in each eye. One eye received a single foveal lesion of approximately 0.13 mJ total intraocular exposure (TIE), and the other received six parafoveal lesions that varied in TIE from 0.13 to 4 mJ. mfERGs and behavioral data were collected both pre- and post-exposure. mfERGs were recorded using stimuli that contained 103, 241, and 509 hexagons. Landolt C contrast acuity was measured with five sizes of Landolt C (0.33 to 11.15 cycles/degree) at varying contrast. mfERG response densities were sensitive to the functional retinal changes caused by the laser insult. In general, larger lesions showed greater mfERG abnormalities than smaller ones. Deficits in contrast acuity were more severe in the eyes with foveal injuries. Although the mfERG and contrast acuity assess different areas of the visual system, both are sensitive to laser-induced retinal damage and may be complementary tests for laser eye injury triage.
The past several years have seen a severe shortage of pathogen-free, Indian-origin Rhesus macaques due to the increased demand for this model in retroviral research. With more than 30 years of research data accumulated using the Rhesus macaque as the model for laser eye injury, there is a need to bridge to a more readily available nonhuman primate model. Much of the data previously collected from the Rhesus monkey (Macaca mulatta) provided the basis for the American National Standards Institute (ANSI) standards for laser safety. Currently, a Tri-service effort is underway to utilize the Cynomolgus monkey (Macaca fascicularis) as a replacement for the Rhesus macaque. Preliminary functional and morphological baseline data collected from multifocal electroretinography (mfERG), optical coherence tomography (OCT), and retinal cell counts were compared in a small group of monkeys and tissues to determine whether significant differences existed between the species. Initial functional findings from mfERG yielded only one difference, the n2 amplitude value, which was greater in the Cynomolgus monkey. No significant differences were seen in retinal and foveal thickness, as determined by OCT scans, and no significant differences were seen in ganglion cell and inner nuclear cell nuclei counts. A highly significant difference was seen in the numbers of photoreceptor nuclei, with greater numbers in the Rhesus macaque. This indicates that more studies should be performed to determine the impact a model change would have on the laser bioeffects community and its ability to continue to provide minimal visible lesion data for laser safety standards. The continued goal of this project is to provide the baseline information necessary for a seamless transition to a more readily available animal model.
Much of the guidance provided to designers of visual displays is highly simplified because of historical limitations of visual display hardware and software. Vast improvements have been made in processors, communication channel bandwidth, and display screen performance over the past 10 years, and the pace of these visual system improvements is accelerating. It is now time to undertake a critical review of the true performance capability of the human visual system (HVS). Designers can now expect to realize systems that optimize (increase) worker productivity rather than merely minimize the negative impact on human effectiveness of hardware artifacts such as low spatial, gray-level, and temporal resolution relative to the real world. Myths and realities of human vision are examined to show where some assumptions used by designers are not based on solid research. Some needed new human vision studies are identified. An ideal display system is described that would enable, rather than limit, full exploitation of HVS capability.
A helmet tracker is a critical element in the path that delivers targeting and other sensor data to the user of a helmet-mounted display (HMD) in a military aircraft. The original purpose of an HMD was to serve as a helmet-mounted sight and provide a means to fully utilize the capabilities of off-boresight munitions. Recently, the role of the HMD has evolved from being strictly a targeting tool to providing detailed flight path and situation awareness information. These changes, however, have placed even greater value on the visual information that is transferred through the helmet tracker to the HMD. Specifically, the timeliness and accuracy of the information, which is of critical importance when the HMD is used as a targeting aid, is of even greater importance when the HMD is used to display flight reference information. This is especially relevant since it has been proposed to build new military aircraft without a physical head-up display (HUD) and display HUD information virtually with an HMD. In this paper, we review the current state of helmet tracker technology with respect to use in military aviation. We also identify the parameters of helmet trackers that offer the greatest risk when using an HMD to provide information beyond targeting data to the user. Finally, we discuss the human factors limitations of helmet tracker systems for delivering both targeting and flight reference information to a military pilot.
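One way to see why tracker timeliness matters: to first order, end-to-end latency during head motion translates directly into angular symbology placement error. A rough sketch with assumed, illustrative numbers (neither the rates nor the latency are figures from the paper):

```python
def pointing_error_deg(head_rate_deg_per_s, latency_s):
    """First-order angular error from tracker-plus-display latency
    while the head is rotating at a constant rate."""
    return head_rate_deg_per_s * latency_s

# Assumed values: during a brisk 120 deg/s head turn, 50 ms of total
# latency misplaces HMD symbology by about 6 degrees, which may be
# tolerable for coarse target cueing but is serious for conformal
# flight reference symbology.
error = pointing_error_deg(120, 0.050)
```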
Recent advances in display technology have made it possible to superimpose color-coded symbology on the images produced by night vision goggles (NVGs). The resulting color mixture shifts the symbology's hue and saturation, which can impede recognition of the color code. We are developing luminance-contrast specifications for color-coded NVG symbology to ensure accurate color recognition.
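The shift arises because the symbology light adds to the intensified background light. A hedged sketch of additive mixing, using invented colors (the RGB values are illustrative assumptions, not measured NVG phosphor or symbology primaries):

```python
import colorsys

def additive_mix(rgb_a, rgb_b):
    """Additive superposition of two lights in linear RGB, clipped to [0, 1]."""
    return tuple(min(1.0, a + b) for a, b in zip(rgb_a, rgb_b))

# Illustrative, assumed values.
symbology_red = (0.6, 0.0, 0.0)   # a red-coded symbol
nvg_green = (0.0, 0.4, 0.1)       # greenish intensified background

mixed = additive_mix(symbology_red, nvg_green)
h0, s0, _ = colorsys.rgb_to_hsv(*symbology_red)
h1, s1, _ = colorsys.rgb_to_hsv(*mixed)
# The underlying green pulls the symbol's hue toward yellow and lowers
# its saturation, which is the color-recognition hazard described above.
```

Raising symbology luminance relative to the background limits how far the mixture can drift, which motivates luminance-contrast specifications.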
Pilots, developers, and other users of night-vision goggles (NVGs) have pointed out that different NVG image intensifier tubes have different subjective noise characteristics. Currently, no good model of the visual impact of NVG noise exists. Because it is very difficult to objectively measure the noise of an NVG, a method for assessing noise subjectively using simple psychophysical procedures was developed. This paper discusses the use of a computer program to generate noise images similar to what an observer sees through an NVG. The generated images were based on 1/f (where f is frequency) filtered white noise with several adjustable parameters, each of which varied a different characteristic of the noise. This paper discusses a study in which observers compared the computer-generated noise images to true NVG noise and were asked to determine which computer-generated image was the best representation of the true noise. This method was repeated with different types of NVGs and at different luminance levels to study which NVG parameters cause variations in NVG noise.
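The 1/f filtering step can be sketched in one dimension (the study's noise images extend the same idea to two dimensions over radial frequency): shape a random-phase spectrum so amplitude falls as f to the power of minus alpha, then invert the transform. The parameter names and the naive inverse DFT below are illustrative choices, not the paper's implementation:

```python
import cmath
import random

def pink_noise_1d(n=64, alpha=1.0, seed=0):
    """Sample 1/f**alpha noise by shaping a random-phase spectrum.

    alpha plays the role of one adjustable parameter, the spectral
    slope (alpha=1 gives classic "pink" 1/f noise); other shaping
    knobs, such as a low-frequency cutoff, could be added the same way.
    """
    rng = random.Random(seed)
    spectrum = [0j] * n  # DC term stays zero
    for k in range(1, n // 2):
        amp = k ** -alpha                          # amplitude falls as f**-alpha
        phase = rng.uniform(0.0, 2.0 * cmath.pi)
        spectrum[k] = amp * cmath.exp(1j * phase)
        spectrum[n - k] = spectrum[k].conjugate()  # Hermitian symmetry -> real signal
    # Naive inverse DFT; fine for demonstration sizes.
    return [
        (sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
             for k in range(n)) / n).real
        for t in range(n)
    ]
```

Letting observers adjust alpha and similar parameters until the synthetic image best matches the view through the goggles is what turns this generator into a subjective noise-assessment tool.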
Previously, we presented an experiment in which we defined minimum, but not sufficient, luminance contrast ratios for color recognition and legibility in helmet-mounted display (HMD) use. In that experiment, observers made a subjective judgement of their ability to recognize a color by stopping the incremental increase in the contrast ratio of a static display. For some target color/background combinations, error rates were extremely high, and in these cases sufficient contrast ratios were not achieved. In the present experiment, we randomly presented one of three target colors on one of five backgrounds. The contrast ratio of the target on the background ranged from 1.025:1 up to 1.3:1 in steps of 0.025. As before, we found that observers could accurately identify the target colors at very low contrast ratios. In addition, we defined the range in which color recognition and legibility became sufficient (>= 95% correct). In a second experiment, we investigated how well observers performed when more than one color appeared in the symbology at a time. This allowed observers to compare target colors against each other on the five backgrounds. We discuss our results in terms of luminance contrast ratio requirements for both color recognition and legibility in HMDs.
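The stimulus levels above can be sketched directly, assuming the conventional reading of an N:1 luminance contrast ratio as L_target divided by L_background (the function names and the 10 cd/m^2 example background are illustrative, not values from the experiment):

```python
def contrast_ratio(target_lum, background_lum):
    """Luminance contrast ratio, assumed here as L_target / L_background."""
    return target_lum / background_lum

def target_luminance(background_lum, ratio):
    """Target luminance required to achieve a given ratio on a background."""
    return background_lum * ratio

# The tested levels: 1.025:1 up to 1.3:1 in steps of 0.025.
RATIOS = [round(1.0 + 0.025 * i, 3) for i in range(1, 13)]

# Example: on an assumed 10 cd/m^2 background, the lowest tested ratio
# calls for a target only 2.5% brighter than its surround.
lowest_target = target_luminance(10.0, RATIOS[0])
```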