Computer network security has become a serious concern for commercial, industrial, and military organizations
due to the increasing number of network threats such as outsider intrusions and insider covert activities.
An important security element is network intrusion detection, a difficult real-world problem
that has been addressed through many different solution attempts. The artificial immune system has been
shown to be one of the most promising approaches. By enhancing jREMISA, a multi-objective evolutionary
algorithm-inspired artificial immune system, with a secondary defense layer, we produce improved accuracy of
intrusion classification and flexibility in responsiveness. This responsiveness can be leveraged to provide a much
more powerful and accurate system through the use of increased processing time and dedicated hardware, which has
the flexibility of being located out of band.
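The abstract does not specify jREMISA's internals, but the layered-defense idea can be illustrated with a minimal sketch, assuming a binary traffic encoding, an r-contiguous-bits matching rule, and a hypothetical two-detector confirmation threshold; none of these details come from the paper itself.

```python
# Illustrative two-stage anomaly cascade: a fast primary layer flags suspicious
# traffic, and a slower secondary layer re-examines only the flagged records.
# The detector representation and r-contiguous matching rule are assumptions,
# not jREMISA's actual internals.
from dataclasses import dataclass

@dataclass
class Detector:
    pattern: str          # binary-string encoding of traffic features
    r: int                # r-contiguous-bits match threshold

def matches(detector: Detector, record: str) -> bool:
    """r-contiguous-bits matching rule commonly used in negative selection."""
    run = 0
    for a, b in zip(detector.pattern, record):
        run = run + 1 if a == b else 0
        if run >= detector.r:
            return True
    return False

def classify(record: str, primary: list[Detector], secondary: list[Detector]) -> str:
    # Primary layer: cheap screen over all traffic.
    if not any(matches(d, record) for d in primary):
        return "normal"
    # Secondary layer: additional detectors (or out-of-band hardware) confirm the alert.
    hits = sum(matches(d, record) for d in secondary)
    return "intrusion" if hits >= 2 else "suspicious"
```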
Evolutionary algorithms (EAs) have been employed in recent years in the design of robust image transforms.
EAs attempt to optimize the defining filter coefficients of a discrete wavelet transform (DWT) to improve image
quality for bandwidth-restricted surveillance applications, such as the transmission of images by swarms of
unmanned aerial vehicles (UAVs) over shared channels. Regardless of the specific algorithm employed, filter
coefficients are optimized over a common fitness landscape that defines allowable configurations that filters may
take. Any optimization algorithm attempts to identify highly-fit filter configurations within the landscape. The
evolvability of transform filters depends upon the ruggedness, deceptiveness, neutrality, and modality of the
underlying landscape traversed by the EA. We have previously studied the evolvability of image transforms for
satellite image processing with regard to ruggedness and deceptiveness. Here we examine the position of wavelet
coefficients within a landscape to determine whether optimization algorithms should be seeded near this position
or randomly seeded in the global landscape. Through examination of landscape deceptiveness, both near wavelet
coefficients and throughout the global range of the landscape, we determine that the neighborhood surrounding
the wavelet contains a greater concentration of highly fit solutions. EAs that concentrate their search effort in
this neighborhood have a better chance of identifying filters that improve upon standard wavelets. An improved
understanding of the underlying fitness landscape characteristics impacts the design of evolutionary algorithms
capable of identifying near-optimal image transforms suitable for deployment in defense and security applications
of image processing.
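The seeding comparison described above can be illustrated with a short sketch contrasting the two initialization strategies; the coefficient bounds and neighborhood width used here are illustrative assumptions, and base_coeffs stands in for whichever standard wavelet filter is being improved upon.

```python
# Sketch of the two seeding strategies compared above: populations initialized
# (a) in a Gaussian neighborhood around the standard wavelet's filter
# coefficients and (b) uniformly at random over the global coefficient range.
# The coefficient bounds and neighborhood width below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def seed_near_wavelet(base_coeffs: np.ndarray, pop_size: int, sigma: float = 0.05) -> np.ndarray:
    """Perturb the known wavelet coefficients to concentrate search locally."""
    return base_coeffs + rng.normal(0.0, sigma, size=(pop_size, base_coeffs.size))

def seed_globally(n_coeffs: int, pop_size: int, low: float = -2.0, high: float = 2.0) -> np.ndarray:
    """Sample filter coefficients uniformly over the full landscape range."""
    return rng.uniform(low, high, size=(pop_size, n_coeffs))
```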
An important aspect of contemporary military communications is the design of robust image transforms
for defense surveillance applications. In particular, efficient yet effective transfer of critical
image information is required for decision making. The generic use of wavelets to transform an
image is a standard transform approach. However, the resulting bandwidth requirements can be
quite high, suggesting that a different bandwidth-limited transform be developed. Thus, our specific
use of genetic algorithms (GAs) attempts to replace standard wavelet filter coefficients with
an optimized transform filter in order to retain or improve image quality for bandwidth-restricted
surveillance applications. To find improved coefficients efficiently, we have developed a software-engineered
distributed design employing a genetic algorithm (GA) parallel island model on small and
large computational clusters with multi-core nodes. The main objective is to determine whether
running a distributed GA with multiple islands would either give statistically equivalent results
quicker or obtain better results in the same amount of time. In order to compare computational
performance with our previous serial results, we evaluate the obtained "optimal" wavelet coefficients
on test images from both approaches, which yields excellent comparative metric values.
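The island-model approach can be sketched in miniature as below, assuming a ring migration topology, truncation selection, and Gaussian mutation; the actual distributed implementation, operator choices, and parameter settings on the clusters are not reproduced here.

```python
# Minimal island-model GA sketch: independent subpopulations with periodic
# ring migration of the best individual. The fitness function, operators,
# and migration interval are placeholders for illustration.
import numpy as np

rng = np.random.default_rng(1)

def evolve_island(pop, fitness, n_gens):
    for _ in range(n_gens):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: len(pop) // 2]]         # truncation selection (minimizing)
        children = parents + rng.normal(0.0, 0.01, parents.shape)  # Gaussian mutation
        pop = np.vstack([parents, children])
    return pop

def island_ga(fitness, n_islands=4, pop_size=20, n_coeffs=9, epochs=10, gens_per_epoch=25):
    islands = [rng.uniform(-2, 2, (pop_size, n_coeffs)) for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve_island(p, fitness, gens_per_epoch) for p in islands]
        # Ring migration: copy each island's best individual into its neighbor.
        bests = [min(p, key=fitness).copy() for p in islands]
        for i, p in enumerate(islands):
            p[-1] = bests[(i - 1) % n_islands]
    return min((min(p, key=fitness) for p in islands), key=fitness)
```

In the distributed version each island would run on its own core or node, exchanging migrants over the cluster interconnect rather than through a shared list.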
A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
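The evaluation loop implied by this approach, forward transform, coefficient quantization, inverse transform, and error scoring, can be sketched as follows; PyWavelets and the 'db4', level-3, q = 64 settings are stand-ins for the evolved filter pairs and the paper's actual configuration.

```python
# Hedged sketch of the fitness evaluation implied above: push a training image
# through a forward transform, quantize the coefficients, invert, and score the
# reconstruction by MSE. PyWavelets stands in for the evolved filter bank here;
# in the actual GA the 'db4'/level-3 transform would be replaced by the evolved
# forward/inverse coefficient pairs.
import numpy as np
import pywt

def reconstruction_mse(image: np.ndarray, q: int = 64, wavelet: str = "db4", levels: int = 3) -> float:
    coeffs = pywt.wavedec2(image, wavelet, level=levels)           # forward 3-level MRA transform
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr = np.round(arr / q) * q                                    # uniform quantization (lossy step)
    coeffs_q = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    recon = pywt.waverec2(coeffs_q, wavelet)                       # inverse transform
    recon = recon[: image.shape[0], : image.shape[1]]              # trim possible padding
    return float(np.mean((image.astype(float) - recon) ** 2))
```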
In recent years, there has been increased interest in the use of evolutionary algorithms (EAs) in the design of robust
image transforms for use in defense and security applications. An EA replaces the defining filter coefficients
of a discrete wavelet transform (DWT) to provide improved image quality within bandwidth-limited image processing
applications, such as the transmission of surveillance data by swarms of unmanned aerial vehicles (UAVs)
over shared communication channels. The evolvability of image transform filters depends upon the properties
of the underlying fitness landscape traversed by the evolutionary algorithm. The landscape topography determines
the ease with which an optimization algorithm may identify highly-fit filters. The properties of a fitness
landscape depend upon a chosen evaluation function defined over the space of possible solutions. Evaluation
functions appropriate for image filter evolution include mean squared error (MSE), the universal image quality
index (UQI), peak signal-to-noise ratio (PSNR), and average absolute pixel error (AAPE). We conduct a theoretical
comparison of these image quality measures using random walks through fitness landscapes defined over
each evaluation function. This analysis allows us to compare the relative evolvability of the various potential
image quality measures by examining fitness topology for each measure in terms of ruggedness and deceptiveness.
A theoretical understanding of the topology of fitness landscapes aids in the design of evolutionary algorithms
capable of identifying near-optimal image transforms suitable for deployment in defense and security applications
of image processing.
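The four candidate evaluation functions, and the random-walk style of landscape probing, can be written down compactly; the walk length, step size, and lag below are illustrative choices rather than the study's actual settings.

```python
# Sketch of the four candidate evaluation functions and a simple random-walk
# ruggedness estimate (fitness autocorrelation along the walk). The UQI form
# follows Wang & Bovik's single-window definition; constants such as the walk
# step size are illustrative.
import numpy as np

def mse(x, y):  return float(np.mean((x - y) ** 2))
def aape(x, y): return float(np.mean(np.abs(x - y)))
def psnr(x, y, peak=255.0):
    return float(10.0 * np.log10(peak ** 2 / mse(x, y)))
def uqi(x, y):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2)))

def walk_autocorrelation(fitness, start, steps=200, sigma=0.01, lag=1):
    """Random-walk ruggedness: correlation of fitness values `lag` steps apart."""
    rng = np.random.default_rng(2)
    point, values = np.asarray(start, dtype=float), []
    for _ in range(steps):
        values.append(fitness(point))
        point = point + rng.normal(0.0, sigma, point.shape)
    v = np.asarray(values)
    return float(np.corrcoef(v[:-lag], v[lag:])[0, 1])
```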
Assignment problems are a common area of research in operational research and computer science. Military applications include military personnel assignment, combat radio frequency assignment, and weapon target assignment. In general, assignment problems can be found in a wide array of areas, from modular placement to resource scheduling. Many of these problems are very similar to one another. This paper models and compares some of the assignment problems in the literature. These similar problems are then generalized into a single multi-objective problem, the
constrained assignment problem. Using a multi-objective genetic algorithm, we solve an example of a constrained assignment problem called the airman assignment problem. Results show that good solutions along the interior portion of the Pareto front are found
in earlier generations, while later generations produce more exterior
points.
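A minimal sketch of the Pareto-dominance bookkeeping such a multi-objective GA relies on is given below; the airman-assignment objectives and constraints themselves are not modeled here, and objective vectors are assumed to be minimized.

```python
# Minimal Pareto-dominance helpers of the kind a multi-objective GA needs when
# ranking candidate assignments. Objective vectors are assumed to be minimized;
# the airman-assignment objectives themselves are illustrative placeholders.
def dominates(a, b):
    """True if objective vector a is no worse than b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population, objectives):
    """Return the non-dominated members of a population."""
    scores = [objectives(ind) for ind in population]
    return [ind for ind, s in zip(population, scores)
            if not any(dominates(t, s) for t in scores if t is not s)]
```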
Military imaging systems often require the transmission of copious amounts of data in noisy or bandwidth-limited
situations. High rates of lossy image compression may be achieved through the use of quantization at the expense
of resulting image quality. We employ genetic algorithms (GAs) to evolve military-grade transforms capable
of improving reconstruction of satellite reconnaissance images under conditions subject to high quantization
error. The resulting transforms outperform existing wavelet transforms at a given compression ratio, allowing
transmission of data at a lower bandwidth. Because GAs are notoriously difficult to tune, the selection of
appropriate variation operators is critical when designing GAs for military-grade algorithm development. We
test several state-of-the-art real-coded crossover and mutation operators to develop an evolutionary system
capable of producing transforms providing robust performance over a set of fifty satellite images of military
interest. With appropriate operators, evolved filters consistently provide an average mean squared error (MSE)
reduction greater than 17% over the original wavelet transform. By improving image quality, evolved transforms
increase the amount of intelligence that may be obtained from reconstructed images.
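As an illustration of the kind of real-coded operators being compared, the sketch below pairs blend crossover (BLX-alpha) with Gaussian mutation; the specific operators and parameter values tested in the study are not reproduced here.

```python
# Illustrative real-coded variation operators of the kind compared above:
# blend crossover (BLX-alpha) and Gaussian mutation over filter-coefficient
# vectors. Parameter values (alpha, mutation rate, sigma) are placeholders.
import numpy as np

rng = np.random.default_rng(3)

def blx_alpha(p1: np.ndarray, p2: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)

def gaussian_mutation(child: np.ndarray, rate: float = 0.1, sigma: float = 0.02) -> np.ndarray:
    mask = rng.random(child.shape) < rate
    return np.where(mask, child + rng.normal(0.0, sigma, child.shape), child)
```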
Popular press and congressional record report a belief by the intelligence community that Al Qaeda members communicate through messages embedded invisibly in images shared via the Internet. This is certainly plausible as steganography has a rich history of military and civilian use. Current signature-based approaches for detecting the presence of hidden messages rely on discerning "footprints" of steganographic tools. Of greater recent concern is detecting the use of novel tools for which no signature has been established. This research addresses this concern by using a method for detecting anomalies in seemingly innocuous images, applying a genetic algorithm within a computational immune system to leverage powerful image processing through wavelet analysis. The sensors developed with this system demonstrated a surprising level of capability to detect the use of steganographic tools for which the system had no previous exposure, including one tool designed to be statistically stealthy.
Current signature-based approaches for detecting the presence of hidden messages in digital files - the initial step in steganalysis - rely on discerning "footprints" of steganographic tools. Of greater recent concern is detecting the use of novel proprietary or "home-grown" tools for which no signature has been established. This research focuses on detecting anomalies in seemingly innocuous images, applying a genetic algorithm within a computational immune system to leverage powerful image processing through wavelet analysis. The sensors developed with this new, synergistic system demonstrated a surprising level of capability to detect the use of steganographic tools for which the system had no previous exposure.
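One common way to turn wavelet analysis into fixed-length inputs for anomaly sensors is to collect per-subband statistics; the sketch below shows that general idea, with the wavelet choice, decomposition depth, and statistics being illustrative assumptions rather than the feature set actually used in this research.

```python
# A hedged sketch of wavelet-statistics feature extraction of the general kind
# such a system could feed to its immune-system sensors: per-subband moments of
# a multi-level decomposition. The specific feature set used in the research
# is not reproduced here; this is only an illustration of the idea.
import numpy as np
import pywt

def wavelet_features(image: np.ndarray, wavelet: str = "db8", levels: int = 3) -> np.ndarray:
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:                 # (cH, cV, cD) subband tuples per level
        for band in detail:
            feats += [band.mean(), band.std(), np.abs(band).mean()]
    return np.asarray(feats)                  # fixed-length vector for anomaly scoring
```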
Wave optics propagation codes are widely used to simulate the propagation of electromagnetic radiation through a turbulent medium. The basis of these codes is typically the two-dimensional Fast Fourier Transform (FFT). Conventional FFTs (i.e. the standard Matlab FFT) do not use parallel processing, and for large arrays the processing time can be cumbersome. This research investigates the use of network-based parallel computing using personal computers. In particular, this study uses the Air Force Institute of Technology (AFIT) Bimodal Cluster, a heterogeneous cluster of PCs connected by Fast Ethernet, for parallel digital signal processing using an FFT algorithm developed for use on this system. The parallel algorithms developed for the Parallel Distributed Computing Laboratory could greatly increase the computational power of current wave optics codes. The objective of this research is to implement current parallel FFT algorithms for use with wave optics propagation codes and quantify performance enhancement. With the parallel version of the FFT implemented into existing wave optics simulation code, high-resolution simulations can be run in a fraction of the time currently required using conventional FFT algorithms. We present the results of implementing this parallel FFT algorithm and the enhanced performance achieved over the Matlab FFT2 function.
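The row/column decomposition underlying such a parallel 2D FFT can be sketched with standard Python multiprocessing; this is a generic single-machine illustration, not the networked AFIT Bimodal Cluster implementation.

```python
# Sketch of the row/column decomposition behind a parallel 2D FFT: 1-D FFTs
# over the rows are farmed out to worker processes, the array is transposed,
# and the row FFTs are repeated. Generic illustration only; the cluster version
# distributed this work over networked PCs rather than local processes.
import numpy as np
from multiprocessing import Pool

def _fft_rows(chunk: np.ndarray) -> np.ndarray:
    return np.fft.fft(chunk, axis=1)

def parallel_fft2(a: np.ndarray, workers: int = 4) -> np.ndarray:
    with Pool(workers) as pool:
        rows = np.vstack(pool.map(_fft_rows, np.array_split(a, workers)))         # FFT along rows
        cols = np.vstack(pool.map(_fft_rows, np.array_split(rows.T, workers))).T  # FFT along columns
    return cols

if __name__ == "__main__":
    x = np.random.rand(512, 512)
    assert np.allclose(parallel_fft2(x), np.fft.fft2(x))
```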
The induction of decision rules from data is important to many disciplines, including artificial intelligence and pattern recognition. To improve the state of the art in this area, we introduced the genetic rule and classifier construction environment (GRaCCE). It was previously shown that GRaCCE consistently evolved decision rule sets from data which were significantly more compact than those produced by other methods (such as decision tree algorithms). The primary disadvantage of GRaCCE, however, is its relatively poor run-time execution performance. In this paper, a concurrent version of the GRaCCE architecture is introduced, which improves the efficiency of the original algorithm. A prototype of the algorithm is tested on an in-house parallel processor configuration and the results are discussed.
The US military sees a great use for software agent technology in its "synthetic battlespace", a Distributed Virtual Environment currently used as a training and planning arena. The Computer Generated Actors (CGAs) currently used in the battlespace display varying capabilities, but the state of the art falls short of presenting believable agents. This lack of "believability" directly contributes to simulation and participant inconsistencies. Even if CGAs display believable behavior, no formalized methodology exists for judging that display or for comparing CGA performance. A formal method is required to obtain a quantitative measurement of performance for use in assessing a CGA's performance in some simulation, and thus its suitability for use in the battlespace. This paper proposes such a quantitative evaluation method for determining an agent's observed degree of performance as related to skills. Since the method delineates what is being measured and the criteria upon which the measurement is based, it also explains the particular evaluation given for specific military CGAs.
The objective of the Gross Motion Control project at the Air Force Institute of Technology (AFIT) Robotic Systems Laboratory is to investigate alternative control approaches that will provide payload-invariant, high-speed trajectory tracking for non-repetitive motions in free space. Our research has concentrated on modifications to the model-based control structure. We are actively pursuing development and evaluation of both adaptive primary (inner loop) and robust secondary (output loop) controllers. In-house developments are compared and contrasted to the techniques proposed by other researchers. The case study for our evaluations is the first three links of a PUMA-560. Incorporating the principles of multiple model adaptive estimation, artificial neural networks, and Lyapunov theory into the model-based paradigm has shown the potential for enhanced tracking. Secondary controllers based on Quantitative Feedback Theory or augmented with auxiliary inputs significantly improve the robustness to payload variations and unmodeled drive system dynamics. This paper presents an overview of the different concepts under investigation and highlights our latest experimental results.
This report details the implementation of the Kohonen Self-Organizing Net on a 32-node Intel iPSC/1 HyperCube and the 25% performance improvement gained by increasing the dimensionality of the net without increasing processing requirements.
1. KOHONEN SELF-ORGANIZING MAP IMPLEMENTED ON THE INTEL iPSC HYPERCUBE
This report examines the implementation of a Kohonen net on the Intel iPSC/1 HyperCube and explores the performance improvement gained by increasing the dimensionality of the Kohonen net from the conventional two-dimensional case to the n-dimensional case, where n is the number of inputs to the Kohonen net. In this example the Kohonen net performance is improved by increasing the dimensionality of the net without increasing the number of weights or nodes in the net and without increasing processing requirements. Kohonen, in his Tutorial/ICCN [1, 2], states that the dimensionality of the grid is not restricted to two, but that maps in the biological brain tend to be two-dimensional. It is proposed that this is not a particularly severe restriction in the brain, where not all inputs are connected to all nodes and specific maps can be formed for specific functions, but in the case of the massively connected Kohonen net, reducing all problems to two dimensions places an unnecessary burden on the learning process and necessarily causes the loss of information regarding the interrelationship of inputs and corresponding output clusters. Indeed, reducing the dimension is a projection
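The self-organizing map update itself is compact enough to sketch with the grid dimensionality left as a parameter, in the spirit of the n-dimensional variant described above; the grid shape, learning rate, and neighborhood width are illustrative, and the hypercube message-passing decomposition is not reproduced.

```python
# Minimal Kohonen SOM training sketch with the grid dimensionality as a
# parameter rather than fixed at two. Grid shape, learning rate, and
# neighborhood width are illustrative; the iPSC HyperCube distribution of
# nodes across processors is not modeled here.
import numpy as np

def train_som(data: np.ndarray, grid_shape=(4, 4, 4), epochs=20, lr=0.5, sigma=1.0):
    rng = np.random.default_rng(4)
    grid = np.stack(np.meshgrid(*[np.arange(n) for n in grid_shape], indexing="ij"), axis=-1)
    nodes = grid.reshape(-1, len(grid_shape))                     # node coordinates in the grid
    weights = rng.random((nodes.shape[0], data.shape[1]))         # one weight vector per node
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            dist = np.linalg.norm(nodes - nodes[bmu], axis=1)     # grid-space distance to BMU
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))           # neighborhood function
            weights += lr * h[:, None] * (x - weights)
    return weights.reshape(*grid_shape, data.shape[1])
```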