With the exponential growth of the market for mini- and micro-drones, this technology has become available to anybody around the world. The possibility of using drones as weapons therefore represents a danger in both the civil and military domains. In recent years, many programs have been initiated to identify cost-effective measures for the detection, classification, tracking and neutralization of this type of threat. In a prospective approach to mitigate these risks, the French-German Research Institute of Saint-Louis (ISL) has been studying for the past ten years the technical responses that can be provided for the detection, identification, localization and tracking of small aerial drones in a variety of operational scenarios and contexts of use. In this paper, we summarize the main ISL results in this domain. Over time, ISL has participated in many joint trials, allowing acoustic and optical detection systems to be tested in different operational scenarios and with a great variety of drone types. The collected images form a database of almost one million images of drones in flight at different scales, with different backgrounds and at different wavelengths (visible, SWIR). This database is of great value for the development of detection, localization and tracking algorithms, whether conventional or AI-based.
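As an illustration of how such a multi-wavelength image database could feed conventional or AI-based detection pipelines, the sketch below shows a minimal Python loader. The directory layout, band names and annotation format are assumptions made for this example and do not describe the actual ISL database.

```python
# Minimal sketch of a loader for a multi-wavelength drone image database.
# Directory layout and annotation format are illustrative assumptions:
#   <root>/<band>/<sequence>/annotations.json plus *.png frames,
#   where annotations.json maps file names to lists of [x, y, w, h] boxes.
from pathlib import Path
import json
import cv2

def load_annotated_frames(root, band="vis"):
    """Yield (image, boxes) pairs for one spectral band ('vis' or 'swir')."""
    for seq_dir in sorted(Path(root, band).glob("*")):
        ann_file = seq_dir / "annotations.json"
        if not seq_dir.is_dir() or not ann_file.exists():
            continue
        annotations = json.loads(ann_file.read_text())
        for name, boxes in annotations.items():
            image = cv2.imread(str(seq_dir / name), cv2.IMREAD_GRAYSCALE)
            if image is not None:
                yield image, boxes

# Example: count annotated SWIR frames in a hypothetical local copy.
# n_swir = sum(1 for _ in load_annotated_frames("drone_db", band="swir"))
```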
Micro unmanned aerial vehicles (MUAV) have become increasingly popular during the last decade thanks to their availability on a wide consumer market. With the increasing number of MUAV, unintended and intended misuse near sensitive areas poses a growing risk. To counter this threat, surveillance systems are under development to monitor MUAV flight behavior. In this context, reliable tracking and prediction of the MUAV flight behavior is crucial to increase the performance of countermeasures. In this paper, we discuss electro-optical computational imaging methods with a focus on their ability to track and predict the three-dimensional (3D) flight path. In a first experimental investigation, we recorded and analyzed image sequences of a MUAV quadcopter flying at low altitude in laboratory and outdoor scenarios. Our results show that we are able to track the three-dimensional flight path with high accuracy and to give a reliable prediction of the MUAV flight behavior in the near future.
KEYWORDS: 3D modeling, Cameras, Detection and tracking algorithms, 3D image processing, Image analysis, Corner detection, Clouds, Short wave infrared radiation, Computational imaging, Imaging systems
Micro unmanned aerial vehicles (MUAV) have become increasingly popular during the last decade thanks to their availability on a wide consumer market. With the increasing number of MUAV, unintended and intended misuse by flying close to sensitive areas poses a growing risk. To counter this threat, surveillance systems are under development to monitor MUAV flight behavior. In this context, reliable tracking and prediction of the MUAV flight behavior is crucial to increase the performance of countermeasures. In this paper, we discuss electro-optical computational imaging methods with a focus on their ability to track the three-dimensional (3D) flight path. We evaluate different imaging methods, based on active laser detection as well as on passive imaging, by means of advanced scenario analysis. In a first experimental investigation, we recorded and analyzed image sequences of a MUAV quadcopter flying at low altitude in laboratory and outdoor scenarios. Our results show that we are able to track the three-dimensional flight path with high accuracy and to give a reliable prediction of the MUAV flight behavior in the near future.
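The tracking and short-horizon prediction step described above can be illustrated with a generic constant-velocity Kalman filter on 3D position measurements. This is only a sketch of the idea, not the filter actually used in the work; the frame rate and noise levels are assumed values.

```python
# Minimal sketch of short-horizon flight-path prediction with a generic
# constant-velocity Kalman filter on 3D position measurements.
import numpy as np

dt = 1.0 / 25.0                                # assumed frame interval (25 Hz)
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                     # state: [x, y, z, vx, vy, vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only the position is measured
Q = 1e-3 * np.eye(6)                           # assumed process noise
R = 1e-2 * np.eye(3)                           # assumed measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle for a 3D position measurement z (metres)."""
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    x = x + K @ (z - H @ x)                            # update state
    P = (np.eye(6) - K @ H) @ P                        # update covariance
    return x, P

def predict_ahead(x, n_frames=5):
    """Extrapolate the filtered state n_frames into the future."""
    return np.linalg.matrix_power(F, n_frames) @ x

# Usage: start from x = np.zeros(6), P = np.eye(6), feed one measurement per
# frame; predict_ahead(x) then gives the expected state five frames ahead.
```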
During the last two decades the number of visual odometry algorithms has grown rapidly. While it is straightforward to obtain a qualitative result, by checking whether the shape of the trajectory is in accordance with the movement of the camera, a quantitative evaluation is needed to assess the performance and to compare algorithms. To do so, one needs to establish a ground truth, either for the overall trajectory or for each camera pose. To this end, several datasets have been created. We propose a review of the datasets created over the last decade. We compare them in terms of acquisition settings, environment, type of motion and the ground truth they provide. The purpose is to allow researchers to rapidly identify the datasets that best fit their work. While the datasets cover a variety of techniques to establish a ground truth, we also provide the reader with ground-truth creation techniques that are not present among the reviewed datasets.
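As an example of the quantitative evaluation such ground truth enables, the sketch below computes the absolute trajectory error (ATE) after a least-squares similarity alignment (Umeyama). This is a commonly used metric chosen here for illustration; it is not a metric prescribed by the review.

```python
# Minimal sketch: absolute trajectory error between an estimated and a
# ground-truth trajectory (both given as Nx3 arrays of positions).
import numpy as np

def align_umeyama(est, gt):
    """Least-squares similarity transform (s, R, t) mapping est onto gt."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, D, Vt = np.linalg.svd(G.T @ E / len(est))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                      # keep a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / E.var(axis=0).sum()
    t = mu_g - s * R @ mu_e
    return s, R, t

def ate_rmse(est, gt):
    """Root-mean-square position error after alignment."""
    s, R, t = align_umeyama(est, gt)
    aligned = (s * (R @ est.T)).T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
```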
The increasing number of micro unmanned aerial vehicles (MUAVs) is a rising risk for personal privacy and the security of sensitive areas. Owing to the highly agile maneuverability and small cross section of a MUAV, effective countermeasures (CMs) are hard to deploy, especially when a certain temporal delay occurs between the localization and the CM effect. Here, a reliable prediction of the MUAV flight behavior can increase the effectiveness of CMs. We propose a pose estimation approach to derive the three-dimensional (3-D) flight path from a stream of two-dimensional intensity images. The pose estimation in a single image yields an estimate of the current position and orientation of the quadcopter in 3-D space. Combined with a flight behavior model, this information is used to reconstruct the flight path and to predict the flight behavior of the MUAV. In our laboratory experiments, we obtained a standard deviation between 1 and 24 cm in a five-frame prediction of the 3-D position, depending on the actual flight behavior.
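A minimal sketch of the single-image pose estimation step is given below, using a generic Perspective-n-Point (PnP) solver on known 3D model points of the quadcopter (e.g., its rotor hubs) and their detected 2D image locations. The model coordinates, camera intrinsics and detected points are placeholder values, and OpenCV's standard solver stands in for the paper's actual estimator.

```python
# Minimal sketch: estimate quadcopter position and orientation from one image
# via PnP. All numerical values below are illustrative placeholders.
import numpy as np
import cv2

# Assumed quadcopter model: four rotor centres in the body frame (metres).
model_points = np.array([[ 0.15,  0.15, 0.0],
                         [-0.15,  0.15, 0.0],
                         [-0.15, -0.15, 0.0],
                         [ 0.15, -0.15, 0.0]], dtype=np.float64)

# Hypothetical detected rotor centres in the image (pixels).
image_points = np.array([[640.0, 350.0],
                         [600.0, 352.0],
                         [602.0, 388.0],
                         [642.0, 386.0]], dtype=np.float64)

# Assumed pinhole camera intrinsics, no lens distortion.
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 512.0],
              [   0.0,    0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)     # orientation of the body frame
    position = tvec.ravel()        # 3D position in the camera frame (metres)
```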
In the last decade, small and micro unmanned aerial vehicles (MUAV) have become an increasing risk to security and safety in civilian and military scenarios. Furthermore, countermeasures are hard to deploy due to the MUAV's ability to perform highly agile flight maneuvers and the physical constraints imposed by its very small cross section, for RADAR as well as for LiDAR detection. The French-German Research Institute of Saint-Louis (ISL) is studying heterogeneous sensor networks for the detection and identification of threats. Shortwave laser gated viewing is used to record images of the target, and different image processing algorithms and filters are investigated to perform reliable tracking of MUAV flying in front of textured and clear-sky backgrounds. Here, an analysis approach is presented to derive the MUAV flight behavior and three-dimensional path from image data. Furthermore, a prediction approach is presented to estimate the target position in the near future.
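As a baseline illustration of the image-based detection step such tracking builds on, the sketch below extracts the centroid of the largest moving blob by frame differencing. It is a generic example, not the specific filter chain investigated at ISL; the threshold and kernel size are assumed values.

```python
# Minimal sketch: detect a moving target in consecutive grayscale (uint8)
# frames by temporal differencing, thresholding and centroid extraction.
import numpy as np
import cv2

def detect_moving_target(prev_frame, frame, threshold=25):
    """Return the centroid (x, y) of the largest moving blob, or None."""
    diff = cv2.absdiff(frame, prev_frame)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```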
In computer vision, the epipolar geometry embeds the geometrical relationship between two views of a scene. This geometry is degenerate for planar scenes, as they do not provide enough constraints to estimate it without ambiguity. Nearly planar scenes can provide the necessary constraints to resolve the ambiguity, but classic estimators such as the 5-point or 8-point algorithm combined with a random sampling strategy are likely to fail in this case, because a large part of the scene is planar and many trials are required to obtain a nondegenerate sample. However, the planar part can be associated with a homographic model, and several links exist between the epipolar geometry and homographies. The epipolar geometry can indeed be recovered from at least two homographies, or from one homography and two noncoplanar points. The latter fits a wider variety of scenes, as there is no guarantee that a second homography can be found among the noncoplanar points. This method is called plane-and-parallax. The equivalence between the parallax and the epipolar lines allows the epipole to be recovered as their common intersection, and with it the epipolar geometry. Robust implementations of the method are rarely given, and we encountered several limitations in our implementation. Noisy image features and outliers prevent the lines from being concurrent in a common point. In addition, off-plane features are unequally affected by the noise level: we noticed that the larger the parallax, the smaller the influence of the noise. We therefore propose a model for the parallax that takes into account the noise on the feature locations to cope with these limitations. We call our method the "parallax beam." The method is validated on the KITTI vision benchmark and on synthetic scenes with strong planar degeneracy. The results show that the parallax beam improves the estimation of the camera motion in scenes with planar degeneracy and remains usable when there is no particular planar structure in the scene.
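For reference, the basic plane-and-parallax construction described above can be written in a few lines: given the homography H induced by the scene plane and two off-plane correspondences, the epipole in the second image is the intersection of the two parallax lines, and the fundamental matrix follows as F = [e']x H. The sketch below implements this textbook construction, not the proposed parallax-beam refinement; points are assumed to be homogeneous 3-vectors.

```python
# Minimal sketch of epipolar geometry recovery from one plane homography and
# two off-plane point correspondences (plane-and-parallax).
import numpy as np

def skew(v):
    """3x3 cross-product matrix [v]_x."""
    return np.array([[    0, -v[2],  v[1]],
                     [ v[2],     0, -v[0]],
                     [-v[1],  v[0],     0]], dtype=float)

def fundamental_from_homography(H, x1a, x2a, x1b, x2b):
    """F from plane homography H (image 1 -> image 2) and two off-plane
    correspondences (x1a, x2a) and (x1b, x2b), given as homogeneous 3-vectors.
    """
    # Parallax line of each off-plane point: joins its plane-mapped position
    # H @ x1 with its observed position x2 in the second image.
    la = np.cross(H @ x1a, x2a)
    lb = np.cross(H @ x1b, x2b)
    # The epipole in image 2 is the common intersection of the parallax lines.
    e2 = np.cross(la, lb)
    return skew(e2) @ H
```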