Open Access Paper
12 July 2023
New generation, compact and smart space Earth observation instrument
R. Auribault, G. Bordot, S. Behar Lafenetre
Proceedings Volume 12777, International Conference on Space Optics — ICSO 2022; 1277743 (2023) https://doi.org/10.1117/12.2690650
Event: International Conference on Space Optics — ICSO 2022, 2022, Dubrovnik, Croatia
Abstract
Video missions for Earth observation will be the next step in optical instrument goals. They will make it possible to address new types of problems and missions, specifically those associated with on-board Artificial Intelligence, as they open the possibility of processing a large amount of data on board. At present, due to technological limitations on data processing and downlink data rate, there is no instrument on the market designed to collect large amounts of data. The VIDEO project will deliver a new type of instrument designed to be used with the next generation of on-board Artificial Intelligence processing capacity. It will allow Europe to take the lead on the EO market within the next years. Thanks to its specific and innovative technologies and an architecture adapted to the new generation of data processing, the VIDEO instrument will be the pathfinder of the next generation of Earth observation instruments.

1. INTRODUCTION

The VIDEO instrument is a small and compact instrument with an extra-wide field of view. Based on a Thales Alenia Space exclusive patent combining freeform mirrors in a smart, compact TMA optical design, together with the latest advances in material development, the VIDEO instrument will be able to acquire high-resolution images as well as video monitoring of a wide scene.

The VIDEO project consortium includes six entities from three European countries, combining the skills of academics, SMEs and three large industrial companies, among which the project coordinator, Thales Alenia Space.

  • Polyshape/Addup => material development, mirrors and telescope structure

  • Pyxalis => detector development

  • AMOS => freeform mirror polishing and coating

  • ULPGC => detection and compression algorithms

  • Thales Alenia Space in Spain => instrument end-to-end test

  • Thales Alenia Space in France => telescope design and AIT, overall project technical coordination.

The VIDEO instrument will include the latest advances in European technologies for the telescope structure and mirrors, as well as the latest innovative solutions for video detection, acquisition, processing and compression.

As the final purpose of the VIDEO instrument is to demonstrate video monitoring of a wide scene with autonomous motion detection and ranging capacity, the project includes the development of a video channel in order to integrate the latest CMOS matrix technologies as well as smart algorithms. An end-to-end ground demonstration with a downscaled demonstrator instrument will be performed within the project to prove the capacity of the European supply chain to produce, assemble and test the VIDEO instrument using the best skills available in Europe in all these domains.

Figure 1. VIDEO instrument technological lock

2. VIDEO INSTRUMENT PRINCIPLE DESCRIPTION

At system level, the VIDEO concept is based on the principle of high temporal revisit with automated detection of regions of interest.

The concept is well adapted to a LEO constellation (or train of satellites) and can be adapted to several use cases and operational concepts (fire detection, flood detection, deforestation identification, maritime disaster monitoring, ship identification and monitoring, fight against maritime piracy, etc.).

Figure 2. VIDEO System CONOPS

The VIDEO instrument is based on a Korsch TMA (three-mirror anastigmat) telescope optical design in order to provide a wide field of view with a reduced number of optical surfaces. In addition, the design is fully reflective in order to allow growth potential over a wide range of wavelengths for future use (from the UV to the IR band).

The instrument is based on a cubic geometry so that several instruments can easily be stacked together, either to increase the satellite swath or to increase the number of spectral channels over the same field of view.

The focal plane assembly (FPA) will be based on a single big 2D matrix detector (the target is to use the 220 Mpix Gigapyx detector, a derivative of the current 46 Mpix version, part of the Gigapyx family), but the instrument image plane could be compatible with a bigger matrix, in order to simplify the architecture (no spatial registration or stability issues between several detectors). The baseline for the focal plane material is AlSi40 (to keep the athermal properties of the overall structure), but several options remain open, as the need for thermoelastic stability of the FPA is less stringent with a single detector per instrument.

2.1. Instrument principle

The main goal of the VIDEO instrument is to perform video acquisition on a small satellite with two main features: a detection mode and an identification mode.

The detection mode will allow continuous imaging in the background with strongly degraded signal and resolution (due to the high ground speed of the swath). Thanks to smart on-board algorithms, it will identify shapes or events that automatically trigger the identification mode with permanent video.

SNR is improved through post-accumulation of successive frame pixels imaging the same area on the ground.
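The benefit of this post-accumulation can be illustrated numerically: for uncorrelated noise, stacking N frames of the same ground area improves the SNR by roughly √N. The sketch below simulates this with arbitrary signal and noise levels (the values are illustrative, not the instrument's actual radiometric budget).

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 10.0   # mean signal level (arbitrary units, assumed)
sigma = 5.0     # per-frame noise standard deviation (assumed)
n_frames = 16   # successive frames co-registered on the same ground area

# Simulate n_frames noisy measurements of the same pixel footprint.
frames = signal + sigma * rng.normal(size=(n_frames, 1000))

snr_single = signal / frames[0].std()
snr_stacked = signal / frames.mean(axis=0).std()

# With uncorrelated noise, stacking N frames improves SNR by ~sqrt(N).
print(f"single-frame SNR ~ {snr_single:.1f}")
print(f"stacked SNR      ~ {snr_stacked:.1f}")
```

With 16 frames, the stacked SNR comes out close to four times the single-frame SNR, as expected from the √N law.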

Figure 3. Detection mode

During video identification mode, the algorithm will decide to perform multi-window acquisition if relevant, depending on the objects detected and identified in the whole scene, for ranging and storage purposes.

Data to be downlinked will be chosen among the relevant analyzed windows in order to minimize the size of the downlinked data.

In video identification mode, the instrument line of sight is stabilized towards a fixed point on the ground.

Figure 4. Identification mode (video and tracking)

2.2. VIDEO instrument technical choices

The VIDEO project proposes a set of breakthrough technologies for Earth observation instruments.

The VIDEO project instrument combines the latest Additive Manufacturing technologies with the development of a low Coefficient of Thermal Expansion (CTE) AlSi40 material, so that the same material is used for both structure and mirrors. AlSi40 is a material specifically developed to adapt its CTE to the desired value through the proportion of Si added to the aluminum matrix ("40" refers to the proportion of Si in the material). Thanks to that, the instrument will have an extremely stable and homothetic behavior from an optical point of view, as well as an optimized stiffness-to-mass ratio.
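A first-order feel for how the Si fraction sets the composite CTE can be obtained from a simple rule-of-mixtures estimate. The handbook values below are assumed for illustration; the actual CTE of AlSi40 depends on the microstructure, the processing route and the exact Si fraction, so this is only a sketch of the trend, not a materials datum from the paper.

```python
# First-order rule-of-mixtures estimate of the composite CTE.
# Handbook values assumed (illustrative only).
cte_al = 23.1e-6    # /K, pure aluminum (assumed)
cte_si = 2.6e-6     # /K, silicon (assumed)
si_fraction = 0.40  # the "40" in AlSi40

cte_mix = si_fraction * cte_si + (1 - si_fraction) * cte_al
print(f"estimated composite CTE ~ {cte_mix * 1e6:.1f} ppm/K")
```

The estimate lands well below the CTE of pure aluminum, which is the point of the alloy: a metal mirror material whose expansion can be matched to the structure's.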

Figure 5. VIDEO instrument breakthrough technologies

Thanks to the AlSi40-based material for mirrors and structure, the instrument will be among the best in class in terms of demisability (due to the low melting point of AlSi40 and the lightweight additive-manufactured structure), in the perspective of being part of future Low Earth Orbit small-satellite observation constellations.

2.3. Optical design

The optical design is a Korsch solution with three freeform mirrors. It is based on the Thales Alenia Space patent "Télescope anastigmat à 3 miroirs de type Korsch" (three-mirror Korsch-type anastigmatic telescope).

Figure 6. VIDEO instrument optical layout

The main advantages are:

  • The wide rectangular field of view

  • A compact solution with many applications

  • No obscuration, which allows a gain in radiometric performance.

This design has an intermediate image and a real exit pupil, which can be baffled in order to limit stray light.

2.4. Structure and mirrors

The instrument is based on a monolithic structure, used both as a primary structure to hold the mirrors and as a secondary structure to hold, for instance, parts for active and passive thermal control and stray-light mitigation: diaphragms, vanes and baffles if necessary.

Thanks to additive manufacturing, there are no intermediate links inside the structure, only external interfaces (for the mirrors, the platform, thermal control and stray-light mitigation).

The structure and mirror designs were performed with topology optimization in order to increase the stiffness-to-mass ratio of the whole instrument.

Figure 7. Monolithic structure of the instrument

The mirrors are made of the same material as the structure in order to keep the athermal properties of the instrument (essentially no focus sensitivity under thermal environment changes).

The structure and mirrors are produced by additive manufacturing in order to optimize the specific stiffness (stiffness-to-mass ratio) of the overall instrument.

Figure 8. M3 mirror design

The link between the structure and each mirror is made through three isostatic bonded areas (gluing boxes), in order not to induce stresses in the mirrors after optical alignment and to allow reversibility of the mirror mounting in case of anomaly or lack of performance.

The instrument is linked to the platform through interfaces at its four corners. In this way, the instrument can be accommodated in several configurations and on various platforms.

2.5. Detector

The flight detector will be as big as possible and will use the future 220 Mpixel version of the Gigapyx sensor, to be compatible with wide-field-of-view applications. After validation of the 46 Mpixel version, the design resizing should be straightforward, as the same stitched block will be used, even though the whole video chain will have to be validated with much more data to process.

A particular challenge will be to design a large package compatible with both the die dimensions and a space mission.

The 220 Mpixel (16640 × 13200) Gigapyx detector has a pixel pitch of 4.4 µm in BSI technology, in monochrome or RGB format. The detector operates in rolling-shutter mode with a readout noise as low as 1.5 e- rms in high gain. A low gain is also available and allows HDR (High Dynamic Range) operation. A 12- to 14-bit ADC is integrated on chip, with an adjustable analog gain from ×0.5 to ×8.

The Gigapyx technology offers many advanced functions: multiple Regions of Interest (ROI), flip and skipping modes, advanced burst-sequence and page configuration with multiple user settings, color-compatible 2×2 and 4×4 binning, analog and digital gains, digital offset, etc. Up to 256 selectable sub-LVDS data lanes at 860 Mbps are available for high-speed operation.
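These figures give a back-of-envelope readout budget for the 220 Mpix version. The sketch below combines the quoted array size, a 12-bit sample depth and the full lane count; it ignores protocol overhead, blanking and the 14-bit ADC option, so the resulting frame-rate ceiling is only an order-of-magnitude estimate, not a datasheet value.

```python
# Back-of-envelope readout budget for the 220 Mpix Gigapyx (illustrative;
# ignores protocol overhead, blanking and the 14-bit ADC option).
width, height = 16640, 13200
bits_per_pixel = 12
lanes, lane_rate = 256, 860e6  # sub-LVDS lanes at 860 Mbps each

frame_bits = width * height * bits_per_pixel
throughput = lanes * lane_rate

print(f"frame size     : {frame_bits / 1e9:.2f} Gbit")
print(f"lane throughput: {throughput / 1e9:.1f} Gbit/s")
print(f"frame-rate ceiling ~ {throughput / frame_bits:.0f} fps")
```

Even this crude budget shows why on-board compression and window selection are central to the concept: a full-frame raw stream is in the Gbit-per-frame range.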

Figure 9. Gigapyx 46 Mpixel sensor

2.6. Image processing

2.6.1. Compression

Compression is handled by means of a prediction-based near-lossless to lossless algorithm. Concretely, the proposed solution is based on the algorithm defined by the Consultative Committee for Space Data Systems (CCSDS) in the CCSDS 123.0-B-2 standard. CCSDS 123.0-B-2 is a low-complexity solution for lossless and near-lossless compression, designed for multispectral and hyperspectral images. In the VIDEO project, however, it is adapted to compress panchromatic/RGB video sequences, which requires some modifications. This approach provides very important benefits over other possible alternatives:

  • It is a low-complexity solution specifically designed for use on board satellites, guaranteeing that the acquired video sequences can be compressed in real time.

  • It allows lossless compression, which is especially interesting for research purposes to preserve the original data if necessary. Additionally, when the compressor is used in its near-lossless configuration, the amount of information lost can be fixed in advance using maximum absolute and/or relative error parameters, controlling the compression ratio in an accurate way.

  • This solution could be further adapted to compress multispectral and hyperspectral video sequences, making it a suitable option for future research projects. This would make it possible to exploit redundancies in the spatial, temporal and spectral domains to increase the compression performance of the algorithm. Following this strategy, we will be able to develop a solution not only for video compression, but also for data and image compression using the same processing core.
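The error-bounded behavior described above can be illustrated with a deliberately simplified sketch: predict each sample from the previous reconstructed sample and quantize the residual with a step of 2·m+1, which guarantees a reconstruction error of at most m. This is only the principle behind CCSDS 123.0-B-2's near-lossless mode; the standard's actual adaptive predictor and entropy coder are far more elaborate.

```python
import numpy as np

def near_lossless_residuals(samples, max_err):
    """Toy sketch of prediction + residual quantization: each sample is
    predicted from the previous reconstructed sample, and the residual is
    quantized so the reconstruction error never exceeds max_err. Principle
    only; not the CCSDS 123.0-B-2 predictor itself."""
    step = 2 * max_err + 1
    recon, quantized = [], []
    prev = 0
    for s in samples:
        residual = s - prev
        q = int(np.round(residual / step))  # index that would be entropy-coded
        prev = prev + q * step              # decoder-side reconstruction
        quantized.append(q)
        recon.append(prev)
    return np.array(quantized), np.array(recon)

samples = np.array([100, 103, 101, 110, 120, 119])
q, recon = near_lossless_residuals(samples, max_err=2)
assert np.max(np.abs(recon - samples)) <= 2  # bounded reconstruction error
```

Setting `max_err=0` makes the same loop lossless, which mirrors how a single core covers both configurations.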

2.6.2. Detection and tracking

As a starting point, a version of the predictor developed in a previous project at the University of Las Palmas de Gran Canaria (ULPGC) is used, adapted to the requirements of this mission for compressing video sequences.

The detection process is handled using a convolutional neural network (CNN) approach. The main reasons for this choice are the high detection performance of neural networks when enough data are available, as well as their flexibility for detecting multiple different kinds of targets with a single network. On the one hand, since the goal of this project is to collect video sequences, it is assumed that a large amount of data will be available for training the deep learning models. On the other hand, once a particular network structure has been designed and implemented in the hardware available on board, its detection performance can be continuously improved by updating the neural network weights (after carrying out larger training stages on the ground). Additionally, the network can be trained to detect different kinds of targets, switching its detection behavior simply by selecting the corresponding weights, without changing the network itself or its implementation.
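The weight-swapping idea can be shown in miniature: a fixed convolution routine whose behavior is set entirely by the weight array loaded into it. The example below is purely illustrative (a single hand-written layer, not the project's actual CNN), but it captures the point that uploading a different weight set changes what the same on-board architecture responds to.

```python
import numpy as np

def detect(image, weights):
    """One convolutional layer + global max response: a stand-in for a
    fixed architecture whose behavior is set by its weights (illustrative
    only, not the project's actual network)."""
    h, w = image.shape
    k = weights.shape[0]
    best = -np.inf
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            best = max(best, float(np.sum(image[i:i+k, j:j+k] * weights)))
    return best

# Two weight sets for the same "network": vertical- vs horizontal-edge targets.
w_vertical = np.array([[-1, 0, 1]] * 3, dtype=float)
w_horizontal = w_vertical.T

img = np.zeros((8, 8))
img[:, 4:] = 1.0  # scene containing a vertical edge

# Swapping the weights switches what the same network responds to.
assert detect(img, w_vertical) > detect(img, w_horizontal)
```

In the real system the "weights" would be the full parameter set of a trained CNN, uplinked after larger training campaigns on the ground.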

Figure 10. Proposed methodology for detection

Different standard neural network architectures have been tested. The experiments were carried out on a workstation using Python as the description language. This includes an efficient way to implement this kind of network architecture in hardware-friendly C, guaranteeing that it can later be implemented efficiently on Field Programmable Gate Array (FPGA) devices. It is also important to find out which kinds of layers fit best in this kind of device, in order to select or develop an appropriate neural network architecture.

Figure 11. Switching from detection to tracking mode

2.7. Reduced-scale demonstrator (end-to-end test)

The VIDEO project development comprises an end-to-end demonstration based on a reduced-scale instrument for functional testing. The instrument demonstrator is scaled from the flight instrument architecture with a 1/3 ratio.

The VIDEO telescope will be fully integrated and tested in a single plant (Thales Alenia Space in France) to minimize transport and configuration-change durations. The whole telescope demonstrator assembly and integration process will be performed in ISO 8 conditions as much as possible, in order to limit particulate contamination on optics and sensor parts.

Verification is implemented throughout the manufacturing and AIT cycles to ensure a successful integration campaign.

The main alignment steps and performance tests (Wave-Front Error) are derived from set-ups already validated on previous Thales Alenia Space in France instruments, and will ensure good performance, efficiency and capitalization between the demonstrator and future flight models.

The Gigapyx 46 Mpixel format will be used for the demonstrator phase (demokit format instead of the real focal plane). The goal will be to prove the concept and validate the technology.

Regarding data processing and video channel validation, the demonstration will mainly focus on boat detection and tracking. Once a feasible solution has been achieved, it could be extended to detect other kinds of targets.

3. VIDEO INSTRUMENT DEVELOPMENT STATUS

The development status of the technologies is focused on the demonstrator parts.

Due to the reduced scale, the demonstrator has a specific optical design.

Figure 12. Main characteristics of the demonstrator telescope

3.1. Demonstrator optical performance

The WFE performance (at 632.8 nm, over the field) of the demonstrator is given below:

Figure 13. Demonstrator factory-level WFE budget

The budget takes into account the following contributors:

  • Mirror manufacturing: a WFE allocation is chosen for each mirror. As the M3 mirror works in sub-pupil, an allocation for the focus difference between the 2 sub-pupils is added.

  • Ground contributors taken into account for the AIT positioning errors, in the optical reference frame of each optical element, relative to its theoretical position.

  • Alignment accuracy of the M2 mirror (best compensators on ground are the 5 degrees of freedom of the M2 mirror)

  • Alignment accuracy of the focal plane (demokit)

As the gravity effect has not yet been studied, the factory-level WFE is evaluated without the gravity effect.

The demonstrator transmission performance takes into account the reflectivity of the mirror coating (protected silver) and an allocation for the transmission of the filter and the window; the transmission value is shown in the following table.

Figure 14. Preliminary transmission value

The impact of particulate and molecular contamination on the transmission is not taken into account in this value.

3.2. Freeform mirror development

AlSi40 mirrors already exist at TRL 6 when produced by classical processes, but the VIDEO development will demonstrate that it is possible to build AlSi40 freeform mirrors with an additive manufacturing process.

Mirrors will be made from additive-manufactured blanks, then diamond-turned and finally polished to meet the Wave-Front Error (WFE) specification. This technology is not yet used for making optical mirrors, and this is where the innovation lies.

The three mirrors of the demonstrator have the following characteristics.

Figure 15. Main characteristics of the demonstrator mirrors

In order to minimize the error on the optical shape of the freeform mirrors, the sag(x, y) of the surface is defined beforehand and compared to the equation of the mirror surface.

Figure 16. Sag of the M3 mirror

The topology optimization concept is to obtain an ideal material distribution in a designated volume to which constraints are applied. The aim can differ from one project to another, for example to enhance mechanical performance or to save mass.

The solver works on a Finite Element Model that has to be built beforehand, including boundary conditions and loads. In the data settings (the parameters of the topology optimization), it is required to define the objective of the optimization, the constraints (for example, the ranges limiting the responses) and the responses (what is calculated: for example force, stress, displacement, etc.). Through an iterative process, the solver assigns a normalized density to each element (a kind of sorting of the elements) in order to best satisfy the objective and constraints. When the calculation has converged, the user has to select a threshold on the elements' density, from 0 to 1, which conditions the shape and thicknesses of the resulting design.

In the case of the VIDEO mirrors, many iterations were performed on the data settings before finding the best conditions for the topology optimization, by defining the crucial parameters (objective, constraints and responses) and the right combination of boundary conditions and loads. In addition, three finite element models for M1 and three for M3 have been built since the beginning of the study: one to change the mesh size, one because the geometry changed slightly.

Topology optimization has been performed for the demonstrator mirrors M1 and M3 as if they were flight-instrument mirrors, particularly with regard to the mechanical environments and the optical performance. In this framework, the applied loads are detailed in the following list:

  • Mechanical environments: gravity level at 30 g; first modal frequency > 100 Hz

  • Optical performances: SFE between 2 and 10 nm RMS (can be relaxed to 22 nm RMS for M1)

  • Polishing environment: local pressure on the front face of 0.1 MPa, to be conservative.

No thermal environment has been implemented in the simulations, for lack of a specification.

SFE, for Surface Figure Error, corresponds to the deformation of the front face of the mirror, expressed as a quadratic sum of Zernike polynomials. Topology optimizations have been performed with the OptiStruct® software because it offers the possibility to integrate this kind of opto-mechanical parameter (thanks to an external Fortran code).

A large volume at the rear of the mirrors is given as design space to OptiStruct®, so that the topology optimization has maximum room to distribute the material.

Figure 17. Design volume for the M3 mirror

Conversely, the elements of the front face and of the three external ears (red zones in the pictures above) are assigned to the non-design space (they are not modified during the optimization and remain as they are).

Various iterations were necessary to set the right mechanical subcases combined with the suitable constraints for the optimization. First, the load cases were applied one at a time on the starting volume to see their impact on the design; then they were combined in the same optimization run to obtain a unique design.

As the OptiStruct® solver cannot itself optimize the thickness of the front face (because of the 3D element modeling and the non-design-space definition), the mesh was prepared beforehand with a front face composed of 3 layers of 1 mm high elements. Thus, several topology optimizations were performed 2 or 3 times, each with a different front-face thickness.

Below are some examples extracted from the numerous intermediate designs obtained since the beginning of the study.

Figure 18. Illustration of design iterations for the M3 mirror

From these results, the next step is to interpret the design (choice of the threshold) and smooth the shapes where useful. The smoothing is a step where the engineer's judgment is at stake, to obtain a part that both responds mechanically to the needs and can be manufactured.

Then it is necessary to re-create the design in CAD format, as the post-processor of the topology optimization does not provide a format from which mechanical analyses can be performed immediately.

After that, the solid from the CAD file can be meshed with solid (3D) elements to obtain a finite element model for the final mechanical analyses (it can be seen as a simple display).

The geometries shown hereafter also integrate recommendations from partners to give better conditions for both manufacturing and polishing (reference surfaces, thicknesses).

Figure 19. M3 mirror smoothing from topology optimization

To allow the additive manufacturing and diamond-turning processes to be taken into account when defining the mirrors, several iterations were performed on the reference surfaces and on the calibration of the thickness, so as to avoid any deformation. Final modifications have been made to the models:

  • Addition of material to take into account the orientation of the part, during the additive manufacturing phase.

  • Addition of material on the active surface in order to take into account material removal during diamond turning passes.

  • Addition of reference surfaces to allow positioning and controls during the diamond turning steps on a freeform mirror design.

As the final step of the mirror design, a proper model and load cases are applied to verify the mechanical performance of the mirrors.

Figure 20. Modelling of the M3 mirror

These analyses were performed taking into account the extra-material modifications and the external nickel-phosphorus (NiP) plating that enables the mirror to be polished.

The load cases and associated boundary conditions are detailed below:

  • For the gravity load cases, 1 g or 30 g is applied along each axis X, Y, Z separately. The mirror is fixed by the 3 external holes in the ears.

  • For the thermo-elastic load case, an increase of 1 °C is applied to all nodes (homogeneous thermal load). A 3-branch rigid element is added to allow thermal expansion between the 3 boundary-condition nodes (the same as for the gravity load cases).

  • For the local pressure load cases, several independent zones are selected and a 0.1 MPa load is applied to the element surfaces. Each zone has an area of about 2.5 mm². Each subcase is intended to represent locally the tool contact during the diamond-turning (polishing) operation. The boundary conditions are specific, as they fix the mirror by the 3 threaded M3 holes (internal diameter in the ears).

Figure 21. Stress map plotted on the deformed M3

The first batch of mirror blanks was produced in July 2021, but unfortunately some cracks were identified on the blanks, which prevented the mirrors from being polished.

Figure 22. First additive manufacturing batch of mirror blanks

A back-up with standard milled mirrors was launched in parallel with a new batch of mirror blanks, after correction of the machine set-up.

Figure 23. M3 blank back-up (milling process)

Figure 24. M3 blank in additive manufacturing (new batch)

The mirrors are presently in SPDT (Single Point Diamond Turning) polishing at AMOS facilities. The AlSi40 material offers good compatibility (in terms of CTE) with the NiP plating used for the active optical surface layer. The polishing process will include the use of the SPDT machine and of robots for post-polishing activities, and if necessary AMOS has the capability to use an IBF (Ion Beam Figuring) machine to finalize the process until the required values are reached.

During polishing, the mirrors will be measured several times with a dedicated metrology set-up based on an interferometer and CGHs (Computer-Generated Holograms), one for each mirror. Using a 3D CMM (Coordinate Measuring Machine) and a laser tracker, it will be possible to position and align the mirrors on the set-up with the required precision.

3.3. AlSi40 structure development

The telescope structure was also designed with topology optimization. At the beginning, the primitive geometry shown below was assigned as the design space of the topology optimization, except for the external interfaces and the optics interfaces. It corresponds to the envelope volume of the instrument, in which the optical path has been cleared.

The next step is to proceed to the meshing and to model suitable and realistic loading conditions.

Figure 25. Allowable space and boundaries for topology optimization

Figure 26. Meshing of the structure volume

The data settings defined for the optimization are the following:

Figure 27. Data settings for structure optimization

The first raw result of the optimization is given below:

Figure 28. First result for telescope structure optimization

Based on the preliminary studies and on co-engineering work with the AddUp team regarding the feasibility of the structure, the final design was issued.

The first structure was issued without Laser Beam Melting (LBM) manufacturability constraints, in order to focus only on the needs coming from the mechanics and the interfaces at telescope level.

Then a co-engineering phase was performed with AddUp in order to take into account the feasibility and the constraints of the LBM process. This mainly consisted in performing the following tasks on the structure model:

  • Minimize the number of elements and merge them when necessary, in order to avoid elements so thin that they could break during the process.

  • Find the right orientation for the overall structure in order to build it properly and, when necessary, modify the angle of the branches to stay compatible with the LBM process.

  • Verify that there is no abrupt variation of section in any of the elements, to make sure that the thermal stresses will not be amplified during manufacturing.

  • Modify the design to limit the secondary supporting structures needed during the manufacturing process.

  • Optimize the design to take into account the cleanliness constraints.

Figure 29. Final result for the demonstrator telescope structure

Figure 30. Manufacturing test on a representative part

The expected mass of the final structure (without mirrors) is less than 1.3 kg. Structure verification is performed to validate that there are no obvious weaknesses in the design, because the demonstrator is not intended to sustain flight environments (mechanical or thermal).

Figure 31. Demonstrator structure meshing for mechanical verification

Figure 32. Demonstrator stress under 30 g quasi-static load

The maximum stress is 32 MPa under a QSL of 30 g. Considering the material yield limit of 228 MPa and the associated safety factor (1.875), the minimum residual margin for the telescope demonstrator structure is +280%.
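The quoted +280% margin follows directly from the standard margin-of-safety formula, MoS = yield / (safety factor × stress) − 1, applied to the numbers given above:

```python
# Reproducing the quoted margin of safety: MoS = yield / (SF * stress) - 1.
stress_max = 32.0      # MPa, maximum stress under the 30 g quasi-static load
yield_limit = 228.0    # MPa, AlSi40 yield limit
safety_factor = 1.875

margin = yield_limit / (safety_factor * stress_max) - 1
print(f"residual margin: {margin:+.0%}")  # +280%
```

The allowable stress with the safety factor applied is 228 / 1.875 = 121.6 MPa, nearly four times the computed 32 MPa, hence the comfortable margin.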

The telescope structure design of the demonstrator is thus well sized and justified, even though there will be no environmental testing in the frame of the H2020 VIDEO project.

The approach for the design and sizing of the demonstrator structure is therefore quasi-identical to the approach that would be applied to the structure of a telescope flight model.

Nevertheless, due to the scale ratio of the demonstrator structure (1/3 with respect to the flight model), there is a risk that the conclusions of the topology optimization and the manufacturing constraints will lead to a slightly different design and shape for the telescope flight model. The final structure of the telescope demonstrator will be produced before the end of 2022 in the frame of the VIDEO project.

3.4. Video chain development

3.4.1. Video chain principle

The goal of the VIDEO chain is to:

  • Acquire and treat images from the instrument.

  • Switch between two modes of operation: detection (image mode) and compression plus tracking (video mode).

  • Adapt the video compression ratio to the available satellite resources and downlink bitrates.

  • Detect certain image features and apply selectively higher compression ratios to “low interest” areas.

  • Select between lossy and lossless compression modes.

The video channel architecture within the camera is described as follows:

Figure 33. Video channel architecture

The Gigapyx detector module in its demokit interfaces with proximity electronics to drive the sensor, and with digital processing electronics based on a re-programmable FPGA in order to implement multiple complex real-time processing functions on a single piece of hardware.

3.4.2.

Gigapyx Sensor development

The Gigapyx sensor development has been finalized in the frame of the VIDEO project: the first version of this sensor (46 Mpixel definition) has been produced and tested. The measured characteristics of the sensor after manufacturing are presented in the following table.

Figure 34.

Gigapyx 46Mpix main performances (according to test report). (a) with and without FPN correction

00127_PSISDG12777_1277743_page_17_2.jpg

The future, larger version of the Gigapyx sensor will use more stitching blocks in both directions than the first version.

For end-to-end test purposes, the 46 Mpix sensor is provided as a demokit for accommodation on the VIDEO demonstrator instrument, with all the features necessary to operate the sensor in the frame of the ground demonstration.

Figure 35.

Gigapyx 46Mpix demokit design

00127_PSISDG12777_1277743_page_18_1.jpg

3.4.3.

RGB Video compression efficiency

In this work, the CCSDS-123.0-B-2 algorithm for near-lossless compression of multi- and hyperspectral images has been adapted to compress RGB video sequences while remaining fully compliant with the standard and without modifying its core functionality. The goal of this approach is to provide a solution for remote sensing applications that allows data of different natures to be compressed with a single compression core that can be executed efficiently on board satellites.

To do this, the approach followed consists in using the temporal domain to predict the information of the subsequent video frames, instead of the previous spectral channels as is done in the CCSDS 123.0-B-2 standard.
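The domain swap can be sketched as follows: a video sequence indexed (t, y, x) is fed to the predictor as if it were a spectral cube (z, y, x), so "previous bands" become "previous frames". The prediction function below is a toy stand-in for the real CCSDS-123 predictor, used only to illustrate the reindexing:

```python
# Sketch of the spectral-to-temporal domain conversion: reuse the frame
# index t as the band index z, so that band-wise prediction becomes
# frame-wise (temporal) prediction. Plain-Python illustration only.

def video_to_cube(frames):
    """Identity reshaping: cube[z][y][x] == frames[t][y][x]."""
    return [frame for frame in frames]

def predict_sample(cube, z, y, x, p=3):
    """Toy stand-in for the CCSDS-123 prediction step: average the
    co-located sample over the P previous 'bands' (i.e. frames)."""
    prev = cube[max(0, z - p):z]
    if not prev:
        return cube[z][y][x]        # first frame: nothing to predict from
    return sum(f[y][x] for f in prev) // len(prev)

frames = [[[10, 12]], [[11, 13]], [[12, 14]]]   # 3 frames of 1x2 pixels
cube = video_to_cube(frames)
residual = cube[2][0][0] - predict_sample(cube, 2, 0, 0)
print(residual)  # → 2 (small residual thanks to temporal correlation)
```

Only the small prediction residuals need to be entropy-coded, which is where the compression gain comes from.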

Figure 36.

Conversion from the spectral to the temporal domain

00127_PSISDG12777_1277743_page_18_2.jpg

Different experiments have been carried out to validate the compression performance of the developed compression chain in its different configurations. The verification has been automated with a Python-based test framework that compresses and decompresses the input video sequence and generates the reconstructed video for analysis. A report is generated for each test case, including compression ratio and distortion.

Datasets of several video sequences from real remote sensing scenarios have been used to assess the quality of the CCSDS-123 standard for RGB video compression. From an exhaustive search, a set of parameters has been identified as the one providing the best results in terms of compression ratio and distortion, not only for RGB video but also for panchromatic video compression. Typical results in terms of compression ratio versus reconstructed video quality (measured as Peak Signal-to-Noise Ratio, PSNR) are shown below.
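The PSNR figure of merit used here is the standard definition, 10·log10(MAX²/MSE); a self-contained sketch on toy pixel lists:

```python
import math

def psnr(original, reconstructed, max_value=255):
    """Peak Signal-to-Noise Ratio between two equally sized pixel lists,
    in dB: 10 * log10(MAX^2 / MSE). Infinite for a perfect match."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    return math.inf if mse == 0 else 10 * math.log10(max_value ** 2 / mse)

orig = [100, 120, 130, 90]
recon = [101, 119, 130, 91]          # near-lossless reconstruction
print(f"{psnr(orig, recon):.1f} dB")
```

Higher compression ratios push the reconstructed PSNR down, which is the trade-off plotted in Figure 37.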

Figure 37.

Typical compression performances (quality versus compression ratio)

00127_PSISDG12777_1277743_page_19_1.jpg

The obtained results demonstrate the suitability of the proposed solution for on-board remote sensing applications: compression ratios of up to 39 have been achieved without any observable degradation (i.e. almost lossless). Higher ratios can be achieved at the cost of a lower decompressed video quality. Future work may also include modifying the proposed CCSDS 123.0-B-2 predictor to support Regions of Interest (ROI). This would make it possible to specify the relevant spatial areas that need to be preserved with a higher level of detail and the areas that can be compressed more aggressively.
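The ROI idea can be sketched as a per-block ratio map driven by a binary mask, where blocks containing detected features stay near-lossless while the background is compressed harder. The block granularity and ratio values below are illustrative assumptions, not project parameters:

```python
# Hypothetical sketch of ROI-driven compression: a binary mask selects,
# per image block, a gentle or an aggressive target compression ratio.

def ratio_map(roi_mask, roi_ratio=4, background_ratio=40):
    """Per-block target compression ratio: keep ROI blocks near-lossless."""
    return [[roi_ratio if cell else background_ratio for cell in row]
            for row in roi_mask]

mask = [[0, 1, 1],
        [0, 0, 1]]        # 1 = block containing a detected feature
print(ratio_map(mask))    # → [[40, 4, 4], [40, 40, 4]]
```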

3.4.4.

Convolutional Neural Network (CNN) architectures efficiency for detection

In order to select the right architecture, a wide evaluation of existing CNN models suitable for a resource-constrained hardware implementation was carried out. Five different architectures were evaluated (AlexNet, VGG Network, ResNet, MobileNet, DenseNet), measuring both the detection performance and the computational cost. Based on a dedicated figure of merit for the targeted scenario (boat detection), the training parameters and the computational cost, a lighter architecture derived from the MobileNet CNN, called MobileNetv1Lite, was identified as the most suitable option for this work, and as a result this architecture was selected as the best candidate for the FPGA implementation of the project.
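MobileNet's low computational cost comes from depthwise-separable convolutions, which factor a standard k×k convolution into a depthwise step plus a 1×1 pointwise step. The arithmetic below illustrates that reduction with hypothetical layer sizes (not figures from the paper):

```python
# Parameter count of a standard convolution versus the MobileNet-style
# depthwise-separable factorization. Layer sizes here are illustrative.

def standard_conv_params(k, c_in, c_out):
    """k x k convolution over c_in channels producing c_out channels."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k step plus 1x1 pointwise step."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 128, 256
std = standard_conv_params(k, c_in, c_out)
sep = depthwise_separable_params(k, c_in, c_out)
print(std, sep, f"reduction x{std / sep:.1f}")
```

This parameter and multiply-accumulate reduction is what makes a lightened MobileNet derivative attractive for a resource-constrained FPGA target.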

Figure 38.

Summary of trainable parameters and computational cost

00127_PSISDG12777_1277743_page_19_2.jpg

Figure 39.

CNN detection performances (efficiency versus computational cost)

00127_PSISDG12777_1277743_page_19_3.jpg

After selecting the most suitable architecture for the project, the next step is the hardware implementation. The FPGA device selected in the VIDEO project is the Xilinx Kintex UltraScale XCKU040-2FFVA1156E. The implementation of the selected CNN architecture on the target FPGA will be performed in the last stage of the project, for the end-to-end test.

3.5.

End to end test

Since the demonstrator end-to-end test is the final goal of the project, the main activities performed by Thales Alenia Space in Spain are related to the preparation of this test:

  • Development of the FPGA hardware implementation for the demonstrator.

  • Development of the Software for communication with the FPGA and the Demokit.

  • Specification of communication interfaces and Firmware for the demonstrator detection chain.

  • Performance assessment with real implemented hardware tests.

  • Integration of ULPGC algorithms with the FPGA detection chain.

  • Instrument test bench mechanical design and set up (including µdisplay operational test)

The following scheme describes the software and test-hardware parts of the end-to-end configuration set-up, together with the corresponding responsibilities (blue = Thales Alenia Space in Spain, green = Pyxalis, orange = ULPGC):

Figure 40.

Schematic of the function and responsibility sharing for the VIDEO end to end test

00127_PSISDG12777_1277743_page_20_1.jpg

Data in a format legible by ULPGC’s algorithms will be an output of the Pyxalis camera software. The raw data from the camera will also be output, for validation checks of the algorithms.

Software produced by Thales Alenia Space in Spain will perform the following functions:

  • Management of modes (video/detection) based on the user’s commands and the outputs of ULPGC’s detection algorithm

  • Decompression of data provided by the ULPGC algorithms (software tool provided by ULPGC)

  • Digital processing required to compare the video and detected items with the raw data from the camera and the original data sent to the micro-display

  • User interface for the end-to-end demonstration at instrument level

  • Storage of the raw data provided by the camera
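The mode-management function in the first bullet can be sketched as a small decision rule combining user commands with detection results. The command names and the detection-output shape below are hypothetical, not the actual software interface:

```python
# Minimal sketch of mode management (video/detection): a user command can
# force a mode; otherwise the detection output drives the switch.

def manage_mode(user_command, detection_output):
    """Derive the instrument mode from user commands and detection results."""
    if user_command in ("force_video", "force_detection"):
        return user_command.removeprefix("force_")
    return "video" if detection_output.get("targets") else "detection"

print(manage_mode(None, {"targets": ["boat_1"]}))              # → video
print(manage_mode("force_detection", {"targets": ["boat_1"]})) # → detection
```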

This test bench will also be used during the development of the camera:

  • Test the VHDL code on the full camera or on the digital processing board of the camera

  • Test the detector itself, bypassing the digital processing of the camera

The end-to-end test includes a set of revolutionary features for instrument testing, including extended scene simulation (thanks to a Sony OLED ECX335S micro-display associated with a collimator) and a synthetic image database (generated by Thales Alenia Space in France) for simulation and algorithm training in the two instrument modes.

Figure 41.

µdisplay configuration for extended scene display

00127_PSISDG12777_1277743_page_21_1.jpg

Figure 42.

3D synthetic boat integrated into a real landscape image for automatic image database generation

00127_PSISDG12777_1277743_page_21_2.jpg

4.

CONCLUSION

The VIDEO project is close to its final goal of demonstrating that this set of new breakthrough technologies is suitable for innovative optical payload architectures for space.

This was made possible by the European Commission’s support throughout the project, but also by the tremendous skills, heritage and know-how brought by all the consortium partners during the development of the system, sub-systems and technologies.

The final demonstration and validation will be held in Madrid at Thales Alenia Space in Spain facilities in 2023.

ACKNOWLEDGEMENT

This work has been conducted within the Video Imaging Demonstrator for Earth Observation (VIDEO) project, which has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 870485. This publication reflects only the authors’ view. The Agency is not responsible for any use that may be made of the information it contains.
