EP4055349A1 - Method and device for generating learning data for an artificial intelligence machine for aircraft landing assistance - Google Patents

Method and device for generating learning data for an artificial intelligence machine for aircraft landing assistance

Info

Publication number
EP4055349A1
Authority
EP
European Patent Office
Prior art keywords
data
simulated
ground truth
sensor
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20797520.2A
Other languages
English (en)
French (fr)
Inventor
Thierry Ganille
Guillaume PABIA
Christian Nouvel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thales SA
Original Assignee
Thales SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thales SA filed Critical Thales SA
Publication of EP4055349A1 publication Critical patent/EP4055349A1/de
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/176Urban or other man-made structures
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0017Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information
    • G08G5/0021Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information located in the aircraft
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/02Automatic approach or landing aids, i.e. systems in which flight data of incoming planes are processed to provide landing data
    • G08G5/025Navigation or guidance aids
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image

Definitions

  • The invention relates to the general field of aircraft landing assistance systems, and in particular it provides a method and a device for generating learning data that can be used by a deep-learning artificial intelligence machine for the recognition of a landing runway.
  • The invention addresses the problem of recognizing a landing runway in difficult weather conditions, such as fog, resulting in reduced or degraded visibility.
  • The EVS, EFVS and CVS systems are based on presenting a sensor image to the pilot, who identifies the landing runway in it. They rely on forward-looking sensors that provide greater detection capability than the pilot's eye, in particular in degraded visibility conditions.
  • However, this type of sensor does not systematically detect the landing runway in degraded visibility before the regulatory minima are reached, typically before the aircraft descends below a height of 200 ft above the runway threshold for a category I ILS approach.
  • An alternative to infrared sensors is to obtain images with a so-called active sensor, such as a radar sensor operating in the centimeter or millimeter band. Certain frequency bands, chosen outside the water vapor absorption peaks, have very low sensitivity to severe weather conditions. Such sensors therefore make it possible to produce an image through fog, for example. However, even though these sensors have fine range resolution, their angular resolution is much coarser than that of optical solutions. The resolution is directly related to the size of the antennas used and is often too coarse to produce an image that the pilot can easily interpret and use for guidance.
  • Recent approaches to improving image processing algorithms for landing operations in bad weather conditions with reduced or degraded visibility rely on algorithms based on artificial neural networks.
  • An artificial neural network (ANN) is a system whose design was originally loosely inspired by the functioning of biological neurons and which has subsequently moved closer to statistical methods. Neural networks are generally optimized by machine learning methods.
  • the vast majority of artificial neural networks have a learning or training algorithm, which consists of modifying synaptic weights according to a set of data presented at the input of the network.
  • the purpose of this training is to enable the neural network to learn from the examples and to produce trained artificial intelligence models.
  • Deep learning belongs to the family of machine learning methods. It must be performed on databases large enough to train large systems, and new data can be added to the learning base on an ongoing basis to refine the learning.
  • The conventional method of generating learning databases for deep-learning artificial intelligence algorithms consists in using real data, labeled manually or with poorly automated tools, in order to generate a ground truth (VT).
  • One of the foundations for successful deep learning is building a large learning database.
  • The problem is the variability of the conditions: it lies in the difficulty of obtaining a large number of sensor images for different approaches to different runways, with different approach light ramps, in different weather conditions and for different aircraft.
  • the volume of such an image bank must reach a sufficient threshold for the content (the learning data) to be reliable both in terms of precision and in terms of geographical coverage.
  • An object of the invention is then to meet the aforementioned needs and to overcome the drawbacks of known techniques.
  • the invention proposes a solution for generating labeled learning data, which can be used by a deep learning image processing algorithm in order to generate trained artificial intelligence models, which are embedded in aircraft landing aid systems.
  • the invention relates to a solution for the automatic generation of learning data with the aim of training an artificial intelligence to recognize an aircraft landing strip in degraded visibility conditions.
  • images obtained by a synthetic image generator according to the method of the invention can be used alone or added to a real image bank as additional training data.
  • The general principle of the invention is based on the use of a flight simulator associated with a sensor simulator configured to provide simulated sensor data, as well as on the use of an automatic labeled-data generator configured to mask, in the labeled data, the parts that carry no information in the corresponding simulated sensor data.
  • The simulated data originating from simulated sensor images and made available in a training database are used to train various artificial intelligence algorithms for classification and detection, in particular algorithms based on deep neural networks.
  • The artificial intelligence algorithm for deep learning is based on a convolutional neural network (CNN).
  • A training database made up of simulated and/or real sensor images can be used to validate the robustness or weakness of different learning algorithms with respect to different scenarios ("use cases") considered problematic, by making it possible to run different algorithms in parallel on the data set and to detect excessive differences between the results they provide.
  • By simulating more varied conditions than is possible with real data alone, the device (and the method) of the invention makes it possible to generate a much larger training database, and thus to considerably increase the detection capability of a landing runway recognition system using trained models obtained from the training database constituted according to the invention.
  • A computer-implemented method for generating labeled learning data for an artificial intelligence machine, the method comprising at least the steps of: defining parameters for an approach scenario of an aircraft to a landing runway; using the scenario parameters in a flight simulator to generate simulated flight data, said flight simulator being configured to simulate said aircraft in the approach phase together with an associated autopilot; using the simulated flight data in a sensor simulator to generate simulated sensor data, said sensor simulator being configured to simulate a forward-looking sensor on board an aircraft able to provide sensor data representative of information of interest of a landing runway, the simulated sensor data that are generated being representative of information of interest of said landing runway; and using the simulated flight data and the simulated sensor data to generate a ground truth, said ground truth associated with the simulated sensor data forming a pair of simulated labeled training data.
  • The step of defining scenario parameters consists in defining parameters relating to at least a given landing runway, weather conditions and environmental conditions.
  • The step of generating simulated flight data consists in simulating different approach trajectories of the simulated aircraft with its autopilot using the scenario parameters, and in generating, for the different trajectories, simulated flight data including the position (latitude, longitude, altitude) and attitude (yaw, pitch, roll) of the simulated aircraft.
  • the step of generating simulated sensor data consists of generating infrared images or simulated radar data.
  • The step of generating ground truth consists of: generating, from the simulated flight data, a first level ground truth representative of characteristics of the simulated landing runway without visibility restriction; detecting the visibility limit by identifying blind zones in the simulated sensor data; and correcting the first level ground truth by masking the areas corresponding to the blind zones, in order to generate a corrected ground truth.
  • In one embodiment, the flight data and the sensor data are real data, and the step of generating ground truth comprises, before the step of generating the first level ground truth, steps consisting in calculating the positioning errors during landing in the real flight data and taking them into account to generate the first level ground truth.
  • the method further comprises a step of storing the simulated labeled learning data in a learning database.
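  • As an illustration of the above sequence of steps, a minimal Python sketch of the generation pipeline is given below; all object and function names (scenario_generator, flight_simulator, sensor_simulator, vt_generator, database) are hypothetical and do not describe the actual implementation of the invention.

    # Minimal sketch of the claimed pipeline: scenario -> flight simulation ->
    # sensor simulation -> ground truth -> storage in the learning database.
    # All objects are hypothetical illustrations, not the patented implementation.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class LabeledSample:
        sensor_data: np.ndarray    # simulated IR image or radar data
        ground_truth: np.ndarray   # corrected ground truth (VT)

    def generate_learning_data(scenario_generator, flight_simulator,
                               sensor_simulator, vt_generator, database, n_runs):
        """Run n_runs simulated approaches and store labeled training pairs."""
        for _ in range(n_runs):
            params = scenario_generator.next_scenario()             # runway, weather, initial state
            flight_states = flight_simulator.fly_approach(params)   # position + attitude samples
            for state in flight_states:
                sensor_data = sensor_simulator.render(state)        # simulated sensor output
                ground_truth = vt_generator.generate(state, sensor_data)
                database.store(LabeledSample(sensor_data, ground_truth))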
  • the invention also covers a computer program product comprising code instructions for performing the steps of the claimed method, when the program is executed on a computer.
  • the invention further covers a device for generating labeled learning data for an artificial intelligence machine, the device comprising means for implementing the steps of the claimed method according to any one of the claims.
  • FIG.1 a general architecture of a system for generating artificial intelligence models trained for runway and approach ramp recognition, using a training database constituted according to one embodiment of the invention
  • FIG.2 a diagram of a device for generating synthetic images according to one embodiment of the invention
  • FIG.3 a method for generating ground truth according to one embodiment of the invention
  • FIG. 4e an example of generating a ground truth check image according to one embodiment of the invention.
  • FIG. 1 illustrates a general architecture of a system 100 for generating an artificial intelligence model trained for landing runway recognition, using a training database constituted according to an embodiment of the invention.
  • The system 100 generally comprises a learning database 102 coupled to an image analysis module 104 configured to implement a deep learning artificial intelligence algorithm and to generate trained artificial intelligence (AI) models.
  • The learning database 102 should contain a very large amount of data representing as many situations as possible, encompassing different approaches to different runways with different approach light ramps and different weather conditions.
  • The training database constituted according to the principles of the invention comprises simulated training data in the form of a plurality of labeled data sets, where each labeled data set corresponds to a pair (simulated sensor data, ground truth VT).
  • the simulated sensor data correspond to a plurality of images from different simulated sensors (IR, Radar) associated respectively with ground truth.
  • the ground truth is a reference image, containing information of interest such as the landing runway and the approach ramp.
  • the ground truth is a set of reference radar data containing information of interest such as a landing runway and an approach ramp.
  • the result of a segmentation operation of an input image of a classifier is compared with the ground truth.
  • The simulated learning data in the learning base 102 are provided by a device 112 capable of generating simulated data (synthetic IR images and/or radar data) and their associated ground truth.
  • actual training data can be added to the simulated training database to augment the training data set.
  • the actual learning data comes from sensors on aircraft, whether it is visible images, infrared images or radar data.
  • The real images are provided by a multitude of different sensors equipping a plurality of aircraft performing either real flights or flights dedicated to image acquisition, each image being associated with the parameters of the corresponding flight, and in particular with the 3D position and 3D orientation parameters of the aircraft at the time the image was captured.
  • the real training data can come from a community of image suppliers 110, which within the meaning of the present invention is understood as a set of devices where each device is able to provide one or more real sensor images.
  • The community of image suppliers can be a fleet of aircraft, the term aircraft taking, within the meaning of the present invention, a generic definition covering any flying device, whether an airplane, a helicopter, a drone or a balloon, and whether piloted or not.
  • the image analysis module 104 is configured to implement a deep learning algorithm on training data sets retrieved from the training database 102, and generate trained artificial intelligence models (AI models) that can be carried on board aircraft.
  • The image analysis module 104 of the invention implements a deep learning algorithm for runway and approach ramp detection.
  • the algorithm is based on a CNN convolutional neural network.
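  • The patent text only specifies that a convolutional neural network is used; purely as an illustration, a minimal fully convolutional segmentation model (assumed architecture, written with PyTorch) mapping a single-channel sensor image to per-pixel class scores for runway, ramp, background and masked areas could look as follows.

    # Illustrative sketch only: a tiny encoder/decoder CNN for per-pixel
    # runway / approach ramp segmentation. The architecture is an assumption.
    import torch.nn as nn

    class TinySegmentationCNN(nn.Module):
        def __init__(self, num_classes=4):  # runway, ramp, background, masked
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, num_classes, 2, stride=2),
            )

        def forward(self, x):                       # x: (batch, 1, H, W) sensor image
            return self.decoder(self.encoder(x))    # per-pixel class scores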
  • the goal of deep learning is to model data with a high level of abstraction.
  • the learning phase defines a trained AI model that meets the operational need. This model is then used in the operational context during the inference phase.
  • the learning phase is therefore essential. In order to obtain the best model, the learning phase requires having built up a large learning database that is the most representative of the operational context.
  • Good learning defines a predictive model which adapts well to the learning data but which is also able to predict well on data not seen during learning. If the model does not fit the training data, it suffers from underfitting. If the model fits the training data too well and is not able to generalize, it suffers from overfitting.
  • The learning phase makes it possible to search for the best hyper-parameters of the architecture which best model the different labels (runway / ramp).
  • The neural network propagates the input forward (i.e. extraction/abstraction of characteristics specific to the objects of interest) and estimates the presence and position of the objects.
  • The learning algorithm calculates a prediction error and back-propagates the error through the network to update the parameters (weights) of the model.
  • a learning phase involves many iterations on the different learning data in order to converge towards a low error and a high precision of the AI model.
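  • A hedged sketch of such a learning phase (forward propagation, prediction error, back-propagation and parameter update, iterated many times over the labeled pairs) is given below; the choice of loss, optimizer and data loader is an assumption, not part of the patent.

    # Minimal training loop sketch: forward pass, per-pixel prediction error,
    # back-propagation and parameter update, repeated over the training pairs.
    import torch
    import torch.nn as nn

    def train(model, loader, epochs=10, lr=1e-3):
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        criterion = nn.CrossEntropyLoss()              # per-pixel prediction error
        for _ in range(epochs):                        # many iterations over the data
            for image, label_map in loader:            # (sensor image, ground truth) pairs
                optimizer.zero_grad()
                prediction = model(image)              # forward propagation
                loss = criterion(prediction, label_map)
                loss.backward()                        # back-propagation of the error
                optimizer.step()                       # update of the model weights
        return model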
  • FIG. 2 shows a diagram of a device 200 for generating labeled learning data according to one embodiment of the invention, which comprises several modules including a scenario generator 202, a flight simulator 204, a sensor simulator 206 and a ground truth generator 208.
  • The device for generating labeled learning data 200 is implemented in a computer 112 which comprises at least one processor, a memory configured to store the code instructions of the various programs, including those allowing the execution of the logical functions of the various modules of the device 200 executable by the processor under an operating system, a storage module, input/output (I/O) interfaces to peripherals and interconnection buses, as is known.
  • The device of the invention 200 can thus be implemented from hardware and/or software elements. It may be available on a computer-readable medium as a computer program product, executable by a processor, which includes instructions for performing the steps of the methods in their various embodiments.
  • the device of the invention 200 is configured to generate synthetic images with associated ground truth, which can be added as training data to a training database 102.
  • The flight simulator 204 is configured to simulate a type of aircraft with an associated autopilot (PA), the whole forming a software Flight/PA module configured to simulate realistic approach trajectories towards different landing runways and in different weather conditions.
  • The determining meteorological factors are the visibility conditions, which condition the graphic rendering of the ramp and the runway, as well as the wind and turbulence, which influence the orientation and stability of the simulated sensor relative to the landing runway.
  • The Flight/PA module 204 can be configured to successively carry out a determined number of approaches while varying various parameters, such as the runway on which the aircraft will land, the direction and strength of the wind, and the level of turbulence.
  • The scenario generation module 202 makes it possible to automatically initialize the Flight/PA module 204 at the start of the approach with initialization parameters 203 for a given landing runway, given weather conditions and initial parameters of the aircraft.
  • The flight module associated with the autopilot performs the complete approach until landing with the initial parameters, then the scenario generation module reinitializes the flight module with new conditions 203.
  • the Flight / PA module 204 can be configured to automatically perform a very large number of approaches with multiple initial conditions.
  • the scenario generator 202 is configured to set environmental conditions, such as for example the time and date which make it possible to position the sun or to simulate night conditions, as well as visibility, fog, rain, ...
  • Parameters relating to the position of the runway threshold are input data of the Flight/PA module, in order to simulate approaches slightly offset laterally and/or longitudinally relative to the runway, as could occur in manual piloting, with a slightly biased ILS beam, or with an LPV approach affected by a small GPS location error.
  • The position of the runway threshold is set in the Flight/PA module via the scenario generation module 202 (and not in the sensor simulator), the autopilot thus guiding the aircraft towards the offset runway and providing, in the sensor data, a view of the runway at a slightly different angle.
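  • As an illustration (with assumed offset magnitudes and a simplified flat-earth conversion), the scenario generator could compute such an offset runway threshold as follows.

    # Sketch: apply a small random lateral/longitudinal offset to the runway
    # threshold fed to the Flight/PA module, emulating manual-piloting dispersion,
    # a slightly biased ILS beam or a small GPS error. Values are assumptions.
    import math
    import random

    def offset_runway_threshold(lat_deg, lon_deg, runway_heading_deg,
                                max_lateral_m=15.0, max_longitudinal_m=30.0):
        """Return a threshold position shifted by a random offset in metres."""
        lateral = random.uniform(-max_lateral_m, max_lateral_m)
        longitudinal = random.uniform(-max_longitudinal_m, max_longitudinal_m)
        heading = math.radians(runway_heading_deg)
        # Offset expressed in north/east metres, then converted to degrees.
        north = longitudinal * math.cos(heading) - lateral * math.sin(heading)
        east = longitudinal * math.sin(heading) + lateral * math.cos(heading)
        dlat = north / 111_320.0
        dlon = east / (111_320.0 * math.cos(math.radians(lat_deg)))
        return lat_deg + dlat, lon_deg + dlon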
  • Flight simulator 204 generates simulated flight data 205 for a type of simulated aircraft.
  • The flight data, which are the position (latitude, longitude, altitude) and the attitude (yaw, pitch, roll) of the aircraft, become the input parameters of the sensor simulator 206 and of the ground truth generator 208.
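  • Purely for illustration, the flight data record exchanged between the modules could be represented as follows (field names and units are assumptions; the patent only lists position and attitude).

    # Hypothetical flight data record passed from the Flight/PA module to the
    # sensor simulator and to the ground truth generator.
    from dataclasses import dataclass

    @dataclass
    class FlightData:
        latitude: float    # degrees
        longitude: float   # degrees
        altitude: float    # feet or metres, depending on the simulator convention
        yaw: float         # degrees
        pitch: float       # degrees
        roll: float        # degrees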
  • The sensor simulator 206 is configured to simulate an on-board sensor looking towards the front of the aircraft and capable of providing data characteristic of a landing runway (information of interest) and of its environment, even in degraded visibility conditions.
  • the sensor simulator 206 receives the aircraft position and attitude parameters 205 provided by the Flight / PA module 204 and generates simulated sensor images.
  • the simulated sensor data can be images in the visible, in the infrared or radar data.
  • The simulated sensor can be a fixed IR camera positioned in the nose of the aircraft, oriented forward along the longitudinal axis of the aircraft and usually slightly downward, providing a continuous stream of black-and-white images.
  • the simulated sensor can be a combination of several specific cameras for different infrared and visible wavelength ranges in order to maximize the ability of the sensor to detect items of interest in degraded visibility.
  • the simulated sensor can be an active LIDAR or millimeter radar sensor. This type of sensor provides a flow of 3D data (elevation, azimuth, reflectivity, distance).
  • The sensor simulator 206 is preferably configured according to the type of sensor that will be used on board the actual aircraft carrying the final system, for example an IR camera or a millimeter-wave radar.
  • the sensor simulator 206 outputs simulated sensor data 207, for example in the form of a simulated IR image as shown in FIG. 4a.
  • The sensor data 207 from the simulated sensor are inputs of the ground truth generator 208, which also receives the flight parameters 205 of the simulated aircraft.
  • the ground truth generator 208 is configured to automatically generate, from the sensor data 207 and the flight data 205, a ground truth 209.
  • the ground truth generator 208 synchronously receives the flight data 205 from the aircraft simulator 204 and the sensor data 207 from the sensor simulator 206.
  • The input data (205, 207) of the ground truth generator 208 are synchronous and consistent, in that the flight data 205 generated by the Flight/PA module and processed directly by the ground truth generator 208 correspond to the same flight data 205 that were used to generate the sensor data 207.
  • the flight parameters 205 are sent only to the sensor simulator 206, which synchronously sends the sensor data 207 and the flight data 205.
  • the sensor simulator 206 sends an image associated with the corresponding aircraft parameters every 3 seconds.
  • A synthetic image generator struggles to operate in real time because of the computation time required to generate each image.
  • the device of the invention can comprise a synchronization mechanism guaranteeing the adequacy of the flight parameters 205 with the sensor data 207.
  • The device is coupled to a buffer system with a given sampling of the flight data 205 upstream of the sensor simulator, followed by a synchronous transmission of the flight data 205 and of the sensor data 207 from the sensor simulator to the ground truth generator.
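  • A hedged sketch of such a mechanism is given below: flight data are sampled into a buffer upstream of the (slower than real time) sensor simulator, and each rendered frame is paired with the exact flight sample that produced it before both are forwarded to the ground truth generator. All names are illustrative.

    # Sketch of the synchronisation buffer between flight simulator, sensor
    # simulator and ground truth generator. All objects are hypothetical.
    from collections import deque

    class FlightDataBuffer:
        def __init__(self):
            self._queue = deque()

        def push(self, flight_data):
            """Sample flight data upstream of the sensor simulator."""
            self._queue.append(flight_data)

        def pop(self):
            """Return the flight sample to pair with the next rendered frame."""
            return self._queue.popleft()

        def __len__(self):
            return len(self._queue)

    def run_pipeline(buffer, sensor_simulator, vt_generator):
        while len(buffer) > 0:
            flight_data = buffer.pop()
            sensor_data = sensor_simulator.render(flight_data)  # slow, not real time
            # Flight data and sensor data leave this stage synchronously.
            vt_generator.generate(flight_data, sensor_data)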
  • the ground truth generator 208 is configured to automatically generate a ground truth 209 for the simulated scenario.
  • the automatic generation of ground truth takes place in three steps 302, 304, 306 illustrated in Figure 3.
  • A step 302 consists in automatically generating, from the flight data 205 supplied by the Flight/PA module 204, labeling data representative of the characteristics of the simulated landing runway, without any visibility restriction.
  • The data generated are purely theoretical data, as they should be output by the real sensor in good visibility, for the landing runway and the approach light ramp defined for the scenario (to within the representativeness of the sensor model).
  • The approach light ramp is generally detectable before the landing runway. Therefore, in order to obtain detection as early as possible, it is important to train the AI algorithm to recognize the different standardized types of light ramps. However, this is not always the case, and sometimes the landing runway can be directly detectable first: for example, if the light ramp is made of LEDs instead of incandescent lamps, an IR sensor can detect the runway before the light ramp. This can also be the case with a radar, which from a distance can detect the contrast between the ground and the asphalt of the runway well before the contrast between the ground and the metal of the ramp, whose reflecting surfaces are too small when seen from far away. It is therefore important to train the AI algorithm to recognize the landing runway as well.
  • The ground truth generator of the invention can be configured to define the landing runway, that is to say the position of the two runway thresholds and the type of approach ramp used by the sensor simulator, as well as the characteristics of the simulated sensor, i.e. the orientation of the sensor in the aircraft frame, the horizontal and vertical fields of view of the sensor and the type of output data, for example, for an image, the horizontal and vertical resolutions in pixels, any distortions, etc.
  • This configuration allows step 302 to generate very simplified pseudo-sensor data in which only the runway appears as a first type of uniform information (402), the approach light ramp as a second type of uniform information (404) and the rest of the data as a third type of uniform information (406).
  • FIG. 4b illustrates a first level ground truth obtained at the end of the first step 302 in the case of a sensor generating 2D images. This is a uniform image (black in Figure 4b, but it could be red for example), with the runway and the ramp appearing in two other distinct colors (for example green for the runway and yellow for the ramp).
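  • For illustration only, a first level ground truth image of this kind could be rendered as follows; project_to_image() is a hypothetical helper that applies the simulated sensor model (pose, fields of view, resolution) to 3D points, and the colours and image size are arbitrary examples.

    # Sketch of step 302: uniform background, runway area and approach ramp
    # lights painted in three distinct uniform colours (BGR values assumed).
    import numpy as np
    import cv2

    BACKGROUND = (0, 0, 0)    # third type of uniform information (406)
    RUNWAY = (0, 255, 0)      # first type of uniform information (402)
    RAMP = (0, 255, 255)      # second type of uniform information (404)

    def first_level_ground_truth(runway_corners_3d, ramp_lights_3d,
                                 project_to_image, width=1024, height=768):
        vt = np.zeros((height, width, 3), dtype=np.uint8)
        vt[:] = BACKGROUND
        runway_px = np.array([project_to_image(p) for p in runway_corners_3d],
                             dtype=np.int32)
        cv2.fillPoly(vt, [runway_px], RUNWAY)              # runway surface
        for light in ramp_lights_3d:                       # approach light ramp
            u, v = project_to_image(light)
            cv2.circle(vt, (int(u), int(v)), 2, RAMP, thickness=-1)
        return vt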
  • the simulated sensor reproduces the consequences of the simulated external environment and, even if a landing runway is in the field of the sensor, depending on the distance, the visibility, the weather conditions, the date and time and type of sensor, the runway and its approach ramp may not be visible or only partially visible.
  • Another step 304 performed by the ground truth generator consists in detecting the visibility limit by identifying blind zones in the sensor data of the simulated sensor; then, in a following step 306, the method uses the blind areas that have been identified to correct the first level ground truth generated in the first step 302, based on the sight distance in the sensor data of the simulated sensor, and to generate a corrected ground truth 209.
  • In one embodiment, the method of the invention implements in step 304 a visibility limit detection algorithm on each set of sensor data 207.
  • Such a visibility limit detection algorithm can, for example, compute a contrast map of the image and then its variance and, depending on the result, binarize the image with different thresholds to obtain a single border 408 in the image between the part with visibility, characterized by the presence of contrasting elements, and the more uniform part without visibility, as illustrated in Figure 4c.
  • The visibility limit detection step 304 makes it possible to provide a border 408 between an upper part of the sensor data, corresponding to the more distant, blind data, and a lower part of the sensor data, corresponding to the closer data with visibility.
  • In the third step 306, the border is used to mask the areas of the labeled data corresponding to the blind areas in the sensor data of the simulated sensor.
  • a mask having a fourth type of uniform information and covering the part corresponding to the blind data is superimposed on the pseudo sensor data resulting from the first step 302 in order to generate a corrected ground truth, as illustrated in FIG. 4d.
  • The upper part of the first level ground truth image resulting from the first step, corresponding to the blind part in the simulated IR image, can be masked with uniform information, which can be a blue color for example.
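  • A possible implementation sketch of steps 304 and 306 is given below; the contrast measure, smoothing window and threshold are assumptions, since the patent only requires a single border between the contrasted (visible) and uniform (blind) parts of the sensor data.

    # Sketch: detect the visibility border from a contrast map of the simulated
    # sensor image, then mask the blind (upper) part of the first level ground
    # truth with a fourth uniform colour (blue in BGR, as an example).
    import numpy as np
    import cv2

    MASK_COLOR = (255, 0, 0)   # fourth type of uniform information

    def visibility_border(sensor_image_gray, threshold=10.0):
        """Return, for each column, the first row from the top showing contrast."""
        grad = cv2.Laplacian(sensor_image_gray, cv2.CV_64F)
        contrast = cv2.blur(np.abs(grad), (15, 15))        # smoothed contrast map
        visible = contrast > threshold
        border = np.argmax(visible, axis=0)                # first visible row per column
        border[~visible.any(axis=0)] = sensor_image_gray.shape[0]  # fully blind column
        return border

    def mask_blind_area(first_level_vt, border):
        corrected = first_level_vt.copy()
        for col, row in enumerate(border):
            corrected[:row, col] = MASK_COLOR              # mask the blind upper part
        return corrected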
  • the ground truth generator also makes it possible to generate 308 a corrected ground truth control image as illustrated in FIG. 4e, by merging each set of ground truth data with the corresponding simulated sensor data.
  • a predefined level of transparency is applied to the ground-truth data from the second step 304 before superimposing them on the corresponding simulated sensor image to obtain a ground truth check image.
  • The transparency makes it possible to verify, in a control step, that the landing runway and approach light ramp of the ground truth are perfectly superimposed on those of the sensor data from the simulated sensor, and only where they are visible in the sensor data.
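  • As a sketch, the control image can be obtained by alpha-blending the corrected ground truth over the simulated sensor image; the transparency value below is an assumption.

    # Sketch: ground truth check image obtained by superimposing the corrected
    # ground truth with a predefined transparency on the simulated sensor image.
    import cv2

    def ground_truth_check_image(sensor_image_bgr, corrected_vt_bgr, alpha=0.4):
        """Overlay the ground truth (weight alpha) on the sensor image."""
        return cv2.addWeighted(corrected_vt_bgr, alpha,
                               sensor_image_bgr, 1.0 - alpha, 0)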
  • The check is based on a visual inspection by an operator, who can then trigger an automatic deletion from the database of the various files corresponding to any item that fails the check.
  • the device of the invention thus makes it possible, by successive implementation of different scenarios, types of aircraft, on-board sensors, to generate a simulated database consisting of a plurality of sensor data from simulated sensors with a plurality of associated ground truth.
  • a set of data derived from the parameters of the aircraft can also be associated in the database with the other simulated data.
  • aircraft parameters can be added to the learning data, such as for example the course deviation, the height, or the DME distance from the runway.
  • the associated ground truth is an image of the same size with a specific color for each element of the image that we want the algorithm to learn to recognize.
  • The simulated database can be used as a training database to implement a deep learning algorithm for runway recognition based on an artificial neural network, excluding the control data, which are not part of the training data.
  • data from real sensors during real flights are recorded with aircraft parameters (at least those of position and attitude), and information as to the synchronization of the sensor data with the parameters of the aircraft.
  • The difficulty of implementing the method of the invention on this type of data lies in the step of generating the first level ground truth (step 302), because the real parameters of the aircraft are generally slightly erroneous.
  • a GPS position is precise to a few meters with a maximum of the order of 30 meters
  • a GPS altitude is even a little less precise
  • an aircraft heading is known with an accuracy of a few tenths of a degree to 2 or 3 degrees depending on the type of instrument providing the parameter.
  • Since the position and altitude of the two runway thresholds are known precisely, it is possible to easily calculate the runway heading; and since the aircraft rolls fairly precisely on the centerline of the runway during deceleration, it is easy to calculate the heading error of the parameters provided by the aircraft, the GPS altitude error by comparing the altitude of the aircraft at touchdown with the altitude of the runway threshold (taking into account the height of the GPS antenna with respect to the ground and taking several points during taxiing), and to find an approximation of the latitude and longitude errors making it possible to correct the lateral deviation from the runway. Nevertheless, an uncertainty remains on the longitudinal deviation from the runway, since it is never known precisely where the aircraft is located longitudinally on the runway, but this error is the least impacting for the position of the runway in the ground truth.
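  • A simplified sketch of such an error estimation on real recorded data is given below; the field names and the averaging over taxiing samples are illustrative assumptions of the correction principle described above.

    # Sketch: estimate heading and GPS altitude errors during the landing roll,
    # using the precisely known runway heading and threshold altitude.
    def estimate_errors(rollout_samples, runway_heading_deg,
                        threshold_altitude_m, gps_antenna_height_m):
        """rollout_samples: list of (recorded_heading_deg, recorded_gps_altitude_m)
        taken while the aircraft rolls on the runway centreline."""
        n = len(rollout_samples)
        # The aircraft rolls along the centreline, so its true heading is the runway heading.
        heading_error = sum(h - runway_heading_deg for h, _ in rollout_samples) / n
        # Recorded altitude minus (threshold altitude + antenna height), averaged
        # over several points taken during taxiing.
        altitude_error = sum(alt - (threshold_altitude_m + gps_antenna_height_m)
                             for _, alt in rollout_samples) / n
        return heading_error, altitude_error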
  • The following steps (304, 306) are the same as those described with reference to FIG. 3 for the simulated data: the blind parts are masked by a visibility limit detection algorithm, and control data can be generated. Data that are too erroneous can be deleted. This embodiment makes it possible to obtain additional data for the training database of simulated data, thus improving the learning phase of an AI algorithm.
  • the simulated training database is combined with an actual training database to greatly increase the amount of training data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Geometry (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Graphics (AREA)
  • Traffic Control Systems (AREA)
  • Feedback Control In General (AREA)
EP20797520.2A 2019-11-07 2020-11-03 Verfahren und vorrichtung zum erzeugen von lerndaten für eine maschine mit künstlicher intelligenz zur flugzeuglandeunterstützung Pending EP4055349A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1912484A FR3103049B1 (fr) 2019-11-07 2019-11-07 Procede et dispositif de generation de donnees d'apprentissage pour machine d'intelligence artificielle pour l'aide a l'atterrissage d'aeronef
PCT/EP2020/080803 WO2021089536A1 (fr) 2019-11-07 2020-11-03 Procede et dispositif de generation de donnees d'apprentissage pour machine d'intelligence artificielle pour l'aide a l'atterrissage d'aeronef

Publications (1)

Publication Number Publication Date
EP4055349A1 true EP4055349A1 (de) 2022-09-14

Family

ID=70154466

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20797520.2A Pending EP4055349A1 (de) 2019-11-07 2020-11-03 Verfahren und vorrichtung zum erzeugen von lerndaten für eine maschine mit künstlicher intelligenz zur flugzeuglandeunterstützung

Country Status (4)

Country Link
US (1) US20220406040A1 (de)
EP (1) EP4055349A1 (de)
FR (1) FR3103049B1 (de)
WO (1) WO2021089536A1 (de)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102159052B1 (ko) * 2020-05-12 2020-09-23 주식회사 폴라리스쓰리디 영상 분류 방법 및 장치

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3930862A1 (de) * 1989-09-15 1991-03-28 Vdo Schindling Verfahren und einrichtung zur darstellung von flugfuehrungsinformation
US8264498B1 (en) * 2008-04-01 2012-09-11 Rockwell Collins, Inc. System, apparatus, and method for presenting a monochrome image of terrain on a head-up display unit
US20100039294A1 (en) * 2008-08-14 2010-02-18 Honeywell International Inc. Automated landing area detection for aircraft
GB201118694D0 (en) * 2011-10-28 2011-12-14 Bae Systems Plc Identification and analysis of aircraft landing sites
FR2996670B1 (fr) * 2012-10-05 2014-12-26 Dassault Aviat Systeme de visualisation pour aeronef, et procede de visualisation associe
FR3030092B1 (fr) * 2014-12-12 2018-01-05 Thales Procede de representation tridimensionnelle d'une scene
AU2016315938B2 (en) * 2015-08-31 2022-02-24 Cape Analytics, Inc. Systems and methods for analyzing remote sensing imagery
FR3058233B1 (fr) 2016-11-03 2018-11-16 Thales Procede de superposition d'une image issue d'un capteur sur une image synthetique par la detection automatique de la limite de visibilite et systeme de visualision associe

Also Published As

Publication number Publication date
FR3103049A1 (fr) 2021-05-14
WO2021089536A1 (fr) 2021-05-14
US20220406040A1 (en) 2022-12-22
FR3103049B1 (fr) 2022-01-28

Similar Documents

Publication Publication Date Title
FR3103048A1 (fr) Procede et dispositif de generation de donnees synthetiques d'apprentissage pour machine d'intelligence artificielle pour l'aide a l'atterrissage d'aeronef
FR3103047A1 (fr) Procede et dispositif d'apprentissage par reseau de neurones artificiels pour l'aide a l'atterrissage d'aeronef
Lookingbill et al. Reverse optical flow for self-supervised adaptive autonomous robot navigation
US20100305857A1 (en) Method and System for Visual Collision Detection and Estimation
Li et al. Toward automated power line corridor monitoring using advanced aircraft control and multisource feature fusion
EP3657213B1 (de) Lernverfahren eines neuronennetzes an bord eines luftfahrzeugs für die landehilfe dieses luftfahrzeugs, und server für die umsetzung eines solchen verfahrens
GB2493249A (en) Context searching in images for target object
US11967103B2 (en) Multi-modal 3-D pose estimation
Nagarani et al. Unmanned Aerial vehicle’s runway landing system with efficient target detection by using morphological fusion for military surveillance system
US20210018611A1 (en) Object detection system and method
CN112596071A (zh) 无人机自主定位方法、装置及无人机
EP2517152B1 (de) Verfahren zur objecktklassifizierung in einem bildbeobachtungssystem
WO2021089539A1 (fr) Procede et dispositif d'aide a l'atterrissage d'aeronef en conditions de visibilite degradee
WO2021089536A1 (fr) Procede et dispositif de generation de donnees d'apprentissage pour machine d'intelligence artificielle pour l'aide a l'atterrissage d'aeronef
Yang et al. Autonomous UAVs landing site selection from point cloud in unknown environments
EP3656681A1 (de) Vorrichtung und verfahren zur landungsunterstützung eines luftfahrzeugs bei eingeschränkten sichtbedingungen
FR3112013A1 (fr) Procédé et dispositif d’aide à la conduite d’un aéronef se déplaçant au sol.
Milanov et al. Method for clustering and identification of objects in laser scanning point clouds using dynamic logic
Talaat et al. Enhanced aerial vehicle system techniques for detection and tracking in fog, sandstorm, and snow conditions
Lu et al. Aerodrome situational awareness of unmanned aircraft: an integrated self‐learning approach with Bayesian network semantic segmentation
Pandey et al. Deep Learning for Iceberg Detection in Satellite Images
Sajjad et al. A Comparative Analysis of Camera, LiDAR and Fusion Based Deep Neural Networks for Vehicle Detection
Bulatov et al. Segmentation methods for detection of stationary vehicles in combined elevation and optical data
EP4086819A1 (de) Lernverfahren einer überwachten künstlichen intelligenz zur identifizierung eines vorbestimmten objekts in der umgebung eines flugzeugs
Svanström et al. Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities. Drones 2022, 6, 317

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220505

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230427

17Q First examination report despatched

Effective date: 20230602