CN117951648A - Airborne multisource information fusion method and system

Airborne multisource information fusion method and system

Info

Publication number: CN117951648A
Application number: CN202410347864.4A
Authority: CN (China)
Prior art keywords: information, source, trained, training, context
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN117951648B
Inventor: 吴伟
Current Assignee: Chengdu Zhengyang Bochuang Electronic Technology Co., Ltd. (the listed assignees may be inaccurate)
Original Assignee: Chengdu Zhengyang Bochuang Electronic Technology Co., Ltd.

Events:
- Application filed by Chengdu Zhengyang Bochuang Electronic Technology Co., Ltd.
- Priority to CN202410347864.4A
- Publication of CN117951648A
- Application granted
- Publication of CN117951648B
- Current legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/088: Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Testing Or Calibration Of Command Recording Devices (AREA)

Abstract

The application provides an airborne multisource information fusion method and system. A trained report learning network with initialized weight parameters performs context awareness on unsupervised flight report data to generate first trained report context awareness information, while a guiding multi-source perception network with initialized weight parameters processes the unsupervised multi-source sensor data and outputs guiding multi-source context awareness information. The guiding multi-source context awareness information that matches the first trained report context awareness information is screened out to generate first fuzzy selection context information. A basic trained multi-source perception network is then trained on the first fuzzy selection context information together with the annotation context information of the supervised multi-source sensor data, and is progressively optimized into a target multi-source perception network. When the target multi-source perception network analyzes candidate airborne multi-source fusion data, the resulting context awareness results not only help improve flight safety and efficiency, but also provide key information for subsequent flight decisions.

Description

Airborne multisource information fusion method and system
Technical Field
The application relates to the technical field of artificial intelligence, in particular to an airborne multisource information fusion method and system.
Background
With the rapid development of the aviation industry, aircraft (e.g., commercial airliners, unmanned aerial vehicles, etc.) need to perceive and understand complex and diverse flight environments in real time as the mission is performed. To improve flight safety and efficiency, aircraft are often equipped with various sensors, such as radar, optical cameras, infrared sensors, etc., to collect data about the surrounding environment. The large amount of data generated by these sensors requires efficient processing and analysis to enhance the situational awareness capabilities of the aircraft.
In the prior art, multi-source awareness networks are used to integrate and process data from the multiple sensors of an aircraft. However, these multi-source awareness networks face a number of challenges. First, the processing of unsupervised multi-source sensor data often relies on fuzzy selection of context information, which can introduce uncertainty and error into the training process. Second, supervised multi-source sensor data carries annotation information, but the annotation work is time-consuming and costly, which limits the quantity and diversity of such data.
Disclosure of Invention
In view of the above, the present application aims to provide an airborne multisource information fusion method and system.
According to a first aspect of the present application, there is provided an on-board multi-source information fusion method, the method comprising:
Acquiring an onboard multi-source fusion training data sequence, wherein the onboard multi-source fusion training data sequence comprises supervised multi-source fusion training data and unsupervised multi-source fusion training data, the supervised multi-source fusion training data comprises supervised multi-source sensor data and supervised flight report data corresponding to the supervised multi-source sensor data, and the unsupervised multi-source fusion training data comprises unsupervised multi-source sensor data and unsupervised flight report data corresponding to the unsupervised multi-source sensor data;
Based on a trained report learning network of an initialized weight parameter, performing context awareness on the unsupervised flight report data to generate one or more pieces of first trained report context awareness information;
Based on a guiding multi-source sensing network of the initialized weight parameters, performing context sensing on the unsupervised multi-source sensor data to generate one or more guiding multi-source context sensing information;
Selecting the one or more guiding multi-source context awareness information according to the one or more first trained report context awareness information, and generating one or more first fuzzy selection context information;
Training a basic trained multi-source sensing network according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data to generate a target multi-source sensing network;
Acquiring candidate airborne multisource fusion data, and performing context awareness on the candidate airborne multisource fusion data based on the target multisource awareness network to generate context awareness results; the context awareness result comprises one or more context awareness tags and tag feature information corresponding to each context awareness tag.
In a possible implementation manner of the first aspect, the method further includes:
Acquiring network function layer definition information of a basic training report learning network;
And training the basic trained report learning network according to the supervised flight report data and the unsupervised flight report data to generate the trained report learning network of the initialization weight parameter.
In a possible implementation manner of the first aspect, the training the basic trained report learning network according to the supervised flight report data and the unsupervised flight report data to generate the trained report learning network of the initialization weight parameter includes:
Based on a guidance report learning network of the initialized weight parameters, performing context awareness on the unsupervised flight report data to generate one or more guidance report context awareness information;
Performing context awareness on the supervised flight report data based on a basic trained report learning network, and generating one or more second trained report context awareness information;
Training the basic trained report learning network according to the one or more guidance report context awareness information, the report context annotation information corresponding to the supervised flight report data and the one or more second trained report context awareness information, and generating the trained report learning network of the initialization weight parameter.
In a possible implementation manner of the first aspect, the method further includes:
Based on the target multi-source perception network, performing context awareness on the unsupervised multi-source sensor data to generate one or more pieces of trained multi-source context awareness information;
Selecting the one or more guidance report context awareness information according to the one or more trained multi-source context awareness information to generate one or more second fuzzy selection context information;
The training the basic trained report learning network according to the one or more guidance report context awareness information, the report context label information corresponding to the supervised flight report data and the one or more second trained report context awareness information, and generating the trained report learning network of the initialization weight parameter, including:
Training the basic trained report learning network according to the one or more second fuzzy selection context information, the report context annotation information corresponding to the supervised flight report data and the one or more second trained report context awareness information, and generating the trained report learning network of the initialization weight parameter.
In a possible implementation manner of the first aspect, the selecting the one or more guiding multi-source context awareness information according to the one or more first trained report context awareness information generates one or more first fuzzy selection context information, including:
Determining a context dimension label corresponding to each piece of guiding multi-source context awareness information and a context dimension label corresponding to each piece of first trained report context awareness information;
Determining, for each guiding multi-source context awareness information, whether first trained report context awareness information identical to a context dimension label of the guiding multi-source context awareness information exists in the one or more first trained report context awareness information;
And outputting the guiding multi-source context awareness information as the first fuzzy selection context information if the context dimension label of the guiding multi-source context awareness information is the same as the context dimension label of any one of the one or more first trained report context awareness information.
In a possible implementation manner of the first aspect, the training the basic trained multisource sensing network according to each piece of first fuzzy selection context information and the labeling context information corresponding to the supervised multisource sensor data to generate the target multisource sensing network includes:
Based on the basic trained multi-source sensing network, performing context sensing on the unsupervised multi-source sensor data to generate one or more pieces of first trained multi-source context sensing information;
Based on the basic trained multi-source sensing network, performing context sensing on the supervised multi-source sensor data to generate one or more pieces of second trained multi-source context sensing information;
Performing training error determination on the one or more first trained multi-source context awareness information according to each piece of first fuzzy selection context information to generate first training error information;
According to the labeling context information corresponding to the supervised multi-source sensor data, training error determination is carried out on the one or more second trained multi-source context awareness information, and second training error information is generated;
Outputting the sum of the first training error information and the second training error information as training error information of the basic trained multisource perception network;
And training the basic trained multi-source sensing network according to the training error information to generate the target multi-source sensing network.
In a possible implementation manner of the first aspect, the method further includes:
Performing cross context awareness information extraction on the one or more first fuzzy selection context information and the one or more first trained multi-source context awareness information to generate cross context awareness information;
The training error determination is performed on the one or more first trained multi-source context awareness information according to each piece of the first fuzzy selection context information, and first training error information is generated, including:
Covering the first fuzzy selection context information according to the cross context awareness information;
And training error determination is carried out on the one or more first trained multi-source context awareness information according to the context awareness information in the cross context awareness information, and first training error information is generated.
In a possible implementation manner of the first aspect, the basic trained multi-source sensing network meets a training termination condition after training over a plurality of training batches, and the method further comprises:
Randomly initializing network function layer definition information of the basic trained multi-source sensing network when training of a first training batch of the basic trained multi-source sensing network is carried out, generating an initialized trained multi-source sensing network of the first training batch, wherein, when training of the first training batch is carried out, the network function layer definition information of the guiding multi-source sensing network of the initialized weight parameters is preset definition parameter information;
Training the basic trained multi-source sensing network according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data, wherein the training comprises the following steps:
And when the first training batch is trained, training the initialized trained multi-source sensing network of the first training batch according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data, and generating a target trained multi-source sensing network of the first training batch.
In a possible implementation manner of the first aspect, the method further includes:
When training of the M-th training batch of the basic trained multi-source sensing network is carried out, acquiring a target trained multi-source sensing network generated by the (M-1)-th training batch;
Outputting the target trained multi-source sensing network generated by the (M-1)-th training batch as a guiding multi-source sensing network of an initialization weight parameter during training of the M-th training batch;
Randomly initializing network function layer definition information of the basic trained multi-source sensing network to generate an initialized trained multi-source sensing network of the M-th training batch;
Training the basic trained multi-source sensing network according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data, wherein the training comprises the following steps:
When training of the M-th training batch is carried out, training the initialized trained multi-source sensing network of the M-th training batch according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data to generate a target trained multi-source sensing network of the M-th training batch; wherein M is an integer greater than 1 and not greater than X; and when M is equal to X, the target trained multi-source sensing network of the M-th training batch forms the target multi-source sensing network, and X is the total number of training batches.
According to a second aspect of the present application, there is provided an on-board multi-source information fusion system, the on-board multi-source information fusion system comprising a machine-readable storage medium storing machine-executable instructions and a processor, the processor implementing the on-board multi-source information fusion method as described above when executing the machine-executable instructions.
According to a third aspect of the present application, there is provided a computer readable storage medium having stored therein computer executable instructions that, when executed, implement the on-board multi-source information fusion method described above.
According to any one of the above aspects, the application achieves accurate perception and identification of aircraft contexts by effectively using the onboard multi-source fusion training data sequence and combining the trained report learning network with the guiding multi-source perception network. First, by fusing supervised and unsupervised data, a rich training data set is obtained that contains the flight data captured by the sensors and the associated flight report data; this data set provides the basis for subsequent context awareness training. On this basis, the trained report learning network with initialized weight parameters analyzes the unsupervised flight report data and generates first trained report context awareness information. Meanwhile, the guiding multi-source perception network with initialized weight parameters processes the unsupervised multi-source sensor data and outputs guiding multi-source context awareness information. The two parts of information complement each other and provide a more complete perspective for context awareness. Further, the guiding multi-source context awareness information that matches the first trained report context awareness information is screened out according to the first trained report context awareness information, generating first fuzzy selection context information. This step strengthens the link between unsupervised and supervised learning, so that more valid information can be learned from unlabeled data. The basic trained multi-source perception network is then trained on the first fuzzy selection context information and the annotation context information of the supervised multi-source sensor data, and is progressively optimized into the target multi-source perception network, which combines the multi-source information and extracts accurate context awareness results through a deep learning algorithm. Finally, when the target multi-source perception network analyzes candidate airborne multi-source fusion data, it outputs context awareness tags with rich feature information. These context awareness results not only help improve flight safety and efficiency, but also provide key information for subsequent flight decisions. Overall, the application makes full use of the combination of supervised and unsupervised data and significantly improves the context awareness capability for airborne multi-source fusion data.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; a person skilled in the art may obtain other related drawings from these drawings without inventive effort.
Fig. 1 shows a flow diagram of an airborne multisource information fusion method according to an embodiment of the application;
fig. 2 is a schematic diagram of a component structure of an on-board multi-source information fusion system according to an embodiment of the present application;
reference numerals: 100-an onboard multi-source information fusion system; 1001-a processor; 1002-bus; 1003-memory; 1004-transceivers.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings. It should be understood that the accompanying drawings in the present application are for illustration and description only and are not intended to limit the scope of the present application; in addition, the schematic drawings are not drawn to scale. The flowcharts used in this disclosure illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flowcharts may be implemented out of order, and that steps without a logical contextual relationship may be performed in reverse order or concurrently. Furthermore, a person skilled in the art, guided by this disclosure, may add at least one other operation to a flowchart or remove at least one operation from a flowchart.
In addition, the described embodiments are only some, but not all, embodiments of the application. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application is not intended to limit the scope of the application as claimed, but merely represents selected embodiments of the application. All other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of protection of the present application.
Fig. 1 is a schematic flow chart of an airborne multi-source information fusion method according to an embodiment of the present application. It should be understood that, in other embodiments, the order of some steps of the airborne multi-source information fusion method may be exchanged according to actual needs, or some steps may be omitted or deleted. The airborne multi-source information fusion method comprises the following detailed steps:
step S110, acquiring an onboard multi-source fusion training data sequence.
In this embodiment, the onboard multi-source fusion training data sequence includes supervised multi-source fusion training data and unsupervised multi-source fusion training data, the supervised multi-source fusion training data includes supervised multi-source sensor data and supervised flight report data corresponding to the supervised multi-source sensor data, and the unsupervised multi-source fusion training data includes unsupervised multi-source sensor data and unsupervised flight report data corresponding to the unsupervised multi-source sensor data.
For example, the goal of this step is to collect on-board multi-source fusion training data for model training. The onboard multisource fusion training data comes from a variety of sensors on board the aircraft and is divided into two types: supervised and unsupervised.
The supervised multisource fusion training data may include sensor data collected from various sensors, such as radar, cameras, and infrared sensors. It also includes flight report data corresponding to these sensor data, such as flight crew operation records or system logs; these provide a correct answer or context label, i.e., supervised flight report data, for guiding the model in learning how to accurately identify and classify the input data.
The unsupervised multisource fusion training data is also data collected from a variety of sensors, and the flight report data corresponding to the unsupervised multisource sensor data does not provide a correct answer or label.
Illustratively, the supervised multisource fusion training data specifically includes supervised multisource sensor data and supervised flight reporting data. For supervised multisource sensor data: it is assumed that an aircraft is equipped with radar, cameras, and infrared sensors, and that the data collected by these sensors during a particular flight mission constitutes supervised multisource sensor data. Because these supervised multisource sensor data are used for supervised learning, each piece of supervised multisource sensor data has been correctly labeled, such as to identify whether clouds, mountains, other aircraft, etc. are encountered in flight. For supervised flight reporting data: the supervised flight reporting data corresponding to the above sensor data may include pilot or automated system generated reports describing various events and conditions in flight, such as weather conditions, attitude, special conditions encountered, etc., and which are matched to the supervised multisource sensor data.
The unsupervised multisource fusion training data is also made up of two components: unsupervised multisource sensor data and unsupervised flight report data. For unsupervised multisource sensor data: again, it is assumed that the same aircraft collects data in another flight, but this time the data is not annotated, i.e. the specific scenario to which the sensor data corresponds at each moment is not known. These data are referred to as unsupervised multisource sensor data. For unsupervised flight report data: flight reports collected concurrently with the unsupervised multisource sensor data are not annotated. For example, a flight report may record some abnormal vibrations in flight, but not explicitly indicated as being due to turbulence encountered or engine problems.
In practice, the supervised multi-source fusion training data may be used directly to train the model, as each sample therein has an explicit label or description. The unsupervised multisource fusion training data requires automatic discovery of potential structures or patterns by algorithms because they lack explicit tags. The combination of the two can promote generalization capability and interpretation of unknown data of the model.
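To make the data organization concrete, the following minimal sketch shows one way the onboard multi-source fusion training data sequence could be represented and split; all class and field names here are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class FusionSample:
    """One entry of the onboard multi-source fusion training data sequence."""
    radar: np.ndarray            # radar return, e.g. a range-Doppler map
    camera: np.ndarray           # optical camera frame
    infrared: np.ndarray         # infrared sensor frame
    flight_report: str           # corresponding flight report text
    context_label: Optional[str] = None   # annotation; None for unsupervised data

def split_sequence(sequence: List[FusionSample]) -> Tuple[List[FusionSample], List[FusionSample]]:
    """Partition the sequence into supervised and unsupervised training data (step S110)."""
    supervised = [s for s in sequence if s.context_label is not None]
    unsupervised = [s for s in sequence if s.context_label is None]
    return supervised, unsupervised
```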
Step S120, performing context awareness on the unsupervised flight report data based on the trained report learning network initializing the weight parameters, and generating one or more first trained report context awareness information.
For example, the unsupervised flight report data may be processed using a neural network model that is already provided with initial parameter settings, i.e., the trained report learning network, which produces a coded representation of the potential contexts in a flight report. Its task is to identify, by analyzing the flight report data, the different flight contexts that may exist in them, even though these flight report data carry no explicit labels; the trained report learning network outputs the first trained report context awareness information.
Step S130, based on the guiding multi-source sensing network of the initialized weight parameters, context sensing is carried out on the unsupervised multi-source sensor data, and one or more guiding multi-source context sensing information is generated.
For example, similar to the trained report learning network, the guiding multi-source sensing network in this step is also a neural network model with preset initial parameters, but it focuses on analyzing the unsupervised multi-source sensor data. The goal of this process is to extract features from the unsupervised multi-source sensor data, forming guiding multi-source context awareness information that can represent the context of the sensor data, as sketched below.
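As a rough illustration of such a perception network, the hypothetical sketch below fuses per-sensor encoders into a single context representation; the layer sizes, module names, and two-sensor setup are assumptions for illustration, not the patent's actual architecture.

```python
import torch
import torch.nn as nn

class GuidingMultiSourceNet(nn.Module):
    """Illustrative guiding multi-source perception network: one encoder per
    sensor, with the encoded features fused into context awareness logits."""
    def __init__(self, n_contexts: int):
        super().__init__()
        self.radar_enc = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.camera_enc = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, n_contexts)  # fused features -> context logits

    def forward(self, radar: torch.Tensor, camera: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.radar_enc(radar), self.camera_enc(camera)], dim=1)
        return self.head(fused)  # guiding multi-source context awareness information
```

For instance, `GuidingMultiSourceNet(n_contexts=5)(radar_batch, camera_batch)` would yield one score per candidate flight context for each sample in the batch.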
Step S140, selecting the one or more guiding multi-source context awareness information according to the one or more first trained report context awareness information, and generating one or more first fuzzy selection context information.
For example, at this stage, the first trained report context awareness information and the guiding multi-source context awareness information may be compared to determine which multi-source awareness information best matches or complements the context information extracted from the flight reports. The selected result is referred to as first fuzzy selection context information: a subset of data selected from the unsupervised multi-source sensor data that is most likely to contribute to improved model performance.
Step S150, training the basic trained multisource sensing network according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multisource sensor data, and generating a target multisource sensing network.
For example, a new basic trained multisource awareness network may be trained using the first fuzzy selection context information obtained in the previous step and the annotation context information corresponding to the supervised multisource sensor data, with the goal of enabling the basic trained multisource awareness network to more accurately understand and classify different flight scenarios. After training is completed, the obtained basic trained multi-source sensing network can be used as a target multi-source sensing network capable of being formally deployed.
Step S160, acquiring candidate airborne multi-source fusion data, and performing context awareness on the candidate airborne multi-source fusion data based on the target multi-source awareness network to generate a context awareness result. The context awareness result comprises one or more context awareness tags and tag feature information corresponding to each context awareness tag.
For example, finally, new, unseen onboard multi-source fusion data (candidate onboard multi-source fusion data) may be collected, which may be data collected during the actual flight mission. And analyzing the candidate airborne multisource fusion data by using the trained target multisource perception network so as to identify the flight situation represented by the current candidate airborne multisource fusion data. The outputted context awareness results include a series of context awareness tags and their corresponding detailed tag characteristic information, thereby providing a comprehensive understanding of the context behind the data.
By way of example, it may be assumed that a context awareness system based on multi-source data fusion is analyzing sensor data of an aircraft. The following is a specific example:
assuming that the target multi-source perception network has been trained, context awareness is now performed on the newly collected candidate airborne multi-source fusion data. These candidate onboard multi-source fusion data may include data from different sensors on the aircraft, such as weather radar data, engine monitoring system data, avionics data, and the like.
Candidate on-board multisource fusion data: weather radar shows that there is a strong echo signal in the current flight area, implying that there may be precipitation or other weather phenomena, the engine monitoring system reports an increase in engine temperature, but the avionics equipment records current flight altitude, speed and heading information of the aircraft within the normal operating range.
Context awareness process: the target multi-source perception network analyzes the candidate airborne multi-source fusion data and generates a context awareness result. From training and design of the target multisource aware network, the following scenarios may be identified:
context awareness result example:
1. Context awareness label: "Meteorological warning"
Tag characteristic information:
a strong radar return signal indicates potentially severe weather conditions.
No other aircraft nearby are reported to have encountered problems, and therefore there is no immediate threat for the moment.
The proposed procedure is to raise vigilance, taking into account adjustments to the course to avoid possible bad weather.
2. Context awareness label: "Engine State Normal"
Tag characteristic information:
the engine temperature, although rising, is within the safe operating range.
No sign of performance degradation was detected.
The monitoring advice is to continue observing the temperature change, ensuring that it remains at a safe level.
3. Context awareness label: "normal flight parameters"
Tag characteristic information:
the altitude, speed, and heading are consistent with a predetermined flight plan.
There is no immediate need for adjustment.
The system's monitoring advice is to maintain the current route.
In this example, the context awareness results provide the pilot or the automatic flight management system with important information about the current state of the aircraft and the surrounding environment. The context awareness tags give a concise summary of each context, while the tag feature information provides detailed context information and possible operational suggestions. In this way, the method helps to improve flight safety, gives early warning of potential risks, and supports the decision-making process.
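One plausible machine-readable encoding of the example result above (the field names are hypothetical) is:

```python
context_awareness_result = [
    {"tag": "Meteorological warning",
     "features": ["strong radar echo indicates potentially severe weather",
                  "no nearby aircraft reporting problems, no immediate threat",
                  "raise vigilance and consider a course adjustment"]},
    {"tag": "Engine state normal",
     "features": ["temperature rising but within the safe operating range",
                  "no sign of performance degradation",
                  "continue observing the temperature trend"]},
    {"tag": "Normal flight parameters",
     "features": ["altitude, speed and heading match the flight plan",
                  "no immediate adjustment required",
                  "maintain the current route"]},
]
```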
Based on the above steps, accurate perception and identification of aircraft contexts are achieved through effective use of the onboard multi-source fusion training data sequence and the combination of the trained report learning network and the guiding multi-source perception network. First, by fusing supervised and unsupervised data, a rich training data set is obtained that contains the flight data captured by the sensors and the associated flight report data; this data set provides the basis for subsequent context awareness training. On this basis, the trained report learning network with initialized weight parameters analyzes the unsupervised flight report data and generates first trained report context awareness information. Meanwhile, the guiding multi-source perception network with initialized weight parameters processes the unsupervised multi-source sensor data and outputs guiding multi-source context awareness information. The two parts of information complement each other and provide a more complete perspective for context awareness. Further, the guiding multi-source context awareness information that matches the first trained report context awareness information is screened out according to the first trained report context awareness information, generating first fuzzy selection context information. This step strengthens the link between unsupervised and supervised learning, so that more valid information can be learned from unlabeled data. The basic trained multi-source perception network is then trained on the first fuzzy selection context information and the annotation context information of the supervised multi-source sensor data, and is progressively optimized into the target multi-source perception network, which combines the multi-source information and extracts accurate context awareness results through a deep learning algorithm. Finally, when the target multi-source perception network analyzes candidate airborne multi-source fusion data, it outputs context awareness tags with rich feature information. These context awareness results not only help improve flight safety and efficiency, but also provide key information for subsequent flight decisions. Overall, the application makes full use of the combination of supervised and unsupervised data and significantly improves the context awareness capability for airborne multi-source fusion data.
In one possible embodiment, the method further comprises:
step S101, network function layer definition information of a basic training report learning network is obtained.
Step S102, training the basic trained report learning network according to the supervised flight report data and the unsupervised flight report data, and generating the trained report learning network of the initialization weight parameters.
For example, first, a basic neural network model, i.e., the basic trained report learning network, needs to be designed. This basic trained report learning network consists of a number of functional layers, such as convolutional layers, activation layers, and fully connected layers. The structure of these functional layers, such as the number of neurons per layer, the type of activation function, and the manner of connection between layers, must be made explicit; this information is collectively referred to as network function layer definition information.
Next, the base trained report learning network is trained using the supervised and unsupervised flight report data, ultimately resulting in a trained report learning network with initialized weight parameters.
In one possible implementation, step S102 may include:
step S1021, based on the guidance report learning network of the initialized weight parameters, context awareness is performed on the unsupervised flight report data, and one or more guidance report context awareness information is generated.
Step S1022, performing context awareness on the supervised flight report data based on the basic trained report learning network, and generating one or more second trained report context awareness information.
Step S1023, training the basic trained report learning network according to the one or more guidance report context awareness information, the report context annotation information corresponding to the supervised flight report data and the one or more second trained report context awareness information, and generating the trained report learning network of the initialization weight parameter.
For example, a guided report learning network with initial weight parameters is first created. The guidance report learning network is for processing the unsupervised flight report data. It is assumed that in one particular flight, it is recorded that the aircraft has experienced an unknown shock event, but that no relevant tag accounts for the cause of the shock. At this point, the guidance report learning network will analyze the unsupervised flight report data in an attempt to identify possible scenarios behind the shock event and generate guidance report context awareness information.
At the same time, the underlying trained report learning network will process the supervised flight report data. For example, in another flight, the flight report details the cloud encountered and the lightning protection measures taken, which are labeled information. Through these supervised flight reporting data, the base trained report learning network is able to learn and generate second trained report context awareness information.
Finally, the instructional report context awareness information, the report context label information (i.e., correct answers or labels) of the supervised flight report data, and the second trained report context awareness information are used together to train the base trained report learning network. Through the process, the weight parameters of the basic trained report learning network are adjusted and optimized, and finally, the trained report learning network capable of better understanding the initialized weight parameters of the flight report situation is formed.
In this example, the trained report learning network initializing the weight parameters may more accurately extract and identify various contexts from the flight report, whether from tagged supervised data or untagged unsupervised data. This enables the network to more efficiently conduct context awareness and understanding in the face of new flight reports.
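A minimal sketch of one such training step (steps S1021 to S1023) follows, assuming the reports have already been encoded as tensors. Treating the guidance network's outputs as pseudo-labels for a consistency term is one common reading of "training according to the guidance report context awareness information", not the patent's literal algorithm; all function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def report_training_step(base_net, guide_net, sup_reports, sup_labels,
                         unsup_reports, optimizer):
    """Supervised loss on labeled reports plus a pseudo-label loss on
    unlabeled reports, using the guidance network as the teacher."""
    guide_net.eval()
    with torch.no_grad():
        # guidance report context awareness information, as pseudo-labels
        pseudo = guide_net(unsup_reports).argmax(dim=1)
    sup_out = base_net(sup_reports)      # second trained report context info
    unsup_out = base_net(unsup_reports)
    loss = F.cross_entropy(sup_out, sup_labels) + F.cross_entropy(unsup_out, pseudo)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```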
In one possible embodiment, the method further comprises:
step A110, based on the target multi-source perception network, performing context awareness on the unsupervised multi-source sensor data to generate one or more pieces of trained multi-source context awareness information.
Step a120, selecting the one or more guiding report context awareness information according to the one or more trained multi-source context awareness information, and generating one or more second fuzzy selection context information.
For example, each step in the above technical content can be explained step by step through a specific flight scenario.
Assume that a commercial aircraft is flying transoceanically. The aircraft is equipped with a variety of sensor systems including weather radar, avionics, engine monitoring equipment, etc., while also recording flight report data.
In this step, the target multisource aware network will analyze the unsupervised multisource sensor data collected in flight. Assuming that the aircraft encounters unexpected weather changes during flight, the relevant sensors (e.g., weather radar) collect anomalous data, but these data are not previously annotated.
Weather radar captures data for a region of intense airflow that is not predicted, but because of the lack of specific weather models or a priori knowledge, it cannot be directly determined what type of weather phenomenon it represents. The target multi-source aware network analyzes the unsupervised multi-source sensor data and attempts to extract meaningful context information therefrom to generate trained multi-source context awareness information.
Thus, the guidance report context awareness information previously generated from the unsupervised flight report data will be compared with the trained multi-source context awareness information in order to make a selection.
Assuming that the trained report learning network has previously processed similar flight reports, it may learn from them certain weather-related features, even if the reports are unsupervised. When new trained multi-source context awareness information (e.g., unknown airflow regions implied by radar data) is provided to the system, the system checks whether the information matches previous instructional report context awareness information and selects the most relevant information from among them, forming second fuzzy selection context information.
Step S1023 may include: training the basic trained report learning network according to the one or more second fuzzy selection context information, the report context annotation information corresponding to the supervised flight report data and the one or more second trained report context awareness information, and generating the trained report learning network of the initialization weight parameter.
For example, the selected second fuzzy selection context information, reporting context label information corresponding to the supervised flight reporting data, and second trained reporting context awareness information may be used to further train the base trained report learning network to generate a trained report learning network with initialized weight parameters.
The basic trained report learning network is trained jointly with the report context annotation information from the supervised flight report data (e.g., accurately annotated weather conditions from earlier flight missions), the second trained report context awareness information (which may contain knowledge about sudden airflow changes learned from previous unsupervised reports), and the new second fuzzy selection context information (relevant features selected from the radar data for the current unknown airflow region). In this way, the trained report learning network becomes able to better understand and predict the contexts represented by complex and unlabeled flight report data, such as weather changes.
Through the steps, the understanding capability of the system for the various situations in the flight is continuously learned and improved, so that potential flight environment changes can be predicted and responded more accurately in actual operation, and support is provided for flight safety and efficiency.
In one possible implementation, step S140 may include:
step S141, determining a context dimension label corresponding to each piece of guiding multi-source context awareness information and a context dimension label corresponding to each piece of first trained report context awareness information.
Step S142, for each guiding multi-source context awareness information, determining whether there is first trained report context awareness information identical to a context dimension label of the guiding multi-source context awareness information in the one or more first trained report context awareness information.
Step S143, if the context dimension label of the guiding multi-source context awareness information is the same as the context dimension label of any one of the one or more first trained report context awareness information, outputting the guiding multi-source context awareness information as the first fuzzy selection context information.
These steps will be explained using the aircraft example mentioned earlier. In this scenario, there is already a report learning network trained with unsupervised data that outputs first trained report context awareness information, and a perception network for multi-source sensor data that outputs guiding multi-source context awareness information.
First, for each guiding multi-source context awareness information derived from the multi-source sensor data and each first trained report context awareness information, it is necessary to determine their respective context dimension labels. Context dimension tags are high-level descriptions of a context, such as "weather conditions," "heading changes," or "mechanical condition."
For example, a piece of guiding multi-source context awareness information may carry a context dimension tag such as "weather condition: thunderstorm", and an associated piece of first trained report context awareness information may contain the same tag "weather condition: thunderstorm".
Next, for each guiding multi-source context awareness information, all first trained report context awareness information is searched for information with the same context dimension tag.
If a piece of guiding multi-source context awareness information is labeled "mechanical condition: engine shake", it is checked whether any first trained report context awareness information carries the same "mechanical condition: engine shake" tag.
When information matching the context dimension tag of the guiding multi-source context awareness information is found in the first trained report context awareness information, that guiding multi-source context awareness information is selected and marked as first fuzzy selection context information. That is, if a matching item with the tag "mechanical condition: engine shake" is found in the first trained report context awareness information, the corresponding guiding multi-source context awareness information is output as the first fuzzy selection context information.
In practice, this process helps determine which guiding multi-source context awareness information is associated with the known first trained report context awareness information and should be further used to optimize the trained model. This approach improves the accuracy of the selection process and helps the subsequent training phase focus on the most valuable and relevant context information. A minimal sketch of this matching appears below.
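The sketch assumes each piece of awareness information exposes a context dimension tag string; the helper names are illustrative.

```python
def fuzzy_select(guiding_infos, report_infos, tag_of):
    """Keep each guiding multi-source context awareness item whose context
    dimension tag also appears among the first trained report items (steps
    S141-S143). `tag_of` maps an item to its tag, e.g.
    "mechanical condition: engine shake"."""
    report_tags = {tag_of(r) for r in report_infos}                # S141
    return [g for g in guiding_infos if tag_of(g) in report_tags]  # S142-S143
```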
In one possible implementation, step S150 may include:
Step S151, performing context awareness on the unsupervised multisource sensor data based on the basic trained multisource awareness network, so as to generate one or more pieces of first trained multisource context awareness information.
Step S152, performing context awareness on the supervised multi-source sensor data based on the basic trained multi-source awareness network, to generate one or more second trained multi-source context awareness information.
Step S153, performing training error determination on the one or more first trained multisource context awareness information according to each piece of the first fuzzy selection context information, and generating first training error information.
Step S154, performing training error determination on the one or more second trained multisource context awareness information according to the labeling context information corresponding to the supervised multisource sensor data, and generating second training error information.
Step S155, outputting the sum of the first training error information and the second training error information as the training error information of the basic trained multisource perception network.
Step S156, training the basic trained multisource sensing network according to the training error information, and generating the target multisource sensing network.
Each step will be explained below by means of a specific flight scenario.
Assume a commercial aircraft is equipped with various sensors such as radar, optical cameras, infrared sensors, etc. These sensors collect a large amount of flight data, including supervised multisource sensor data (already labeled data) and unsupervised multisource sensor data (unlabeled data).
The basic trained multisource awareness network needs to process two types of data first: unsupervised multisource sensor data and supervised multisource sensor data.
When the underlying trained multisource awareness network processes unsupervised multisource sensor data, such as some anomaly signals captured by the radar, an attempt may be made to understand weather patterns or other flight-related scenarios that these signals may represent, even though these data have no prior annotation information. The information thus generated is referred to as first trained multisource context awareness information.
Next, when the underlying trained multisource awareness network processes the supervised multisource sensor data, for example, the images captured by the cameras are explicitly labeled as a particular type of cloud layer, the underlying trained multisource awareness network generates second trained multisource context awareness information that matches these exact labels.
Next, training errors of the base trained multi-source aware network are determined using the first fuzzy selection context information and the annotation context information.
Based on the first fuzzy selection context information previously selected from the unsupervised data, the accuracy of the first trained multisource context awareness information is evaluated, i.e. a training error is determined, and first training error information is generated.
Likewise, based on the labeling context information of the supervised multi-source sensor data, the accuracy of the second trained multi-source context awareness information is assessed, generating second training error information.
And adding the first training error information and the second training error information to obtain the total training error information of the basic trained multisource perception network.
If the unknown weather pattern indicated by the radar data (first trained multisource context awareness information) does not match the previously selected context information (first fuzzy selection context information), then the training error will be high; meanwhile, if the camera image correctly identifies the cloud layer type (second trained multisource context awareness information), but there is still a small difference, a certain training error may also be generated. These two partial errors are added to form the total training error information.
And then, adjusting and optimizing the basic trained multisource perception network according to the total training error information so as to reduce errors. This optimization process continues until the error is minimized, at which point the target multi-source aware network is obtained.
Through the steps, the target multi-source perception network can more accurately identify and understand various situations encountered in the flight process, so that powerful data support is provided for flight decision, and the flight safety and efficiency are improved.
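Assuming classification-style outputs and cross-entropy as the error measure (the patent does not fix a particular loss function), steps S151 to S155 reduce to a sum of an unsupervised and a supervised term, roughly as follows:

```python
import torch
import torch.nn.functional as F

def total_training_error(net, unsup_x, fuzzy_targets, sup_x, sup_labels):
    """First error against the fuzzy-selected pseudo targets on unsupervised
    data (S153), second error against the annotations on supervised data
    (S154); their sum is the training error of the network (S155).
    `fuzzy_targets` and `sup_labels` are class-index tensors."""
    first_error = F.cross_entropy(net(unsup_x), fuzzy_targets)
    second_error = F.cross_entropy(net(sup_x), sup_labels)
    return first_error + second_error
```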
In a possible implementation manner, the embodiment may further perform cross-context awareness information extraction on the one or more first fuzzy selection context information and the one or more first trained multi-source context awareness information, so as to generate cross-context awareness information.
The first fuzzy selection context information and the first trained multi-source context awareness information may be analyzed to extract common features or interrelated information therebetween to generate cross-context awareness information.
For example, it is assumed that the first fuzzy selection context information includes data related to a certain specific weather phenomenon (e.g., thunderstorm), and the first trained multisource context awareness information also includes features identifying such weather phenomenon. By comparing these two sets of information, it is possible to extract their common features, which constitute cross-context awareness information that more fully describes the data patterns recorded by the various sensors of the aircraft when it encounters a thunderstorm.
Step S153 may include:
step S1531, overlaying the first fuzzy selection context information according to the cross context awareness information.
Step S1532, determining training errors of the one or more first trained multisource context awareness information according to the context awareness information in the cross context awareness information, and generating first training error information.
Next, the extracted cross-context awareness information is used to evaluate the accuracy of the first trained multisource context awareness information, i.e. to determine their training errors, and to generate first training error information.
First, it is ensured that the cross context awareness information sufficiently covers the content of the first fuzzy selection context information; that is, it is confirmed whether the features extracted from the first fuzzy selection context information are accurately represented by the cross context awareness information.
These cross-context awareness information are then used to evaluate the first trained multi-source context awareness information. In particular, it will be checked whether the first trained multisource context awareness information can accurately reflect the features in the cross context awareness information. If inconsistencies or errors are found, these errors are recorded, forming first training error information.
For example, if the cross-context awareness information indicates that during thunderstorm weather, the radar and other sensors of the aircraft should record certain signal patterns, but the first trained multisource context awareness information does not correctly predict or identify those patterns, then a training error is identified and recorded as first training error information.
Through the steps, the context awareness system can be finely adjusted and optimized to reduce misinterpretations of similar contexts in the future. This process helps to improve the accuracy and reliability of the system, thereby enabling the aircraft to better accommodate complex and diverse flight environments.
In one possible implementation, after training over a plurality of training batches, the basic trained multi-source sensing network satisfies a training termination condition. When training of the first training batch of the basic trained multi-source sensing network is performed, the network function layer definition information of the basic trained multi-source sensing network may be randomly initialized, generating an initialized trained multi-source sensing network of the first training batch; during the first training batch, the network function layer definition information of the guiding multi-source sensing network with the initialization weight parameters is set definition parameter information.
Step S150 may include: when training of the first training batch is performed, training the initialized trained multi-source sensing network of the first training batch according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data, and generating a target trained multi-source sensing network of the first training batch.
Consider an aircraft on a long-haul flight, carrying various sensors (e.g., radar, optical cameras, and infrared sensors) that monitor the flight environment. A basic trained multi-source sensing network is being built to analyze the data collected by these sensors, with network performance optimized over successive training batches.
Before the first training batch starts, the network function layer definition information of the basic trained multi-source sensing network needs to be initialized. This means determining the structure of the network, such as the number of layers, the number of neurons per layer and the type of activation function, and setting the initial weight parameters.
In a first flight experiment of the aircraft, a preliminary structure of the multi-source sensing network is set: a series of convolution layers is chosen to process the image data, and LSTM (long short-term memory) layers are chosen to analyze time-series data such as that obtained from the engine sensors. Random initial weight parameters are then assigned to these layers, forming the initialized trained multi-source sensing network.
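A toy version of such an initialized network, with assumed layer sizes and an assumed pairing of a convolutional image branch with an LSTM time-series branch, could look like this (PyTorch assigns random initial weights on construction, which corresponds to the initialized trained multi-source sensing network):

import torch
import torch.nn as nn

class MultiSourcePerceptionNet(nn.Module):
    # Illustrative structure only; every size below is an assumption.
    def __init__(self, num_contexts=8):
        super().__init__()
        # Convolution layers for camera / radar imagery.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # LSTM layer for time-series data such as engine sensor readings.
        self.series_branch = nn.LSTM(input_size=8, hidden_size=32,
                                     batch_first=True)
        self.head = nn.Linear(32 + 32, num_contexts)

    def forward(self, image, series):
        img_feat = self.image_branch(image)      # (B, 32)
        _, (h, _) = self.series_branch(series)   # h: (layers, B, 32)
        return self.head(torch.cat([img_feat, h[-1]], dim=1))

net = MultiSourcePerceptionNet()                 # random initial weights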
The first training batch may then be performed on the initialized trained multi-source sensing network using the first fuzzy selection context information and the labeling context information of the supervised multi-source sensor data, thereby generating the target trained multi-source sensing network of the first training batch.
For example, the first training batch uses first fuzzy selection context information generated from unsupervised data of previous flights, together with multi-source sensor data annotated by an expert. A cloud picture taken by an unmanned aerial vehicle might be labeled "rain clouds" and used for training along with the weather conditions associated with that cloud type. In this way, the network begins to learn how to identify and understand complex flight scenarios from the sensor data.
After a plurality of training batches, when the performance of the basic trained multisource perception network reaches a preset training termination condition, the training process is ended. The training termination condition may be that a certain level of accuracy is achieved or that the error falls within an acceptable range.
Assume that, after several rounds of training, the basic trained multi-source sensing network predicts weather patterns with markedly improved accuracy and its error rate falls below the preset threshold; training then stops, and the final target multi-source sensing network is obtained.
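The termination test itself may be as simple as the sketch below, where the accuracy target and error threshold are placeholder values assumed for illustration:

def training_should_stop(val_accuracy, val_error,
                         accuracy_target=0.95, error_threshold=0.05):
    # Stop once the preset accuracy is reached or the error falls
    # within the acceptable range, whichever happens first.
    return val_accuracy >= accuracy_target or val_error <= error_threshold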
To summarize, in the aircraft example the basic trained multi-source sensing network starts from an initial state and is continuously trained and adjusted through successive training batches until the training termination condition is met, finally yielding an accurate target multi-source sensing network that can effectively use the various sensor data on the aircraft to understand and predict the flight environment.
In one possible embodiment, the method further comprises:
And step C110, when training of the Mth training batch of the basic trained multi-source sensing network is performed, acquiring the target trained multi-source sensing network generated by the (M-1)th training batch.
And step C120, during training of the Mth training batch, outputting the target trained multi-source sensing network generated by the (M-1)th training batch as the guiding multi-source sensing network with the initialization weight parameters.
And step C130, randomly initializing network function layer definition information of the basic trained multi-source sensing network, and generating an initialized trained multi-source sensing network of an Mth training batch.
In step S150, it may include: when training of the Mth training batch is performed, training the initialized trained multi-source sensing network of the Mth training batch according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data, to generate a target trained multi-source sensing network of the Mth training batch. Wherein M is an integer greater than 1 and not greater than X. When M is equal to X, the target trained multi-source sensing network of the Mth training batch forms the target multi-source sensing network, X being the total number of training rounds.
Assume that multiple rounds of training are being performed to optimize the multi-source sensing network of the aircraft. When the Mth training batch is ready to start, the target trained multi-source sensing network generated by the previous batch (the (M-1)th batch) is first acquired. For example, before the 5th training batch begins, the target trained multi-source sensing network from the end of the 4th training batch may be acquired.
Next, this trained network (the target trained multi-source sensing network of the (M-1)th batch) is used as the starting point for the next batch (the Mth batch); that is, its weights and parameters serve as the guiding multi-source sensing network that supplies the initialization weight parameters. For example, the network from training batch 4 provides the initial weights and parameters for the guiding multi-source sensing network of training batch 5.
Next, for new training batches, in addition to using the network weights of the previous batch, some network layers need to be randomly initialized to avoid overfitting and increase the generalization capability of the network.
For example, at the beginning of the 5th training batch, certain layers of the network may need to be reinitialized in addition to utilizing the network weights of the 4th batch, thereby forming the initialized trained multi-source sensing network for the 5th training batch.
The initialized trained multi-source sensing network is then trained using the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data, generating the target trained multi-source sensing network of that round. For example, in training batch 5, the first fuzzy selection context information (such as information about thunderstorm weather characteristics) and the supervised multi-source sensor data (such as sensor readings correctly labeled with specific weather conditions) are used to train the initialized trained multi-source sensing network of the 5th batch. Through this training process, the 5th batch's target trained multi-source sensing network is generated.
This iterative training process continues until the predetermined total number of training rounds X is reached. When M equals X, the target trained multi-source sensing network generated by the last round of training forms the final target multi-source sensing network. Through constant learning and tuning, this network becomes increasingly adept at understanding and predicting the complex scenarios of the aircraft's environment, helping the aircraft carry out safer and more efficient flight tasks.
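The batch-to-batch handover described above (steps C110 to C130 followed by per-batch training) might be organized as in the following sketch; which layers get randomly re-initialized each batch is an illustrative choice (here, the linear layers), since the embodiments leave it open:

import copy
import torch.nn as nn

def run_training_batches(make_net, train_one_batch, total_rounds_X):
    target_net = None
    for m in range(1, total_rounds_X + 1):
        if m == 1:
            net = make_net()          # batch 1: fresh random initialization
        else:
            # Steps C110/C120: the target network from the (M-1)th batch acts
            # as the guiding network supplying the initialization weights.
            net = copy.deepcopy(target_net)
            # Step C130: randomly re-initialize part of the network to curb
            # overfitting (re-initializing linear layers is an assumption).
            for layer in net.modules():
                if isinstance(layer, nn.Linear):
                    layer.reset_parameters()
        # Train with the first fuzzy selection context information and the
        # labeling context information of the supervised sensor data.
        target_net = train_one_batch(net)
    return target_net                 # when M equals X: the target network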
Fig. 2 schematically illustrates an on-board multi-source information fusion system 100 that may be used to implement various embodiments described in the present application.
The on-board multi-source information fusion system 100 shown in fig. 2 includes: a processor 1001 and a memory 1003. The processor 1001 is coupled to the memory 1003, for example via a bus 1002. Optionally, the on-board multi-source information fusion system 100 may further include a transceiver 1004, which may be used for data interaction between this server and other servers, such as transmitting and/or receiving data. It should be noted that in practice the number of transceivers 1004 is not limited to one, and the structure of the on-board multi-source information fusion system 100 does not constitute a limitation on the embodiments of the present application.
The processor 1001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logic blocks, modules and circuits described in connection with this disclosure. The processor 1001 may also be a combination that implements computing functionality, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 1002 may include a path for transferring information between the components. Bus 1002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 1002 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 2, but this does not mean there is only one bus or only one type of bus.
The memory 1003 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store program code and that can be read by a computer.
The memory 1003 is used to store the program code for executing the embodiments of the present application, and its execution is controlled by the processor 1001. The processor 1001 is configured to execute the program code stored in the memory 1003 to implement the steps shown in the foregoing method embodiments.
Embodiments of the present application provide a computer readable storage medium having program code stored thereon, which when executed by a processor, implements the steps of the foregoing method embodiments and corresponding content.
It should be understood that, although various operation steps are indicated by arrows in the flowcharts of the embodiments of the present application, the order in which these steps are implemented is not limited to the order indicated by the arrows. In some implementations of embodiments of the application, the implementation steps in the flowcharts may be performed in other orders based on demand, unless explicitly stated herein. Furthermore, depending on the actual implementation scenario, some or all of the steps in the flowcharts may include multiple sub-steps or multiple stages, some or all of which may be performed at the same time, and each of which may be performed at different times, respectively. In the case of different execution timings, the execution order of the sub-steps or stages may be flexibly configured based on requirements, which is not limited by the embodiment of the present application.
The foregoing is merely an optional implementation manner of some of the implementation scenarios of the present application, and it should be noted that, for those skilled in the art, other similar implementation manners according to the technical idea of the present application may be adopted without departing from the technical idea of the solution of the present application, which is also within the protection scope of the embodiments of the present application.

Claims (10)

1. An airborne multisource information fusion method, which is characterized by comprising the following steps:
Acquiring an onboard multi-source fusion training data sequence, wherein the onboard multi-source fusion training data sequence comprises supervised multi-source fusion training data and unsupervised multi-source fusion training data, the supervised multi-source fusion training data comprises supervised multi-source sensor data and supervised flight report data corresponding to the supervised multi-source sensor data, and the unsupervised multi-source fusion training data comprises unsupervised multi-source sensor data and unsupervised flight report data corresponding to the unsupervised multi-source sensor data;
Based on a trained report learning network of an initialized weight parameter, performing context awareness on the unsupervised flight report data to generate one or more pieces of first trained report context awareness information;
based on a guiding multi-source sensing network of the initialized weight parameters, performing context sensing on the unsupervised multi-source sensor data to generate one or more guiding multi-source context sensing information;
selecting the one or more guiding multi-source context awareness information according to the one or more first trained report context awareness information, and generating one or more first fuzzy selection context information;
Training a basic trained multi-source sensing network according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data to generate a target multi-source sensing network;
Acquiring candidate airborne multisource fusion data, and performing context awareness on the candidate airborne multisource fusion data based on the target multisource awareness network to generate context awareness results; the context awareness result comprises one or more context awareness tags and tag feature information corresponding to each context awareness tag.
2. The on-board multi-source information fusion method of claim 1, further comprising:
Acquiring network function layer definition information of a basic training report learning network;
and training the basic trained report learning network according to the supervised flight report data and the unsupervised flight report data to generate the trained report learning network of the initialization weight parameter.
3. The method of claim 2, wherein training the basic trained report learning network according to the supervised flight report data and the unsupervised flight report data to generate the trained report learning network for initializing weight parameters comprises:
Based on a guidance report learning network of the initialized weight parameters, performing context awareness on the unsupervised flight report data to generate one or more guidance report context awareness information;
performing context awareness on the supervised flight report data based on a basic trained report learning network, and generating one or more second trained report context awareness information;
Training the basic trained report learning network according to the one or more guidance report context awareness information, the report context annotation information corresponding to the supervised flight report data and the one or more second trained report context awareness information, and generating the trained report learning network of the initialization weight parameter.
4. The on-board multi-source information fusion method of claim 3, further comprising:
Based on the target multi-source perception network, performing context awareness on the unsupervised multi-source sensor data to generate one or more pieces of trained multi-source context awareness information;
selecting the one or more guidance report context awareness information according to the one or more trained multi-source context awareness information to generate one or more second fuzzy selection context information;
The training the basic trained report learning network according to the one or more guidance report context awareness information, the report context label information corresponding to the supervised flight report data and the one or more second trained report context awareness information, and generating the trained report learning network of the initialization weight parameter, including:
training the basic trained report learning network according to the one or more second fuzzy selection context information, the report context annotation information corresponding to the supervised flight report data and the one or more second trained report context awareness information, and generating the trained report learning network of the initialization weight parameter.
5. The method of on-board multi-source information fusion according to claim 1, wherein selecting the one or more guiding multi-source context awareness information according to the one or more first trained report context awareness information, generating one or more first fuzzy selection context information, comprises:
Determining a context dimension label corresponding to each piece of guiding multi-source context awareness information and a context dimension label corresponding to each piece of first trained report context awareness information;
Determining, for each guiding multi-source context awareness information, whether first trained report context awareness information identical to a context dimension label of the guiding multi-source context awareness information exists in the one or more first trained report context awareness information;
And outputting the guiding multi-source context awareness information as the first fuzzy selection context information if the context dimension label of the guiding multi-source context awareness information is the same as the context dimension label of any one of the one or more first trained report context awareness information.
6. The method of claim 1, wherein training the basic trained multisource perception network according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multisource sensor data to generate the target multisource perception network comprises:
Based on the basic trained multi-source sensing network, performing context sensing on the unsupervised multi-source sensor data to generate one or more pieces of first trained multi-source context sensing information;
Based on the basic trained multi-source sensing network, performing context sensing on the supervised multi-source sensor data to generate one or more pieces of second trained multi-source context sensing information;
performing training error determination on the one or more first trained multi-source context awareness information according to each piece of first fuzzy selection context information to generate first training error information;
According to the labeling context information corresponding to the supervised multi-source sensor data, training error determination is carried out on the one or more second trained multi-source context awareness information, and second training error information is generated;
Outputting the sum of the first training error information and the second training error information as training error information of the basic trained multisource perception network;
and training the basic trained multi-source sensing network according to the training error information to generate the target multi-source sensing network.
7. The on-board multi-source information fusion method of claim 6, further comprising:
Performing cross scene perception information extraction on the one or more first fuzzy selection scene information and the one or more first trained multi-source scene perception information to generate cross scene perception information;
The training error determination is performed on the one or more first trained multi-source context awareness information according to each piece of the first fuzzy selection context information, and first training error information is generated, including:
Covering the first fuzzy selection context information according to the cross context awareness information;
And training error determination is carried out on the one or more first trained multi-source context awareness information according to the context awareness information in the cross context awareness information, and first training error information is generated.
8. The on-board multi-source information fusion method of any one of claims 1-7, wherein after training for a plurality of training batches, the basic trained multi-source sensing network satisfies a training termination condition, the method further comprising:
Randomly initializing network function layer definition information of the basic trained multi-source sensing network when training of a first training batch of the basic trained multi-source sensing network is carried out, generating an initialized trained multi-source sensing network of the first training batch, wherein, when training of the first training batch is carried out, the network function layer definition information of the guiding multi-source sensing network with the initialization weight parameters is set definition parameter information;
Training the basic trained multi-source sensing network according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data, wherein the training comprises the following steps:
And when the first training batch is trained, training the initialized trained multi-source sensing network of the first training batch according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data, and generating a target trained multi-source sensing network of the first training batch.
9. The on-board multi-source information fusion method of claim 8, further comprising:
when training of the Mth training batch of the basic trained multi-source sensing network is carried out, acquiring a target trained multi-source sensing network generated by the (M-1)th training batch;
Outputting the target trained multi-source sensing network generated by the (M-1)th training batch as a guiding multi-source sensing network of an initialization weight parameter during training of the Mth training batch;
Randomly initializing network function layer definition information of the basic trained multi-source sensing network to generate an initialized trained multi-source sensing network of the Mth training batch;
Training the basic trained multi-source sensing network according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data, wherein the training comprises the following steps:
When training of the Mth training batch is carried out, training the initialized trained multi-source sensing network of the Mth training batch according to the first fuzzy selection context information and the labeling context information corresponding to the supervised multi-source sensor data to generate a target trained multi-source sensing network of the Mth training batch; wherein M is an integer greater than 1 and not greater than X; and when M is equal to X, the target trained multi-source sensing network of the Mth training batch forms the target multi-source sensing network, X being the total number of training rounds.
10. An on-board multi-source information fusion system comprising a processor and a computer readable storage medium storing machine executable instructions that when executed by the processor implement the on-board multi-source information fusion method of any one of claims 1-9.
CN202410347864.4A 2024-03-26 2024-03-26 Airborne multisource information fusion method and system Active CN117951648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410347864.4A CN117951648B (en) 2024-03-26 2024-03-26 Airborne multisource information fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410347864.4A CN117951648B (en) 2024-03-26 2024-03-26 Airborne multisource information fusion method and system

Publications (2)

Publication Number Publication Date
CN117951648A true CN117951648A (en) 2024-04-30
CN117951648B CN117951648B (en) 2024-06-07

Family

ID=90802015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410347864.4A Active CN117951648B (en) 2024-03-26 2024-03-26 Airborne multisource information fusion method and system

Country Status (1)

Country Link
CN (1) CN117951648B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542354A (en) * 2010-12-31 2012-07-04 中国科学院研究生院 Optimal decision method based on situation analysis and hierarchy analysis
CN109857879A (en) * 2018-12-20 2019-06-07 杭州英歌智达科技有限公司 A kind of face retrieval method based on GAN
CN111191611A (en) * 2019-12-31 2020-05-22 同济大学 Deep learning-based traffic sign label identification method
US20210209939A1 (en) * 2020-12-08 2021-07-08 Harbin Engineering University Large-scale real-time traffic flow prediction method based on fuzzy logic and deep LSTM
CN113256680A (en) * 2021-05-13 2021-08-13 燕山大学 High-precision target tracking system based on unsupervised learning
CN114299328A (en) * 2021-12-08 2022-04-08 重庆邮电大学 Environment self-adaptive sensing small sample endangered animal detection method and system
CN114663818A (en) * 2022-04-06 2022-06-24 中国民航科学技术研究院 Airport operation core area monitoring and early warning system and method based on vision self-supervision learning
CN114816468A (en) * 2022-03-07 2022-07-29 珠高智能科技(深圳)有限公司 Cloud edge coordination system, data processing method, electronic device and storage medium
CN114926726A (en) * 2022-07-20 2022-08-19 陕西欧卡电子智能科技有限公司 Unmanned ship sensing method based on multitask network and related equipment
US20230074640A1 (en) * 2021-09-07 2023-03-09 International Business Machines Corporation Duplicate scene detection and processing for artificial intelligence workloads
CN115879535A (en) * 2023-02-10 2023-03-31 北京百度网讯科技有限公司 Training method, device, equipment and medium for automatic driving perception model
WO2023091730A1 (en) * 2021-11-19 2023-05-25 Georgia Tech Research Corporation Building envelope remote sensing drone system and method
CN116523104A (en) * 2023-03-17 2023-08-01 厦门大学 Abnormal group flow prediction method and device based on context awareness and deep learning
CN116679938A (en) * 2023-06-06 2023-09-01 福建师范大学 LLVM compiling option sequence two-stage optimization method and system
CN116797789A (en) * 2023-06-13 2023-09-22 长春理工大学 Scene semantic segmentation method based on attention architecture
CN116842127A (en) * 2023-08-31 2023-10-03 中国人民解放军海军航空大学 Self-adaptive auxiliary decision-making intelligent method and system based on multi-source dynamic data
CN117152715A (en) * 2023-08-25 2023-12-01 广西民族大学 Panoramic driving perception system and method based on improved YOLOP

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
XINDE LI et al.: "Multi-source information fusion: Progress and future", 《CHINESE JOURNAL OF AERONAUTICS》, 15 December 2023 (2023-12-15), pages 1 - 25 *
WU WEI et al.: "Research on methods of extracting ecological disturbance information from satellite remote-sensing data", 《e-Science Technology & Application》, vol. 8, no. 3, 20 May 2017 (2017-05-20), pages 37 - 43 *
SUN QUANMING: "Research on context-aware multi-modal travel recommendation methods", 《China Master's Theses Full-text Database, Engineering Science and Technology II》, no. 3, 15 March 2022 (2022-03-15), pages 034 - 1546 *
SHENLAN COLLEGE: "Deep-learning-based multi-view geometry: from supervised learning to unsupervised learning", pages 1 - 9, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/123328865> *
WANG YUE: "Research on the influence of the external human-machine interface of autonomous vehicles on driving behavior in unprotected left turns", 《China Master's Theses Full-text Database, Engineering Science and Technology II》, no. 2, 15 February 2024 (2024-02-15), pages 035 - 536 *

Also Published As

Publication number Publication date
CN117951648B (en) 2024-06-07

Similar Documents

Publication Publication Date Title
CN107292386B (en) Vision-based rain detection using deep learning
CN108133172B (en) Method for classifying moving objects in video and method and device for analyzing traffic flow
KR20180107930A (en) Method and system for artificial intelligence based video surveillance using deep learning
Pinto et al. Case-based reasoning approach applied to surveillance system using an autonomous unmanned aerial vehicle
US11670182B2 (en) Systems and methods for intelligently displaying aircraft traffic information
Guerin et al. Unifying evaluation of machine learning safety monitors
US20240046614A1 (en) Computer-implemented method for generating reliability indications for computer vision
Ihekoronye et al. Aerial supervision of drones and other flying objects using convolutional neural networks
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium
CN117951648B (en) Airborne multisource information fusion method and system
CN117475266A (en) Robot vision perception method and device based on multi-expert attention fusion
US10467474B1 (en) Vehicle track detection in synthetic aperture radar imagery
US20230260259A1 (en) Method and device for training a neural network
CN114550107B (en) Bridge linkage intelligent inspection method and system based on unmanned aerial vehicle cluster and cloud platform
CN116777062A (en) Extreme difficult-case-oriented self-adaptive fusion learning automatic driving safety decision-making method
CN106599865A (en) Disconnecting link state recognition device and method
Janousek et al. Deep neural network for precision landing and variable flight planning of autonomous UAV
US20220406199A1 (en) Method and device for supervising a traffic control system
WO2023193923A1 (en) Maritime traffic management
JP6934913B2 (en) Surveillance systems, image management devices, flying objects, and programs
Tsekhmystro et al.: Investigation of the effect of object size on accuracy of human localisation in images acquired from unmanned aerial vehicles
EP4307245A1 (en) Methods and systems for object classification and location
EP4287077A1 (en) Method and apparatus for testing an artificial neural network using surprising inputs
KR102561793B1 (en) System and method for recognition of atypical obstacle system and computer-readable recording medium including the same
EP4394632A1 (en) Incident confidence level

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant