CN112966773B - Unmanned aerial vehicle flight condition mode identification method and system


Info

Publication number
CN112966773B
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
flight
mode
working condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110311072.8A
Other languages
Chinese (zh)
Other versions
CN112966773A (en)
Inventor
杜航原
白亮
王文剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Shuoming Technology Co ltd
Original Assignee
Shanxi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi University
Priority to CN202110311072.8A
Publication of CN112966773A
Application granted
Publication of CN112966773B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/10 Simultaneous control of position or course in three dimensions
    • G05D 1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/24155 Bayesian classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of unmanned aerial vehicle control and provides a method and a system for identifying the flight condition mode of an unmanned aerial vehicle during flight. The identification process comprises three main links: an unmanned aerial vehicle flight condition training data acquisition stage, an unmanned aerial vehicle working condition mode offline classification stage and an unmanned aerial vehicle working condition mode online matching stage. By constructing a working condition data network over multiple flights of the unmanned aerial vehicle, the working condition states of the unmanned aerial vehicle at different moments of the flight process are recorded and the complex correlation relationships between the working condition states are effectively expressed, so that a flight condition mode recognition result with higher robustness and interpretability can be obtained. In addition, the flight condition mode classification model is built with a graph variational auto-encoder structure, which gives the model a certain generative ability and gives the flight condition mode recognition process a stronger generalization ability.

Description

Unmanned aerial vehicle flight condition mode identification method and system
Technical Field
The invention relates to the technical field of unmanned aerial vehicle control, in particular to an unmanned aerial vehicle flight condition mode identification method and system.
Background
An unmanned aerial vehicle is an aircraft with no pilot on board, controlled by remote radio signals or by trajectory planning software carried on the vehicle. Compared with traditional manned aircraft, unmanned aerial vehicles offer markedly improved autonomy and survivability: they can replace humans in task operations under various severe environments, do not put a pilot's life at risk, can be used to execute high-risk tasks, and are widely applied in military, engineering, scientific research and many other fields. To ensure that an unmanned aerial vehicle can efficiently complete a given task, the stability of its health state during flight and operation is very important. Because there is no pilot on board making real-time decisions, unmanned aerial vehicle operators often find it difficult to perceive a deteriorating condition trend in time or to make a correct fault judgment before a fault occurs. Therefore, effectively identifying the flight condition mode of the unmanned aerial vehicle, and building a fault prediction model on that basis, has become one of the core problems in unmanned aerial vehicle flight control.
According to the identification mechanism, common working condition mode identification methods can be divided into model-dependent and model-independent methods. The former may suffer from low identification accuracy when the prior model does not match the characteristics of the actual flight conditions; the latter constructs the identification model from flight state information acquired in real time and therefore has relatively strong adaptive capability and good application prospects. At the same time, because mode category label information is unavailable during flight and complex association relationships often exist among the unmanned aerial vehicle state variables collected in real time, designing a reliable and effective model-independent working condition mode identification method remains considerably difficult, and solving this technical problem has important theoretical and practical value.
Disclosure of Invention
The invention provides a method and a system for identifying the flight condition mode of an unmanned aerial vehicle, aiming at the above problems; the method and the system can effectively and reliably identify the flight condition mode of the unmanned aerial vehicle when prior class label information is lacking and the unmanned aerial vehicle state variables have complex association relationships.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a method for identifying a flight condition mode of an unmanned aerial vehicle, which mainly comprises three stages of unmanned aerial vehicle flight condition training data acquisition, unmanned aerial vehicle working condition mode offline classification and unmanned aerial vehicle working condition mode online matching.
Further, the unmanned aerial vehicle flight condition mode identification method specifically comprises the following steps:
s1, constructing an unmanned aerial vehicle flight condition training data set;
s2, constructing an unmanned aerial vehicle flight condition training data network on the basis of the unmanned aerial vehicle flight condition training data set, and recording the working condition states of the unmanned aerial vehicle at different moments in the flight process and the correlation relationship between the working condition states;
s3, constructing a classification model of the flight condition mode of the unmanned aerial vehicle by using a graph variation self-encoder based on the unmanned aerial vehicle flight condition training data network obtained in the step S2;
s4, carrying out model solution on the unmanned aerial vehicle flight condition mode classification model constructed in the step S3 in an iterative calculation mode, and determining the optimal parameters of the model;
s5, classifying the flight condition modes of the unmanned aerial vehicle based on the flight condition training data network of the unmanned aerial vehicle by using the flight condition mode classification model of the unmanned aerial vehicle constructed in the step S3 and the optimal parameters of the model determined in the step S4;
s6, matching the online acquired unmanned aerial vehicle flight condition data with the unmanned aerial vehicle flight condition mode classification result, and determining the real-time flight condition mode of the unmanned aerial vehicle.
Further, the construction of the unmanned aerial vehicle flight condition training data set in step S1 comprises the following specific steps:
S11, acquiring and recording, at a fixed frequency over multiple flights, the key state signals output by the controller, the actuators and the sensors of the unmanned aerial vehicle, including: course angle, pitch angle, roll angle, the corresponding angular velocities, height, vertical velocity, airspeed and main board temperature;
S12, constructing the original state vector of the unmanned aerial vehicle at each moment in the flight process from the key state signals acquired at that moment, each component of the state vector being the numerical value of one key state signal; the original state vector of the unmanned aerial vehicle at time t is recorded as s_t, and the set of all original state vectors over the whole signal acquisition process is recorded as
S = {s_t | t = 1, 2, …, T}
where T represents the number of key state signal acquisitions;
S13, normalizing the original state vectors of the unmanned aerial vehicle using formula (1), which converts the different signal components of the original state vector into the range [-1, 1]:
x_t = (s_t − μ_S) / δ_S    (1)
where x_t denotes the flight condition state feature vector of the unmanned aerial vehicle at time t obtained by processing the original state vector s_t, referred to as the feature vector for short, μ_S denotes the expectation of the data distribution in the set S, and δ_S denotes the standard deviation of the data distribution in the set S; the unmanned aerial vehicle flight condition training data set is constructed from all feature vectors over the whole signal acquisition process and is recorded as
X = {x_t | t = 1, 2, …, T}
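As a concrete illustration of steps S11-S13 above, the following Python sketch builds the normalized training set X from a matrix of raw state vectors; the array shapes, the 2 Hz / 1000 s figures and the random stand-in data are illustrative assumptions rather than part of the patent.

```python
import numpy as np

def build_training_set(raw_states: np.ndarray) -> np.ndarray:
    """Steps S12-S13 (sketch): raw_states is the T x D matrix S of original
    state vectors s_t; returns the T x D matrix X of feature vectors x_t
    normalized as in formula (1), x_t = (s_t - mu_S) / delta_S."""
    mu_s = raw_states.mean(axis=0)              # expectation of the data in S
    delta_s = raw_states.std(axis=0) + 1e-12    # standard deviation (guard against division by zero)
    return (raw_states - mu_s) / delta_s

if __name__ == "__main__":
    T, D = 2000, 8                     # e.g. 1000 s of flight sampled at 2 Hz, 8 key state signals
    S = np.random.randn(T, D)          # stand-in for the recorded key state signals
    X = build_training_set(S)
    print(X.shape)                     # (2000, 8)
```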
The step S2 includes the following specific steps:
s21, traversing the training data set of the flight condition of the unmanned aerial vehicle, and calculating the similarity between any two characteristic vectors by using the formula (2):
[formula (2): the similarity expression is not reproduced in the source]
where x_i and x_j respectively denote the feature vectors of the unmanned aerial vehicle at moment i and moment j of the flight process, ||x_i − x_j||_2 is the Euclidean distance between the feature vectors x_i and x_j, γ is the sampling interval during signal acquisition, and Sim_ij denotes the similarity between x_i and x_j;
S22, constructing the unmanned aerial vehicle flight condition correlation matrix A = [Sim_ij]_{T×T} with the similarities between feature vectors as its elements;
S23, constructing an unmanned aerial vehicle flight condition training data network Net (X, A) by using the unmanned aerial vehicle flight condition data set X and the correlation matrix A, and recording the working condition states of the unmanned aerial vehicle at different moments in the flight process and the correlation relationship between the working condition states.
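A possible realization of steps S21-S23 is sketched below. Because the exact similarity expression of formula (2) is not reproduced in the source, a Gaussian kernel over the Euclidean distance with the sampling interval γ as bandwidth is used purely as a stand-in; the function names are illustrative.

```python
import numpy as np

def correlation_matrix(X: np.ndarray, gamma: float) -> np.ndarray:
    """Builds the T x T correlation matrix A = [Sim_ij] from pairwise similarities.
    Stand-in similarity: a Gaussian kernel over ||x_i - x_j||_2 with bandwidth gamma
    (the patent's formula (2) is not reproduced in the source)."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # ||x_i - x_j||_2
    return np.exp(-(dist ** 2) / (2.0 * gamma ** 2))

def build_network(X: np.ndarray, gamma: float = 0.5):
    """Step S23: the flight condition training data network Net(X, A)."""
    return X, correlation_matrix(X, gamma)
```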
The classification model of the flight condition mode of the unmanned aerial vehicle in the step S3 includes an encoder, a decoder, and a classifier, and the overall structure of the classification model is as shown in fig. 1, and the step S3 includes the following steps:
s31, the encoder is used for mapping the unmanned aerial vehicle flight condition training data network Net (X, A) to a set Z formed by embedded vectors in a low-dimensional feature space, and the formalization representation of the encoding process is shown as formula (3):
q(Z | X, A) = ∏_{t=1}^{T} q(z_t | x_t, A)    (3)
where Z = {z_t | t = 1, 2, …, T} is the set of all embedded vectors and z_t denotes the embedded vector in the low-dimensional feature space corresponding to the feature vector x_t, sampled from the distribution given by formula (4):
q(z_t | x_t, A) = N(z_t | μ_{z,t}, σ²_{z,t})    (4)
where μ_{z,t} and σ²_{z,t} are respectively the expectation and the variance of the distribution of the embedded vector z_t, computed by two 2-layer graph convolution networks with the same structure, namely:
μ_{z,t} = GCN_μ(x_t, A)    (5)
log σ_{z,t} = GCN_σ(x_t, A)    (6)
where GCN_μ() and GCN_σ() denote the graph convolution networks that compute the expectation and the variance of the embedded vector distribution respectively; they have the same structure, defined by formula (7):
GCN(X, A) = Gconv(ReLU(Gconv(A, X; W_0)); W_1)    (7)
where Gconv() denotes a graph convolution layer; W_0 and W_1 are the connection weight matrices of the first-layer and second-layer graph convolutions respectively, are the undetermined parameters of the classification model, and are determined in a subsequent step by feeding in the unmanned aerial vehicle flight condition training data network; ReLU() is the activation function defined by formula (8):
ReLU(x) = max(0, x)    (8)
S32, the decoder is used to reconstruct the embedded vector set Z into a reconstructed network relationship Â, i.e. to model p(Â | Z); the decoder is defined by formula (9):
[formula (9): the decoder expression is not reproduced in the source]
where σ() is a Dirac function.
S33, the classifier is used to divide the embedded vectors corresponding to the unmanned aerial vehicle flight condition feature vectors into the corresponding mode classes; the class division result of the classifier is recorded as C* = {C_1, C_2, …, C_K}, where K is the number of mode classes. The classifier is constructed from a Gaussian mixture model, formalized as
p(z) = Σ_{k=1}^{K} π_k N(z | μ_k, σ²_k)
where π_k is the prior distribution probability of mode class C_k, the vector formed by the prior distribution probabilities of all mode classes is denoted π = [π_1, π_2, …, π_K], N(z | μ_k, σ²_k) is the Gaussian distribution component corresponding to mode class C_k in the Gaussian mixture model, and μ_k and σ²_k are respectively the expectation and the variance of the Gaussian distribution component corresponding to C_k.
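To make the encoder / decoder / classifier structure of step S3 concrete, here is a minimal PyTorch sketch of a graph variational auto-encoder with a Gaussian-mixture classification head. The normalized adjacency input, the inner-product decoder and all class and parameter names are assumptions; in particular, because the patent's decoder of formula (9) is not reproduced in the source, a standard inner-product decoder is used as a stand-in.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution layer: Gconv(A, X; W) = A_hat @ X @ W."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, a_hat):
        return a_hat @ self.lin(x)

class GraphVAEClassifier(nn.Module):
    """Sketch of the step S3 model: GCN encoder (formulas (5)-(7)),
    stand-in inner-product decoder, and Gaussian-mixture classifier (S33)."""
    def __init__(self, in_dim=8, hid_dim=16, z_dim=2, n_classes=6):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hid_dim)
        self.gc_mu = GCNLayer(hid_dim, z_dim)        # GCN_mu of formula (5)
        self.gc_logsig = GCNLayer(hid_dim, z_dim)    # GCN_sigma of formula (6)
        # Gaussian mixture parameters pi_k, mu_k, sigma_k of the classifier
        self.log_pi = nn.Parameter(torch.zeros(n_classes))
        self.mu_k = nn.Parameter(torch.randn(n_classes, z_dim))
        self.logsig_k = nn.Parameter(torch.zeros(n_classes, z_dim))

    def encode(self, x, a_hat):
        h = torch.relu(self.gc1(x, a_hat))           # formula (7)
        return self.gc_mu(h, a_hat), self.gc_logsig(h, a_hat)

    def decode(self, z):
        # Stand-in for formula (9): inner-product reconstruction of the adjacency.
        return torch.sigmoid(z @ z.t())

    def forward(self, x, a_hat):
        mu, logsig = self.encode(x, a_hat)
        z = mu + torch.randn_like(mu) * torch.exp(logsig)   # z ~ N(mu, sigma^2), reparameterized
        return self.decode(z), mu, logsig, z
```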
The step S4 includes the following specific steps:
S41, constructing the log-likelihood function of the unmanned aerial vehicle flight conditions on the basis of the mode classification model established in S3, as shown in formula (10):
log p(X) = Σ_{t=1}^{T} log p(x_t)    (10)
where log p(x_t) is the log-likelihood function of the unmanned aerial vehicle feature vector x_t; for any unmanned aerial vehicle feature vector x, the Evidence Lower Bound (ELBO) of the log-likelihood function log p(x) is obtained using the Jensen inequality, as shown in formula (11):
log p(x) ≥ E_{q(z,C_k|x,a)}[ log ( p(a, z, C_k) / q(z, C_k | x, a) ) ] = L_ELBO(x)    (11)
where L_ELBO(x) is the ELBO of the log-likelihood function of the unmanned aerial vehicle feature vector x, z is the embedded vector corresponding to x in the low-dimensional space, a is the adjacency vector corresponding to x in the adjacency matrix A, and p(a, z, C_k) is calculated by formula (12):
p(a, z, C_k) = p(a | z) p(z | C_k) p(C_k)    (12)
where p(a | z) is defined by formula (13):
p(a | z) = N(a | μ_A, σ²_A)    (13)
where μ_A and σ²_A are respectively the expectation and the variance of the reconstruction relationship, obtained from the decoder of formula (9);
p(z | C_k) is defined by formula (14):
p(z | C_k) = N(z | μ_k, σ²_k)    (14)
where μ_k and σ²_k are calculated by formula (5) and formula (6) respectively; p(C_k) is defined by formula (15):
p(C_k) = Cat(C_k | π)    (15)
where Cat(C_k | π) is the unmanned aerial vehicle mode class distribution function, and q(z, C_k | x, a) in formula (11) is the variational posterior approximation of the true posterior distribution p(z, C_k | x, a);
S42, transforming the evidence lower bound of formula (11) into formula (16) by means of the Monte Carlo stochastic gradient descent variational Bayes (Monte Carlo SGVB) operator:
[formula (16): the Monte Carlo SGVB form of the evidence lower bound is not reproduced in the source]
where M is the number of samples of the Monte Carlo SGVB operator, D is the dimension of the unmanned aerial vehicle feature vector, R is the dimension of the embedded vector, x_d is the d-th component of the unmanned aerial vehicle feature vector x, μ_{A,d}^{(m)} is the d-th component of μ_A in the m-th sample, σ²_{z|r} and σ²_{k|r} are respectively the r-th components of σ²_z and σ²_k, μ_{z|r} is the r-th component of μ_z, and μ_{k|r} is the r-th component of μ_k;
S43, initializing the unmanned aerial vehicle flight condition mode classification model of step S3 using Gaussian-distribution initialization;
given an iteration number L, steps S44-S49 are executed iteratively until the iteration number is reached, at which point training of the unmanned aerial vehicle flight condition mode classification model is finished and the optimal parameters of the model are obtained;
S44, for each feature vector in the unmanned aerial vehicle flight condition training data network Net(X, A) and its adjacency vector, calculating the expectation and the variance of the corresponding embedded vector in the low-dimensional feature space using formula (5) and formula (6);
S45, randomly selecting one mode class from all the unmanned aerial vehicle flight condition mode classes, recorded as C_k, and calculating the model output value of this iteration using formula (17):
[formula (17): the model output expression is not reproduced in the source]
S46, sampling an embedded vector z_t from the distribution associated with the selected mode class C_k according to formula (18):
[formula (18): the sampling distribution is not reproduced in the source]
S47, reconstructing the embedded vectors in the low-dimensional feature space into the network relationship using the decoder of formula (9);
S48, calculating the evidence lower bound L_ELBO(x) of the log-likelihood function of the unmanned aerial vehicle feature vector x using formula (16);
And S49, performing back propagation by using a gradient method, and updating a connection weight matrix in the unmanned aerial vehicle flight condition mode classification model.
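A minimal training loop in the spirit of steps S43-S49 is sketched below, reusing the GraphVAEClassifier sketched after step S3. Because the Monte Carlo SGVB bound of formula (16) is not reproduced in the source, the loss keeps only an adjacency-reconstruction term and a standard Gaussian KL term; it is a simplification under stated assumptions, not the patent's exact objective.

```python
import torch
import torch.nn.functional as F

def neg_elbo(a, a_rec, mu, logsig):
    """Simplified negative evidence lower bound: adjacency reconstruction
    plus KL(q(z|x,a) || N(0, I)). Stand-in for formula (16)."""
    rec = F.binary_cross_entropy(a_rec, a, reduction="sum")
    kl = -0.5 * torch.sum(1 + 2 * logsig - mu.pow(2) - torch.exp(2 * logsig))
    return rec + kl

def train(model, x, a, a_hat, n_iters=200, lr=1e-3):
    """Steps S43-S49 (sketch): iterate encoding, sampling, reconstruction, backprop."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_iters):
        a_rec, mu, logsig, _ = model(x, a_hat)   # S44-S47: encode, sample, reconstruct
        loss = neg_elbo(a, a_rec, mu, logsig)    # S48: (negative) evidence lower bound
        opt.zero_grad()
        loss.backward()                          # S49: back-propagate gradients
        opt.step()
    return model
```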
The step S5 includes the following specific steps:
S51, formula (16) can be transformed into formula (19):
[formula (19): the transformed evidence lower bound is not reproduced in the source]
where p(z | C_k) is the Gaussian prior distribution of the embedded vectors and KL[p(C_k | z) || q(C_k | x, a)] is the KL divergence between p(C_k | z) and q(C_k | x, a); by the definition of the KL divergence, the second term of formula (19) is non-negative, so when the evidence lower bound L_ELBO(x) takes its maximum value, formula (20) holds:
KL[p(C_k | z) || q(C_k | x, a)] = 0    (20)
so that the posterior distribution of the unmanned aerial vehicle mode classification can be calculated by formula (21):
q(C_k | x, a) = p(C_k | z) = p(C_k) p(z | C_k) / Σ_{k′=1}^{K} p(C_{k′}) p(z | C_{k′})    (21)
S52, after the optimal parameters of the unmanned aerial vehicle flight condition mode classification model have been obtained by iteratively executing the training process of steps S44-S49, the unmanned aerial vehicle flight condition mode classification result is obtained using formula (21); this result gives the probability with which each feature vector in the unmanned aerial vehicle flight condition data set is assigned to each working condition mode, i.e. a soft classification result with strong robustness and interpretability; on this basis, the soft classification result can be converted into a hard classification result by setting a threshold according to actual requirements;
S53, calculating the center feature vector of each working condition mode class in the unmanned aerial vehicle flight condition data set, as shown in formula (22):
x̄_k = (1 / N_k) Σ_{x_t ∈ C_k} x_t    (22)
where x̄_k denotes the center feature vector of the k-th working condition mode class and N_k is the number of feature vectors contained in the k-th working condition mode class.
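The posterior of formula (21) and the class centers of formula (22) can be computed as follows; diagonal Gaussian components and the variable names (pi, mu_k, sig_k) are assumptions carried over from the model sketch above.

```python
import numpy as np

def soft_assignments(z, pi, mu_k, sig_k):
    """Formula (21) (sketch): p(C_k | z) proportional to pi_k * N(z | mu_k, sig_k^2 I).
    z: T x R embeddings; pi: K priors; mu_k, sig_k: K x R mixture parameters."""
    z = z[:, None, :]                                                   # T x 1 x R
    log_n = -0.5 * (((z - mu_k) / sig_k) ** 2 + np.log(2 * np.pi * sig_k ** 2)).sum(-1)
    log_p = np.log(pi) + log_n                                          # T x K, unnormalized
    log_p -= log_p.max(axis=1, keepdims=True)                           # numerical stabilization
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)                             # soft classification result

def class_centers(X, p, threshold=0.5):
    """Formula (22) (sketch): mean feature vector per class after thresholding the
    soft result into a hard assignment; an empty class would yield a NaN row."""
    hard = p.argmax(axis=1)
    keep = p.max(axis=1) >= threshold
    return np.stack([X[(hard == k) & keep].mean(axis=0) for k in range(p.shape[1])])
```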
The specific method of the step S6 includes the following steps:
S61, during execution of a flight operation task by the unmanned aerial vehicle, acquiring the real-time key state signals of the unmanned aerial vehicle during flight online according to the method of step S1 and preprocessing them to form the real-time working condition state feature vector of the unmanned aerial vehicle to be matched;
S62, traversing and calculating the Euclidean distances between the real-time working condition state feature vector to be matched and the center feature vectors of all working condition mode classes, selecting the center feature vector with the minimum Euclidean distance, and matching the real-time working condition state feature vector to the working condition mode class to which that center feature vector belongs;
S63, outputting the real-time working condition mode class matching result of the unmanned aerial vehicle to the related data analysis platform, thereby providing an effective basis and support for multiple scenarios such as unmanned aerial vehicle flight quality assessment, unmanned aerial vehicle health management, fault early warning, flight debugging and troubleshooting.
The invention also provides an unmanned aerial vehicle flight condition pattern recognition system which comprises a computer processor, a memory, an unmanned aerial vehicle flight condition training data acquisition unit, an unmanned aerial vehicle working condition pattern off-line classification unit and an unmanned aerial vehicle working condition pattern on-line matching unit.
Further, the unmanned aerial vehicle flight condition training data acquisition unit acquires, at a fixed frequency over multiple flights, the key state signals output by the controller, the actuators and the sensors of the unmanned aerial vehicle, executes step S1, and preprocesses the key state signals acquired over the multiple flights through the computer processor, so as to construct the unmanned aerial vehicle flight condition training data set and load it into the computer memory; the unmanned aerial vehicle working condition mode offline classification unit executes steps S2-S5 on the unmanned aerial vehicle flight condition training data set generated by the data acquisition unit, constructs the unmanned aerial vehicle flight condition mode classification model, solves the classification model to obtain its optimal parameters, and divides the working condition state feature vectors in the unmanned aerial vehicle flight condition training data set into a number of working condition mode classes; the unmanned aerial vehicle working condition mode online matching unit matches the real-time working condition state of the unmanned aerial vehicle during flight against the working condition mode classes obtained by the offline classification unit, outputs the unmanned aerial vehicle flight condition mode recognition result to the related data analysis platform, and provides an effective basis and support for scenarios such as unmanned aerial vehicle flight quality assessment, unmanned aerial vehicle health management, fault early warning, flight debugging and troubleshooting.
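The division of labour between the three units can be pictured as below; the class and method names are purely illustrative and do not appear in the patent.

```python
class FlightConditionRecognitionSystem:
    """Sketch of the recognition system: wiring of the data acquisition unit (S1),
    the offline classification unit (S2-S5) and the online matching unit (S6)."""

    def __init__(self, acquisition_unit, offline_unit, online_unit):
        self.acquisition_unit = acquisition_unit   # builds the training data set
        self.offline_unit = offline_unit           # builds/solves the model, yields class centers
        self.online_unit = online_unit             # matches real-time states to mode classes

    def run(self, flight_logs, realtime_stream):
        X = self.acquisition_unit.acquire(flight_logs)
        centers = self.offline_unit.classify_offline(X)
        return [self.online_unit.match_online(x, centers) for x in realtime_stream]
```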
Compared with the prior art, the invention has the following advantages:
1. the working condition data network constructed by the invention in the multiple flight process of the unmanned aerial vehicle not only can record the working condition states of the unmanned aerial vehicle at different moments in the flight process, but also effectively expresses the complex correlation relationship between the working condition states, and is beneficial to obtaining the identification result of the flight working condition mode of the unmanned aerial vehicle with stronger robustness and interpretability.
2. The unmanned aerial vehicle flight condition mode classification model is constructed with a graph variational auto-encoder structure, so that the model has a certain generative capacity, the unmanned aerial vehicle flight condition mode identification process has stronger generalization capacity, and the identification accuracy is improved.
3. The unmanned aerial vehicle flight condition mode recognition result is output to the related data analysis platform, and effective basis and support can be provided for various scenes such as unmanned aerial vehicle flight quality evaluation, unmanned aerial vehicle health management, fault early warning, flight debugging and troubleshooting.
Drawings
Fig. 1 is a diagram of a classification model structure of the flight condition mode of the unmanned aerial vehicle in step S3 according to the present invention;
FIG. 2 is a system structure diagram of an unmanned aerial vehicle flight condition pattern recognition system according to the present invention;
fig. 3 is a flowchart of an unmanned aerial vehicle flight condition pattern recognition method according to the present invention.
Detailed Description
In order to further explain the technical scheme of the invention, the invention is further explained by combining the drawings and the embodiment. It should be noted that variations and modifications can be made by those skilled in the art without departing from the principle of the present invention, and these should also be construed as falling within the scope of the present invention.
Example 1
The method for identifying the flight condition mode of the unmanned aerial vehicle is implemented through a computer program, a specific implementation mode of the technical scheme provided by the invention is detailed below according to the flow shown in fig. 3, a simulation model of the flight control system of the quad-rotor unmanned aerial vehicle is constructed on an unmanned aerial vehicle flight control platform ArduPilot, and the flight condition mode identification of the quad-rotor unmanned aerial vehicle is realized through the technical scheme provided by the invention. In this embodiment, 1000s flight simulation is performed according to a specified route by using the unmanned aerial vehicle simulation model within a range of 1 square kilometer, 6 types of flight condition modes are set, and the acquired unmanned aerial vehicle flight condition data are taken as processing objects, so that the implementation flow of the technical scheme of the present invention is explained in detail.
The implementation mode mainly comprises the following key contents:
S1, collecting, at a fixed frequency, the key state signals output by the controller, the actuators and the sensors of the unmanned aerial vehicle during the flight simulation, and forming the working condition state feature vectors of the unmanned aerial vehicle after preprocessing, so as to construct the unmanned aerial vehicle flight condition training data set, specifically:
S11, collecting and recording, at a fixed frequency of 2 Hz over the 1000 s flight simulation, 8 key state signals of the unmanned aerial vehicle, including course angle, pitch angle, roll angle, the corresponding angular velocity, height, vertical velocity, airspeed and main board temperature;
S12, constructing the original state vector of the unmanned aerial vehicle at each moment in the flight process from the key state signals acquired at that moment, each component of the state vector being the numerical value of one key state signal; the original state vector of the unmanned aerial vehicle at time t is recorded as s_t, and the set of all original state vectors over the whole signal acquisition process is recorded as
S = {s_t | t = 1, 2, …, T}
where T = 2000 represents the number of key state signal acquisitions;
S13, normalizing the original state vectors of the unmanned aerial vehicle using formula (1), which converts the different signal components of the original state vector into the range [-1, 1]:
x_t = (s_t − μ_S) / δ_S    (1)
where x_t denotes the flight condition state feature vector of the unmanned aerial vehicle at time t obtained by processing the original state vector s_t, referred to as the feature vector for short, μ_S denotes the expectation of the data distribution in the set S, and δ_S denotes the standard deviation of the data distribution in the set S; the unmanned aerial vehicle flight condition training data set is constructed from all feature vectors over the whole signal acquisition process and is recorded as
X = {x_t | t = 1, 2, …, T}
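For reference, the concrete settings of this embodiment can be collected as constants; the variable names below are illustrative assumptions.

```python
# Embodiment settings (names are illustrative assumptions).
SAMPLING_RATE_HZ = 2                      # fixed acquisition frequency
FLIGHT_DURATION_S = 1000                  # simulated flight time
T = SAMPLING_RATE_HZ * FLIGHT_DURATION_S  # 2000 key state signal acquisitions
D = 8                                     # key state signals per acquisition
K = 6                                     # flight condition mode classes
R = 2                                     # dimension of the low-dimensional embedding space
```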
S2, constructing an unmanned aerial vehicle flight condition training data network on the basis of the unmanned aerial vehicle flight condition training data set, recording the working condition states of different moments in the unmanned aerial vehicle flight process and the correlation relationship between the working condition states, and specifically comprising the following steps:
s21, traversing the training data set of the flight condition of the unmanned aerial vehicle, and calculating the similarity between any two characteristic vectors by using the formula (2):
[formula (2): the similarity expression is not reproduced in the source]
where x_i and x_j respectively denote the feature vectors of the unmanned aerial vehicle at moment i and moment j of the flight process, ||x_i − x_j||_2 is the Euclidean distance between the feature vectors x_i and x_j, γ is the sampling interval during signal acquisition, and Sim_ij denotes the similarity between x_i and x_j;
S22, constructing the unmanned aerial vehicle flight condition correlation matrix A = [Sim_ij]_{T×T} with the similarities between feature vectors as its elements;
S23, constructing an unmanned aerial vehicle flight condition training data network Net (X, A) by using the unmanned aerial vehicle flight condition training data set X and the correlation matrix A, and recording the working condition states of the unmanned aerial vehicle at different moments in the flight process and the correlation relationship between the working condition states.
S3, based on the unmanned aerial vehicle flight condition training data network obtained in the step S2, a graph variation self-encoder is utilized to construct an unmanned aerial vehicle flight condition mode classification model, the model comprises an encoder, a decoder and a classifier, the overall structure of the model is as shown in the attached figure 1, and the model construction specifically comprises the following steps:
s31, further, the encoder is configured to map the unmanned aerial vehicle flight condition training data network Net (X, a) to a set Z formed by embedded vectors in a low-dimensional feature space, in this embodiment, the dimension of the low-dimensional space is set to be 2, and a formal expression of an encoding process thereof is as shown in formula (3):
q(Z | X, A) = ∏_{t=1}^{T} q(z_t | x_t, A)    (3)
where Z = {z_t | t = 1, 2, …, T} is the set of all embedded vectors and z_t denotes the embedded vector in the low-dimensional feature space corresponding to the feature vector x_t, sampled from the distribution given by formula (4):
q(z_t | x_t, A) = N(z_t | μ_{z,t}, σ²_{z,t})    (4)
where μ_{z,t} and σ²_{z,t} are respectively the expectation and the variance of the distribution of the embedded vector z_t, computed by two structurally identical 2-layer Graph Convolution Networks (GCNs), namely:
μ_{z,t} = GCN_μ(x_t, A)    (5)
log σ_{z,t} = GCN_σ(x_t, A)    (6)
where GCN_μ() and GCN_σ() denote the graph convolution networks that compute the expectation and the variance of the embedded vector distribution respectively; they have the same structure, defined by formula (7):
GCN(X, A) = Gconv(ReLU(Gconv(A, X; W_0)); W_1)    (7)
where Gconv() denotes a graph convolution layer; W_0 and W_1 are the connection weight matrices of the first-layer and second-layer graph convolutions respectively, are the undetermined parameters of the classification model, and are determined in a subsequent step by feeding in the unmanned aerial vehicle flight condition training data network; ReLU() is the activation function defined by formula (8):
ReLU(x) = max(0, x)    (8)
S32, the decoder is used to reconstruct the embedded vector set Z into a reconstructed network relationship Â, i.e. to model p(Â | Z); the decoder is defined by formula (9):
[formula (9): the decoder expression is not reproduced in the source]
where σ() is a Dirac function.
S33, further, the classifier is used to divide the embedded vectors corresponding to the unmanned aerial vehicle flight condition feature vectors into the corresponding mode classes; the class division result of the classifier is recorded as C* = {C_1, C_2, …, C_K}, where K = 6 is the number of mode classes. The classifier is constructed from a Gaussian mixture model, formalized as
p(z) = Σ_{k=1}^{K} π_k N(z | μ_k, σ²_k)
where π_k is the prior distribution probability of mode class C_k, the vector formed by the prior distribution probabilities of all mode classes is denoted π = [π_1, π_2, …, π_K], N(z | μ_k, σ²_k) is the Gaussian distribution component corresponding to mode class C_k in the Gaussian mixture model, and μ_k and σ²_k are respectively the expectation and the variance of the Gaussian distribution component corresponding to C_k.
S4, carrying out model solution on the unmanned aerial vehicle flight condition mode classification model constructed in the step S3 in an iterative calculation mode, and determining undetermined parameters in the model, wherein the method specifically comprises the following specific steps:
s41, constructing a log-likelihood function of the flight condition of the unmanned aerial vehicle based on the mode classification model established in S3, wherein the log-likelihood function is as shown in formula (10):
log p(X) = Σ_{t=1}^{T} log p(x_t)    (10)
where log p(x_t) is the log-likelihood function of the unmanned aerial vehicle feature vector x_t; for any unmanned aerial vehicle feature vector x, the Evidence Lower Bound (ELBO) of the log-likelihood function log p(x) is obtained using the Jensen inequality, as shown in formula (11):
log p(x) ≥ E_{q(z,C_k|x,a)}[ log ( p(a, z, C_k) / q(z, C_k | x, a) ) ] = L_ELBO(x)    (11)
where L_ELBO(x) is the ELBO of the log-likelihood function of the unmanned aerial vehicle feature vector x, z is the embedded vector corresponding to x in the low-dimensional space, a is the adjacency vector corresponding to x in the adjacency matrix A, and p(a, z, C_k) is calculated by formula (12):
p(a, z, C_k) = p(a | z) p(z | C_k) p(C_k)    (12)
where p(a | z) is defined by formula (13):
p(a | z) = N(a | μ_A, σ²_A)    (13)
where μ_A and σ²_A are respectively the expectation and the variance of the reconstruction relationship, obtained from the decoder of formula (9);
p(z | C_k) is defined by formula (14):
p(z | C_k) = N(z | μ_k, σ²_k)    (14)
where μ_k and σ²_k are calculated by formula (5) and formula (6) respectively;
p(C_k) is defined by formula (15):
p(C_k) = Cat(C_k | π)    (15)
where Cat(C_k | π) is the unmanned aerial vehicle mode class distribution function, and q(z, C_k | x, a) in formula (11) is the variational posterior approximation of the true posterior distribution p(z, C_k | x, a);
s42, transforming the evidence lower bound of equation (11) into equation (16) by using Monte Carlo stochastic gradient descent variational Bayes (Monte Carlo SGVB) operator
[formula (16): the Monte Carlo SGVB form of the evidence lower bound is not reproduced in the source]
where M is the number of samples of the Monte Carlo SGVB operator, D = 8 is the dimension of the unmanned aerial vehicle feature vector, R = 2 is the dimension of the embedded vector, x_d is the d-th component of the unmanned aerial vehicle feature vector x, μ_{A,d}^{(m)} is the d-th component of μ_A in the m-th sample, σ²_{z|r} and σ²_{k|r} are respectively the r-th components of σ²_z and σ²_k, μ_{z|r} is the r-th component of μ_z, and μ_{k|r} is the r-th component of μ_k;
S43, initializing the unmanned aerial vehicle flight condition mode classification model of step S3 using Gaussian-distribution initialization;
given an iteration number L, steps S44-S49 are executed iteratively until the iteration number is reached, at which point training of the unmanned aerial vehicle flight condition mode classification model is finished and the optimal parameters of the model are obtained;
S44, for each feature vector in the unmanned aerial vehicle flight condition training data network Net(X, A) and its adjacency vector, calculating the expectation and the variance of the corresponding embedded vector in the low-dimensional feature space using formula (5) and formula (6);
S45, randomly selecting one mode class from all the unmanned aerial vehicle flight condition mode classes, recorded as C_k, and calculating the model output value of this iteration using formula (17):
[formula (17): the model output expression is not reproduced in the source]
S46, sampling an embedded vector z_t from the distribution associated with the selected mode class C_k according to formula (18):
[formula (18): the sampling distribution is not reproduced in the source]
S47, reconstructing the embedded vectors in the low-dimensional feature space into the network relationship using the decoder of formula (9);
S48, calculating the evidence lower bound L_ELBO(x) of the log-likelihood function of the unmanned aerial vehicle feature vector x using formula (16);
And S49, performing back propagation by using a gradient method, and updating a connection weight matrix in the unmanned aerial vehicle flight condition mode classification model.
S5, classifying the flight condition modes of the unmanned aerial vehicle based on the unmanned aerial vehicle flight condition training data network by using the unmanned aerial vehicle flight condition mode classification model established in the step S3 and the model parameters determined in the step S4, and specifically comprising the following specific steps:
s51 and expression (16) can be converted into expression (19):
[formula (19): the transformed evidence lower bound is not reproduced in the source]
where p(z | C_k) is the Gaussian prior distribution of the embedded vectors and KL[p(C_k | z) || q(C_k | x, a)] is the KL divergence between p(C_k | z) and q(C_k | x, a); by the definition of the KL divergence, the second term of formula (19) is non-negative, so when the evidence lower bound L_ELBO(x) takes its maximum value, formula (20) holds:
KL[p(C_k | z) || q(C_k | x, a)] = 0    (20)
so that the posterior distribution of the unmanned aerial vehicle mode classification can be calculated by formula (21):
q(C_k | x, a) = p(C_k | z) = p(C_k) p(z | C_k) / Σ_{k′=1}^{K} p(C_{k′}) p(z | C_{k′})    (21)
S52, after the optimal parameters of the unmanned aerial vehicle flight condition mode classification model have been obtained by iteratively executing the training process of steps S44-S49, the unmanned aerial vehicle flight condition mode classification result is obtained using formula (21), and the soft classification result is converted into a hard classification result by setting the threshold value to 0.5;
S53, calculating the center feature vector of each working condition mode class in the unmanned aerial vehicle flight condition data set, as shown in formula (22):
x̄_k = (1 / N_k) Σ_{x_t ∈ C_k} x_t    (22)
where x̄_k denotes the center feature vector of the k-th working condition mode class and N_k is the number of feature vectors contained in the k-th working condition mode class.
S6, matching the online acquired unmanned aerial vehicle flight condition data with the unmanned aerial vehicle flight condition mode classification result, and determining the real-time flight condition mode of the unmanned aerial vehicle, wherein the specific method comprises the following steps:
s61, generating new flight operation real-time data by using the built simulation model of the flight control system of the quad-rotor unmanned aerial vehicle, and acquiring real-time key state signals of the unmanned aerial vehicle in the flight process on line according to the method in the step S1 and preprocessing the signals to form a to-be-matched real-time working condition state feature vector of the unmanned aerial vehicle;
s62, traversing and calculating Euclidean distances between the real-time working condition state feature vectors to be matched of the unmanned aerial vehicle and the central feature vectors of all working condition mode categories, selecting the central feature vector with the minimum Euclidean distance, and matching the real-time working condition state feature vectors to the working condition mode categories where the central feature vectors are located to perform matching;
s63, outputting the real-time working condition mode type matching result of the unmanned aerial vehicle to a related data analysis platform, and providing effective basis and support for various scenes such as unmanned aerial vehicle flight quality assessment, unmanned aerial vehicle health management, fault early warning, flight debugging and troubleshooting.
Evaluation of the technical effect:
in order to verify the effectiveness and the advancement of the technical scheme provided by the invention, the method, the K-means (K-means), the Kernel K-means (Kernel K-means), the iterative self-organizing data analysis (ISODATA), the K Neighbor (K near Neighbor, KNN) and the Weighted K Neighbor (WKNN) are used for identifying the working condition mode of the unmanned aerial vehicle, the average identification precision and the recall rate of 20 experiments are used as evaluation indexes, the matching results are compared and analyzed, and the comparison results are shown in the following table:
[Comparison table of average identification precision and recall rate over 20 experiments; the table image is not reproduced in the source]
the results in the table show that the technical scheme of the invention can obtain a better working condition mode identification result when the working condition mode of the unmanned aerial vehicle is identified.
Example 2
As shown in FIG. 2, an unmanned aerial vehicle flight condition pattern recognition system comprises a computer processor, a memory, an unmanned aerial vehicle flight condition training data acquisition unit, an unmanned aerial vehicle working condition pattern off-line classification unit and an unmanned aerial vehicle working condition pattern on-line matching unit. The unmanned aerial vehicle flight condition data acquisition unit executes the step S1, and the computer processor preprocesses key state signals acquired in the multiple flight processes of the unmanned aerial vehicle, so as to construct an unmanned aerial vehicle flight condition training data set and load the training data set into a computer memory; the unmanned aerial vehicle working condition mode offline classification unit executes the steps S2-S5 according to the unmanned aerial vehicle flight working condition training data set generated by the unmanned aerial vehicle flight working condition data acquisition unit, and divides working condition state feature vectors in the unmanned aerial vehicle flight working condition training data set into a plurality of working condition mode categories; the unmanned aerial vehicle working condition mode online matching unit executes the step S6, matches the real-time working condition state in the flight process of the unmanned aerial vehicle with the working condition mode category obtained by the unmanned aerial vehicle working condition mode offline classification unit, outputs the flight working condition mode identification result of the unmanned aerial vehicle to the related data analysis platform, and can provide effective basis and support for various scenes such as unmanned aerial vehicle flight quality assessment, unmanned aerial vehicle health management, fault early warning, flight debugging and troubleshooting.

Claims (6)

1. A method for identifying the flight condition mode of an unmanned aerial vehicle is characterized by comprising the steps of collecting flight condition training data of the unmanned aerial vehicle, carrying out offline classification on the working condition mode of the unmanned aerial vehicle and carrying out online matching on the working condition mode of the unmanned aerial vehicle;
the identification method specifically comprises the following steps:
s1, constructing an unmanned aerial vehicle flight condition training data set;
s2, constructing an unmanned aerial vehicle flight condition training data network on the basis of the unmanned aerial vehicle flight condition training data set, and recording the working condition states of different moments and the correlation relationship between the working condition states in the flight process of the unmanned aerial vehicle;
the step S2 includes the following specific steps:
s21, traversing the training data set of the flight condition of the unmanned aerial vehicle, and calculating the similarity between any two characteristic vectors by using the formula (2):
[formula (2): the similarity expression is not reproduced in the source]
where x_i and x_j respectively denote the feature vectors of the unmanned aerial vehicle at moment i and moment j of the flight process, ||x_i − x_j||_2 is the Euclidean distance between the feature vectors x_i and x_j, γ is the sampling interval during signal acquisition, and Sim_ij denotes the similarity between x_i and x_j;
S22, constructing the unmanned aerial vehicle flight condition correlation matrix A = [Sim_ij]_{T×T} with the similarities between feature vectors as its elements;
S23, constructing an unmanned aerial vehicle flight condition training data network Net (X, A) by using the unmanned aerial vehicle flight condition training data set X and the correlation matrix A, and recording the working condition states of the unmanned aerial vehicle at different moments in the flight process and the correlation relationship between the working condition states;
s3, constructing an unmanned aerial vehicle flight condition mode classification model by using a graph variation self-encoder based on the unmanned aerial vehicle flight condition training data network obtained in the step S2;
the classification model of the flight condition mode of the unmanned aerial vehicle in the step S3 includes an encoder, a decoder, and a classifier, and the step S3 includes the following steps:
s31, the encoder is used for mapping the unmanned aerial vehicle flight condition training data network Net (X, A) to a set Z formed by embedded vectors in a low-dimensional feature space, and the formalization representation of the encoding process is shown as formula (3):
q(Z | X, A) = ∏_{t=1}^{T} q(z_t | x_t, A)    (3)
where Z = {z_t | t = 1, 2, …, T} is the set of all embedded vectors and z_t denotes the embedded vector in the low-dimensional feature space corresponding to the feature vector x_t, sampled from the distribution given by formula (4):
q(z_t | x_t, A) = N(z_t | μ_{z,t}, σ²_{z,t})    (4)
where μ_{z,t} and σ²_{z,t} are respectively the expectation and the variance of the distribution of the embedded vector z_t, computed by two 2-layer graph convolution networks with the same structure, namely:
μ_{z,t} = GCN_μ(x_t, A)    (5)
log σ_{z,t} = GCN_σ(x_t, A)    (6)
where GCN_μ() and GCN_σ() denote the graph convolution networks that compute the expectation and the variance of the embedded vector distribution respectively; they have the same structure, defined by formula (7):
GCN(X, A) = Gconv(ReLU(Gconv(A, X; W_0)); W_1)    (7)
where Gconv() denotes a graph convolution layer, W_0 and W_1 are respectively the connection weight matrices of the first-layer and second-layer graph convolutions, and ReLU() is the activation function defined by formula (8):
ReLU(x) = max(0, x)    (8)
S32, the decoder is used to reconstruct the embedded vector set Z into a reconstructed network relationship Â, i.e. to model p(Â | Z); the decoder is defined by formula (9):
[formula (9): the decoder expression is not reproduced in the source]
where σ() is a Dirac function;
S33, the classifier is used to divide the embedded vectors corresponding to the unmanned aerial vehicle flight condition feature vectors into the corresponding mode classes; the class division result of the classifier is recorded as C* = {C_1, C_2, …, C_K}, where K is the number of mode classes; the classifier is constructed from a Gaussian mixture model, formalized as
p(z) = Σ_{k=1}^{K} π_k N(z | μ_k, σ²_k)
where π_k is the prior distribution probability of mode class C_k, the vector formed by the prior distribution probabilities of all mode classes is denoted π = [π_1, π_2, …, π_K], N(z | μ_k, σ²_k) is the Gaussian distribution component corresponding to mode class C_k in the Gaussian mixture model, and μ_k and σ²_k are respectively the expectation and the variance of the Gaussian distribution component corresponding to C_k;
s4, carrying out model solution on the unmanned aerial vehicle flight condition mode classification model constructed in the step S3 in an iterative calculation mode, and determining the optimal parameters of the model;
the step S4 includes the following specific steps:
s41, constructing a log-likelihood function of the flight condition of the unmanned aerial vehicle based on the mode classification model established in S3, wherein the log-likelihood function is as shown in formula (10):
log p(X) = Σ_{t=1}^{T} log p(x_t)    (10)
where log p(x_t) is the log-likelihood function of the unmanned aerial vehicle feature vector x_t; for any unmanned aerial vehicle feature vector x, the evidence lower bound of the log-likelihood function log p(x) is obtained using the Jensen inequality, as shown in formula (11):
log p(x) ≥ E_{q(z,C_k|x,a)}[ log ( p(a, z, C_k) / q(z, C_k | x, a) ) ] = L_ELBO(x)    (11)
where L_ELBO(x) is the evidence lower bound of the log-likelihood function of the unmanned aerial vehicle feature vector x, z is the embedded vector corresponding to x in the low-dimensional space, a is the adjacency vector corresponding to x in the adjacency matrix A, and p(a, z, C_k) is calculated by formula (12):
p(a, z, C_k) = p(a | z) p(z | C_k) p(C_k)    (12)
where p(a | z) is defined by formula (13):
p(a | z) = N(a | μ_A, σ²_A)    (13)
where μ_A and σ²_A are respectively the expectation and the variance of the reconstruction relationship, obtained from the decoder of formula (9);
p(z | C_k) is defined by formula (14):
p(z | C_k) = N(z | μ_k, σ²_k)    (14)
where μ_k and σ²_k are calculated by formula (5) and formula (6) respectively;
p(C_k) is defined by formula (15):
p(C_k) = Cat(C_k | π)    (15)
where Cat(C_k | π) is the unmanned aerial vehicle mode class distribution function, and q(z, C_k | x, a) in formula (11) is the variational posterior approximation of the true posterior distribution p(z, C_k | x, a);
s42, transforming the evidence lower bound of the formula (11) into the formula (16) by using Monte Carlo stochastic gradient descent variational Bayes operator
Figure FDA0003604295900000045
In the formula, M is the sampling number of a Monte Carlo random gradient descent variation Bayes operator, D is the dimensionality of the characteristic vector of the unmanned aerial vehicle, R is the dimensionality of the embedded vector, and xdFor the d-th component of the drone feature vector x,
Figure FDA0003604295900000046
is muAThe d-th component in the m-th sample,
Figure FDA0003604295900000047
is composed of
Figure FDA0003604295900000048
The (r) th component of (a),
Figure FDA0003604295900000049
is composed of
Figure FDA00036042959000000410
The r component of (a), muz|rIs muzOf the r-th component, muk|rIs mukR th ofA component;
s43, initializing the unmanned aerial vehicle flight condition mode classification model in the step S3 by using a Gaussian distribution initialization mode;
giving iteration times L, and iteratively executing the steps S44-S49 until the iteration times are reached, finishing training the unmanned aerial vehicle flight condition mode classification model, and obtaining the optimal parameters of the model;
s44, using formula (5) and formula (6), calculating the expectation and variance of the embedded vectors in the low-dimensional feature space corresponding to each feature vector and its adjacent vector in the unmanned aerial vehicle flight condition training data network Net(X, A);
s45, randomly selecting one mode class from all mode classes of the unmanned aerial vehicle flight conditions, recording it as Ck, and calculating the model output value in this iteration using formula (17):
[formula (17): original equation image not reproduced here]
s46, sampling an embedded vector z_t associated with the selected mode class Ck according to formula (18):
[formula (18): original equation image not reproduced here]
S47, reconstructing the network relationship from the embedded vector in the low-dimensional feature space using formula (8);
s48, calculating the evidence lower bound L_ELBO(x) of the log-likelihood of the unmanned aerial vehicle feature vector x using formula (16) (an illustrative computation sketch follows claim 1);
S49, performing back propagation by using a gradient method, and updating a connection weight matrix in the unmanned aerial vehicle flight condition mode classification model;
s5, classifying the flight condition modes of the unmanned aerial vehicle based on the flight condition training data network of the unmanned aerial vehicle by using the flight condition mode classification model of the unmanned aerial vehicle constructed in the step S3 and the optimal parameters of the model determined in the step S4;
s6, matching the online acquired unmanned aerial vehicle flight condition data with the unmanned aerial vehicle flight condition mode classification result, and determining the real-time flight condition mode of the unmanned aerial vehicle.
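For illustration, the Python sketch below (NumPy only) shows how an evidence-lower-bound estimate of the kind described by formulas (10) to (16) can be computed for one feature vector. It assumes a standard variational deep-clustering form of the objective (Gaussian decoder, Gaussian mixture prior over embeddings); the function name elbo_estimate, the callable decoder arguments and the exact arrangement of terms are assumptions for illustration, not a reproduction of the patent's formula (16).

import numpy as np

def elbo_estimate(a, mu_z, sigma_z, mu_A, sigma_A, pi, mu_k, sigma_k, M=10):
    """Monte Carlo estimate of an evidence lower bound for one feature vector.

    a              : (Da,) adjacent vector reconstructed by the decoder, p(a|z)
    mu_z, sigma_z  : (R,) expectation / std of the variational posterior q(z|x,a)
    mu_A, sigma_A  : callables z -> (Da,), decoder outputs (formula (13) analogue)
    pi             : (K,) mixture priors; mu_k, sigma_k : (K, R) component parameters
    """
    K, R = mu_k.shape

    # 1) reconstruction term, averaged over M Monte Carlo samples of z ~ q(z|x,a)
    rec = 0.0
    for _ in range(M):
        z = mu_z + sigma_z * np.random.randn(R)
        m, s = mu_A(z), sigma_A(z)
        rec += -0.5 * np.sum(np.log(2 * np.pi * s ** 2) + (a - m) ** 2 / s ** 2)
    rec /= M

    # 2) responsibilities gamma_k ~ q(C_k|x,a), analogous to the posterior of formula (21)
    log_gauss = -0.5 * np.sum(
        np.log(2 * np.pi * sigma_k ** 2) + (mu_z - mu_k) ** 2 / sigma_k ** 2, axis=1)
    log_gamma = np.log(pi) + log_gauss
    gamma = np.exp(log_gamma - log_gamma.max())
    gamma /= gamma.sum()

    # 3) KL between q(z|x,a) and each Gaussian component, weighted by gamma
    kl_z = 0.5 * np.sum(gamma[:, None] * (
        np.log(sigma_k ** 2) + (sigma_z ** 2 + (mu_z - mu_k) ** 2) / sigma_k ** 2
        - 1.0 - np.log(sigma_z ** 2)[None, :]))

    # 4) KL between gamma and the class prior pi
    kl_c = np.sum(gamma * (np.log(gamma + 1e-12) - np.log(pi)))

    return rec - kl_z - kl_c

In a training loop such as S43 to S49, an estimate of this kind would be maximized by gradient back-propagation through the encoder and decoder networks.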
2. The method for identifying the flight condition mode of the unmanned aerial vehicle as claimed in claim 1, wherein the constructing of the flight condition training dataset of the unmanned aerial vehicle in the step S1 comprises the following specific steps:
s11, acquiring and recording, at a fixed frequency over multiple flights, the key state signals output by the controller, actuators and sensors of the unmanned aerial vehicle, the key state signals comprising: course angle, pitch angle, roll angle and their corresponding angular velocities, altitude, vertical velocity, airspeed, and mainboard temperature;
s12, constructing the original state vector of the unmanned aerial vehicle during flight from the key state signals acquired at each moment, each component of the state vector being the value of one key state signal; the original state vector of the unmanned aerial vehicle at time t is recorded as s_t, and the set of all original state vectors over the whole signal acquisition process is recorded as S = {s_1, s_2, …, s_T}, wherein T represents the number of key state signal acquisitions;
s13, normalizing the original state vectors of the unmanned aerial vehicle using formula (1), converting the different signal components of the original state vector into the range [-1, 1]:
x_t = (s_t − μ_S) / δ_S  (1)
wherein x_t is the flight condition state feature vector of the unmanned aerial vehicle at time t obtained by processing the original state vector s_t, referred to as the feature vector for short; μ_S represents the expectation of the data distribution in the set S and δ_S represents the standard deviation of the data distribution in the set S; the unmanned aerial vehicle flight condition training data set is constructed from all feature vectors over the whole signal acquisition process and is recorded as X = {x_1, x_2, …, x_T}.
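As an illustration of steps S11 to S13, the short NumPy sketch below standardizes logged state vectors into flight condition feature vectors. It assumes formula (1) is the usual standardization by the expectation μ_S and standard deviation δ_S defined above; the function name build_training_set and the file name in the usage comment are illustrative assumptions.

import numpy as np

def build_training_set(raw_states):
    """Standardize raw UAV state vectors s_t into flight condition feature vectors x_t.

    raw_states : (T, D) array; row t holds the key state signals sampled at time t
    (course/pitch/roll angles, angular velocities, altitude, vertical velocity,
    airspeed, mainboard temperature). Returns the (T, D) training set X.
    """
    mu_S = raw_states.mean(axis=0)            # expectation of the data distribution in S
    delta_S = raw_states.std(axis=0) + 1e-12  # standard deviation, guarded against zero
    return (raw_states - mu_S) / delta_S

# usage sketch (file name is illustrative):
# X = build_training_set(np.loadtxt("flight_log.csv", delimiter=","))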
3. The method for identifying the flight condition mode of the unmanned aerial vehicle as claimed in claim 1, wherein the step S5 comprises the following steps:
s51, formula (16) can be converted into the form of formula (19):
[formula (19): decomposition of the evidence lower bound L_ELBO(x) into a term involving the Gaussian prior distribution of the embedded vector and the KL divergence term KL[p(Ck|z)||q(Ck|x,a)]; original equation image not reproduced here]
In the formula, the first term involves the Gaussian prior distribution of the embedded vector, and KL[p(Ck|z)||q(Ck|x,a)] is the KL divergence between p(Ck|z) and q(Ck|x,a); since the KL divergence is non-negative by definition, the second term of formula (19) is non-negative, and therefore when the evidence lower bound L_ELBO(x) takes its maximum value, formula (20) holds:
KL[p(Ck|z)||q(Ck|x,a)] = 0  (20)
The posterior distribution of the unmanned aerial vehicle mode classification can then be calculated by formula (21):
q(Ck|x,a) = p(Ck|z) = πk N(z|μk, σk²I) / Σ_{k'} πk' N(z|μk', σk'²I)  (21)
S52, obtaining the classification result of the unmanned aerial vehicle flight condition modes using formula (21), namely the probability that each feature vector in the unmanned aerial vehicle flight condition data set belongs to each condition mode, and on this basis converting the soft classification result into a hard classification result by setting a threshold;
s53, calculating the central feature vector of each working condition mode class in the unmanned aerial vehicle flight condition data set, as shown in formula (22):
c_k = (1 / N_k) Σ_{x_t ∈ Ck} x_t  (22)
In the formula, c_k represents the central feature vector of the k-th working condition mode class, and N_k is the number of feature vectors included in the k-th working condition mode class.
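The sketch below illustrates steps S51 to S53 under the assumption that formula (21) is the Gaussian-mixture posterior written out above: it computes the soft posterior over mode classes from the embedded vectors, thresholds it into a hard assignment, and averages the feature vectors of each class into the central feature vectors of formula (22). The function name, the threshold handling (unassigned vectors marked -1) and the fallback for empty classes are illustrative choices, not the patent's prescription.

import numpy as np

def classify_and_centers(X, Z, pi, mu_k, sigma_k, threshold=0.5):
    """Soft and hard working-condition mode classification plus class centers.

    X : (T, D) feature vectors;  Z : (T, R) their embedded vectors
    pi : (K,) mixture priors;  mu_k, sigma_k : (K, R) Gaussian component parameters
    Returns (soft, hard, centers).
    """
    # soft posterior over mode classes for every embedded vector (formula (21) analogue)
    log_gauss = -0.5 * np.sum(
        np.log(2 * np.pi * sigma_k[None] ** 2)
        + (Z[:, None, :] - mu_k[None]) ** 2 / sigma_k[None] ** 2, axis=-1)   # (T, K)
    log_post = np.log(pi)[None, :] + log_gauss
    log_post -= log_post.max(axis=1, keepdims=True)
    soft = np.exp(log_post)
    soft /= soft.sum(axis=1, keepdims=True)

    # hard assignment by thresholding the soft result (S52); -1 marks "unassigned"
    hard = np.where(soft.max(axis=1) >= threshold, soft.argmax(axis=1), -1)

    # central feature vector of each class in the original feature space (formula (22))
    K, D = pi.shape[0], X.shape[1]
    centers = np.stack([X[hard == k].mean(axis=0) if np.any(hard == k)
                        else np.zeros(D) for k in range(K)])
    return soft, hard, centers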
4. The method for identifying the flight condition mode of the unmanned aerial vehicle as claimed in claim 1, wherein the step S6 comprises the following steps:
s61, during execution of a flight operation task by the unmanned aerial vehicle, acquiring online the real-time key state signals of the unmanned aerial vehicle during flight according to the method of step S1, and preprocessing the signals to form the real-time working condition state feature vector of the unmanned aerial vehicle to be matched;
s62, traversing and calculating the Euclidean distances between the real-time working condition state feature vector of the unmanned aerial vehicle to be matched and the central feature vectors of all working condition mode classes, selecting the central feature vector with the minimum Euclidean distance, and matching the real-time working condition state feature vector to the working condition mode class to which that central feature vector belongs;
s63, outputting the real-time working condition mode class matching result of the unmanned aerial vehicle to the relevant data analysis platform, thereby providing an effective basis and support for multiple scenarios such as unmanned aerial vehicle flight quality assessment, unmanned aerial vehicle health management, fault early warning, flight debugging and troubleshooting.
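A minimal sketch of the online matching of steps S61 to S63, assuming plain Euclidean nearest-center matching against the central feature vectors of formula (22); the function name match_online is illustrative.

import numpy as np

def match_online(x_rt, centers):
    """Match one real-time feature vector to the nearest working-condition mode class.

    x_rt    : (D,) preprocessed real-time working-condition state feature vector (S61)
    centers : (K, D) central feature vectors of the mode classes (formula (22))
    Returns (k, distance): index of the matched class and its Euclidean distance (S62).
    """
    dists = np.linalg.norm(centers - x_rt[None, :], axis=1)
    k = int(np.argmin(dists))
    return k, float(dists[k])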
5. An unmanned aerial vehicle flight condition mode identification system, characterized in that: it comprises a computer processor, a memory, an unmanned aerial vehicle flight condition training data acquisition unit, an unmanned aerial vehicle working condition mode offline classification unit and an unmanned aerial vehicle working condition mode online matching unit, and is used for implementing the unmanned aerial vehicle flight condition mode identification method of any one of claims 1 to 4.
6. The unmanned aerial vehicle flight condition mode identification system of claim 5, wherein: the unmanned aerial vehicle flight condition training data acquisition unit acquires, at a fixed frequency over multiple flights, the key state signals output by the controller, actuators and sensors of the unmanned aerial vehicle, preprocesses the acquired key state signals through the computer processor to construct the unmanned aerial vehicle flight condition training data set, and loads it into the computer memory; the unmanned aerial vehicle working condition mode offline classification unit constructs the unmanned aerial vehicle flight condition mode classification model from the training data set generated by the acquisition unit, solves the classification model to obtain the optimal parameters, and divides the working condition state feature vectors in the training data set into several working condition mode classes; the unmanned aerial vehicle working condition mode online matching unit matches the real-time working condition state during unmanned aerial vehicle flight against the working condition mode classes obtained by the offline classification unit, outputs the flight condition mode identification result of the unmanned aerial vehicle to the relevant data analysis platform, and provides an effective basis and support for scenarios such as unmanned aerial vehicle flight quality assessment, unmanned aerial vehicle health management, fault early warning, flight debugging and troubleshooting.
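To show how the three units of claims 5 and 6 might be organized in software, the skeleton below sketches one possible arrangement; the class name, the acquisition rate value and the reuse of build_training_set and match_online from the earlier sketches are all illustrative assumptions, and the offline classification step is deliberately left unimplemented.

from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class UavConditionRecognizer:
    """Illustrative skeleton of the three units described in claims 5 and 6."""
    sample_rate_hz: float = 50.0              # assumed fixed acquisition frequency
    centers: Optional[np.ndarray] = None      # per-class central feature vectors

    def acquire_training_data(self, flight_logs):
        """Training-data acquisition unit: stack logged state vectors and preprocess."""
        return build_training_set(np.vstack(flight_logs))

    def classify_offline(self, X):
        """Offline classification unit: fit the mode classification model and divide X
        into working-condition mode classes (model fitting intentionally omitted)."""
        raise NotImplementedError

    def match_realtime(self, x_rt):
        """Online matching unit: nearest-center matching of one real-time vector."""
        return match_online(x_rt, self.centers)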
CN202110311072.8A 2021-03-24 2021-03-24 Unmanned aerial vehicle flight condition mode identification method and system Active CN112966773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110311072.8A CN112966773B (en) 2021-03-24 2021-03-24 Unmanned aerial vehicle flight condition mode identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110311072.8A CN112966773B (en) 2021-03-24 2021-03-24 Unmanned aerial vehicle flight condition mode identification method and system

Publications (2)

Publication Number Publication Date
CN112966773A CN112966773A (en) 2021-06-15
CN112966773B true CN112966773B (en) 2022-05-31

Family

ID=76278297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110311072.8A Active CN112966773B (en) 2021-03-24 2021-03-24 Unmanned aerial vehicle flight condition mode identification method and system

Country Status (1)

Country Link
CN (1) CN112966773B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990721B (en) * 2021-03-24 2023-04-21 山西大学 Power user value analysis method and system based on payment behaviors
CN113761722A (en) * 2021-08-18 2021-12-07 航天科工海鹰集团有限公司 Spacecraft multi-working-condition service life prediction method based on PCA
CN113867410B (en) * 2021-11-17 2023-11-03 武汉大势智慧科技有限公司 Unmanned aerial vehicle aerial photographing data acquisition mode identification method and system
CN116993010B (en) * 2023-07-28 2024-02-06 南通大学 Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108594788A (en) * 2018-03-27 2018-09-28 西北工业大学 A kind of aircraft actuator fault detection and diagnosis method based on depth random forests algorithm
CN112070140A (en) * 2020-09-01 2020-12-11 中国人民解放军陆军工程大学 Density clustering mark-like pattern recognition method based on dimension decomposition
CN112529115A (en) * 2021-02-05 2021-03-19 支付宝(杭州)信息技术有限公司 Object clustering method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11907833B2 (en) * 2018-11-27 2024-02-20 The Boeing Company System and method for generating an aircraft fault prediction classifier
US20200342968A1 (en) * 2019-04-24 2020-10-29 GE Precision Healthcare LLC Visualization of medical device event processing
US11983625B2 (en) * 2020-06-24 2024-05-14 Intel Corporation Robust multimodal sensor fusion for autonomous driving vehicles

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Collaborative Graph Convolutional Networks: Unsupervised Learning Meets Semi-Supervised Learning; Binyuan Hui et al.; The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20); 2020-04-03; Vol. 34(4); pp. 4215-4222 *
Estimating uncertainty in deep learning for reporting confidence to clinicians in medical image segmentation and diseases detection; Biraja Ghoshal et al.; Computational Intelligence; 2020-11-25; pp. 1-34 *
Following a moving target - Monte Carlo inference for dynamic Bayesian models; Walter R. Gilks et al.; Journal of the Royal Statistical Society: Series B (Statistical Methodology); 2002-01-06; Vol. 63(1); pp. 127-146 *
New label propagation algorithm with pairwise constraints; Liang Bai et al.; Pattern Recognition; 2020-05-11; Vol. 106; pp. 1-10 *
Unmanned aerial vehicles using machine learning for autonomous flight: state-of-the-art; Su Yeon Choi et al.; Advanced Robotics; 2019; Vol. 33(6) *
Research on video monitoring, image processing and target recognition of micro-UAV flight conditions; Chai Xilin et al.; Modern Computer; 2020-09-15; pp. 76-79 *
Overview of research progress on UAV fault diagnosis technology; Miao Jianguo et al.; Chinese Journal of Scientific Instrument; 2020-07-01; Vol. 41(9); pp. 56-69 *

Also Published As

Publication number Publication date
CN112966773A (en) 2021-06-15

Similar Documents

Publication Publication Date Title
CN112966773B (en) Unmanned aerial vehicle flight condition mode identification method and system
Janakiraman et al. Anomaly detection in aviation data using extreme learning machines
US20190164047A1 (en) Object recognition using a convolutional neural network trained by principal component analysis and repeated spectral clustering
Yi et al. Grouped convolutional neural networks for multivariate time series
Idé et al. Multi-task multi-modal models for collective anomaly detection
Titouna et al. An online anomaly detection approach for unmanned aerial vehicles
CN113484875B (en) Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering
CN115204302A (en) Unmanned aerial vehicle small sample fault diagnosis system and method
CN113420640A (en) Mangrove hyperspectral image classification method and device, electronic equipment and storage medium
DE102021207269A1 (en) METHOD AND SYSTEM FOR LEARNING PERTURBATION QUANTITIES IN MACHINE LEARNING
CN112418065A (en) Equipment operation state identification method, device, equipment and storage medium
CN108985161B (en) Low-rank sparse representation image feature learning method based on Laplace regularization
CN114863151B (en) Image dimension reduction clustering method based on fuzzy theory
Shi et al. Dynamic barycenter averaging kernel in RBF networks for time series classification
CN115510950A (en) Aircraft telemetry data anomaly detection method and system based on time convolution network
Karkus et al. Differentiable mapping networks: Learning structured map representations for sparse visual localization
CN109725626B (en) Multi-rotor-wing unmanned aerial vehicle power system fault online diagnosis system and method
CN109871907B (en) Radar target high-resolution range profile identification method based on SAE-HMM model
CN115544714A (en) Time sequence dynamic countermeasure threat assessment method based on aircraft formation
Treboux et al. Towards retraining of machine learning algorithms: an efficiency analysis applied to smart agriculture
Madokoro et al. Adaptive Category Mapping Networks for all-mode topological feature learning used for mobile robot vision
CN116304966A (en) Track association method based on multi-source data fusion
Lv et al. Determination of the number of principal directions in a biologically plausible PCA model
Maeda et al. Neural network maximizing ordinally supervised multi-view canonical correlation for deterioration level estimation
Shi et al. A scalable convolutional neural network for task-specified scenarios via knowledge distillation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221208

Address after: 030006 No. 2637, Tianxiang Zhongchuang Space, 26/F, Block A, Hi tech International Building, No. 227, Changzhi Road, Taiyuan Xuefu Park, Shanxi Comprehensive Reform Demonstration Zone, Taiyuan City, Shanxi Province

Patentee after: Shanxi Shuoming Technology Co.,Ltd.

Address before: 030006 No. 92, Wucheng Road, Taiyuan, Shanxi

Patentee before: SHANXI University