CN115469679A - Unmanned aerial vehicle flight state parameter prediction method and system - Google Patents

Unmanned aerial vehicle flight state parameter prediction method and system

Info

Publication number
CN115469679A
CN115469679A (application CN202211274667.1A)
Authority
CN
China
Prior art keywords
data
unmanned aerial
aerial vehicle
flight state
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211274667.1A
Other languages
Chinese (zh)
Other versions
CN115469679B (en)
Inventor
刘静
张海浪
邓可立
苏立玉
胡峪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202211274667.1A priority Critical patent/CN115469679B/en
Publication of CN115469679A publication Critical patent/CN115469679A/en
Application granted granted Critical
Publication of CN115469679B publication Critical patent/CN115469679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/08: Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0808: Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
    • G05D1/0816: Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft to ensure stability
    • G05D1/0825: Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft to ensure stability using mathematical models
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106: Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method and a system for predicting flight state parameters of an unmanned aerial vehicle. An improved Transformer neural network model is built; flight state data of typical flight states of the unmanned aerial vehicle are acquired and preprocessed; the preprocessed data are used to generate an unmanned aerial vehicle flight state data set, which is then divided into a training set, a verification set and a test set. The Transformer neural network model is trained with the training set, its parameters are determined with the verification set, and the data of the H time steps t, t-1, …, t-H+1 from the test set are input into the trained model to predict the flight state parameters at time t+1. The method can predict the flight state parameters of the unmanned aerial vehicle in real time and with high precision, lays a solid foundation for precise control of the unmanned aerial vehicle, ensures safe and reliable flight, enables different tasks in different industries to be executed, and expands the usage scenarios of unmanned aerial vehicles.

Description

Unmanned aerial vehicle flight state parameter prediction method and system
Technical Field
The invention belongs to the technical field of unmanned aerial vehicles, and particularly relates to a method and a system for predicting flight state parameters of an unmanned aerial vehicle.
Background
Currently, drones are widely used in many industries, such as aerial photography, surveying and mapping, routing inspection and air patrol. Different industries require drones to accomplish different tasks, but the most fundamental requirement is that the drone can be precisely controlled. To control a drone precisely, its controller needs real-time, accurate state parameters, such as the drone's velocity, acceleration, attitude angle, angular velocity and angular acceleration.
The traditional method for estimating and predicting flight state parameters is based on a dynamic model of the unmanned aerial vehicle. Under certain assumptions and simplifications, the dynamic model computes the acceleration and angular acceleration of the vehicle from Newton's second law and the Euler equations, and the other flight state parameters, such as velocity, angular velocity and attitude angle, are then obtained by integration. Because the dynamics modeling approach makes assumptions and simplifications about the actual flight, and generally ignores unsteady effects and uncertain states such as the aerodynamic characteristics in flight, the acceleration and angular acceleration obtained from the dynamic model carry errors; the integration used to obtain the other flight state parameters accumulates these errors, so the flight state parameters obtained through the dynamic model are not accurate enough.
To improve the accuracy of flight state parameter estimation, the prior art combines a dynamic model with a convolutional neural network: the dynamic model predicts the deterministic part of the flight state, and the convolutional neural network predicts the uncertain part. Compared with pure dynamics modeling, this greatly improves prediction accuracy. However, during flight the flow field near the vehicle and the vehicle's state change with time, and the flight states at historical moments affect the current flight state; neither the conventional dynamics model nor the dynamics-CNN combination fully accounts for this temporal characteristic, so they cannot fully exploit the vehicle's historical information to predict the current flight state parameters.
To fully exploit historical information, the LSTM method has been used alone or combined with traditional dynamics modeling to estimate flight state parameters. However, because of the information bottleneck of the LSTM, inputting flight state data over too long a time window can lose earlier information, so only short time segments can be used, which limits prediction accuracy. Moreover, deep LSTM networks suffer from vanishing or exploding gradients, poor convergence, complex computation, poor parallelism and long computation time, making real-time prediction difficult.
Accurate prediction of flight state parameters is the basis of precise control of the unmanned aerial vehicle, which in turn ensures safe and reliable flight and enables tasks in more complex scenarios; real-time, high-precision prediction of flight state parameters is therefore of great importance.
Disclosure of Invention
The invention aims to solve the technical problems that the flight state parameter prediction of the unmanned aerial vehicle is not accurate enough and has poor real-time performance, so that the control precision and the control frequency of the unmanned aerial vehicle are influenced.
The invention adopts the following technical scheme:
an unmanned aerial vehicle flight state parameter prediction method comprises the following steps:
s1, building an improved Transformer neural network model;
s2, acquiring flight state data of a typical flight state of the unmanned aerial vehicle;
s3, preprocessing the unmanned aerial vehicle flight state data acquired in the step S2;
s4, generating an unmanned aerial vehicle flight state data set by using the data preprocessed in the step S3, and then dividing the unmanned aerial vehicle flight state data set into a training set, a verification set and a test set;
S5, training the Transformer neural network model obtained in step S1 with the training set obtained in step S4, determining the parameters of the Transformer neural network model with the verification set obtained in step S4, inputting the data of the H time steps t, t-1, …, t-H+1 from the test set obtained in step S4 into the trained Transformer neural network model, and predicting the flight state parameters at time t+1.
Specifically, in step S1, the Transformer neural network model includes an input, an encoder, a decoder and a fully connected network layer. The time-series data pass through a linear mapping layer and, with the timestamp encoding added, serve as the input of both the encoder and the decoder; the encoder and the decoder are each stacked N times, the output of the encoder feeds the mutual (cross-) attention layer in the decoder, and the output of the decoder passes through the fully connected network layer to produce the prediction.
Further, the input of the encoder and the decoder is specifically:
The input H × m-dimensional time-series data are mapped to H × d dimensions, where H is the length of the time series, m and d are the dimensions of the flight state parameters, and d ≥ m.
Further, the timestamp coding specifically includes:
P(k, 2i) = sin(k · e^(-2i · ln100 / d))
P(k, 2i+1) = cos(k · e^(-(2i+1) · ln100 / d))
where P(k, 2i) is the time encoding of the even dimensions of the flight state data at time k, P(k, 2i+1) is the time encoding of the odd dimensions, i indexes the i-th dimension, and the data after the linear mapping layer have d dimensions.
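The timestamp encoding above can be sketched as follows (a minimal NumPy illustration, not part of the patent; the function name and sizes are chosen for illustration):

```python
import numpy as np

def timestamp_encoding(H, d):
    """Timestamp encoding P for a length-H sequence of d-dimensional vectors.

    Even dimensions use sin, odd dimensions use cos, with base 100
    (ln100) rather than the original Transformer's base 10000.
    """
    P = np.zeros((H, d))
    k = np.arange(H)[:, None]         # time index k, shape (H, 1)
    even = np.arange(0, d, 2)[None, :]  # even dimension indices 2i
    P[:, 0::2] = np.sin(k * np.exp(-even * np.log(100) / d))
    P[:, 1::2] = np.cos(k * np.exp(-(even + 1) * np.log(100) / d))
    return P

enc = timestamp_encoding(H=100, d=512)
print(enc.shape)  # (100, 512)
```

At k = 0 the even dimensions are sin(0) = 0 and the odd dimensions cos(0) = 1, so the first row distinguishes the sequence start, and each later row is a unique, bounded pattern encoding its time step.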
Specifically, in step S2, typical flight states of the drone include climb, cruise, hover and dive, and the flight state parameters include the timestamp, control signals, velocity, acceleration and angular velocity data of the drone.
Specifically, step S3 specifically includes:
s301, suppressing noise in the unmanned aerial vehicle flight state data acquired in the step S2 by adopting filtering processing;
s302, resampling the data obtained in the step S301, aligning timestamps of different sensor data to generate a time series number [ T ] 1 ,T 2 ,…,T N ]For a total of N data.
Further, in step S302, the flight state data T_k at time k is:

T_k = [τ_1k, τ_2k, τ_3k, τ_4k, v_xk, v_yk, v_zk, a_xk, a_yk, a_zk, ω_xk, ω_yk, ω_zk, α_xk, α_yk, α_zk, φ_k, θ_k, ψ_k]

where [τ_1k, τ_2k, τ_3k, τ_4k] are the control input signals of the roll, pitch, throttle and yaw channels at time k, [v_xk, v_yk, v_zk] are the velocity components in different directions at time k, [a_xk, a_yk, a_zk] the acceleration components, [ω_xk, ω_yk, ω_zk] the angular velocity components, [α_xk, α_yk, α_zk] the angular acceleration components, and [φ_k, θ_k, ψ_k] the attitude angles at time k.
Specifically, in step S4, the N time-series data obtained in step S3 are overlap-sampled with time interval L and window length H to obtain a data set D of length M, which is then divided into a training set (60% of the data), a verification set (20%) and a test set (20%).
Specifically, in step S5, the Adam optimization algorithm is used to train the Transformer neural network model, with the loss function LOSS defined as:

LOSS = (1/n) Σ_{k=1}^{n} (y_k − ŷ_k)²

where y_k is the true value at time k, ŷ_k is the predicted value at time k, and n is the number of training set samples.
In a second aspect, an embodiment of the present invention provides an unmanned aerial vehicle flight state parameter prediction system, including:
the building module is used for building an improved Transformer neural network model;
the parameter module is used for acquiring flight state parameters of a typical flight state of the unmanned aerial vehicle;
the preprocessing module is used for preprocessing the flight state data of the unmanned aerial vehicle acquired by the parameter module;
the division module is used for generating an unmanned aerial vehicle flight state data set by using data preprocessed by the preprocessing module and then dividing the unmanned aerial vehicle flight state data set into a training set, a verification set and a test set;
and the prediction module is used for training the Transformer neural network model obtained by the building module with the training set obtained by the division module, determining the parameters of the Transformer neural network model with the verification set obtained by the division module, inputting the data of the time steps t, t-1, …, t-H+1 from the test set obtained by the division module into the trained Transformer neural network model, and predicting the flight state parameters at time t+1.
Compared with the prior art, the invention has at least the following beneficial effects:
The method for predicting the flight state parameters of an unmanned aerial vehicle improves the Transformer neural network model and trains it with flight data of the typical flight states of the vehicle; it can predict the flight state parameters in real time and with high precision, laying a solid foundation for precise control, ensuring safe and reliable flight, enabling different tasks in different industries, and expanding the usage scenarios of unmanned aerial vehicles.
Furthermore, the improved Transformer neural network model introduces timestamp encoding and a unified encoder-decoder input; compared with the original model, it has better parallelism and higher prediction accuracy, enabling real-time, high-precision prediction of the state parameters.
Furthermore, the model maps the input H × m-dimensional time-series data to H × d dimensions; mapping the input to a higher dimension makes it easier to learn the complex relations among the state parameters of the vehicle.
Furthermore, because the attention mechanism in the Transformer neural network model does not by itself encode the order of the input variables, while flight data have a strong temporal order, the invention proposes a timestamp encoding method to better represent the time-series relations.
Further, typical flight states of the unmanned aerial vehicle include climbing, cruising, hovering and diving; collecting these typical flight state data to train the improved Transformer neural network model lets the model learn the typical flight states of the vehicle and greatly broadens the applicability of the method.
Furthermore, the acquired flight data are filtered to eliminate noise signals; because the different sensors on the vehicle sample at different frequencies, the collected data are resampled so that the timestamps of the different sensor data can be aligned.
Further, the preprocessed flight data contain flight state parameters of all dimensions. Training with all-dimensional flight state parameters as the network input lets the model learn a global model, which improves prediction accuracy.
Further, overlap sampling enlarges the training data and helps avoid overfitting of the model.
Furthermore, the Adam optimization algorithm makes model convergence more stable. It is to be understood that the beneficial effects of the second aspect may refer to the relevant description in the first aspect and are not repeated here.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of an improved Transformer neural network model according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In the description of the present invention, it should be understood that the terms "comprises" and/or "comprising" indicate the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations. For example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used to describe preset ranges, etc. in embodiments of the present invention, these preset ranges should not be limited to these terms. These terms are only used to distinguish preset ranges from each other. For example, a first preset range may also be referred to as a second preset range, and similarly, a second preset range may also be referred to as a first preset range, without departing from the scope of embodiments of the present invention.
The word "if," as used herein, may be interpreted as "when" or "upon" or "in response to a determination" or "in response to a detection," depending on context. Similarly, the phrases "if determined" or "if (a stated condition or event) is detected" may be interpreted as "when determined" or "in response to a determination" or "when (a stated condition or event) is detected" or "in response to detection of (a stated condition or event)," depending on the context.
Various structural schematics according to the disclosed embodiments of the invention are shown in the drawings. The figures are not drawn to scale, wherein certain details are exaggerated and possibly omitted for clarity of presentation. The shapes of various regions, layers and their relative sizes and positional relationships shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, according to actual needs.
The invention provides a method for predicting flight state parameters of an unmanned aerial vehicle that makes full use of the vehicle's historical flight state information and avoids dynamics-model methods based on assumptions and simplifications, thereby achieving high-precision prediction. The improved Transformer neural network model has good parallel computing capability, so the method can meet real-time prediction requirements, laying a solid foundation for safe and reliable flight and for executing complex tasks in complex environments, and thereby greatly expanding the usage scenarios of unmanned aerial vehicles.
Referring to fig. 1, the method for predicting flight state parameters of an unmanned aerial vehicle according to the present invention includes the following steps:
s1, building an improved Transformer neural network model for predicting flight state parameters;
referring to fig. 2, the transform neural network model adopts a time coding method and a linear mapping layer, and unifies the input of the encoder and the decoder; the input time series data passes through a linear mapping layer, codes and ascends dimensions of the data to describe complex relations among the time series data, and then adds time stamp codes to the time series data to serve as input of the coder and the decoder. The time stamp coding can effectively describe the precedence order relation between time sequence data.
The Transformer neural network model comprises an input, an encoder, a decoder and a full-connection network layer,
the encoder consists of a multi-head self-attention layer, a residual normalization layer, a feedforward network layer and a normalization layer, and can be repeated for N times to better describe complex relationships among time sequence data.
The input of the decoder is the same as that of the encoder, the input data passes through a multi-head self-attention and residual error normalization layer, the multi-head mutual attention layer fuses the output data of the encoder and the input data of the decoder, and then the output of the decoder is obtained through the residual error normalization layer, the feedforward network layer and the residual error normalization layer. The feedforward network layer is formed by adding an activation function between two linear network layers. The decoder also repeats N times to better describe the complex relationships between the time series data. The output of the decoder passes through the full connection layer, and finally the predicted flight state parameters are obtained.
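The multi-head self-attention and mutual (cross-) attention layers described above are all built from scaled dot-product attention; a minimal single-head NumPy sketch follows (illustrative only, with random data standing in for flight state sequences):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Self-attention: Q, K, V all come from the same sequence.
    Mutual (cross-) attention in the decoder: Q comes from the decoder
    stream, while K and V come from the encoder output.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # (H_q, H_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                  # (H_q, d_v)

rng = np.random.default_rng(0)
H, d = 100, 512                      # sequence length, model dimension
x = rng.standard_normal((H, d))
out = scaled_dot_product_attention(x, x, x)   # self-attention case
print(out.shape)  # (100, 512)
```

Each output row is a convex combination of the value rows, so every time step can draw on the whole input window at once, which is what gives the model its parallelism compared with step-by-step recurrent processing.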
The time stamp coding keeps the time sequence of the time sequence data; the linear mapping layer maps the input time sequence to a higher dimension and performs dimension increasing on data, so that the complex relation among the time sequence data is better represented;
the decoder uses the same input as the encoder, avoids using autoregressive to carry out sequential prediction, reduces accumulated errors, and can improve the parallel efficiency of the algorithm and accelerate the prediction time by avoiding the sequential prediction.
After the time series data are subjected to dimension raising through a linear mapping layer, the time series data are added with time encoding data to be used as input of an encoder and a decoder.
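A minimal sketch of this input pipeline, assuming NumPy, with untrained random weights standing in for the learned linear mapping layer and a zero array standing in for the timestamp encoding:

```python
import numpy as np

def build_model_input(X, W, P):
    """Map H x m time-series data to H x d via a linear layer (weights W),
    then add the H x d timestamp encoding P to form the shared
    encoder/decoder input."""
    return X @ W + P

rng = np.random.default_rng(0)
H, m, d = 100, 19, 512                   # sequence length, raw and mapped dims
X = rng.standard_normal((H, m))          # stand-in for flight state window
W = rng.standard_normal((m, d)) * 0.01   # illustrative, untrained weights
P = np.zeros((H, d))                     # stand-in for the timestamp encoding
Z = build_model_input(X, W, P)
print(Z.shape)  # (100, 512)
```

In the actual model W would be a trained parameter and P the sinusoidal timestamp encoding; the point here is only the shape flow H × m → H × d before the encoder and decoder.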
Temporal coding is used as follows:
P(k, 2i) = sin(k · e^(-2i · ln100 / d))
P(k, 2i+1) = cos(k · e^(-(2i+1) · ln100 / d))
where k denotes the timestamp, i indexes the i-th dimension, d denotes the dimension of the data after the linear mapping layer, and d = 512.
S2, acquiring flight state parameters of a typical flight state of the unmanned aerial vehicle;
typical flight states of the unmanned aerial vehicle include climbing, cruising, hovering, diving and other flight states; the time stamp, the control signal, the speed, the acceleration and the angular speed data of the unmanned aerial vehicle are acquired by different sensors respectively and then are sent to an LOG module of a flight control system for recording, the flight state historical data recorded by the LOG module are read, then the angular speed is integrated to obtain attitude angle data, and the angular speed is differentiated to obtain angular acceleration data.
For each flight state, 25 min of flight data are collected, for 100 min of data in total.
S3, preprocessing the unmanned aerial vehicle flight state data acquired in the step S2;
s301, suppressing noise in the flight state data of the unmanned aerial vehicle by adopting filtering processing;
the filtering includes hardware filtering and software filtering, and the filtering algorithm is not limited to kalman filtering.
S302, because the sampling frequencies of the different sensors differ, the time intervals of the acquired data differ; the data obtained in step S301 therefore need to be resampled to align the timestamps of the different sensor data, generating a time series [T_1, T_2, …, T_N] of N data points.
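The resampling step can be sketched with simple linear interpolation onto a common 100 Hz clock (the sensor rates and signals below are illustrative, not taken from the patent):

```python
import numpy as np

def resample_channel(t_sensor, values, t_common):
    """Linearly interpolate one sensor channel onto a common time base."""
    return np.interp(t_common, t_sensor, values)

# Illustrative example: a 250 Hz gyro channel and a 50 Hz speed channel
# resampled onto the 100 Hz (0.01 s) clock used in the embodiment.
t_common = np.arange(0.0, 1.0, 0.01)    # 100 Hz timestamps
t_gyro   = np.arange(0.0, 1.0, 0.004)   # 250 Hz
t_speed  = np.arange(0.0, 1.0, 0.02)    # 50 Hz
gyro  = np.sin(2 * np.pi * t_gyro)      # synthetic gyro signal
speed = 5.0 + 0.1 * t_speed             # synthetic speed signal
gyro_aligned  = resample_channel(t_gyro, gyro, t_common)
speed_aligned = resample_channel(t_speed, speed, t_common)
print(gyro_aligned.shape, speed_aligned.shape)  # (100,) (100,)
```

After resampling, every channel shares the same timestamps, so each T_k can be assembled by stacking the aligned channels at index k.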
The flight state data T_k at time k is:

T_k = [τ_1k, τ_2k, τ_3k, τ_4k, v_xk, v_yk, v_zk, a_xk, a_yk, a_zk, ω_xk, ω_yk, ω_zk, α_xk, α_yk, α_zk, φ_k, θ_k, ψ_k]

where [τ_1k, τ_2k, τ_3k, τ_4k] are the control input signals of the roll, pitch, throttle and yaw channels at time k, [v_xk, v_yk, v_zk] are the velocity components in different directions at time k, [a_xk, a_yk, a_zk] the acceleration components, [ω_xk, ω_yk, ω_zk] the angular velocity components, [α_xk, α_yk, α_zk] the angular acceleration components, and [φ_k, θ_k, ψ_k] the attitude angles at time k.
S4, generating an unmanned aerial vehicle flight state data set by using the data preprocessed in the step S3, and then dividing the unmanned aerial vehicle flight state data set into a training set, a verification set and a test set;
the data set generated in S4 is obtained by performing overlap sampling on N time series data obtained by preprocessing in step S3 at a time interval of L and a length of H to obtain a data set D with a length of M, and then, according to the condition that the training set accounts for 60% of the total data, the verification set accounts for 20% of the total data, and the test set accounts for 20% of the total data, the training set, the verification set, and the test set are obtained by random sampling.
Data set D is:
D = {[T_1, T_2, …, T_H], [T_{1+L}, T_{2+L}, …, T_{H+L}], …, [T_{N-H+1}, T_{N-H+2}, …, T_N]}.
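Overlap sampling with interval L and window length H, as in the definition of D above, can be sketched as follows (sizes are illustrative):

```python
import numpy as np

def overlap_sample(series, H, L):
    """Slice a length-N series into overlapping windows of length H,
    starting every L steps: D = {[T_1..T_H], [T_{1+L}..T_{H+L}], ...}."""
    N = len(series)
    return [series[s:s + H] for s in range(0, N - H + 1, L)]

N, H, L = 1000, 100, 10        # illustrative sizes
series = np.arange(N)          # stand-in for the N flight state vectors
D = overlap_sample(series, H, L)
print(len(D), len(D[0]))  # number of windows M, window length H
```

With N = 1000, H = 100 and L = 10 this yields M = (N - H) / L + 1 = 91 windows, each sharing 90 of its 100 samples with its neighbor, which is how overlap sampling multiplies the training data.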
S5, training the Transformer neural network model obtained in step S1 with the training set obtained in step S4, determining the parameters of the Transformer neural network model with the verification set obtained in step S4, and using the data of the H time steps t, t-1, …, t-H+1 from the test set obtained in step S4 to predict the flight state parameters at time t+1 with the trained model.
S501, input and output of transform neural network model
The input of the Transformer neural network model is the data from the 1 s before time t. The preprocessed time series has an interval of 0.01 s, so 100 time steps are input; each T_k is 19-dimensional, so the input is a 100 × 19 array.
The output T_(t+1) of the Transformer neural network model is the flight state parameters at time t+1, which are 15-dimensional, so the output is a 1 × 15 vector.
The output T_(t+1) of the Transformer neural network model is specifically:

T_(t+1) = [v_x,t+1, v_y,t+1, v_z,t+1, a_x,t+1, a_y,t+1, a_z,t+1, ω_x,t+1, ω_y,t+1, ω_z,t+1, α_x,t+1, α_y,t+1, α_z,t+1, φ_t+1, θ_t+1, ψ_t+1]

where [v_x,t+1, v_y,t+1, v_z,t+1] are the velocity components in different directions at time t+1, [a_x,t+1, a_y,t+1, a_z,t+1] the acceleration components, [ω_x,t+1, ω_y,t+1, ω_z,t+1] the angular velocity components, [α_x,t+1, α_y,t+1, α_z,t+1] the angular acceleration components, and [φ_t+1, θ_t+1, ψ_t+1] the attitude angles at time t+1.
S502, parameter initialization
Parameters of the whole network are initialized, and the optimal values of the network hyper-parameters are selected through extensive experiments on the verification set: the encoder and the decoder each have 8 layers, the linear mapping layer dimension is 128, the number of attention heads is 8, and the fully connected layer dimension is 1024.
S503, model training and testing
The ReLU function is selected as the activation function in the network, and the mean squared error (MSE) is selected as the loss function for predicting the flight state parameters of the unmanned aerial vehicle:

MSE = (1/n) Σ_{k=1}^{n} (y_k − ŷ_k)²

where y_k is the true value at time k, ŷ_k is the predicted value at time k, and n is the number of training set samples.
The Adam optimizer is adopted with an adaptive learning-rate schedule to minimize the loss function, and the training set is used to complete the training of the model. The selection of the hyper-parameters is done with the validation set.
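For reference, a single Adam update step looks like the following (a generic sketch of the standard algorithm with its usual default hyper-parameters, not the patent's training code; the quadratic toy objective stands in for the MSE loss):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponentially averaged first and second gradient
    moments with bias correction, then an adaptive per-parameter step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = ||theta||^2 as a toy stand-in for the MSE loss.
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 5001):
    grad = 2 * theta            # gradient of the toy objective
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.01)
print(theta)  # values near zero
```

The per-parameter normalization by sqrt(v_hat) is what makes the step sizes adaptive and, in practice, the convergence more stable than plain gradient descent.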
S504, estimation accuracy evaluation
The root mean square error (RMSE) is used as the prediction accuracy evaluation index, specifically:

RMSE = sqrt((1/n) Σ_{k=1}^{n} (y_k − ŷ_k)²)
On the test set, the trained Transformer neural network model is used to predict the flight state parameters at time t+1 from the data at and before time t.

The predicted flight state parameters comprise the velocities in three directions, the accelerations in three directions, the angular velocities in three directions, the angular accelerations in three directions, and the three attitude angles, 15 dimensions in total.
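The RMSE evaluation can be sketched as follows; the helper function is an illustrative assumption, not code from the patent.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error over n samples, per the formula above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

err = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt(4/3), about 1.155
```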
In another embodiment of the present invention, a system for predicting flight state parameters of an unmanned aerial vehicle is provided, where the system can be used to implement the method for predicting flight state parameters of an unmanned aerial vehicle.
The system comprises: a building module, configured to build an improved Transformer neural network model;
the parameter module is used for acquiring flight state data of a typical flight state of the unmanned aerial vehicle;
the preprocessing module is used for preprocessing the flight state data of the unmanned aerial vehicle acquired by the parameter module;
the division module is used for generating an unmanned aerial vehicle flight state data set by using data preprocessed by the preprocessing module, and then dividing the unmanned aerial vehicle flight state data set into a training set, a verification set and a test set;
and the prediction module is used for training the Transformer neural network model built by the building module with the training set obtained by the division module, selecting the parameters of the Transformer neural network model with the verification set obtained by the division module, inputting the time-series data at times t, t-1, …, t-H-1 from the test set obtained by the division module into the trained Transformer neural network model, and predicting the flight state parameters at time t+1.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Since the outputs of the dynamics model concern only the accelerations in three directions and the angular accelerations in three directions, the prediction root mean square errors of the different prediction models are compared in Table 1:
TABLE 1 predicted RMS error comparison of different prediction models
[Table 1 appears as an image in the original document.]
Table 1 compares the root mean square errors in the three directional accelerations and the three directional angular accelerations of the dynamics model, the dynamics-model–convolutional-neural-network hybrid model, the LSTM model, and the present invention; as seen from Table 1, the error of the present invention is the smallest.
Meanwhile, the time spent by predicting once by different models is compared, as shown in table 2:
TABLE 2 comparison of prediction times for different prediction models
[Table 2 appears as an image in the original document.]
Table 2 compares the prediction times of the dynamics model, the dynamics-model–convolutional-neural-network hybrid model, the LSTM model, and the present invention; as seen from the table, the prediction time of the present invention is 9 ms, which meets the real-time requirement of the unmanned aerial vehicle (the control frequency of a typical controller is 100 Hz).
In conclusion, the method and the system for predicting the flight state parameters of the unmanned aerial vehicle can realize high-precision real-time prediction of flight state data.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention should not be limited thereby, and any modification made on the basis of the technical idea proposed by the present invention falls within the protection scope of the claims of the present invention.

Claims (10)

1. An unmanned aerial vehicle flight state parameter prediction method is characterized by comprising the following steps:
s1, building an improved Transformer neural network model;
s2, acquiring flight state data of a typical flight state of the unmanned aerial vehicle;
s3, preprocessing the flight state data of the unmanned aerial vehicle acquired in the step S2;
s4, generating an unmanned aerial vehicle flight state data set by using the data preprocessed in the step S3, and then dividing the unmanned aerial vehicle flight state data set into a training set, a verification set and a test set;
and S5, training the Transformer neural network model obtained in step S1 with the training set obtained in step S4, selecting the parameters of the Transformer neural network model with the verification set obtained in step S4, inputting the data of the H time series at times t, t-1, …, t-H-1 in the test set obtained in step S4 into the trained Transformer neural network model, and predicting the flight state parameters at time t+1.
2. The method of claim 1, wherein in step S1, the Transformer neural network model comprises an input, an encoder, a decoder and a fully-connected network layer; the time-series data is passed through the linear mapping layer and then combined with the timestamp coding as the input of the encoder and the decoder; the encoder and the decoder are each stacked N times; the output of the encoder serves as the input of the cross-attention mechanism layer in the decoder; and the output of the decoder is passed through the fully-connected network layer to output the prediction data.
3. The method for predicting the flight state parameters of an unmanned aerial vehicle according to claim 2, wherein the inputs of the encoder and the decoder are specifically:

the input H×m-dimensional time-series data is mapped to H×d dimensions, where H is the length of the time series, m and d are dimensions of the flight state parameters, and d ≥ m.
4. The unmanned aerial vehicle flight state parameter prediction method according to claim 2, wherein the timestamp coding specifically comprises:

P(k, 2i) = sin(k · e^(−2i·ln100/d))

P(k, 2i+1) = cos(k · e^(−(2i+1)·ln100/d))

where P(k, 2i) is the time coding of the even dimensions of the flight state data at time k, P(k, 2i+1) is the time coding of the odd dimensions of the flight state data at time k, i denotes the i-th dimension of the data, and the data after the linear mapping layer has d dimensions.
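A NumPy sketch of this timestamp coding (an assumed implementation; note that the formulas use a base of 100 via ln 100 rather than the 10000 of the original Transformer positional encoding):

```python
import numpy as np

def timestamp_encoding(k, d):
    """Time coding of the d-dimensional flight state sample at time step k.
    Even dimensions 2i get sin(k * e^(-2i*ln100/d)); odd dimensions 2i+1
    get cos(k * e^(-(2i+1)*ln100/d)). d is assumed even."""
    P = np.zeros(d)
    even = np.arange(0, d, 2)  # dimension indices 2i
    P[0::2] = np.sin(k * np.exp(-even * np.log(100) / d))
    P[1::2] = np.cos(k * np.exp(-(even + 1) * np.log(100) / d))
    return P

pe = timestamp_encoding(k=0, d=8)  # at k = 0: all sin terms are 0, all cos terms are 1
```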
5. The method of claim 1, wherein in step S2, the typical flight status of the drone includes climb, cruise, hover and dive, and the flight status parameters include timestamp, control signal, speed, acceleration and angular velocity data of the drone.
6. The unmanned aerial vehicle flight state parameter prediction method according to claim 1, wherein step S3 specifically is:
s301, suppressing noise in the unmanned aerial vehicle flight state data acquired in the step S2 by adopting filtering processing;
s302, resampling the data obtained in step S301, aligning the timestamps of different sensor data, and generating a time series [T_1, T_2, …, T_N] of N data in total.
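Step S302 (resampling and timestamp alignment) can be sketched by interpolating each sensor channel onto a common timestamp grid; the sensor rates and values below are invented for illustration and are not from the patent.

```python
import numpy as np

t_imu  = np.array([0.00, 0.01, 0.02, 0.03, 0.04])  # 100 Hz IMU timestamps (s)
imu    = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # e.g. one acceleration channel
t_ctrl = np.array([0.00, 0.02, 0.04])              # 50 Hz control signal timestamps (s)
ctrl   = np.array([0.5, 0.7, 0.9])

t_common = np.linspace(0.0, 0.04, 5)               # common grid [T1, ..., TN]
imu_rs  = np.interp(t_common, t_imu, imu)          # resample each channel onto the grid
ctrl_rs = np.interp(t_common, t_ctrl, ctrl)
aligned = np.column_stack([t_common, ctrl_rs, imu_rs])  # one aligned row per sample T_k
```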
7. The unmanned aerial vehicle flight state parameter prediction method according to claim 6, wherein in step S302, the flight state data T_k at time k is:

T_k = [τ_{1k}, τ_{2k}, τ_{3k}, τ_{4k}, v_{xk}, v_{yk}, v_{zk}, v̇_{xk}, v̇_{yk}, v̇_{zk}, ω_{xk}, ω_{yk}, ω_{zk}, ω̇_{xk}, ω̇_{yk}, ω̇_{zk}, φ_k, θ_k, ψ_k]

where [τ_{1k}, τ_{2k}, τ_{3k}, τ_{4k}] are the control input signals of the roll, pitch, throttle and yaw channels at time k respectively, [v_{xk}, v_{yk}, v_{zk}] are the velocity components in different directions at time k, [v̇_{xk}, v̇_{yk}, v̇_{zk}] are the acceleration components in different directions at time k, [ω_{xk}, ω_{yk}, ω_{zk}] are the angular velocity components in different directions at time k, [ω̇_{xk}, ω̇_{yk}, ω̇_{zk}] are the angular acceleration components in different directions at time k, and [φ_k, θ_k, ψ_k] are the attitude angles at time k.
8. The unmanned aerial vehicle flight state parameter prediction method according to claim 1, wherein in step S4, the N time series data obtained in step S3 are subjected to overlapping sampling of time interval L and length H to obtain a data set D with length M, and then the data set D is divided into a training set accounting for 60% of the total data, a verification set accounting for 20% of the total data, and a test set accounting for 20% of the total data.
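The overlapping sampling of claim 8 can be sketched as follows; the function name and toy series are illustrative assumptions.

```python
import numpy as np

def overlapping_windows(series, H, L):
    """Cut a length-N series into windows of length H whose start indices
    advance by the interval L; windows overlap whenever L < H."""
    N = len(series)
    starts = range(0, N - H + 1, L)
    return np.stack([series[s:s + H] for s in starts])

series = np.arange(10)                     # N = 10 toy samples
D = overlapping_windows(series, H=4, L=2)  # starts 0, 2, 4, 6 -> M = 4 windows
```

The resulting data set D of M windows would then be split 60/20/20 into training, verification and test sets along the window axis.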
9. The unmanned aerial vehicle flight state parameter prediction method of claim 1, wherein in step S5, an Adam optimization algorithm is used to minimize the LOSS function and train the Transformer neural network model, the LOSS function being specifically:

LOSS = (1/n) Σ_{k=1}^{n} (y_k − ŷ_k)²

where y_k is the true value at time k, ŷ_k is the predicted value at time k, and n is the number of training set samples.
10. An unmanned aerial vehicle flight status parameter prediction system, characterized in that includes:
the building module is used for building an improved Transformer neural network model;
the parameter module is used for acquiring flight state data of a typical flight state of the unmanned aerial vehicle;
the preprocessing module is used for preprocessing the flight state data of the unmanned aerial vehicle acquired by the parameter module;
the division module is used for generating an unmanned aerial vehicle flight state data set by using data preprocessed by the preprocessing module, and then dividing the unmanned aerial vehicle flight state data set into a training set, a verification set and a test set;
and the prediction module is used for training the Transformer neural network model built by the building module with the training set obtained by the division module, selecting the parameters of the Transformer neural network model with the verification set obtained by the division module, inputting the time-series data at times t, t-1, …, t-H-1 from the test set obtained by the division module into the trained Transformer neural network model, and predicting the flight state parameters at time t+1.
CN202211274667.1A 2022-10-18 2022-10-18 Unmanned aerial vehicle flight state parameter prediction method and system Active CN115469679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211274667.1A CN115469679B (en) 2022-10-18 2022-10-18 Unmanned aerial vehicle flight state parameter prediction method and system


Publications (2)

Publication Number Publication Date
CN115469679A true CN115469679A (en) 2022-12-13
CN115469679B CN115469679B (en) 2024-09-06

Family

ID=84336977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211274667.1A Active CN115469679B (en) 2022-10-18 2022-10-18 Unmanned aerial vehicle flight state parameter prediction method and system

Country Status (1)

Country Link
CN (1) CN115469679B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019192172A1 (en) * 2018-04-04 2019-10-10 歌尔股份有限公司 Attitude prediction method and apparatus, and electronic device
US20190354644A1 (en) * 2018-05-18 2019-11-21 Honeywell International Inc. Apparatuses and methods for detecting anomalous aircraft behavior using machine learning applications
CN109034376A (en) * 2018-07-18 2018-12-18 东北大学 A kind of unmanned plane during flying trend prediction method and system based on LSTM
CN113190036A (en) * 2021-04-02 2021-07-30 华南理工大学 Unmanned aerial vehicle flight trajectory prediction method based on LSTM neural network
CN114757086A (en) * 2021-12-17 2022-07-15 北京航空航天大学 Multi-rotor unmanned aerial vehicle real-time remaining service life prediction method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhao Yifei; Yang Mingze: "Motion state recognition of unmanned aerial vehicles based on motion capture", Science Technology and Engineering, no. 27, 28 September 2018 (2018-09-28) *
Han Jianfu; Du Changping; Ye Zhixian; Song Guanghua; Zheng Yao: "Aerodynamic parameter identification of a flapping-wing aircraft based on dual BP neural networks", Journal of Computer Applications, no. 2, 30 December 2019 (2019-12-30) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115758891A (en) * 2022-11-22 2023-03-07 四川大学 Wing profile flow field prediction method based on Transformer decoder network
CN118133691A (en) * 2024-05-07 2024-06-04 中国民航大学 Flight parameter prediction model construction method, electronic equipment and storage medium
CN118133691B (en) * 2024-05-07 2024-07-12 中国民航大学 Flight parameter prediction model construction method, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115469679B (en) 2024-09-06

Similar Documents

Publication Publication Date Title
CN115469679B (en) Unmanned aerial vehicle flight state parameter prediction method and system
CN107479368B (en) Method and system for training unmanned aerial vehicle control model based on artificial intelligence
Cortés et al. Robust rendezvous for mobile autonomous agents via proximity graphs in arbitrary dimensions
Han et al. Nonlinear modeling for a water-jet propulsion USV: An experimental study
CN102540882A (en) Aircraft track inclination angle control method based on minimum parameter studying method
Bai et al. Multi-innovation gradient iterative locally weighted learning identification for a nonlinear ship maneuvering system
CN112099506A (en) Tracking control method and system for under-actuated unmanned ship time-varying formation
CN111694913A (en) Ship AIS (automatic identification System) track clustering method and device based on convolution self-encoder
CN115329459A (en) Underwater vehicle modeling method and system based on digital twinning
CN112835368A (en) Multi-unmanned-boat collaborative formation control method and system
CN104808662B (en) A kind of control method for suppressing ship course disturbance based on data-driven
CN109556609A (en) A kind of collision prevention method and device based on artificial intelligence
Asignacion et al. Frequency-based wind gust estimation for quadrotors using a nonlinear disturbance observer
My et al. An Artificial Neural Networks (ANN) Approach for 3 Degrees of Freedom Motion Controlling
CN112432643B (en) Driving data generation method and device, electronic equipment and storage medium
Zhang et al. Heterogeneous cooperative trajectory tracking control between surface and underwater unmanned vehicles
CN112733971A (en) Pose determination method, device and equipment of scanning equipment and storage medium
CN115542746B (en) Energy control reentry guidance method and device for hypersonic aircraft
CN114936669B (en) Mixed ship rolling prediction method based on data fusion
CN115826583A (en) Automatic driving vehicle formation method based on point cloud map
CN116009583A (en) Pure vision-based distributed unmanned aerial vehicle cooperative motion control method and device
CN112161626B (en) High-flyability route planning method based on route tracking mapping network
CN110514199B (en) Loop detection method and device of SLAM system
CN115577511B (en) Short-term track prediction method, device and system based on unmanned aerial vehicle motion state
CN116358564B (en) Unmanned aerial vehicle bee colony centroid motion state tracking method, system, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant