CN115469679A - Unmanned aerial vehicle flight state parameter prediction method and system - Google Patents
- Publication number: CN115469679A (application CN202211274667.1A)
- Authority
- CN
- China
- Prior art keywords: data, flight state, unmanned aerial vehicle, time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G (PHYSICS) › G05D (Systems for controlling or regulating non-electric variables) › G05D1/0825: Control of attitude, i.e. control of roll, pitch, or yaw, specially adapted for aircraft, to ensure stability using mathematical models
- G (PHYSICS) › G05D (Systems for controlling or regulating non-electric variables) › G05D1/106: Simultaneous control of position or course in three dimensions specially adapted for aircraft; change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
Abstract
Description
Technical field
The invention belongs to the technical field of unmanned aerial vehicles (UAVs), and in particular relates to a method and system for predicting UAV flight state parameters.
Background art
At present, UAVs are widely used in many industries, such as aerial photography, surveying and mapping, inspection, and air patrol. Different industries require UAVs to complete different tasks, but the most fundamental requirement is that the UAV can be precisely controlled to complete those tasks. To control a UAV precisely, its controller needs real-time, accurate state parameters, such as the UAV's velocity, acceleration, attitude angles, angular velocity, and angular acceleration.
Traditional methods for estimating and predicting flight state parameters are based on a UAV dynamics model. Under certain assumptions and simplifications, the dynamics model computes the UAV's acceleration and angular acceleration from Newton's second law and the Euler equations, and then obtains the other flight state parameters, such as velocity, angular velocity, and attitude angles, by integration. Because dynamic modeling makes assumptions and simplifications about the actual flight and generally ignores unsteady parameters and uncertain states such as aerodynamic characteristics, the acceleration and angular acceleration obtained from the model are not accurate enough and carry a certain error. The other flight state parameters are computed by integrating the acceleration and angular acceleration, and integration accumulates this error, so flight state parameters obtained from a dynamics model are not sufficiently accurate.
To improve the accuracy of flight state parameter estimation, a joint model combining a dynamics model and a convolutional neural network has been used, in which the dynamics model predicts the deterministic part of the flight state and the convolutional neural network predicts the uncertain part. Compared with purely dynamic modeling, this method greatly improves prediction accuracy. However, during flight the flow field near the UAV and the UAV's state change with time, so the flight state at historical moments affects the current flight state; neither the traditional dynamics model nor this joint dynamics-CNN model fully considers this time-varying characteristic, and neither makes full use of the UAV's historical information to predict the current flight state parameters.
To make full use of the UAV's historical information for predicting the current flight state parameters, LSTM methods have been used alone or combined with traditional dynamic modeling to estimate the flight state parameters. However, the LSTM suffers from an information bottleneck: when a long time segment of flight state data is input, the earlier information may be lost, so the method can only use shorter segments, which limits prediction accuracy. In addition, deep LSTM networks suffer from vanishing or exploding gradients, converge poorly, are computationally complex, parallelize poorly, and take a long time to compute, making real-time prediction difficult.
Accurate prediction of UAV flight state parameters is the basis of precise UAV control, which in turn ensures safe and reliable flight and the completion of tasks in more complex scenarios, so real-time, high-precision prediction of the flight state parameters is crucial for UAVs.
Summary of the invention
The technical problem to be solved by the present invention is to address the above deficiencies of the prior art by providing a method and system for predicting UAV flight state parameters, so as to solve the technical problems that UAV flight state parameter prediction is insufficiently accurate and poor in real-time performance, which in turn affects the control accuracy and control frequency of the UAV.
The present invention adopts the following technical solution:
A method for predicting UAV flight state parameters, comprising the following steps:
S1. Building an improved Transformer neural network model;
S2. Acquiring flight state data for the UAV's typical flight states;
S3. Preprocessing the UAV flight state data acquired in step S2;
S4. Generating a UAV flight state dataset from the data preprocessed in step S3, and then dividing the dataset into a training set, a validation set, and a test set;
S5. Training the Transformer neural network model obtained in step S1 with the training set obtained in step S4, obtaining the model parameters with the validation set obtained in step S4, and feeding the data at times t, t-1, ..., t-H+1 (H time steps in total) from the test set obtained in step S4 into the trained Transformer neural network model to predict the flight state parameters at time t+1.
Specifically, in step S1, the Transformer neural network model comprises the input, an encoder, a decoder, and a fully connected network layer. The time series data pass through a linear mapping layer and are summed with a timestamp encoding to form the input of both the encoder and the decoder; the encoder and the decoder are each stacked N times. The encoder's output serves as the input of the cross-attention layer in the decoder, and the decoder's output passes through the fully connected network layer to produce the predicted data.
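As a rough illustration of the data flow just described (linear mapping, shared encoder/decoder input, cross-attention between encoder output and decoder, fully connected output head), the following is a minimal single-head numpy sketch. All dimensions, the random weights, and the omission of multi-head attention, residual/normalization layers, and the timestamp encoding are simplifying assumptions for illustration, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
H, m, d = 32, 17, 64                  # window length, state dimension, model dimension (illustrative)

x = rng.normal(size=(H, m))           # one window of flight-state time series
W_in = 0.1 * rng.normal(size=(m, d))  # linear mapping layer (learned in practice)
z = x @ W_in                          # H x m -> H x d; timestamp encoding would be added here

def attention(q, k, v):
    """Scaled dot-product attention, single head, no masking."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

enc_out = attention(z, z, z)                # encoder self-attention
dec_out = attention(z, enc_out, enc_out)    # decoder cross-attention: encoder output as keys/values

W_out = 0.1 * rng.normal(size=(d, m))
pred = dec_out[-1] @ W_out                  # fully connected head -> predicted state for the next step
print(pred.shape)                           # (17,)
```

Note that both attention calls consume the same mapped input z, mirroring the unified encoder/decoder input described above.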
Further, the input of the encoder and the decoder is specifically:
the input H×m-dimensional time series data are mapped to H×d dimensions, where H is the time series length, m and d are the dimensions of the flight state parameters, and d ≥ m.
Further, the timestamp encoding is specifically:
P(k, 2i) = sin(k · e^(-2i·ln(100)/d))
P(k, 2i+1) = cos(k · e^(-(2i+1)·ln(100)/d))
where P(k, 2i) is the time encoding of the even dimensions of the flight state data at time k, P(k, 2i+1) is the time encoding of the odd dimensions of the flight state data at time k, i denotes the i-th dimension, and the data have d dimensions in total after the linear mapping layer.
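The two formulas can be implemented directly; the sketch below assumes d is even and k is the raw timestamp value, which are conventions of this illustration rather than details stated in the text:

```python
import numpy as np

def timestamp_encoding(k, d):
    """Timestamp encoding per the formulas above: sin on even dimensions,
    cos on odd dimensions, base 100 instead of the usual 10000 (d assumed even)."""
    p = np.zeros(d)
    i = np.arange(d // 2)
    p[0::2] = np.sin(k * np.exp(-2 * i * np.log(100) / d))
    p[1::2] = np.cos(k * np.exp(-(2 * i + 1) * np.log(100) / d))
    return p

print(timestamp_encoding(0, 8))  # k=0: sin terms are 0, cos terms are 1
```

Because each dimension oscillates at a different frequency, two nearby timestamps receive distinct but smoothly related encodings, which is what lets the model recover temporal order.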
Specifically, in step S2, the UAV's typical flight states include climb, cruise, hover, and dive, and the flight state parameters include the UAV's timestamp, control signals, velocity, acceleration, and angular velocity data.
Specifically, step S3 is as follows:
S301. Suppressing the noise in the UAV flight state data acquired in step S2 by filtering;
S302. Resampling the data obtained in step S301 and aligning the timestamps of the different sensor data to generate a time series [T1, T2, ..., TN] of N samples in total.
Further, in step S302, the flight state data Tk at time k is a vector consisting of: [τ1k, τ2k, τ3k, τ4k], the control input signals of the roll, pitch, throttle, and yaw channels; [vxk, vyk, vzk], the velocity components in the different directions; the acceleration components in the different directions; [ωxk, ωyk, ωzk], the angular velocity components in the different directions; the angular acceleration components in the different directions; and the attitude angles, all at time k.
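The resampling and timestamp alignment of step S302 can be sketched with linear interpolation onto a common time grid. The sensor rates, the signal shapes, and the choice of `np.interp` are illustrative assumptions; only two of the many state channels are shown:

```python
import numpy as np

# Hypothetical raw streams sampled at different rates (times in seconds)
t_imu = np.arange(0.0, 1.0, 0.005)        # e.g. 200 Hz gyro stream
t_ctrl = np.arange(0.0, 1.0, 0.02)        # e.g. 50 Hz control-input stream
gyro_x = np.sin(2 * np.pi * t_imu)        # angular rate channel (toy signal)
tau1 = np.cos(2 * np.pi * t_ctrl)         # roll control channel (toy signal)

# Resample everything onto a common 100 Hz grid to align timestamps
t_grid = np.arange(0.0, 1.0, 0.01)
gyro_x_rs = np.interp(t_grid, t_imu, gyro_x)
tau1_rs = np.interp(t_grid, t_ctrl, tau1)

# Rows of T are the aligned samples T_k (here only 2 of the full set of channels)
T = np.column_stack([tau1_rs, gyro_x_rs])
print(T.shape)
```

In practice each channel of Tk (control inputs, velocities, accelerations, angular rates, attitude angles) would be interpolated onto the same grid in this way.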
Specifically, in step S4, the N time series samples obtained in step S3 are sampled with overlapping windows of length H at a time interval L to obtain a dataset D of length M, which is then divided so that the training set accounts for 60% of the total data, the validation set for 20%, and the test set for 20%.
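The overlapping sampling and the 60/20/20 split can be sketched as follows; the concrete values of H, L, and the toy series are illustrative:

```python
import numpy as np

def overlapping_windows(series, H, L):
    """Cut a time series (N x m array) into overlapping windows of length H,
    starting a new window every L steps."""
    N = len(series)
    return np.stack([series[s:s + H] for s in range(0, N - H + 1, L)])

data = np.arange(100).reshape(100, 1)     # toy series: N=100 samples, m=1
D = overlapping_windows(data, H=10, L=5)  # dataset of M windows
M = len(D)

# 60/20/20 split over randomly permuted window indices
idx = np.random.default_rng(0).permutation(M)
train, val, test = np.split(idx, [int(0.6 * M), int(0.8 * M)])
print(D.shape, len(train), len(val), len(test))
```

With stride L < H adjacent windows share samples, which is what expands the training data relative to non-overlapping segmentation.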
Specifically, in step S5, the Transformer neural network model is trained with the Adam optimization algorithm by minimizing the loss function LOSS, where yk is the true value at time k, ŷk is the predicted value at time k, and n is the number of samples in the training set.
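The explicit formula of LOSS is not legible in the source; a common choice consistent with the description (true value yk, predicted value ŷk, n training samples) is the mean squared error, sketched here as an assumption rather than the patent's definition:

```python
import numpy as np

def loss_mse(y_true, y_pred):
    """Assumed form: LOSS = (1/n) * sum_k (y_k - yhat_k)^2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

print(loss_mse([1.0, 2.0], [1.0, 4.0]))  # -> 2.0
```

During training this scalar would be minimized with an Adam optimizer over mini-batches of windows from the training set.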
In a second aspect, an embodiment of the present invention provides a UAV flight state parameter prediction system, comprising:
a building module, for building an improved Transformer neural network model;
a parameter module, for acquiring flight state parameters of the UAV's typical flight states;
a preprocessing module, for preprocessing the UAV flight state data acquired by the parameter module;
a division module, for generating a UAV flight state dataset from the data preprocessed by the preprocessing module, and then dividing the dataset into a training set, a validation set, and a test set;
a prediction module, for training the Transformer neural network model obtained by the building module with the training set obtained by the division module, obtaining the model parameters with the validation set obtained by the division module, and feeding the data at times t, t-1, ..., t-H+1 (H time steps in total) from the test set obtained by the division module into the trained Transformer neural network model to predict the flight state parameters at time t+1.
Compared with the prior art, the present invention has at least the following beneficial effects:
In the method for predicting UAV flight state parameters of the present invention, an improved Transformer neural network model is trained with flight data of the UAV's typical flight states, so that the UAV's flight state parameters can be predicted in real time and with high precision. This lays a solid foundation for precise UAV control, ensures safe and reliable flight, and allows the UAV to perform different tasks in different industries, expanding its usage scenarios.
Further, the improved Transformer neural network model introduces timestamp encoding and unifies the encoder and decoder inputs; compared with the original model, it has better parallel performance and higher prediction accuracy, so it can realize real-time, high-precision prediction of UAV state parameters.
Further, the model maps the input H×m-dimensional time series data to H×d dimensions, i.e., maps the input data to a higher dimension, which is more conducive to learning a representation of the complex relationships among the UAV state parameters.
Further, since the attention mechanism in the Transformer neural network model does not consider the order of the input variables, while flight data have a strong temporal order, the present invention proposes a timestamp encoding method to better represent this temporal relationship.
Further, the UAV's typical flight states include climb, cruise, hover, and dive; the present invention collects data for these typical flight states to train the improved Transformer neural network model, so that the model learns the UAV's typical flight behavior, which greatly expands the scope of application of the present invention.
Further, filtering the collected flight data removes the noise present in the data; since the different sensors on the UAV sample at different frequencies, resampling the collected data aligns the timestamps of the different sensor data.
Further, the preprocessed flight data include the flight state parameters of all dimensions. Using all dimensions of the flight state parameters as the network input to train the model allows the model to learn a global model, which in turn improves prediction accuracy.
Further, overlapping sampling expands the training data and helps avoid overfitting of the model.
Further, using the Adam optimization algorithm makes model convergence more stable. It can be understood that, for the beneficial effects of the second aspect above, reference may be made to the relevant description of the first aspect, which is not repeated here.
In summary, the present invention can predict the flight state parameters of a UAV in real time and with high precision, laying a solid foundation for precise UAV control and ensuring safe and reliable flight, while allowing the UAV to perform different tasks in different industries and expanding its usage scenarios.
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Description of the drawings
Fig. 1 is a schematic flow chart of the present invention;
Fig. 2 is a schematic diagram of the improved Transformer neural network model of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In the description of the present invention, it should be understood that the terms "comprising" and "including" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should further be understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items and includes these combinations; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present invention to describe preset ranges and the like, these preset ranges should not be limited by these terms; the terms are only used to distinguish the preset ranges from one another. For example, without departing from the scope of the embodiments of the present invention, a first preset range may also be called a second preset range, and similarly, a second preset range may also be called a first preset range.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
The accompanying drawings show various schematic structural diagrams according to the disclosed embodiments of the present invention. The figures are not drawn to scale; some details are enlarged for clarity and some details may be omitted. The shapes of the various regions and layers shown in the figures, and their relative sizes and positional relationships, are merely exemplary and may deviate in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers with different shapes, sizes, and relative positions according to actual needs.
The present invention provides a method for predicting UAV flight state parameters that makes full use of the UAV's historical flight state information and avoids dynamics-model methods based on assumptions and simplifications, thereby achieving high-precision prediction. The improved Transformer neural network model has good parallel computing capability, so the method can meet real-time prediction requirements, laying a solid foundation for safe, reliable flight and for performing complex tasks in complex environments, thereby greatly expanding the UAV's usage scenarios.
Referring to Fig. 1, the method for predicting UAV flight state parameters of the present invention comprises the following steps:
S1. Building an improved Transformer neural network model for predicting flight state parameters;
Referring to Fig. 2, the Transformer neural network model adopts a time encoding method and a linear mapping layer, and unifies the inputs of the encoder and the decoder. The input time series data pass through the linear mapping layer, which encodes the data and raises their dimension to describe the complex relationships among the time series data; the timestamp encoding is then added to form the input of both the encoder and the decoder. The timestamp encoding effectively describes the sequential relationship among the time series data.
The Transformer neural network model comprises the input, an encoder, a decoder, and a fully connected network layer.
The encoder consists of a multi-head self-attention layer, a residual-normalization layer, a feed-forward network layer, and a normalization layer; the encoder can be repeated N times to better describe the complex relationships among the time series data.
The decoder's input is the same as the encoder's. The input data pass through a multi-head self-attention layer and a residual-normalization layer; a multi-head cross-attention layer fuses the encoder's output with the decoder's input, after which a residual-normalization layer, a feed-forward network layer, and another residual-normalization layer produce the decoder's output. The feed-forward network layer consists of two linear layers with an activation function between them. The decoder is also repeated N times to better describe the complex relationships among the time series data. The decoder's output passes through the fully connected layer to finally obtain the predicted flight state parameters.
The timestamp encoding preserves the chronological order of the time series data; the linear mapping layer maps the input time series to a higher dimension, raising the dimension of the data so as to better represent the complex relationships among the time series data.
The decoder uses the same input as the encoder, which avoids autoregressive sequential prediction and reduces the accumulated error; avoiding sequential prediction also improves the parallel efficiency of the algorithm and shortens the prediction time.
After the dimension of the time series data is raised by the linear mapping layer, the result is added to the time-encoded data to serve as the input of the encoder and the decoder. The encoder and decoder of the present invention both use the encoder and decoder of the original Transformer neural network model, and the decoder's output passes through the fully connected layer to obtain the final output.
The time encoding is used as follows:
P(k, 2i) = sin(k · e^(-2i·ln(100)/d))
P(k, 2i+1) = cos(k · e^(-(2i+1)·ln(100)/d))
where k is the timestamp, i denotes the i-th dimension, and d is the dimension of the data after the linear mapping layer, with d = 512.
S2. Acquiring flight state parameters for the UAV's typical flight states;
The UAV's typical flight states include climb, cruise, hover, dive, and other flight states. The UAV's timestamp, control signal, velocity, acceleration, and angular velocity data are collected by different sensors and sent to the LOG module of the flight control system for recording. The flight state history recorded by the LOG module is read; the angular velocity is then integrated to obtain the attitude angle data and differentiated to obtain the angular acceleration data.
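The last step, integrating angular velocity to obtain attitude angles and differentiating it to obtain angular accelerations, can be sketched numerically; the constant roll rate and the sampling period are illustrative assumptions:

```python
import numpy as np

dt = 0.01                                 # assumed log sampling period, s
t = np.arange(0.0, 1.0, dt)
omega = np.full_like(t, 0.5)              # toy signal: constant roll rate, rad/s

# Attitude angle: trapezoidal integration of angular velocity
phi = np.concatenate([[0.0], np.cumsum(0.5 * (omega[1:] + omega[:-1]) * dt)])

# Angular acceleration: numerical differentiation of angular velocity
alpha = np.gradient(omega, dt)

print(phi[-1], np.abs(alpha).max())
```

For a constant rate the integrated attitude angle grows linearly and the differentiated angular acceleration is zero, which makes the toy case easy to check by hand.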
For each flight state, 25 minutes of flight data are collected, giving 100 minutes of data in total.
S3. Preprocess the UAV flight state data acquired in step S2;
S301. Apply filtering to suppress noise in the UAV flight state data;
Filtering includes hardware filtering and software filtering; the filtering algorithm is not limited to Kalman filtering.
S302. Since different sensors have different sampling frequencies, the time intervals of the collected data also differ. The data obtained in step S301 must therefore be resampled to align the timestamps of the different sensor streams, generating a time series [T1, T2, …, TN] of N samples.
The flight state data Tk at time k is:

Tk = [τ1k, τ2k, τ3k, τ4k, vxk, vyk, vzk, axk, ayk, azk, ωxk, ωyk, ωzk, αxk, αyk, αzk, φk, θk, ψk]

where [τ1k, τ2k, τ3k, τ4k] are the control input signals of the roll, pitch, throttle, and yaw channels at time k; [vxk, vyk, vzk] are the velocity components in different directions at time k; [axk, ayk, azk] are the acceleration components in different directions at time k; [ωxk, ωyk, ωzk] are the angular velocity components in different directions at time k; [αxk, αyk, αzk] are the angular acceleration components in different directions at time k; and [φk, θk, ψk] is the attitude angle at time k.
S4. Generate a UAV flight state data set from the data preprocessed in step S3, and then divide the data set into a training set, a validation set, and a test set;
The data set generated in S4 is obtained by applying overlapping sampling, with time interval L and window length H, to the N time-series samples from step S3, yielding a data set D of length M. The training set (60% of the data), validation set (20%), and test set (20%) are then drawn by random sampling.
Data set D is:

D = {[T1, T2, …, TH], [T1+L, T2+L, …, TH+L], …, [TN−H+1, TN−H+2, …, TN]}.
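The overlapping-window construction and the 60/20/20 random split might look as follows (NumPy assumed; the function names are illustrative):

```python
import numpy as np

def make_dataset(series, H, L):
    """Overlapping windows of length H taken at stride L over an (N, dim)
    series, i.e. D = {[T1..TH], [T(1+L)..T(H+L)], ...}."""
    series = np.asarray(series)
    starts = range(0, len(series) - H + 1, L)
    return np.stack([series[s:s + H] for s in starts])

def split_dataset(D, seed=0):
    """Randomly split windows into 60% training, 20% validation, 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(D))
    n_tr, n_va = int(0.6 * len(D)), int(0.2 * len(D))
    return D[idx[:n_tr]], D[idx[n_tr:n_tr + n_va]], D[idx[n_tr + n_va:]]
```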
S5. Train the Transformer neural network model from step S1 on the training set from step S4, and determine the parameters of the Transformer neural network model with the validation set from step S4; then, using the H time-series samples at times t, t−1, …, t−H−1 from the test set, predict the flight state parameters at time t+1 with the trained model.
S501. Input and output of the Transformer neural network model
The input of the Transformer neural network model is the data from the 1 s before time t. Since the preprocessed time series has an interval of 0.01 s, this gives 100 time-series samples; each sample Tk has 19 dimensions, so the input is a 100×19-dimensional matrix.
The output Tt+1 of the Transformer neural network model is the flight state parameters at time t+1, with 15 dimensions in total; the output format is a 1×15-dimensional vector.
The output Tt+1 of the Transformer neural network model is specifically:

Tt+1 = [vx,t+1, vy,t+1, vz,t+1, ax,t+1, ay,t+1, az,t+1, ωx,t+1, ωy,t+1, ωz,t+1, αx,t+1, αy,t+1, αz,t+1, φt+1, θt+1, ψt+1]

where [vx,t+1, vy,t+1, vz,t+1] are the velocity components in different directions at time t+1; [ax,t+1, ay,t+1, az,t+1] are the acceleration components in different directions at time t+1; [ωx,t+1, ωy,t+1, ωz,t+1] are the angular velocity components in different directions at time t+1; [αx,t+1, αy,t+1, αz,t+1] are the angular acceleration components in different directions at time t+1; and [φt+1, θt+1, ψt+1] is the attitude angle at time t+1.
S502. Parameter initialization
The parameters of the entire network are initialized, and its hyperparameters are set to the optimal values found through extensive experiments on the validation set: the encoder and decoder each have 8 layers, the linear mapping layer dimension is 128, the multi-head attention layer has 8 heads, and the fully connected layer dimension is 1024.
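A non-authoritative sketch of this architecture (PyTorch is assumed; the class name is illustrative, and the S502 hyperparameters are used — 8 encoder/decoder layers, 8 heads, model dimension 128, feed-forward dimension 1024):

```python
import torch
import torch.nn as nn

class FlightStatePredictor(nn.Module):
    """Non-autoregressive Transformer predictor: a linear mapping layer
    lifts the 19-dim input, the time encoding is added, the same sequence
    feeds both encoder and decoder, and a fully connected head emits the
    15-dim flight state at t+1."""
    def __init__(self, in_dim=19, out_dim=15, d_model=128,
                 nhead=8, num_layers=8, dim_ff=1024):
        super().__init__()
        self.lift = nn.Linear(in_dim, d_model)      # linear mapping (dimension lift)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            dim_feedforward=dim_ff, batch_first=True)
        self.head = nn.Linear(d_model, out_dim)     # fully connected output layer

    def forward(self, x, time_enc):
        """x: (B, 100, 19) input window; time_enc: (B, 100, d_model)."""
        z = self.lift(x) + time_enc
        out = self.transformer(z, z)    # decoder reuses the encoder input
        return self.head(out[:, -1])    # (B, 15): flight state at t+1
```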
S503. Model training and testing
The ReLU function is used as the activation function in the network, and the mean square error (MSE) is used as the loss function when predicting the UAV flight state parameters:

MSE = (1/n) Σ_{k=1}^{n} (y_k − ŷ_k)²

where y_k is the true value at time k, ŷ_k is the predicted value at time k, and n is the number of samples in the training set.
The Adam optimizer is used with an adaptive learning-rate adjustment strategy to minimize the loss function; the model is trained on the training set, and hyperparameter selection is done on the validation set.
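A minimal training-loop sketch under the same assumptions (PyTorch; `ReduceLROnPlateau` stands in for the unspecified adaptive learning-rate strategy, and the loaders are assumed to yield (window, time-encoding, target) triples — all illustrative):

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs=100, lr=1e-3):
    """Minimize the MSE loss with Adam; decay the learning rate when the
    validation loss plateaus (one plausible adaptive strategy)."""
    loss_fn = nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=5)
    for _ in range(epochs):
        model.train()
        for x, t, y in train_loader:          # window, time encoding, target
            opt.zero_grad()
            loss = loss_fn(model(x, t), y)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x, t), y) for x, t, y in val_loader)
        sched.step(val)                       # adapt learning rate on val loss
    return model
```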
S504. Prediction accuracy evaluation
The root mean square error (RMSE) is used as the prediction accuracy metric, specifically:

RMSE = sqrt( (1/n) Σ_{k=1}^{n} (y_k − ŷ_k)² )
On the test set, the trained Transformer neural network model predicts the flight state parameters at time t+1 using the data at time t and before.
The predicted flight state parameters comprise velocity in three directions, acceleration in three directions, angular velocity in three directions, angular acceleration in three directions, and three attitude angles: 15 dimensions in total.
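The RMSE metric above is straightforward to compute; a small sketch (NumPy assumed):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error over all predicted samples,
    RMSE = sqrt((1/n) * sum((y_k - y_hat_k)^2))."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```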
In yet another embodiment of the present invention, a UAV flight state parameter prediction system is provided that can implement the above UAV flight state parameter prediction method. Specifically, the system comprises a building module, a parameter module, a preprocessing module, a division module, and a prediction module.
The building module builds the improved Transformer neural network model;
The parameter module acquires flight state data for typical UAV flight states;
The preprocessing module preprocesses the UAV flight state data acquired by the parameter module;
The division module generates a UAV flight state data set from the data preprocessed by the preprocessing module, and then divides the data set into a training set, a validation set, and a test set;
The prediction module trains the Transformer neural network model built by the building module on the training set from the division module, determines the model parameters with the validation set from the division module, and inputs the H time-series samples at times t, t−1, …, t−H−1 from the test set into the trained Transformer neural network model to predict the flight state parameters at time t+1.
To make the purpose, technical solution, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments generally described and illustrated in the drawings may be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Since the output of the dynamics model concerns acceleration in three directions and angular acceleration in three directions, the root mean square errors of the different prediction models are compared in Table 1:
Table 1. Comparison of the prediction root mean square errors of different prediction models
Table 1 compares the root mean square errors of the three-direction angular velocity and angular acceleration for the dynamics model, the dynamics model–convolutional neural network hybrid model, the LSTM model, and the present invention; as the table shows, the error of the present invention is the smallest.
The present invention also compares the time each model takes for a single prediction, as shown in Table 2:
Table 2. Comparison of the prediction times of different prediction models
Table 2 compares the prediction times of the dynamics model, the dynamics model–convolutional neural network hybrid model, the LSTM model, and the present invention. As the table shows, the prediction time of the present invention is 9 ms, which meets the real-time requirement of the UAV (the control frequency of a typical controller is 100 Hz, i.e., a 10 ms period).
In summary, the UAV flight state parameter prediction method and system of the present invention can achieve high-precision, real-time prediction of flight state data.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above content merely illustrates the technical ideas of the present invention and does not limit its protection scope; any modification made on the basis of the technical solutions in accordance with the technical ideas proposed by the present invention falls within the protection scope of the claims of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211274667.1A CN115469679B (en) | 2022-10-18 | 2022-10-18 | A method and system for predicting flight state parameters of unmanned aerial vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115469679A true CN115469679A (en) | 2022-12-13 |
CN115469679B CN115469679B (en) | 2024-09-06 |
Family
ID=84336977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211274667.1A Active CN115469679B (en) | 2022-10-18 | 2022-10-18 | A method and system for predicting flight state parameters of unmanned aerial vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115469679B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115758891A (en) * | 2022-11-22 | 2023-03-07 | 四川大学 | Wing profile flow field prediction method based on Transformer decoder network |
CN118133691A (en) * | 2024-05-07 | 2024-06-04 | 中国民航大学 | A flight parameter prediction model construction method, electronic device and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109034376A (en) * | 2018-07-18 | 2018-12-18 | 东北大学 | A kind of unmanned plane during flying trend prediction method and system based on LSTM |
WO2019192172A1 (en) * | 2018-04-04 | 2019-10-10 | 歌尔股份有限公司 | Attitude prediction method and apparatus, and electronic device |
US20190354644A1 (en) * | 2018-05-18 | 2019-11-21 | Honeywell International Inc. | Apparatuses and methods for detecting anomalous aircraft behavior using machine learning applications |
CN113190036A (en) * | 2021-04-02 | 2021-07-30 | 华南理工大学 | Unmanned aerial vehicle flight trajectory prediction method based on LSTM neural network |
CN114757086A (en) * | 2021-12-17 | 2022-07-15 | 北京航空航天大学 | Multi-rotor unmanned aerial vehicle real-time remaining service life prediction method and system |
-
2022
- 2022-10-18 CN CN202211274667.1A patent/CN115469679B/en active Active
Non-Patent Citations (2)
Title |
---|
赵嶷飞; 杨明泽: "UAV motion state recognition based on motion capture", Science Technology and Engineering, no. 27, 28 September 2018 (2018-09-28) *
韩建福; 杜昌平; 叶志贤; 宋广华; 郑耀: "Aerodynamic parameter identification of a flapping-wing aircraft based on dual BP neural networks", Journal of Computer Applications, no. 2, 30 December 2019 (2019-12-30) *
Also Published As
Publication number | Publication date |
---|---|
CN115469679B (en) | 2024-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107479368B (en) | Method and system for training unmanned aerial vehicle control model based on artificial intelligence | |
JP7086111B2 (en) | Feature extraction method based on deep learning used for LIDAR positioning of autonomous vehicles | |
CN115469679B (en) | A method and system for predicting flight state parameters of unmanned aerial vehicle | |
CN106325264B (en) | A kind of the isolabilily evaluation method of UAV Flight Control System | |
JP2021515724A (en) | LIDAR positioning to infer solutions using 3DCNN network in self-driving cars | |
CN101833338B (en) | Control method of underactuated motion in vertical plane for unmanned underwater vehicle | |
US11704554B2 (en) | Automated training data extraction method for dynamic models for autonomous driving vehicles | |
JP2021515178A (en) | LIDAR positioning for time smoothing using RNN and LSTM in self-driving vehicles | |
Horn et al. | Neural network-based trajectory optimization for unmanned aerial vehicles | |
CN111694913B (en) | Ship AIS track clustering method and device based on convolution self-encoder | |
CN105136145A (en) | Kalman filtering based quadrotor unmanned aerial vehicle attitude data fusion method | |
CN103900574A (en) | Attitude estimation method based on iteration volume Kalman filter | |
CN117606491B (en) | A combined positioning and navigation method and device for an autonomous underwater vehicle | |
US10935938B1 (en) | Learning from operator data for practical autonomy | |
CN115329459A (en) | Modeling method and system of underwater vehicle based on digital twin | |
CN107169299B (en) | A tracking method for formation targets in decentralized maneuvering mode | |
CN105973237B (en) | Emulation dynamic trajectory based on practical flight data interpolating parses generation method | |
Asignacion et al. | Frequency-based wind gust estimation for quadrotors using a nonlinear disturbance observer | |
CN117744540A (en) | Underwater operation hydrodynamic characteristic trend prediction method of underwater unmanned aircraft | |
CN116026325A (en) | Navigation method and related device based on neural process and Kalman filtering | |
CN111257853A (en) | An online calibration method of lidar for autonomous driving system based on IMU pre-integration | |
My et al. | An Artificial Neural Networks (ANN) Approach for 3 Degrees of Freedom Motion Controlling | |
Pasha et al. | MEMS fault-tolerant machine learning algorithm assisted attitude estimation for fixed-wing UAVs | |
CN115218927B (en) | Unmanned aerial vehicle IMU sensor fault detection method based on secondary Kalman filtering | |
CN115826583A (en) | A self-driving vehicle formation method based on point cloud map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||