CN101853531B - Helicopter flight state identification method based on presort technology and RBF (Radial Basis Function) neural network - Google Patents


Info

Publication number: CN101853531B (granted publication of application CN201010190352XA; published as CN101853531A)
Authority: CN (China)
Other languages: Chinese (zh)
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Prior art keywords: flight, speed, altitude
Inventors: 王少萍, 李凯, 张超
Original assignee and current assignee: Beihang University
Application filed by Beihang University; priority to CN201010190352XA
Landscapes: Traffic Control Systems (AREA)
Abstract

The invention discloses a helicopter flight state identification method based on a pre-classification technology and RBF (Radial Basis Function) neural networks, which comprises the following steps: classifying the flight states to be identified into 10 types and designing an RBF neural network for further identifying the flight states of each type; when identifying the flight state of certain flight data, first ranking the flight data into one of the 10 types according to certain flight parameters, and then inputting the flight data into the RBF neural network corresponding to that type of flight state to carry out accurate identification of the flight state. The invention reduces the number of states to be identified by each neural network through the pre-classification technology, thereby enhancing the recognition rate of helicopter flight states identified by neural networks.

Description

Helicopter flight state identification method based on pre-classification technology and RBF neural network
Technical Field
The invention belongs to the field of research on helicopter flight state identification technology, and particularly relates to a helicopter flight state identification method based on a pre-classification technology and a RBF neural network.
Background
Because the helicopter has a large number of moving parts, the accident rate of the helicopter is about 40 times that of a fixed-wing airplane, and therefore, the research on fault diagnosis and life prediction of the helicopter becomes particularly important. And acquiring the flight state of the helicopter is one of the prerequisites for carrying out fault diagnosis and life prediction of the helicopter.
The current method for acquiring the flight state mainly depends on manual operation, and the flight state of the helicopter in flight is acquired through the voice of a pilot and a manual state switch pulse signal. This method has the following disadvantages: firstly, the method of manually obtaining the flight state increases the burden of a pilot during flight, and is not beneficial to flight safety; secondly, the pilot is susceptible to disturbances which give false information, thus causing deviations between the acquired flight state and the actual flight state.
When the helicopter flies, various sensors can be installed to measure various flight parameters, and the measurement results are stored as corresponding flight data. By processing these flight data to obtain the flight status during flight, the problems associated with manually obtaining the flight status can be avoided.
However, the relationship between the flight state and each monitored flight parameter is usually a complex nonlinear one: the flight parameters inevitably pick up external interference and cross-coupling effects during measurement, and they change dynamically over time, so that even within one flight state the parameter values vary. A neural network has good ability to approximate nonlinear mappings, together with fault tolerance and generalization, and is widely used in state recognition research. Identifying the flight data recorded during flight with a neural network, and thereby obtaining the flight state corresponding to those data, is therefore one way to obtain the flight state of a helicopter. However, a helicopter has dozens of flight states, and the recognition rate of a neural network decreases as the number of states to be recognized increases; this has become an urgent problem in recognizing helicopter flight states with neural networks.
Disclosure of Invention
The purpose of the invention is: the helicopter flight state identification method based on the pre-classification technology and the Radial Basis Function (RBF) neural network is provided to solve the problem that in the process of identifying the helicopter state by using the neural network technology, the identification rate is reduced along with the increase of the flight state to be identified.
The helicopter flight state identification method based on the pre-classification technology and the RBF neural network provided by the invention comprises the following specific steps of:
step one, classifying the flight states of the helicopter to be identified: first divide the flight states into a turning class and a non-turning class according to whether the helicopter is turning; then divide the non-turning states into three classes, high-altitude high-speed, low-altitude high-speed and low-altitude low-speed; next subdivide the high-altitude high-speed and low-altitude low-speed classes by speed range, according to the minimum speed, transition speed, long-endurance speed, long-range speed and maximum speed of the helicopter: the low-altitude low-speed class is subdivided into two types (below the minimum speed, and from the minimum speed to the transition speed), and the high-altitude high-speed class is subdivided into six types (transition speed, long-endurance speed, two types arbitrarily divided between the long-endurance speed and the long-range speed, long-range speed, and maximum speed); finally, all flight states of the helicopter to be identified are divided into ten subclasses;
for states involving speed or altitude change, the method adopted is to divide them into several of the state subclasses;
step two, designing a Radial Basis Function (RBF) neural network for further identifying the flight state for each subclass obtained in the step one;
if only one state is contained in a certain subclass, designing an RBF neural network for further identification on the subclass is not needed;
step three, processing flight data to be subjected to flight state identification; firstly, removing outliers, limiting and smoothing the flight data, then fitting the flight data needing to use the change rate to obtain the change rate, and finally performing normalization processing on the flight data;
step four, pre-classifying the flight data processed in step three into the ten subclasses of flight states divided in step one, according to the yaw angle change rate, the barometric altitude and the indicated airspeed:
firstly, dividing the flight data into a turning state and a non-turning state according to the yaw angle change rate ΔCOSI: when ΔCOSI > 0 or ΔCOSI < 0 the data are in a turning state, and when ΔCOSI = 0 they are in a non-turning state;
secondly, classifying the flight data in the non-turning state by comparing the barometric altitude Hp and the indicated airspeed Vi with their threshold values kHp and kVi: if Hp ≥ kHp and Vi ≥ kVi, the data are classified into the high-altitude high-speed flight state; if Hp < kHp and Vi ≥ kVi, into the low-altitude high-speed flight state; and if Hp < kHp and Vi < kVi, into the low-altitude low-speed flight state;
finally, classifying the flight data divided into high-altitude high-speed flight state and low-altitude low-speed flight state into subclasses divided in the first step according to the range of the indicated airspeed Vi;
and fifthly, inputting the flight data classified into each subclass into the RBF neural network designed for each subclass, and further identifying the corresponding flight state.
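As an illustration of steps one through five, the coarse pre-classification of a single processed sample can be sketched as follows; the function name, the default thresholds (taken from the embodiment described later) and the string labels are illustrative assumptions, not the patent's implementation:

```python
def preclassify_coarse(delta_cosi, hp, vi, k_hp=270.0, k_vi=75.0):
    """Coarse pre-classification of one flight-data sample (a sketch).

    delta_cosi: yaw angle change rate; hp: barometric altitude (m);
    vi: indicated airspeed (km/h). Thresholds k_hp and k_vi default to
    the example values of the embodiment (270 m, 75 km/h).
    """
    if delta_cosi != 0:                       # turning vs. non-turning
        return "turning"
    if hp >= k_hp and vi >= k_vi:             # Hp >= kHp and Vi >= kVi
        return "high-altitude high-speed"
    if vi >= k_vi:                            # Hp < kHp and Vi >= kVi
        return "low-altitude high-speed"
    return "low-altitude low-speed"           # Hp < kHp and Vi < kVi
```

The sample would then be handed to the corresponding subclass's RBF neural network for the fine identification of step five.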
The helicopter flight state identification method has the advantages and positive effects that:
(1) identifying the flight state from the flight data recorded during flight through the RBF neural network avoids the susceptibility to interference and the low identification rate of the manual method, and improves the accuracy of flight state identification;
(2) by the aid of the pre-classification technology, the problem that when the flight state of the helicopter is identified by the neural network, the state identification rate is reduced along with the increase of the number of states to be identified is solved.
Drawings
FIG. 1 is a flow chart of the steps of a helicopter flight status identification method of the present invention;
FIG. 2 is a schematic diagram of the RBF neural network structure employed in the present invention;
FIG. 3 is a graphical representation of the results of a test conducted on three sets of flight data using the method of the present invention;
FIG. 4 is a flight envelope of a helicopter.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Taking flight data of a certain model of helicopter as an example, the flight state of the helicopter is identified by using the helicopter flight state identification method based on the pre-classification technology and the RBF neural network, as shown in fig. 1, the specific steps are as follows:
step one, classifying 35 flight states to be identified;
table 1 shows the flight states of the helicopter of the model, and the total number of the flight states is 42. The flight states to be identified are 35, and the 7 states 4, 7, 8, 9, 10, 11 and 32 in table 1 do not need to be identified.
TABLE 1 flight status of helicopter
[Table 1 appears as an image in the original document.]
These 35 states are first divided into two categories, turning and non-turning, according to whether the helicopter turns. The non-turning states can be divided by high/low altitude and high/low speed into three categories: high-altitude high-speed, low-altitude high-speed and low-altitude low-speed (a high-altitude low-speed state does not exist). As shown in the flight envelope of FIG. 4, the barometric altitude of this helicopter ranges over roughly 100-600 meters and the flight speed over roughly 0-230 km/h, so the threshold for distinguishing high altitude from low altitude is taken as 270 meters, and the threshold for distinguishing high speed from low speed as 75 km/h. High altitude means the barometric altitude of the helicopter is greater than or equal to 270 meters, and low altitude that it is less than 270 meters; low speed means the indicated airspeed is less than 75 km/h, and high speed that it is greater than or equal to 75 km/h.
Within these three major categories, the states are further subdivided according to speed values, as shown in Table 2. The subdivision is based on the characteristic speeds of the helicopter: the minimum speed (< 4 km/h), minimum speed to transition speed (4-74 km/h), transition speed (75-94 km/h), long-endurance speed (94-130 km/h) and long-range speed (190-215 km/h); to refine the flight states, the interval between the long-endurance speed and the long-range speed is divided into two further classes (130-170 km/h and 170-190 km/h), and the maximum speed corresponds to maximum power. The 35 flight states are thereby divided into subclasses 0-9. For helicopters of other models the speed ranges differ, but the classification can still be carried out by the same method according to the minimum speed, transition speed, long-endurance speed, long-range speed and maximum speed of the helicopter. The low-altitude high-speed category contains only one flight state, so it need not be subdivided by flight speed.
TABLE 2 description of the classification methods
[Table 2 appears as an image in the original document.]
To improve the classification accuracy as much as possible, states involving speed or altitude change are divided into several state subclasses. As shown in Table 2, the horizontal deceleration state with flight state code 30, for example, is classified into subclasses 2, 3, 4 and 5 because its speed varies across those ranges.
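Under the speed ranges of Table 2 as just described, the assignment of non-turning samples to speed subclasses can be sketched as below; the exact bin-to-subclass numbering is inferred from the surrounding text (classes 2-6 for high-altitude high-speed, 8-9 for low-altitude low-speed) and should be treated as an assumption:

```python
def subclass_low_altitude_low_speed(vi):
    # Two subclasses: below the minimum speed (< 4 km/h) and from the
    # minimum speed to the transition speed (4-74 km/h). The assignment
    # of 8 vs. 9 to these two ranges is an assumed ordering.
    return 8 if vi < 4 else 9

def subclass_high_altitude_high_speed(vi):
    # Five airspeed subclasses 2-6 for this helicopter model:
    # 75-94, 94-130, 130-170, 170-190 and 190-215 km/h.
    upper_edges = [94, 130, 170, 190]         # upper edges of classes 2-5
    for cls, upper in zip(range(2, 6), upper_edges):
        if vi < upper:
            return cls
    return 6                                   # 190-215 km/h
```

A state whose speed sweeps several ranges (such as horizontal deceleration, code 30) would accordingly appear in several of these subclasses.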
Step two, designing an RBF neural network for further identifying the flight state for each subclass of 0-9:
in the step one, all flight states are divided into 10 subclasses which are numbered from 0 to 9, wherein the subclass 7 only contains one state, so that identification is not needed, and other 9 subclasses need to further identify the flight states by using an RBF neural network.
An RBF (Radial Basis Function) neural network consists of an input layer, a hidden layer and an output layer; the mapping from the input layer to the hidden layer is nonlinear (formed by the radial basis functions), while the mapping from the hidden layer to the output layer is a linear function. The structure is shown in FIG. 2, where x1~xn is the input vector of the input layer, of dimension n; h1~hm are the hidden-layer nodes, m in number; and y1~yk is the output-layer vector, of dimension k; n, m and k are integers greater than 0.
The parameters available for identifying the flight state are shown in Table 3, where parameters 1-23 are flight parameters recorded during flight and 24-26 are the change rates of the indicated airspeed, radio altitude and yaw angle.
TABLE 3 parameter List available for flight State identification
[Table 3 appears as an image in the original document.]
In the embodiment of the invention, an RBF neural network is designed for identifying each subclass of flight states. The choice of input-layer vector dimension n, number of hidden-layer nodes m and output-layer vector dimension k for each subclass's network is shown in Table 4. The state class is the subclass number from Table 2. The input-layer dimension n equals the number of flight parameters from Table 3 required to identify the flight states of that subclass; taking state class 0 as an example, the input-layer dimension of its RBF neural network is 3 and its input parameters are numbers 1, 9 and 25, these codes corresponding to the available parameters of Table 3. The output-layer dimension k equals the number of flight states contained in the subclass, as shown in the state-class column of Table 2. The number of hidden-layer nodes can first be chosen according to an empirical formula, shown as equation (11):
m = √(n + k) + d    (11)
where m is the number of hidden-layer nodes, n the input-layer vector dimension, k the output-layer vector dimension, and d a constant between 1 and 10. The number of hidden-layer nodes can then be adjusted according to the actual training behaviour of the network: if the network converges slowly the number of hidden-layer nodes can be reduced, and if the accuracy of the network does not meet the requirement the number can be increased appropriately.
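As a small worked example, equation (11) is read here as m = √(n + k) + d (the radical appears to have been lost in the text's extraction; this reading is an assumption consistent with d being a constant between 1 and 10). The helper below is a sketch, not part of the patent:

```python
import math

def initial_hidden_nodes(n, k, d):
    # Empirical starting point for the hidden-layer size:
    # m = sqrt(n + k) + d, with d a constant in [1, 10]; the result is
    # rounded to an integer node count and later adjusted according to
    # how the network actually trains.
    return round(math.sqrt(n + k) + d)
```

For state class 0 with n = 3, k = 4 and d = 5, this gives m = 8 as a starting point.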
TABLE 4 RBF neural network parameter List for various flight status identification
[Table 4 appears as an image in the original document.]
The output yk of the RBF neural network shown in FIG. 2 is given by equation (12):
yk = Σ(i=1..m) wik·hi    (12)
where w11~w1k, …, wm1~wmk are the weights of the neural network, and h1~hm are the radial basis functions of the network, chosen here as Gaussian functions, whose expression is shown as equation (13):
hi(x) = exp[-‖x - ci‖² / (2σi²)],  i = 1, 2, …, m    (13)
where x is the input vector of dimension n, ci is the center of the i-th radial basis function (a vector of the same dimension as x), σi is the width of the i-th basis function, ‖·‖ denotes the Euclidean norm, and m, n and k are integers greater than 0.
The structure parameters ci and σi in equation (13) are determined by the K-means algorithm, and the RBF network is then trained by the gradient descent method to determine the weights w11~w1k, …, wm1~wmk in equation (12). This completes the design of the RBF neural network for further identifying each subclass of flight states; the subclass numbered 0 in Table 2 is taken as an example below.
In Table 2, the subclass numbered 0 corresponds to state class 0 in Table 4 and contains flight states 15 (ascending turn or hover), 35 (horizontal turn or hover at long-endurance speed), 36 (horizontal turn at 180 km/h) and 37 (turn at maximum cruising speed). These are encoded again: state 15 as 1, state 35 as 2, state 36 as 3 and state 37 as 4. The output-layer vector dimension of the RBF neural network equals the number of flight states contained in the subclass, so for state class 0 the output-layer dimension of the network for further identifying the flight state is 4. When training this subclass's RBF network, each dimension of the output vector is made to correspond to one flight state, e.g. y1 represents flight state 15, and the network is trained through equation (12).
After training, when a group of flight data is to be judged, each dimension of the network's output vector is compared with its encoded value. A threshold can be set, and whether each output falls within the threshold range is checked; if the output of a certain dimension matches its encoded value, the identification result is the flight state corresponding to that dimension.
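A minimal sketch of the forward pass of equations (12)-(13) and of the threshold decoding just described; the array shapes, the per-dimension target code of 1, and the function names are illustrative assumptions:

```python
import numpy as np

def rbf_forward(x, centers, sigmas, weights):
    """Equations (12)-(13): Gaussian hidden layer, then linear output.

    x: (n,) input vector; centers: (m, n) basis-function centers c_i;
    sigmas: (m,) widths sigma_i; weights: (m, k) output weights w_ik.
    """
    d2 = np.sum((centers - x) ** 2, axis=1)   # ||x - c_i||^2
    h = np.exp(-d2 / (2.0 * sigmas ** 2))     # equation (13)
    return h @ weights                        # equation (12)

def decode_state(y, state_labels, tol=0.5):
    # One output dimension per flight state; the recognised state is the
    # dimension whose output lies within tol of its code value (taken
    # here as 1). Returns None if no dimension matches.
    for label, value in zip(state_labels, y):
        if abs(value - 1.0) < tol:
            return label
    return None
```

In the patent, centers and widths would come from K-means and the weights from gradient-descent training; here they are simply supplied as arguments.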
Step three: processing flight data to be subjected to flight state identification:
Because the helicopter carries many sensors, operates in a harsh environment and is subject to considerable interference, the flight data must be processed before identification. The processing method is as follows:
(1) removing outliers, limiting and smoothing the flight data:
An outlier is a sampling point whose change gradient relative to the neighbouring sampling points cannot be reached by the helicopter within one sampling period under actual flight conditions; outliers are caused by data loss or severe interference. Let the value of a flight parameter p at sampling time t be p(t) and its value at the previous sampling time be p(t-1); the change gradient Δp within the sampling period is given by equation (14):
Δp=|p(t)-p(t-1)| (14)
This gradient is compared with the maximum gradient Δpmax that the flight parameter can reach within one sampling period; if Δp ≥ Δpmax, p(t) is an outlier and is removed;
Limiting eliminates points in the flight data that do not accord with the actual flight conditions of the helicopter. Let the value of a flight parameter p at sampling time t be p(t); it is compared with the maximum value pmax and minimum value pmin that the flight parameter can reach in actual flight, and if p(t) > pmax or p(t) < pmin, the value of the flight parameter at time t is eliminated from the flight data;
Smoothing filters the flight data to remove noise signals generated by sensor interference during data acquisition. The filtering method used is a moving-average method: let the value of a flight parameter p at sampling time t be p(t); take r points before and r points after time t, and set p(t) equal to the average of these 2r+1 values, namely:
p(t) = Σ(i=t-r..t+r) p(i) / (2r + 1)    (15)
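The three preprocessing operations of equations (14)-(15) can be sketched together as below; the sequential outlier pass, the dropping (rather than interpolation) of rejected samples, and the edge handling of the moving average are assumptions the patent leaves open:

```python
import numpy as np

def preprocess(samples, dp_max, p_min, p_max, r):
    """Outlier removal, limiting and moving-average smoothing (a sketch)."""
    # (1) outlier removal, equation (14): drop a sample whose gradient
    #     relative to the last accepted sample reaches dp_max
    cleaned = [samples[0]]
    for v in samples[1:]:
        if abs(v - cleaned[-1]) < dp_max:
            cleaned.append(v)
    p = np.asarray(cleaned, dtype=float)
    # (2) limiting: drop values outside the physically reachable range
    p = p[(p >= p_min) & (p <= p_max)]
    # (3) smoothing, equation (15): mean of the 2r+1 samples around each
    #     point (edges are averaged with zero padding in this sketch)
    kernel = np.ones(2 * r + 1) / (2 * r + 1)
    return np.convolve(p, kernel, mode="same")
```

Each flight parameter's time series would be passed through this routine before fitting and normalization.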
(2) fitting the flight data needing to use the change rate to obtain the change rate:
The flight parameters used to identify the flight state include the change rates of speed (the indicated airspeed in Table 3), altitude (the barometric altitude in Table 3) and yaw angle; the speed, altitude and yaw-angle data therefore need to be fitted to obtain the corresponding change rates.
Each point in the flight data of these three flight parameters, together with the r points before and after it, is fitted by the least-squares method: a straight line is fitted, and the slope of the line is the change rate to be obtained.
Let j be the number of points participating in the fit, i.e. j = 2r + 1, where r is a natural number, and let the straight line have the form y = a + bt. The measured data are (t1, y1), (t2, y2), …, (tj, yj); for t1, t2, …, tj the optimum (regression) values of y are a + bt1, a + bt2, …, a + btj. The least-squares method derives a and b such that the sum of the squares of the differences between the measured values yi and the regression values a + bti is minimized, i.e.:
Σ(i=1..j) [yi - (a + bti)]² = min    (16)
the requirement for selecting a and b to minimize equation (16) is:
∂/∂a Σ(i=1..j) [yi - (a + bti)]² = 0
∂/∂b Σ(i=1..j) [yi - (a + bti)]² = 0    (17)
further derived are:
Σ(i=1..j) 2[yi - (a + bti)]·(-1) = 0
Σ(i=1..j) 2[yi - (a + bti)]·(-ti) = 0    (18)
after finishing, one can obtain:
a·j + b·Σ(i=1..j) ti = Σ(i=1..j) yi
a·Σ(i=1..j) ti + b·Σ(i=1..j) ti² = Σ(i=1..j) ti·yi    (19)
from this it can be solved:
b = (Σti·Σyi - j·Σti·yi) / ((Σti)² - j·Σti²)
a = (Σti·yi·Σti - Σyi·Σti²) / ((Σti)² - j·Σti²)    (20)
where a and b are called regression coefficients, t is time and y is the value of speed (or altitude, or yaw angle); the b value obtained for each point after fitting is the change rate at that point.
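Equation (20) in code form; the function is a sketch that returns both regression coefficients, with b serving as the change rate at the fitted point:

```python
def fit_line(t, y):
    """Least-squares line y = a + b*t over j points, per equation (20)."""
    j = len(t)
    st = sum(t)
    sy = sum(y)
    sty = sum(ti * yi for ti, yi in zip(t, y))
    stt = sum(ti * ti for ti in t)
    denom = st * st - j * stt                 # (sum t)^2 - j * sum t^2
    b = (st * sy - j * sty) / denom           # slope = change rate
    a = (sty * st - sy * stt) / denom         # intercept
    return a, b
```

For a window of points lying exactly on y = 1 + 2t, the routine recovers a = 1 and b = 2.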
(3) Carrying out normalization processing on the flight data:
the input sum of a certain neuron in the neural network is used as the input of an excitation function, and the result of the excitation function operation is used as the output of the neuron. The excitation function is a Sigmoid function, and the Sigmoid function is characterized in that a localized response is generated to the input excitation. The change in the function of the region away from 0 is extremely flat, i.e. the Sigmoid function responds meaningfully only if the input falls in a small designated region around zero, with response values between 0 and 1. And when the input value is too large or too small, the neuron will be close to saturation. Since there are twenty or more flight parameters for identifying the flight state, and the dimensions of the parameters are different and the sizes are different from each other, it is necessary to normalize the flight data of the flight parameters before using the parameters. The normalization formula uses the following formula:
y = (x - (xmax + xmin)/2) / ((xmax - xmin)/2)    (21)
in the formula: y is a flight data value after normalization processing; x is a certain flight data value before processing; x is the number ofmaxThe maximum value of the flight data corresponding to the flight parameter is obtained; x is the number ofminAt its minimum. The value range of each flight parameter is converted into [ -1, 1] through normalization processing]The range of (1).
Step four: pre-classify the flight data processed in step three into the ten flight-state subclasses 0-9 divided in step one, according to the yaw angle change rate, the barometric altitude and the indicated airspeed:
the flight data are classified into two types of turning and non-turning according to the rate of change of the yaw angle Δ COSI, and the flight data classified into the turning state are classified into the 0 th type in table 2 when Δ COSI > (or Δ COSI < (in the case of turning state, in the case of Δ COSI ═ 0, in the case of non-turning state).
Let the thresholds of barometric altitude Hp and indicated airspeed Vi be k_Hp and k_Vi, with k_Hp = 270 m and k_Vi = 75 km/h. The flight data in the non-turning state are divided into three categories (high-altitude high-speed, low-altitude high-speed and low-altitude low-speed) by comparing the barometric altitude and indicated airspeed with these thresholds: if Hp ≥ k_Hp and Vi ≥ k_Vi, the record is classified as high-altitude high-speed flight; if Hp < k_Hp and Vi ≥ k_Vi, as low-altitude high-speed flight; and if Hp < k_Hp and Vi < k_Vi, as low-altitude low-speed flight.
Among the flight data classified as high-altitude high-speed, the records in which the collective-pitch displacement w_f reaches its maximum are assigned to class 1 in Table 2, and the remaining records are assigned to classes 2 to 6 in Table 2 according to the range of the indicated airspeed Vi; the flight data classified as low-altitude high-speed are assigned to class 7 in Table 2; the flight data classified as low-altitude low-speed are assigned to classes 8 and 9 in Table 2 according to the range of the indicated airspeed.
Through this division, the flight data requiring flight state identification are pre-classified. The threshold k_ΔCOSI of the yaw angle change rate and the thresholds k_Hp and k_Vi of the barometric altitude and indicated airspeed are chosen according to the specific helicopter model.
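As an illustration only (not part of the patent text), the coarse pre-classification rules above can be sketched in Python. The thresholds are the 270 m and 75 km/h values of this embodiment; the function name and return labels are hypothetical:

```python
K_HP = 270.0  # barometric-altitude threshold, metres (this embodiment)
K_VI = 75.0   # indicated-airspeed threshold, km/h (this embodiment)

def preclassify(delta_cosi, hp, vi):
    # Coarse pre-classification of one flight-data record before it is
    # handed to the per-class RBF networks; labels are illustrative.
    if delta_cosi != 0:
        return "turning"                   # class 0
    if hp >= K_HP and vi >= K_VI:
        return "high-altitude high-speed"  # classes 1-6
    if hp < K_HP and vi >= K_VI:
        return "low-altitude high-speed"   # class 7
    if hp < K_HP and vi < K_VI:
        return "low-altitude low-speed"    # classes 8-9
    return None  # Hp >= k_Hp with Vi < k_Vi is not covered by the stated rules
```

The finer split of the high-altitude high-speed and low-altitude low-speed groups by airspeed range is model-specific and is therefore not sketched here.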
Step five: inputting the flight data classified into each subclass into the RBF neural network designed for each subclass, and further identifying the corresponding flight state.
In the embodiment of the invention, code values are set for three flight states, and an identification result is considered correct when the difference between the output value and the coded value is less than 0.5. As shown in Fig. 3: 1) for long-range speed level flight (sideslip angle 0°), the target state code value is 0; applying the state identification method of the invention to flight data from 6000 sampling points, outputs whose state code value lies between the thresholds -0.5 and 0.5 are identified as long-range speed level flight (sideslip angle 0°), while outputs beyond the thresholds cannot be identified. Similarly, 2) for long-range speed level flight (left sideslip angle 10°), the target state code value is 1, 7000 sampling points are taken, and the thresholds of the output state code value are 0.5 and 1.5; 3) for long-range speed level flight (right sideslip angle 10°), the target state code value is -1, 5000 sampling points are taken, and the thresholds are -1.5 and -0.5. The statistical identification accuracies of the three flight states are 97.5%, 100% and 96.1% respectively, giving an average identification accuracy of 98.1%, which meets the requirement of flight-data state classification.
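The code-value decoding rule described above (accept an output when it lies within 0.5 of a state's code value) can be sketched as follows; the function name and state labels are illustrative, not taken from the patent:

```python
STATE_CODES = {
    0: "level flight, sideslip 0 deg",
    1: "level flight, left sideslip 10 deg",
    -1: "level flight, right sideslip 10 deg",
}

def decode_state(y, codes=STATE_CODES, tol=0.5):
    # Accept an output as a given state when it is within `tol` of that
    # state's code value; otherwise the sample cannot be identified.
    for code, name in codes.items():
        if abs(y - code) < tol:
            return name
    return None
```

With this rule an output of 0.2 decodes to the sideslip-0° state, 0.8 to the left-sideslip state, and 2.0 is rejected.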

Claims (7)

1. A helicopter flight state identification method based on a pre-classification technology and an RBF neural network is characterized by comprising the following steps:
classifying flight states of the helicopter to be identified; first, the flight states of the helicopter to be identified are divided into a turning class and a non-turning class according to whether the helicopter is turning; the non-turning states are then divided into three flight states: high-altitude high-speed, low-altitude high-speed and low-altitude low-speed; the high-altitude high-speed and low-altitude low-speed flight states are subdivided by speed range according to the minimum speed, transition speed, long-endurance speed and maximum speed of the helicopter, specifically: the low-altitude low-speed flight state is divided into two classes, the minimum speed and the range from minimum speed to transition speed, and the high-altitude high-speed flight state is subdivided into six classes around the transition speed, the long-endurance speed and the maximum speed, with two classes arbitrarily divided between these characteristic speeds; finally, all flight states of the helicopter to be identified are divided into ten subclasses;
for some speed change and height change states, the adopted method is to divide the speed change and height change states into a plurality of state subclasses;
low altitude means that the barometric altitude of the helicopter is less than 270 meters, and high altitude means that it is greater than or equal to 270 meters; low speed means that the indicated airspeed of the helicopter is less than 75 km/h, and high speed means that it is greater than or equal to 75 km/h;
step two, designing a Radial Basis Function (RBF) neural network for further identifying the flight state for each subclass obtained in the step one;
if only one state is contained in a certain subclass, designing an RBF neural network for further identification on the subclass is not needed;
step three, processing the flight data to be subjected to flight state identification; first perform outlier removal, amplitude limiting and smoothing on the flight data, then fit the flight data that require a change rate to obtain that change rate, and finally normalize the flight data;
step four, pre-classifying the flight data processed in step three into the ten subclasses of flight states divided in step one according to the yaw angle change rate, the barometric altitude and the indicated airspeed:
firstly, dividing the flight data into a turning state and a non-turning state according to the yaw angle change rate ΔCOSI, wherein a record is in a turning state when ΔCOSI > 0 or ΔCOSI < 0, and in a non-turning state when ΔCOSI = 0;
secondly, according to the thresholds k_Hp and k_Vi of the barometric altitude Hp and the indicated airspeed Vi, classifying the flight data in the non-turning state by comparing the barometric altitude and indicated airspeed with these thresholds: if Hp ≥ k_Hp and Vi ≥ k_Vi, the record is classified as high-altitude high-speed flight; if Hp < k_Hp and Vi ≥ k_Vi, as low-altitude high-speed flight; if Hp < k_Hp and Vi < k_Vi, as low-altitude low-speed flight;
finally, classifying the flight data in the high-altitude high-speed and low-altitude low-speed flight states into the subclasses divided in step one according to the range of the indicated airspeed Vi;
and fifthly, inputting the flight data classified into each subclass into the RBF neural network designed for each subclass, and further identifying the corresponding flight state.
2. A helicopter flight state recognition method according to claim 1, wherein said radial basis function RBF neural network designed for further recognition of flight state in step two is comprised of an input layer, a hidden layer and an output layer;
the input layer vector dimension n is equal to the number of flight parameters required for identifying the subclass of flight states;
the vector dimension k of the output layer is equal to the number of flight states corresponding to the subclass;
the number m of hidden layer nodes is selected according to an empirical formula, wherein the empirical formula is as follows:
$m = \sqrt{n + k} + d$
wherein d is a constant between 1 and 10;
for a flight subclass which needs to be further identified, the flight subclass comprises more than one flight state, and each dimension of output vector of an output layer of the RBF neural network is trained to correspond to one flight state of the flight subclass according to an output formula of the RBF neural network;
wherein the k-th output y_k of the RBF neural network is:
$y_k = \sum_{i=1}^{m} w_{ik} h_i$
wherein w_{11}~w_{1k}, ..., w_{m1}~w_{mk} are the weights of the neural network; h_1~h_m are the radial basis functions of the neural network, chosen as Gaussian functions with the expression:
$h_i(\vec{x}) = \exp\left[-\dfrac{\|\vec{x} - c_i\|^2}{2\sigma_i^2}\right], \quad i = 1, 2, \ldots, m$

wherein $\vec{x}$ is an input vector of dimension n; c_i is the center of the i-th radial basis function and is a vector of the same dimension as $\vec{x}$; σ_i is the width of the i-th basis function; ‖·‖ denotes the Euclidean norm; n, m and k are integers greater than 0;
the structural parameters c_i and σ_i of the RBF neural network are determined with the K-means algorithm, and the network is trained by gradient descent to determine the weights w_{11}~w_{1k}, ..., w_{m1}~w_{mk} of the neural network.
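A minimal numerical sketch of the forward pass defined by the two formulas in claim 2 (the function name, array shapes and example values are illustrative, not claim language):

```python
import numpy as np

def rbf_forward(x, centers, sigmas, weights):
    # Hidden layer: h_i(x) = exp(-||x - c_i||^2 / (2 sigma_i^2))
    d2 = np.sum((centers - x) ** 2, axis=1)   # (m,) squared distances
    h = np.exp(-d2 / (2.0 * sigmas ** 2))     # (m,) basis outputs
    # Output layer: y_k = sum_i w_ik * h_i
    return h @ weights                        # (k,) output vector

# Tiny example: m=2 centers in n=2 dimensions, k=2 outputs
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas = np.array([1.0, 1.0])
weights = np.eye(2)
y = rbf_forward(np.array([0.0, 0.0]), centers, sigmas, weights)
```

When the input coincides with a center, that basis function outputs exactly 1; in practice c_i, σ_i and the weights would come from the K-means and gradient-descent training described above.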
3. A helicopter flight state recognition method according to claim 1, wherein said step three of performing outlier removal, amplitude limiting and smoothing on the flight data specifically comprises:
firstly, outlier removal; assuming that the value of flight data p at sampling time t is p(t) and its value at the previous sampling time is p(t-1), the change gradient Δp within the sampling period is: Δp = |p(t) - p(t-1)|;
the change gradient Δp is compared with the maximum gradient Δp_max that the flight data can reach within one sampling period; if Δp ≥ Δp_max, then p(t) is an outlier and is removed;
secondly, amplitude limiting; let the value of flight data p at sampling time t be p(t), and compare it with the maximum value p_max and minimum value p_min that the flight data can reach in actual helicopter flight; if p(t) > p_max or p(t) < p_min, remove p(t) from the flight data;
thirdly, smoothing; the flight data are filtered, the filtering method used being the mean filtering method:
let the value of flight data p at sampling time t be p(t); take r points before and after it, and set the value at point t equal to the average of these 2r+1 points:
$p(t) = \dfrac{1}{2r+1}\sum_{i=t-r}^{t+r} p(i)$
wherein r is a natural number.
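The amplitude-limiting and smoothing steps of claim 3 can be sketched as follows (Python, with illustrative names; edge samples in the mean filter simply average over the neighbours that exist, a boundary choice the claim does not specify):

```python
import numpy as np

def clip_limits(p, p_min, p_max):
    # Amplitude limiting: discard samples outside [p_min, p_max].
    return [x for x in p if p_min <= x <= p_max]

def mean_filter(p, r):
    # Smoothing: each sample becomes the mean of the 2r+1 samples
    # centred on it (edges use the available neighbours).
    p = np.asarray(p, dtype=float)
    out = np.empty_like(p)
    for t in range(len(p)):
        lo, hi = max(0, t - r), min(len(p), t + r + 1)
        out[t] = p[lo:hi].mean()
    return out

cleaned = clip_limits([1.0, 500.0, 3.0], p_min=0.0, p_max=100.0)
smoothed = mean_filter([1.0, 2.0, 3.0, 4.0, 5.0], r=1)
```

Outlier removal by the Δp ≥ Δp_max test would be applied before these two steps in the same pass over the data.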
4. A helicopter flight state recognition method according to claim 1, wherein said step three of fitting the flight data for which a rate of change is required to be used to obtain a rate of change comprises:
fitting the flight data of the three flight parameters indicated airspeed, barometric altitude and yaw angle to obtain the speed change rate, altitude change rate and yaw angle change rate;
for each flight data point of the indicated airspeed, barometric altitude and yaw angle, take r data points before and after it and fit a straight line by the least squares method; the slope of the line is the change rate sought;
assuming that the straight line has the form y = a + bt, the measured data are (t_1, y_1), (t_2, y_2), ..., (t_j, y_j), and the regression values of y at t_1, t_2, ..., t_j are a + bt_1, a + bt_2, ..., a + bt_j, where j is the number of points participating in the fit, j = 2r + 1, and r is a natural number; the values of a and b are derived by least squares so that the sum of the squared differences between the measured values y_i and the regression values a + bt_i is minimized:
$\sum_{i=1}^{j}\left[y_i - (a + bt_i)\right]^2 = \min$
the conditions for a and b to minimize this sum are:
$\dfrac{\partial}{\partial a}\sum_{i=1}^{j}\left[y_i - (a + bt_i)\right]^2 = 0, \quad \dfrac{\partial}{\partial b}\sum_{i=1}^{j}\left[y_i - (a + bt_i)\right]^2 = 0$
from which it is solved:
$b = \dfrac{\sum t_i \sum y_i - j\sum t_i y_i}{\left(\sum t_i\right)^2 - j\sum t_i^2}, \quad a = \dfrac{\sum t_i y_i \sum t_i - \sum y_i \sum t_i^2}{\left(\sum t_i\right)^2 - j\sum t_i^2}$
wherein a and b are the regression coefficients, t denotes time, y is the value of the indicated airspeed, barometric altitude or yaw angle, and the b value obtained at each point after fitting is the change rate of the flight data.
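The closed-form solution above translates directly into code. This sketch (illustrative function name, outside the claim language) returns both regression coefficients, b being the change rate:

```python
def fit_rate(t, y):
    # Least-squares fit of y = a + b*t using the closed-form
    # expressions for a and b from claim 4; b is the rate of change.
    j = len(t)
    st = sum(t)
    sy = sum(y)
    sty = sum(ti * yi for ti, yi in zip(t, y))
    stt = sum(ti * ti for ti in t)
    denom = st * st - j * stt
    b = (st * sy - j * sty) / denom
    a = (sty * st - sy * stt) / denom
    return a, b

# Perfectly linear data: value rising by 2 per sample from 1
a, b = fit_rate([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```

For exactly linear data the fit recovers the intercept 1 and slope 2; in the method, t would be the j = 2r+1 sampling times around the point of interest.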
5. A helicopter flight state recognition method according to claim 1, wherein said normalizing the flight data in step three specifically comprises:
the method comprises the following steps that a Sigmoid function is selected as a drive function in the neural network, the response value of the Sigmoid function is between 0 and 1, the flight data are normalized, and a normalization formula is adopted:
$y = \dfrac{x - \frac{1}{2}(x_{\max} + x_{\min})}{\frac{1}{2}(x_{\max} - x_{\min})}$
wherein y is the flight data value after normalization, x is a flight data value before processing, x_max is the maximum value of the flight data, and x_min is its minimum value;
the value range of each flight data is converted to the range of [ -1, 1] by the normalization process.
6. A helicopter flight state recognition method according to claim 1, wherein said step four of further classifying the flight data in the high-altitude high-speed and low-altitude low-speed flight states into the subclasses divided in step one according to the range of the indicated airspeed Vi specifically comprises: among the flight data of the high-altitude high-speed flight state, the records in which the collective-pitch displacement w_f reaches its maximum are classified into the maximum speed class of the high-altitude high-speed flight state divided in step one, and the remaining flight data of the high-altitude high-speed state are further classified, according to the range of the indicated airspeed Vi, into five classes around the transition speed and the long-endurance speed, including the two classes arbitrarily divided between these characteristic speeds; and the flight data of the low-altitude low-speed state are classified, based on the range of the indicated airspeed, into the minimum speed class or the minimum-speed-to-transition-speed class.
7. A helicopter flight state recognition method according to claim 1, wherein in step four said threshold k_Hp of the barometric altitude Hp is 270 meters and said threshold k_Vi of the indicated airspeed Vi is 75 km/h.
CN201010190352XA 2010-05-25 2010-05-25 Helicopter flight state identification method based on presort technology and RBF (Radial Basis Function) neural network Expired - Fee Related CN101853531B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010190352XA CN101853531B (en) 2010-05-25 2010-05-25 Helicopter flight state identification method based on presort technology and RBF (Radial Basis Function) neural network


Publications (2)

Publication Number Publication Date
CN101853531A CN101853531A (en) 2010-10-06
CN101853531B true CN101853531B (en) 2012-09-05

Family

ID=42804996


Country Status (1)

Country Link
CN (1) CN101853531B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104049640B (en) * 2014-06-27 2016-06-15 金陵科技学院 Unmanned vehicle attitude robust fault tolerant control method based on Neural Network Observer
CN107682109B (en) * 2017-10-11 2019-07-30 北京航空航天大学 A kind of interference signal classifying identification method suitable for UAV Communication system
US11747360B2 (en) * 2017-10-11 2023-09-05 Embraer S.A. Neural network system whose training is based on a combination of model and flight information for estimation of aircraft air data
CN108364067B (en) * 2018-01-05 2023-11-03 华南师范大学 Deep learning method based on data segmentation and robot system
CN108304915B (en) * 2018-01-05 2020-08-11 大国创新智能科技(东莞)有限公司 Deep learning neural network decomposition and synthesis method and system
CN108805175A (en) * 2018-05-21 2018-11-13 郑州大学 A kind of flight attitude clustering method of aircraft and analysis system
CN109101034B (en) * 2018-07-27 2020-08-18 清华大学 Flight control method for vertical/short-distance takeoff and landing aircraft
CN109657989A (en) * 2018-12-20 2019-04-19 南京航空航天大学 Helicopter high-speed overload input stage health state evaluation method
CN109975780B (en) * 2019-04-17 2022-12-06 西安电子工程研究所 Helicopter model identification algorithm based on pulse Doppler radar time domain echo
CN110262227A (en) * 2019-04-19 2019-09-20 南京航空航天大学 A kind of inertance element method for independently controlling for Helicopter Main anti-reflection resonance vibration isolation
CN111062092B (en) * 2019-12-25 2023-11-03 中国人民解放军陆军航空兵学院陆军航空兵研究所 Helicopter flight spectrum compiling method and device
CN111693066A (en) * 2020-03-12 2020-09-22 重庆大学 Slip identification and intelligent compensation method for mine heading machine
CN111504341B (en) * 2020-04-30 2023-09-19 中国直升机设计研究所 Helicopter flight state identification method
CN111776250B (en) * 2020-06-02 2022-07-26 南京航空航天大学 Spacecraft assembly error compensation control method based on interferometric neural network
CN113093568A (en) * 2021-03-31 2021-07-09 西北工业大学 Airplane automatic driving operation simulation method based on long-time and short-time memory network
CN113076510A (en) * 2021-04-12 2021-07-06 南昌航空大学 Helicopter flight state identification method based on one-dimensional convolutional neural network

Citations (3)

Publication number Priority date Publication date Assignee Title
US5483446A (en) * 1993-08-10 1996-01-09 Mitsubishi Jidosha Kogyo Kabushiki Kaisha Method and apparatus for estimating a vehicle maneuvering state and method and apparatus for controlling a vehicle running characteristic
CN101656883A (en) * 2009-09-17 2010-02-24 浙江大学 Real-time compensation method based on motion prediction of least squares support vector machine (LS-SVM)
CN101695190A (en) * 2009-10-20 2010-04-14 北京航空航天大学 Three-dimensional wireless sensor network node self-locating method based on neural network

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JPH05324013A (en) * 1992-05-20 1993-12-07 Toshiba Corp System modeling method
US7526463B2 (en) * 2005-05-13 2009-04-28 Rockwell Automation Technologies, Inc. Neural network using spatially dependent data for controlling a web-based process


Non-Patent Citations (1)

Title
JP特开平5-324013A 1993.12.07



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120905

Termination date: 20130525