CN110728357B - IMU data denoising method based on recurrent neural network - Google Patents

IMU data denoising method based on recurrent neural network Download PDF

Info

Publication number
CN110728357B
CN110728357B CN201910888811.2A
Authority
CN
China
Prior art keywords
data
neural network
time
network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910888811.2A
Other languages
Chinese (zh)
Other versions
CN110728357A (en)
Inventor
金世俊
杨凤
高鹏举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201910888811.2A priority Critical patent/CN110728357B/en
Publication of CN110728357A publication Critical patent/CN110728357A/en
Application granted granted Critical
Publication of CN110728357B publication Critical patent/CN110728357B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an IMU data denoising method based on a recurrent neural network. The method first establishes a time-series sample data set from IMU measurements, then constructs a recurrent neural network and trains and optimizes it to obtain a weight file; finally, a deployment model is obtained. The invention introduces time-series prediction into the measurement data of a micro-electro-mechanical system (MEMS) inertial measurement unit (IMU): the measurements are treated as a time series, a long short-term memory (LSTM) network is designed to extract the nonlinear relations of the sequence, and noise is removed through sequence prediction. Compared with traditional denoising methods based on statistical models, the method has better robustness and generalization capability while preserving the randomness of the original data. It has important application value in inertial technology, integrated navigation, and related fields.

Description

IMU data denoising method based on recurrent neural network
Technical Field
The invention belongs to the field of micro-inertial instrument detection and deep learning, and relates to an IMU data denoising method based on a recurrent neural network.
Background
A micro-electro-mechanical system (MEMS) inertial measurement unit (IMU) measures the three-axis angular velocity and acceleration of an object and can independently calculate its position, heading, and velocity through time integration without being affected by the surrounding environment. Owing to its low cost, small size, and autonomous measurement, it is widely fused with a Global Navigation Satellite System (GNSS) to form an integrated navigation scheme that can output positioning information independently when satellite signals are weak or blocked. However, the raw measurements of an inertial measurement unit contain various nonlinear and random errors that accumulate during time integration, so denoising the raw IMU data is a key step in improving the accuracy of an inertial navigation system (INS).
Identification and modeling methods for MEMS IMU random noise can be divided into two categories: traditional statistical-model methods, represented by wavelet denoising and Allan variance analysis, and data-driven methods based on machine learning. Traditional methods mostly rely on signal-analysis techniques such as median filtering and wavelet transforms. Yet sensor output is essentially a set of data that changes over time, and error analysis must account for this temporal correlation; ignoring it leads to poor accuracy and a weak noise-removal effect.
Disclosure of Invention
In order to solve these problems, the invention discloses an IMU data denoising method based on a neural network. For raw IMU measurements seriously affected by noise, a long short-term memory network is designed to extract temporal features, so that a good noise-smoothing effect is achieved without altering the randomness of the original data. The technical scheme of the invention is as follows: an IMU data denoising method based on a recurrent neural network, comprising the following steps:
Step 1: a robot experiment platform based on a motion sensor control system acquires the data measured by the inertial measurement unit through the serial port of the lower computer, and a preliminary IMU measurement data sample set is established; to facilitate subsequent model training and verification, the whole data set is randomly divided into a training set and a test set at a certain ratio;
the data obtained in step 1 are the three-axis dynamic acceleration and three-axis angular velocity measured by the inertial measurement unit while the experimental robot rotates in place in a dynamic environment; the upload frequency of the lower-computer serial port is set to a fixed value, and a large amount of data is collected over a period of time under the in-place rotation condition.
Step 2: a sliding-window method converts the raw data into a supervised format in which the historical data within one time step are the features and the data at the next moment are the label, forming a time-series prediction data set;
Step 3: a network structure is designed based on the recurrent neural network, with a long short-term memory network as the main body; the input dimensionality is the same as that of the feature vectors obtained in step 2; a fully connected layer and an activation function are added after the single-layer LSTM network; the parameters of the model are then fine-tuned on the data set established in step 2;
Step 4: to select a suitable time step, several models are trained under different time steps using the above model parameters, and the optimal time step is selected according to the standard deviation of the data after model processing;
Step 5: after all parameters are obtained in steps 3 and 4, the long short-term memory network is trained with back-propagation through time, and its prediction performance is trained and verified on the feature-vector data set obtained in step 2;
Step 6: after the best-performing sequence prediction model is selected in step 5, the algorithm model is deployed; once deployed, only the forward computation is needed and the back-propagation used during training is no longer required; in practical application, the result can be analyzed in terms of the mean and the standard deviation.
The invention is further improved in that: the hardware of the robot experiment platform carrying the motion sensor control system in step 1 includes, but is not limited to, an MPU6050 motion sensor.
The invention is further improved in that: the data measured in step 1 comprise the attitude Euler angles, the three-axis acceleration components, the three-axis angular velocity components, the output frequency, the outputs of the encoders of the left and right wheels, and the sampling time; the data processed by the invention are the three-axis acceleration components and the three-axis angular velocity components.
The invention is further improved in that: the acceleration component output by the motion sensor in step 1 is a 16-bit signed integer that must be converted into real-world acceleration; the formula for converting the ACCX reading is ax = ACCX × 2g/32768. Similarly, the measured angular velocity component must be converted into radians using gx = GCCX × 2000 × π/(32768 × 180).
The invention is further improved in that: in step 1, 80% of the data is used as the training set and the remaining 20% as the test set; this ratio can be adjusted according to actual needs.
The invention is further improved in that: in step 2, the data is first standardized and normalized; training data is then generated by taking the historical data within one time step as the features and the data at the next position as the label of the sample; a sliding window of one time step thus produces a data format usable for time-series prediction.
The invention is further improved in that: the recurrent neural network established in step 3 takes as input a three-dimensional tensor of the form [n_samples, timesteps, features], where n_samples is the total number of samples generated by the sliding window, timesteps is the time step selected in step 4, and features is the feature dimension, which is 1 in this invention.
The invention is further improved in that: the recurrent neural network established in step 3 takes a single-layer LSTM as its main body; during computation, an input gate, a forget gate, and an output gate control the influence of the historical state of the network unit on the current state, thereby extracting temporal features. The hidden layer is set to 1 unit; the LSTM network parameters include a batch size of 128, 20 training iterations, a mean squared error loss function, and the Adam optimization algorithm; the subsequent fully connected layer outputs the predicted value directly, with a linear activation function. The errors in IMU signals can be treated as a time series and should be modeled over time. Time-series processing is commonly handled in deep learning with a recurrent neural network (RNN); for long-term dependencies, the long short-term memory (LSTM) network, a special type of RNN, is widely used. The LSTM introduces a gate structure to solve the long-term memory problem, giving it the ability to memorize selectively over long periods; this design makes it particularly suitable for predicting or processing time-based sequence data.
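For reference, the standard gate equations of an LSTM cell (a textbook formulation; the symbols below are not taken from the original text) are:

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f\,[h_{t-1},\,x_t] + b_f\right) && \text{forget gate}\\
i_t &= \sigma\!\left(W_i\,[h_{t-1},\,x_t] + b_i\right) && \text{input gate}\\
o_t &= \sigma\!\left(W_o\,[h_{t-1},\,x_t] + b_o\right) && \text{output gate}\\
\tilde{c}_t &= \tanh\!\left(W_c\,[h_{t-1},\,x_t] + b_c\right) && \text{candidate cell state}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{cell state update}\\
h_t &= o_t \odot \tanh(c_t) && \text{hidden state / output}
\end{aligned}
```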
The LSTM can learn a reliable temporal feature representation from sequence samples. Compared with traditional denoising methods based on statistical models, the algorithm has better robustness and generalization capability while preserving the randomness of the original data, and it has important application value in inertial technology, integrated navigation, and related fields.
The invention is further improved in that: in step 4, several models are trained with time steps between 5 and 50 using the parameter model fine-tuned in step 3; judging by the standard deviation, the data processed by the model has the lowest standard deviation at a time step of 25, so 25 is taken as the optimal time step.
The invention is further improved in that: the training of the long short-term memory network with back-propagation through time in step 5 comprises the following steps: the initial parameters of the network are initialized randomly, and the loss function is the mean squared error:
L(θ) = (1/N) Σᵢ (ŷᵢ − yᵢ)², where ŷᵢ is the predicted value output by the LSTM network, yᵢ is the label value, and θ denotes the network parameters, specifically those of the forget gate, the input gate, and the output gate. The error is back-propagated along the time direction and the parameters are updated iteratively until the loss function converges.
The invention is further improved in that: the deployment process of the model in step 6 is as follows:
Step 601: after the measurement data is uploaded through the serial port, the raw data is processed to obtain the acceleration and angular velocity values in the world coordinate system at each moment;
Step 602: the measured values are fed into the trained long short-term memory network in chronological order; after the network integrates the input features of one time step, it computes the predictions at each moment in a forward pass and smooths the raw data to obtain the final denoised data.
The invention has the beneficial effects that:
1. The method treats IMU measurement data as a time series and solves signal denoising as a sequence prediction problem, so that the noise characteristics of the sequence can be learned and extracted more effectively.
2. The invention designs a long short-term memory network training model to extract the nonlinear characteristics of IMU data, predicting the output at the next moment from the sequence features of historical moments. Compared with traditional methods, this models the noise characteristics more accurately and greatly improves the noise-removal effect. While guaranteeing a good denoising effect, the method does not change the randomness of the original data, greatly improves the precision of the motion sensor, and can therefore improve the precision of a navigation system.
3. The network used in the invention for IMU data denoising has a simple structure, high efficiency, and easy extensibility. The LSTM learns a reliable temporal feature representation from sequence samples; compared with traditional denoising methods based on statistical models, the algorithm has better robustness and generalization capability while preserving the randomness of the original data. The method has important application value in inertial technology, integrated navigation, and related fields.
Drawings
FIG. 1 is a flow chart of an IMU data denoising method based on an LSTM neural network provided by the invention;
FIG. 2 is a diagram of a neural network architecture of the present invention;
FIG. 3 is a table of the standard deviations of the predicted values for different time steps of the training model of the invention.
Detailed Description
The present invention is further described below in conjunction with the detailed description and the accompanying drawings. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention belongs. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the prior art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. The preferred embodiments described here are merely illustrative and explanatory of the invention and do not restrict it.
As shown in FIGS. 1-2, the IMU data denoising method based on a neural network according to this embodiment comprises the following steps:
Step 1: a robot experiment platform based on a motion sensor control system acquires the data measured by the inertial measurement unit through the serial port of the lower computer, and a preliminary IMU measurement data sample set is established; to facilitate subsequent model training and verification, the whole data set is randomly divided into a training set and a test set at a certain ratio. Specifically, 80% of the data is used as the training set and the remaining 20% as the test set.
The data obtained in step 1 are the three-axis dynamic acceleration and three-axis angular velocity measured by the inertial measurement unit while the experimental robot rotates in place in a dynamic environment; the upload frequency of the lower-computer serial port is set to a fixed value, and a large amount of data is collected over a period of time under the in-place rotation condition.
The hardware of the robot experiment platform carrying the motion sensor control system in step 1 includes an MPU6050 motion sensor.
The data measured in step 1 comprise the attitude Euler angles, the three-axis acceleration components, the three-axis angular velocity components, the output frequency, the outputs of the encoders of the left and right wheels, and the sampling time; the data processed by the invention are the three-axis acceleration components and the three-axis angular velocity components.
The acceleration component output by the motion sensor in step 1 is a 16-bit signed integer that must be converted into real-world acceleration; the formula for converting the ACCX reading is ax = ACCX × 2g/32768. Similarly, the measured angular velocity component must be converted into radians using gx = GCCX × 2000 × π/(32768 × 180).
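For illustration, a minimal Python sketch of this unit conversion is given below; it assumes the ±2 g accelerometer range and ±2000 °/s gyroscope range implied by the formulas above, and the function names are hypothetical.

```python
import math

G = 9.80665  # standard gravity, m/s^2

def raw_accel_to_ms2(accx: int) -> float:
    # 16-bit signed reading on a +/-2 g full-scale range: ax = ACCX * 2g / 32768
    return accx * 2.0 * G / 32768.0

def raw_gyro_to_rads(gccx: int) -> float:
    # 16-bit signed reading on a +/-2000 deg/s range, converted to rad/s:
    # gx = GCCX * 2000 * pi / (32768 * 180)
    return gccx * 2000.0 * math.pi / (32768.0 * 180.0)

# hypothetical raw readings
print(raw_accel_to_ms2(16384))  # ~9.81 m/s^2, i.e. one g
print(raw_gyro_to_rads(16384))  # ~17.45 rad/s, i.e. 1000 deg/s
```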
Step 2: converting original data into historical data in a time step by using a sliding window method, wherein the historical data is characterized, and the data at the next moment is in a data format of a label and is used as a time sequence prediction problem data set; firstly, standardizing and normalizing data; secondly, generating data for training, wherein historical data of a time step is taken as a characteristic, and data of the latter position is taken as a label of a sample; and making a time step sliding window to generate a data format which can be used for time sequence prediction.
Step 3: a network structure is designed based on the recurrent neural network, with a long short-term memory network as the main body; the input dimensionality is the same as that of the feature vectors obtained in step 2; a fully connected layer and an activation function are added after the single-layer LSTM network; the parameters of the model are then fine-tuned on the data set established in step 2.
The constructed recurrent neural network takes as input a three-dimensional tensor of the form [n_samples, timesteps, features], where n_samples is the total number of samples generated by the sliding window, timesteps is the time step selected in step 4, and features is the feature dimension, which is 1 in this invention.
The constructed recurrent neural network takes a single-layer LSTM as its main body; during computation, an input gate, a forget gate, and an output gate control the influence of the historical state of the network unit on the current state, thereby extracting temporal features. The hidden layer is set to 1 unit; the LSTM network parameters include a batch size of 128, 20 training iterations, a mean squared error loss function, and the Adam optimization algorithm; the subsequent fully connected layer outputs the predicted value directly, with a linear activation function.
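Assuming a Keras-style implementation (the patent does not name a framework), the network described above could be sketched roughly as follows:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_model(time_steps: int, features: int = 1):
    model = Sequential([
        # single-layer LSTM body; units=1 mirrors the hidden-layer setting above
        LSTM(units=1, input_shape=(time_steps, features)),
        # fully connected layer with a linear activation outputs the prediction directly
        Dense(1, activation="linear"),
    ])
    # mean squared error loss with the Adam optimizer, as specified
    model.compile(loss="mse", optimizer="adam")
    return model
```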
Step 4: to select a suitable time step, several models are trained under different time steps using the above model parameters, and the optimal time step is selected according to the standard deviation of the data after model processing.
Here, using the parameter model fine-tuned in step 3, models are trained with time steps between 5 and 50, as shown in FIG. 3: judging by the standard deviation, the data processed by the model has the lowest standard deviation at a time step of 25, so 25 is taken as the optimal time step.
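Reusing the hypothetical helpers from the earlier sketches, the time-step selection could look like the following; the candidate grid (5 to 50 in steps of 5) is an assumption, since only the range 5–50 is stated.

```python
results = {}
for steps in range(5, 51, 5):                      # candidate time steps between 5 and 50
    x, y = make_supervised(norm, steps)
    m = build_model(steps)
    m.fit(x, y, batch_size=128, epochs=20, verbose=0)
    results[steps] = float(m.predict(x, verbose=0).std())  # compare standard deviations, as in FIG. 3
best_step = min(results, key=results.get)          # time step with the lowest standard deviation
```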
Step 5: after all parameters are obtained in steps 3 and 4, the long short-term memory network is trained with back-propagation through time, and its prediction performance is trained and verified on the feature-vector data set obtained in step 2.
The training of the long short-term memory network with back-propagation through time in step 5 comprises the following steps: the initial parameters of the network are initialized randomly, and the loss function is the mean squared error:
L(θ) = (1/N) Σᵢ (ŷᵢ − yᵢ)², where ŷᵢ is the predicted value output by the LSTM network, yᵢ is the label value, and θ denotes the network parameters, specifically those of the forget gate, the input gate, and the output gate. The error is back-propagated along the time direction and the parameters are updated iteratively until the loss function converges.
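In a Keras-style sketch, this training and verification step might be written as follows (back-propagation through time is handled internally when fitting the recurrent layer; the train/test split follows the 80/20 ratio stated above):

```python
x_all, y_all = make_supervised(norm, best_step)    # regenerate windows with the selected time step
split = int(0.8 * len(x_all))
x_train, y_train = x_all[:split], y_all[:split]
x_test, y_test = x_all[split:], y_all[split:]

model = build_model(best_step)
model.fit(x_train, y_train,
          batch_size=128, epochs=20,
          validation_data=(x_test, y_test))         # verify the prediction effect on the held-out 20%
print("test MSE:", model.evaluate(x_test, y_test, verbose=0))
```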
Step 6: after the best-performing sequence prediction model is selected in step 5, the algorithm model is deployed; once deployed, only the forward computation is needed and the back-propagation used during training is no longer required; in practical application, the result can be analyzed in terms of the mean and the standard deviation. The deployment process of the model in step 6 is as follows:
Step 601: after the measurement data is uploaded through the serial port, the raw data is processed to obtain the acceleration and angular velocity values in the world coordinate system at each moment;
Step 602: the measured values are fed into the trained long short-term memory network in chronological order; after the network integrates the input features of one time step, it computes the predictions at each moment in a forward pass and smooths the raw data to obtain the final denoised data.
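A forward-only deployment sketch, again using the hypothetical helpers above (a real deployment would read the serial stream instead of a stored array):

```python
def denoise(model, series: np.ndarray, time_steps: int) -> np.ndarray:
    """Slide the trained network over the measurements in chronological order and
    use its one-step-ahead predictions as the smoothed signal (forward pass only)."""
    smoothed = list(series[:time_steps])                 # warm-up: first window kept as-is
    windows = make_supervised(series, time_steps)[0]     # [n_samples, timesteps, 1]
    preds = model.predict(windows, verbose=0).ravel()
    smoothed.extend(preds.tolist())
    return np.asarray(smoothed)

denoised = denoise(model, norm, best_step)
# analyse the result in terms of mean and standard deviation, as suggested above
print("raw:      mean %.4f  std %.4f" % (norm.mean(), norm.std()))
print("denoised: mean %.4f  std %.4f" % (denoised.mean(), denoised.std()))
```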
The technical means disclosed in the scheme of the invention are not limited to those disclosed in the above embodiments, but also include technical solutions formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications are also considered to be within the scope of protection of the invention.

Claims (10)

1. An IMU data denoising method based on a neural network, characterized by comprising the following steps:
Step 1: a robot experiment platform based on a motion sensor control system acquires the data measured by the inertial measurement unit through the serial port of the lower computer, and a preliminary IMU measurement data sample set is established; to facilitate subsequent model training and verification, the whole data set is randomly divided into a training set and a test set at a certain ratio;
the data obtained in step 1 are the three-axis dynamic acceleration and three-axis angular velocity measured by the inertial measurement unit while the experimental robot rotates in place in a dynamic environment; the upload frequency of the lower-computer serial port is set to a fixed value, and a large amount of data is collected over a period of time under the in-place rotation condition;
Step 2: a sliding-window method converts the raw data into a supervised format in which the historical data within one time step are the features and the data at the next moment are the label, forming a time-series prediction data set; the data is first standardized and normalized; training data is then generated by taking the historical data within one time step as the features and the data at the next position as the label of the sample; a sliding window of one time step thus produces a data format usable for time-series prediction;
Step 3: a network structure is designed based on the recurrent neural network, with a long short-term memory network as the main body; the input dimensionality is the same as that of the feature vectors obtained in step 2; a fully connected layer and an activation function are added after the single-layer LSTM network; the parameters of the model are then fine-tuned on the data set established in step 2;
Step 4: to select a suitable time step, several models are trained under different time steps using the above model parameters;
the optimal time step is selected according to the standard deviation of the data after model processing;
Step 5: after all parameters are obtained in steps 3 and 4, the long short-term memory network is trained with back-propagation through time, and its prediction performance is trained and verified on the feature-vector data set obtained in step 2;
Step 6: after the best-performing sequence prediction model is selected in step 5, the algorithm model is deployed; once deployed, only the forward computation is needed and the back-propagation used during training is no longer required; in practical application, the result can be analyzed in terms of the mean and the standard deviation.
2. The IMU data denoising method based on a recurrent neural network according to claim 1, wherein the hardware of the robot experiment platform carrying the motion sensor control system in step 1 includes, but is not limited to, an MPU6050 motion sensor.
3. The IMU data denoising method based on a recurrent neural network according to claim 1, wherein the data measured in step 1 comprise the attitude Euler angles, the three-axis acceleration components, the three-axis angular velocity components, the output frequency, the outputs of the encoders of the left and right wheels, and the sampling time; the data processed by the invention are the three-axis acceleration components and the three-axis angular velocity components.
4. The IMU data denoising method based on a recurrent neural network according to claim 1, wherein the acceleration component output by the motion sensor in step 1 is a 16-bit signed integer that must be converted into real-world acceleration, the conversion formula for the ACCX reading being ax = ACCX × 2g/32768; the measured angular velocity component must likewise be converted into radians using gx = GCCX × 2000 × π/(32768 × 180).
5. The IMU data denoising method based on a recurrent neural network according to claim 1, wherein in step 1, 80% of the data is used as the training set and the remaining 20% as the test set.
6. The IMU data denoising method based on a recurrent neural network according to claim 1, wherein the recurrent neural network built in step 3 takes as input a three-dimensional tensor of the form [n_samples, timesteps, features], where n_samples is the total number of samples generated by the sliding window, timesteps is the time step selected in step 4, and features is the feature dimension, which is 1 in this invention.
7. The IMU data denoising method based on a recurrent neural network according to claim 1, wherein the recurrent neural network constructed in step 3 takes a single-layer LSTM as its main body; during computation, an input gate, a forget gate, and an output gate control the influence of the historical state of the network unit on the current state, thereby extracting temporal features; the hidden layer is set to 1 unit; the LSTM network parameters include a batch size of 128, 20 training iterations, a mean squared error loss function, and the Adam optimization algorithm; the subsequent fully connected layer outputs the predicted value directly, with a linear activation function.
8. The IMU data denoising method based on a recurrent neural network according to claim 1, wherein step 4 uses the parameter model fine-tuned in step 3 to train several models with time steps between 5 and 50; judging by the standard deviation, the data processed by the model has the lowest standard deviation at a time step of 25, so 25 is taken as the optimal time step.
9. The IMU data denoising method based on a recurrent neural network according to claim 1, wherein the training of the long short-term memory network with back-propagation through time in step 5 comprises the following steps: the initial parameters of the network are initialized randomly, and the loss function is the mean squared error L(θ) = (1/N) Σᵢ (ŷᵢ − yᵢ)², where ŷᵢ is the predicted value output by the LSTM network, yᵢ is the label value, and θ denotes the network parameters, specifically those of the forget gate, the input gate, and the output gate; the error is back-propagated along the time direction and the parameters are updated iteratively until the loss function converges.
10. The IMU data denoising method based on a recurrent neural network according to claim 1, wherein the model deployment process in step 6 is as follows:
Step 601: after the measurement data is uploaded through the serial port, the raw data is processed to obtain the acceleration and angular velocity values in the world coordinate system at each moment;
Step 602: the measured values are fed into the trained long short-term memory network in chronological order; after the network integrates the input features of one time step, it computes the predictions at each moment in a forward pass and smooths the raw data to obtain the final denoised data.
CN201910888811.2A 2019-09-19 2019-09-19 IMU data denoising method based on recurrent neural network Active CN110728357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910888811.2A CN110728357B (en) 2019-09-19 2019-09-19 IMU data denoising method based on recurrent neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910888811.2A CN110728357B (en) 2019-09-19 2019-09-19 IMU data denoising method based on recurrent neural network

Publications (2)

Publication Number Publication Date
CN110728357A CN110728357A (en) 2020-01-24
CN110728357B true CN110728357B (en) 2022-11-18

Family

ID=69219263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910888811.2A Active CN110728357B (en) 2019-09-19 2019-09-19 IMU data denoising method based on recurrent neural network

Country Status (1)

Country Link
CN (1) CN110728357B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111561929B (en) * 2020-04-26 2022-05-03 南京工业大学 Time delay and noise reduction method for vehicle-mounted MEMS inertial sensor
CN111895986A (en) * 2020-06-30 2020-11-06 西安建筑科技大学 MEMS gyroscope original output signal noise reduction method based on LSTM neural network
CN112671419B (en) * 2020-12-17 2022-05-03 北京邮电大学 Wireless signal reconstruction method, device, system, equipment and storage medium
CN113252060B (en) * 2021-05-31 2021-09-21 智道网联科技(北京)有限公司 Vehicle track calculation method and device based on neural network model
CN113447021B (en) * 2021-07-15 2023-04-25 北京理工大学 MEMS inertial navigation system positioning enhancement method based on LSTM neural network model
CN116628421B (en) * 2023-05-19 2024-01-30 北京航空航天大学 IMU (inertial measurement Unit) original data denoising method based on self-supervision learning neural network model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409242A (en) * 2018-09-28 2019-03-01 东南大学 A kind of black smoke vehicle detection method based on cyclic convolution neural network
CN109492838A (en) * 2019-01-16 2019-03-19 中国地质大学(武汉) A kind of stock index price expectation method based on deep-cycle neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409242A (en) * 2018-09-28 2019-03-01 东南大学 A kind of black smoke vehicle detection method based on cyclic convolution neural network
CN109492838A (en) * 2019-01-16 2019-03-19 中国地质大学(武汉) A kind of stock index price expectation method based on deep-cycle neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Financial time series prediction based on gated recurrent unit neural networks; Zhang Jinlei et al.; Journal of Guangxi Normal University (Natural Science Edition); 2019-04-25 (No. 02); full text *

Also Published As

Publication number Publication date
CN110728357A (en) 2020-01-24

Similar Documents

Publication Publication Date Title
CN110728357B (en) IMU data denoising method based on recurrent neural network
US11704554B2 (en) Automated training data extraction method for dynamic models for autonomous driving vehicles
CA2704107A1 (en) A method and system for data analysis and synthesis
CN107110650A (en) The method of estimation of affined navigational state in terms of observability
Sinha et al. Estimating ocean surface currents with machine learning
Li et al. Underwater terrain-aided navigation system based on combination matching algorithm
CN109615860A (en) A kind of signalized intersections method for estimating state based on nonparametric Bayes frame
CN103604430A (en) Marginalized cubature Kalman filter (CKF)-based gravity aided navigation method
US9659122B2 (en) Aerodynamic design optimization using information extracted from analysis of unstructured surface meshes
Lin et al. An improved MCMC-based particle filter for GPS-aided SINS in-motion initial alignment
CN104280047A (en) Gyroscope shift filtering system and method integrating multiple sensors
Yan et al. An adaptive nonlinear filter for integrated navigation systems using deep neural networks
Chauhan et al. Review of aerodynamic parameter estimation techniques
Du et al. A hybrid fusion strategy for the land vehicle navigation using MEMS INS, odometer and GNSS
CN111469781B (en) For use in output of information processing system method and apparatus of (1)
CN114608568A (en) Multi-sensor-based information instant fusion positioning method
Zhou et al. A discrete quaternion particle filter based on deterministic sampling for IMU attitude estimation
Golroudbari et al. Generalizable end-to-end deep learning frameworks for real-time attitude estimation using 6DoF inertial measurement units
CN116502777A (en) Transformer-based four-dimensional track long-time prediction method and device
Rezaie et al. Shrinked (1− α) ensemble Kalman filter and α Gaussian mixture filter
US11599751B2 (en) Methods and apparatus to simulate sensor data
Hao et al. Particle filter for INS in-motion alignment
CN108646719B (en) Weak fault detection method and system
Fernandes et al. Gnss/mems-ins integration for drone navigation using ekf on lie groups
Schuhmacher et al. Investigation of the Robustness of Neural Density Fields

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant