CN106959121A - Application method of an extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation - Google Patents
Application method of an extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation
- Publication number
- CN106959121A (application number CN201710113294.2A)
- Authority
- CN
- China
- Prior art keywords
- magnetic compass
- error
- reverse
- tuning
- autonomous
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C25/00—Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
Landscapes
- Engineering & Computer Science (AREA)
- Manufacturing & Machinery (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Measuring Magnetic Variables (AREA)
Abstract
The present invention proposes an extreme learning algorithm with autonomous reverse tuning for the compensation of complex magnetic compass errors. An implicit error model of the magnetic compass is first established on the basis of an extreme learning machine (ELM); the network parameters are then determined with the extreme learning algorithm; borrowing the reverse-tuning mechanism of deep learning, these parameters are fine-tuned in reverse using the network residual; finally, the trained error model (a neural network) is used to compensate the magnetic compass error. In the standard extreme learning algorithm the connection weights between the input layer and the hidden layer are chosen at random, which improves training speed but also degrades network performance to some extent. To address this, the present invention proposes an extreme learning algorithm with autonomous reverse tuning: the randomly initialized connection weights are fine-tuned in reverse using the network residual, and the number of hidden-layer neurons is optimized autonomously, substantially improving magnetic compass error-compensation accuracy while preserving learning efficiency.
Description
Technical field
The present invention relates to the field of magnetoelectronic compass error compensation.
Background technology
Navigation and orientation play an increasingly important role in daily life and in the military field. The magnetic compass is a passive navigation device: it is well concealed, its signal cannot be jammed, and it accumulates no error over time, making it one of the core technologies of navigation and orientation. Recently developed AMR magnetoresistive sensors are small and fast-responding and have become the main trend. However, AMR sensors not only suffer from cross-axis effects; because the spacing between the sensing element and the subsequent conditioning circuit is small, the magnetic-field distortion caused by the integrated-circuit pin material is more prominent than in fluxgate sensors. This undoubtedly increases the difficulty of calibration and places higher demands on error-compensation technology.
The currently popular approach is to analyze the error mechanism, exploit the invariance of the amplitude and direction of the Earth's magnetic field, establish an explicit error model, and then estimate the error parameters by various methods, e.g. the ellipse-hypothesis method, the step calibration method, the ellipsoid-hypothesis method, the magnitude-constrained filtering method, the attitude-inversion method, and the dot-product-invariance method. In each case an explicit error model (or measurement model) is built by mechanism analysis, and its parameters are then estimated offline or online by different methods. However, owing to the complexity of magnetic-sensor error sources, no explicit error model can cover all error components. Researchers have therefore begun to train implicit error models with neural networks.
Patent CN104931028A, a three-axis magnetoelectronic compass error-compensation method based on deep learning, proposes training an implicit error model with deep learning to reduce the error. However, a deep-learning method needs a large amount of error data for training: its demand on sample size is very high, the training time is long, and with a small sample overfitting occurs easily, degrading the generalization ability of the model.
Content of the invention
The present invention provides a method usable for magnetic compass nonlinear error compensation, namely an application method of an extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation, which trains an implicit error model so as to compensate the nonlinear error present in magnetic compass measurements and improve the orientation accuracy of the magnetic compass.
An application method of an extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation comprises the following steps:
S1, establishing an implicit error model of the magnetic compass based on an extreme learning machine;
S2, determining the network parameters with the extreme learning algorithm;
S3, borrowing the reverse-tuning mechanism of deep learning, fine-tuning the above network parameters in reverse using the network residual;
S4, compensating the magnetic compass error with the trained error model (neural network).
Preferably, step S1 establishes the implicit error model of the magnetic compass and trains it, so as to compensate the nonlinear error present in magnetic compass measurements and improve the orientation accuracy of the magnetic compass.
In any of the above schemes preferably, in the implicit error model of the magnetic compass, the heading angle α is defined as the angle between the projection of the carrier's forward direction onto the horizontal plane and the local meridian, measured clockwise with a value range of 0-360°. The magnetic heading angle φ can be computed from the three-axis magnetometer and accelerometer constituting the magnetic compass; with the magnetic declination denoted δ, the heading angle is
α = φ + δ (1)
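Formula (1) assumes the magnetic heading angle φ has already been obtained from the three-axis magnetometer and accelerometer; the patent does not spell this computation out. A minimal sketch of the standard tilt-compensated heading computation follows (the function name, axis conventions, and sign choices are the editor's assumptions, not the patent's):

```python
import math

def magnetic_heading(mx, my, mz, ax, ay, az, declination_deg=0.0):
    """Tilt-compensated heading in degrees, 0-360 clockwise from north.

    mx, my, mz: magnetometer axes; ax, ay, az: accelerometer axes,
    both in a carrier frame with x forward, y right, z down-ish.
    """
    # Pitch and roll from the gravity vector measured by the accelerometer.
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    # Rotate the magnetic field vector into the horizontal plane.
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    phi = math.degrees(math.atan2(-myh, mxh)) % 360.0  # magnetic heading phi
    return (phi + declination_deg) % 360.0             # alpha = phi + delta, formula (1)
```

With a level carrier the pitch/roll terms vanish and the heading reduces to the familiar planar `atan2` of the horizontal field components.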
In any of the above schemes preferably, in the implicit error model of the magnetic compass, the three-axis magnetometer suffers from soft-iron interference and cross-axis effects, so the measured heading angle and the actual heading α satisfy a nonlinear error-function relation (2).
In any of the above schemes preferably, step S3 fine-tunes the randomly initialized connection weights in reverse through the network residual, realizing autonomous optimization of the number of hidden-layer nodes.
In any of the above schemes preferably, magnetic compass error compensation in step S4 is divided into two steps:
(S401) training the error model;
(S402) error compensation during use.
In any of the above schemes preferably, the error-model training in step S401 comprises two stages:
(S701) randomly initializing the input connection weights w_ij and biases b_i, and computing the output connection weights β_ki by formula;
(S702) fine-tuning the connection weights in reverse according to the network residual, borrowing the reverse tuning of deep learning.
In any of the above schemes preferably, during the error-model training the first stage (S701) is performed when N is odd, and the second stage (S702) is performed when N is even.
In any of the above schemes preferably, during the error-model training, when N is odd and the first stage is performed, the ELM network has no output-layer bias, and the input weights w_ij and hidden-layer biases b_i are generated at random and need no adjustment, so that only the output weights β_ki of the whole network remain to be determined.
Assume the hidden layer has L neurons and the training set contains N distinct samples (x_i, y_i), where the input x_i = (x_i1, x_i2 … x_in) ∈ R^n and the output y_i = (y_i1, y_i2 … y_im) ∈ R^m. With a nonlinear activation function g(x), the N input samples drawn from the same continuous system can be approximated with zero error, so that
Σ_{i=1}^{L} β_i g(w_i·x_j + b_i) = y_j, j = 1, 2 … N (3)
where w_i = (w_i1, w_i2 … w_in) and x_j = (x_1j, x_2j … x_nj)^T. Formula (3) can be rewritten as
Y = Hβ (4)
where H is the hidden-layer output matrix with entries H_ji = g(w_i·x_j + b_i).
Setting the output of the ELM network equal to the sample labels, as shown in formula (5):
T = Y = Hβ (5)
In most cases the number of hidden neurons is far smaller than the number N of training samples, so solving formula (5) amounts to finding the solution that minimizes the loss function C
C = ||Hβ − T|| (6)
According to the minimum-norm criterion, formula (6) has the minimum-norm least-squares solution
β = H⁺T (7)
where H⁺ is the Moore–Penrose generalized inverse of the output matrix H.
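The first training stage, random (w_ij, b_i) followed by the minimum-norm solution (7), can be sketched in a few lines of NumPy; the sigmoid activation, the weight ranges, the data shapes, and the function names are illustrative assumptions:

```python
import numpy as np

def elm_fit(X, T, L, seed=0):
    """First-stage ELM training: random input weights and biases,
    output weights by the minimum-norm least-squares solution beta = H^+ T."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-4.0, 4.0, size=(L, X.shape[1]))  # random input weights w_ij
    b = rng.uniform(-1.0, 1.0, size=L)                # random hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))          # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T                      # formula (7): beta = H^+ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of the trained ELM: Y = H(X) beta."""
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta
```

`np.linalg.pinv` computes the Moore–Penrose inverse H⁺, so no iterative optimization is needed in this stage; that is exactly the source of the ELM's training speed.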
In any of the above schemes preferably, during the error-model training, when N is even, the second stage is performed, comprising the following sub-steps:
(S1001) with the connection weights w_ij, b_i and β_ki obtained in the first stage, the network output corresponding to the magnetic compass measurements is obtained by forward propagation;
(S1002) the deviation between the network output and the magnetic compass reference value is computed as the network residual err;
(S1003) the activation values of the three layers of the network are denoted a1, a2 and a3 respectively, and the network loss function is e;
(S1004) reverse tuning is carried out to reduce the loss function e.
In any of the above schemes preferably, in step (S1004) the minimum of the loss function e is sought by taking its partial derivatives with respect to the connection weights, comprising the following steps:
(S1101) during forward propagation the activation value of each layer is
a1 = X, a2 = g(W·a1 + b), a3 = β·a2 (8)
the network residual of each node is err_i = T_i − a3_i, and the loss function is e = ½ Σ_i err_i²;
(S1102) the third-layer and second-layer residuals needed for reverse derivation are
δ3 = −err, δ2 = (β^T·δ3) ⊙ g′(W·a1 + b) (9)
(S1103) the gradients with respect to the input-layer and output-layer connection weights are
∂e/∂W = δ2·a1^T, ∂e/∂β = δ3·a2^T (10)
(S1104) reverse tuning is carried out by gradient descent:
W ← W − η·∂e/∂W, b ← b − η·δ2, β ← β − η·∂e/∂β (11)
where η is the learning rate.
In any of the above schemes preferably, the network cost function is updated according to the reverse-tuning result of step (S1004); if it falls below the target value, or if the number of iterations exceeds the preset value K, training ends; otherwise the procedure returns to the preceding step and continues.
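Steps (S1101)-(S1104) are ordinary backpropagation restricted to one hidden layer. A sketch under the assumptions of a sigmoid activation and the squared-error loss e = ½ Σ err² (the learning rate, iteration count, and array shapes are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reverse_tune(X, T, W, b, beta, eta=0.01, iters=100):
    """Fine-tune the randomly initialized W, b (and beta) by gradient
    descent on the squared-error loss, as in steps (S1101)-(S1104)."""
    for _ in range(iters):
        a1 = X                                 # (S1101) forward propagation
        a2 = sigmoid(a1 @ W.T + b)
        a3 = a2 @ beta                         # network output
        err = T - a3                           # residual err_i = T_i - a3_i
        d3 = -err                              # (S1102) output-layer residual
        d2 = (d3 @ beta.T) * a2 * (1.0 - a2)   # hidden-layer residual
        W -= eta * (d2.T @ a1)                 # (S1103)-(S1104) gradient step
        b -= eta * d2.sum(axis=0)
        beta -= eta * (a2.T @ d3)
    return W, b, beta
```

Because the starting point comes from the least-squares ELM solution rather than a fully random initialization, only a modest number of descent steps is needed.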
In any of the above schemes preferably, the magnetic compass error compensation trains the nonlinear error model with the extreme learning algorithm based on autonomous reverse tuning, and then uses the model to invert the distorted magnetic-field measurements back to the true field value, reducing the heading-angle calculation error:
α̂_c = β·g(W·X + b) (12)
where α̂_c is the compensated magnetic heading angle, g(·) is the activation function, X is the input vector, and the weights are those of the corresponding layers of the trained network.
In any of the above schemes preferably, the magnetic compass is installed at a suitable location inside the carrier; the heading-angle measurements of the magnetic compass are then normalized and fed into the nonlinear error model trained above, and the compensated heading-angle output is obtained according to formula (12).
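At run time the compensation of formula (12) reduces to normalization plus one forward pass. A minimal sketch (the min-max bounds x_min, x_max and the trained parameters W, b, beta are assumed to come from the training phase; all names are illustrative):

```python
import numpy as np

def compensate_heading(alpha_meas, W, b, beta, x_min, x_max):
    """Map measured heading(s) through the trained implicit error model.

    alpha_meas: raw heading measurement(s) in degrees; x_min, x_max: the
    min-max normalization bounds recorded during training.
    """
    x = (np.atleast_2d(alpha_meas).T - x_min) / (x_max - x_min)  # normalize input
    a2 = 1.0 / (1.0 + np.exp(-(x @ W.T + b)))                    # hidden layer
    return (a2 @ beta).ravel()                                   # compensated output
```

Reusing the training-time normalization bounds at run time is essential; renormalizing against the live data would shift the model's input distribution.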
For the nonlinear error present in the magnetic compass, the present invention establishes an implicit error model and proposes an extreme learning algorithm with autonomous reverse tuning. The initial weights of each layer are not produced completely at random, which not only greatly reduces the probability of training falling into a local minimum but also improves training efficiency; the autonomous reverse tuning further solves the problem of how to determine the optimal number of hidden nodes.
Brief description of the drawings
Fig. 1 is the extreme-learning error model with autonomous reverse tuning of a preferred embodiment of the application method of the extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation according to the invention.
Fig. 2 is the training flow chart of the extreme-learning network based on autonomous reverse tuning of a preferred embodiment of the application method according to the invention.
Fig. 3 is the magnetic compass error-compensation schematic diagram of a preferred embodiment of the application method according to the invention.
Fig. 4 is the magnetic compass model schematic diagram of a preferred embodiment of the application method according to the invention.
Fig. 5 is the application flow chart of the extreme learning algorithm in magnetic compass error compensation of a preferred embodiment of the application method according to the invention.
Embodiments
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the drawings, in which the same or similar reference numbers denote the same or similar devices, or devices with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, serve only to explain the invention, and are not to be construed as limiting the claims.
An application method of an extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation comprises the following steps:
S1, establishing an implicit error model of the magnetic compass based on an extreme learning machine;
S2, determining the network parameters with the extreme learning algorithm;
S3, borrowing the reverse-tuning mechanism of deep learning, fine-tuning the above network parameters in reverse using the network residual;
S4, compensating the magnetic compass error with the trained error model (neural network), as shown in Fig. 5.
Step S1 establishes the implicit error model of the magnetic compass and trains it, so as to compensate the nonlinear error present in magnetic compass measurements and improve the orientation accuracy of the magnetic compass.
In any of the above schemes preferably, in the implicit error model of the magnetic compass, the heading angle α is defined as the angle between the projection of the carrier's forward direction onto the horizontal plane and the local meridian, measured clockwise with a value range of 0-360°. The magnetic heading angle φ can be computed from the three-axis magnetometer and accelerometer constituting the magnetic compass; with the magnetic declination denoted δ, the heading angle is
α = φ + δ (1)
In the implicit error model of the magnetic compass, the three-axis magnetometer suffers from soft-iron interference and cross-axis effects, so the measured heading angle and the actual heading α satisfy a nonlinear error-function relation (2).
Step S3 fine-tunes the randomly initialized connection weights in reverse through the network residual, realizing autonomous optimization of the number of hidden-layer neurons. In Fig. 4, δ is the magnetic declination, the angle between the magnetic meridian and the geographic meridian at any point on the Earth's surface, which exists because the geographic north pole and the magnetic north pole do not coincide exactly; φ is the magnetic heading angle, the angle between the carrier's longitudinal axis and magnetic north; α is the heading angle, the angle between the carrier's longitudinal axis and geographic north, also called the true heading angle. The relation between the three is α = φ + δ.
Fig. 1 gives a schematic of the extreme-learning error model with autonomous reverse tuning: the measured heading angle is fed into an incremental extreme learning machine, and the output is the reference heading angle. An incremental extreme learning machine can be abstracted as a three-layer network structure: input layer, hidden layer, and output layer. The number of hidden-layer nodes has a large influence on the training accuracy of the error model, and the optimal number of nodes is usually found by incremental iterative search: the network obtained at the (k+1)-th iteration (with k+1 hidden neurons) has a smaller output error than the network obtained at the k-th iteration, which solves the difficult problem of hidden-node selection. However, among the k+1 hidden neurons many have very small output weights w_ij and contribute little to the final network output. Pruning these "useless" neurons greatly reduces the number of iterations of the incremental extreme learning machine and improves learning efficiency. Borrowing the reverse-tuning idea for connection weights from deep-learning algorithms, reverse derivation and gradient descent are applied to the connection weights according to the output residual of the extreme learning network. This method both reduces the number of iterations and guarantees the network training accuracy.
Magnetic compass error compensation in step S4 is divided into two steps:
(S401) training the error model;
(S402) error compensation during use.
In any of the above schemes preferably, the error-model training in step S401 comprises two stages:
(S701) randomly initializing the input connection weights w_ij and biases b_i, and computing the output connection weights β_ki by formula;
(S702) fine-tuning the connection weights in reverse according to the network residual, borrowing the reverse tuning of deep learning.
In any of the above schemes preferably, during the error-model training the first stage (S701) is performed when N is odd, and the second stage (S702) is performed when N is even.
In any of the above schemes preferably, during the error-model training, when N is odd and the first stage is performed, the ELM network has no output-layer bias, and the input weights w_ij and hidden-layer biases b_i are generated at random and need no adjustment, so that only the output weights β_ki of the whole network remain to be determined.
Assume the hidden layer has L neurons and the training set contains N distinct samples (x_i, y_i), where the input x_i = (x_i1, x_i2 … x_in) ∈ R^n and the output y_i = (y_i1, y_i2 … y_im) ∈ R^m. With a nonlinear activation function g(x), the N input samples drawn from the same continuous system can be approximated with zero error, so that
Σ_{i=1}^{L} β_i g(w_i·x_j + b_i) = y_j, j = 1, 2 … N (3)
where w_i = (w_i1, w_i2 … w_in) and x_j = (x_1j, x_2j … x_nj)^T. Formula (3) can be rewritten as
Y = Hβ (4)
where H is the hidden-layer output matrix with entries H_ji = g(w_i·x_j + b_i).
Setting the output of the ELM network equal to the sample labels, as shown in formula (5):
T = Y = Hβ (5)
In most cases the number of hidden neurons is far smaller than the number N of training samples, so solving formula (5) amounts to finding the solution that minimizes the loss function C
C = ||Hβ − T|| (6)
According to the minimum-norm criterion, formula (6) has the minimum-norm least-squares solution
β = H⁺T (7)
where H⁺ is the Moore–Penrose generalized inverse of the output matrix H.
In any of the above schemes preferably, during the error-model training, when N is even, the second stage is performed, comprising the following sub-steps:
(S1001) with the connection weights w_ij, b_i and β_ki obtained in the first stage, the network output corresponding to the magnetic compass measurements is obtained by forward propagation;
(S1002) the deviation between the network output and the magnetic compass reference value is computed as the network residual err;
(S1003) the activation values of the three layers of the network are denoted a1, a2 and a3 respectively, and the network loss function is e;
(S1004) reverse tuning is carried out to reduce the loss function e.
In any of the above schemes preferably, in step (S1004) the minimum of the loss function e is sought by taking its partial derivatives with respect to the connection weights, comprising the following steps:
(S1101) during forward propagation the activation value of each layer is
a1 = X, a2 = g(W·a1 + b), a3 = β·a2 (8)
the network residual of each node is err_i = T_i − a3_i, and the loss function is e = ½ Σ_i err_i²;
(S1102) the third-layer and second-layer residuals needed for reverse derivation are
δ3 = −err, δ2 = (β^T·δ3) ⊙ g′(W·a1 + b) (9)
(S1103) the gradients with respect to the input-layer and output-layer connection weights are
∂e/∂W = δ2·a1^T, ∂e/∂β = δ3·a2^T (10)
(S1104) reverse tuning is carried out by gradient descent:
W ← W − η·∂e/∂W, b ← b − η·δ2, β ← β − η·∂e/∂β (11)
where η is the learning rate.
In any of the above schemes preferably, the network cost function is updated according to the reverse-tuning result of step (S1004); if it falls below the target value, or if the number of iterations exceeds the preset value K, training ends; otherwise the procedure returns to the preceding step and continues.
In any of the above schemes preferably, the magnetic compass error compensation trains the nonlinear error model with the extreme learning algorithm based on autonomous reverse tuning, and then uses the model to invert the distorted magnetic-field measurements back to the true field value, reducing the heading-angle calculation error:
α̂_c = β·g(W·X + b) (12)
where α̂_c is the compensated magnetic heading angle, g(·) is the activation function, X is the input vector, and the weights are those of the corresponding layers of the trained network.
In any of the above schemes preferably, the magnetic compass is installed at a suitable location inside the carrier; the heading-angle measurements of the magnetic compass are then normalized and fed into the nonlinear error model trained above, and the compensated heading-angle output is obtained according to formula (12).
Fig. 2 gives the training flow chart of the extreme-learning network based on autonomous reverse tuning. First the number of hidden nodes L, the loss-function threshold e and the iteration limit K are initialized; the input weights w and b are then initialized at random, and the output weights are computed according to formula (7); the neural network performs forward propagation and the network output is updated according to w and b. While the output loss exceeds e and the iteration count is below the maximum K, training continues; otherwise training ends. During training a branch is taken: if the node count L is odd, reverse derivation and gradient computation are carried out according to formulas (9) and (10), and the connection weights w and b are then reverse-tuned according to formula (11).
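The flow of Fig. 2 can be condensed into a single loop that grows the hidden layer and applies the reverse-tuning branch on odd node counts; the thresholds, the learning rate, the inner iteration count, and the parity-based branching shown here follow the figure description, while all numeric values are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, T, e_target=1e-4, K=50, eta=0.01, seed=0):
    """Incremental ELM training with reverse tuning on odd hidden-node counts."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    for L in range(1, K + 1):                    # grow the hidden layer
        W = rng.uniform(-1, 1, (L, n))
        b = rng.uniform(-1, 1, L)
        H = sigmoid(X @ W.T + b)
        beta = np.linalg.pinv(H) @ T             # first stage, formula (7)
        if L % 2 == 1:                           # reverse-tuning branch
            for _ in range(20):
                a2 = sigmoid(X @ W.T + b)
                d3 = -(T - a2 @ beta)
                d2 = (d3 @ beta.T) * a2 * (1 - a2)
                W -= eta * (d2.T @ X)            # formulas (9)-(11)
                b -= eta * d2.sum(axis=0)
                beta -= eta * (a2.T @ d3)
        loss = 0.5 * float(np.sum((T - sigmoid(X @ W.T + b) @ beta) ** 2))
        if loss < e_target:                      # stopping test of Fig. 2
            break
    return W, b, beta, loss
```

The loop stops either when the loss drops below the target or when the node budget K is exhausted, mirroring the two exit conditions in the flow chart.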
The magnetic compass is installed at a suitable location inside the carrier; the measured heading angle is normalized and fed into the trained nonlinear error model, and the compensated heading-angle output is obtained according to formula (12), as shown in Fig. 3. First step: the magnetic compass is fixed on a three-axis non-magnetic turntable, and a raw data set is obtained by varying the pitch, roll and azimuth angles of the magnetic compass with the turntable, taking care to record the heading angle output by the photoelectric encoder as the labels of the data set.
Second step: exploiting the invariance of the local geomagnetic-field and gravity-field vectors, a sliding median filter is designed to pre-process the measurement data; data whose geomagnetic-field/gravity-field vector inner product is abnormal are discarded, reducing the interference of magnetometer and accelerometer measurement noise and random errors, and yielding a good-quality error-model training set X with labels T.
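The dot-product invariance used in the second step means that, at a fixed location, the inner product of the geomagnetic-field vector and the gravity vector is constant regardless of carrier attitude; samples whose inner product deviates from the local reference are outliers. A sketch of this rejection plus the sliding median filter (the window size, tolerance, and function names are illustrative assumptions):

```python
import numpy as np

def preprocess(mag, acc, dot_ref, tol=0.05, window=5):
    """Reject samples whose mag/acc inner product deviates from dot_ref by
    more than tol (relative), then median-filter each magnetometer axis."""
    dots = np.einsum('ij,ij->i', mag, acc)             # inner product per sample
    keep = np.abs(dots - dot_ref) <= tol * abs(dot_ref)
    mag = mag[keep]
    # Sliding median filter over each axis, edge-padded at the boundaries.
    pad = window // 2
    padded = np.pad(mag, ((pad, pad), (0, 0)), mode='edge')
    filtered = np.empty_like(mag)
    for i in range(mag.shape[0]):
        filtered[i] = np.median(padded[i:i + window], axis=0)
    return filtered, keep
```

The median filter is preferred over a mean here because single-sample spikes that survive the inner-product test are removed without smearing them into neighbors.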
Third step: the raw data are normalized by the min-max method, improving the training accuracy of the network weights (i.e. of the error model).
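Min-max normalization in the third step just records the training-set extremes and rescales to [0, 1]; at run time the same recorded bounds must be reused (function names are illustrative):

```python
import numpy as np

def minmax_fit(X):
    """Record per-feature min and max on the training set."""
    return X.min(axis=0), X.max(axis=0)

def minmax_apply(X, x_min, x_max):
    """Rescale features to [0, 1] with the recorded training bounds."""
    return (X - x_min) / (x_max - x_min)
```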
Fourth step: 90% of the data in the training set are selected at random; the number of hidden-layer nodes is initialized to 1 and the network is initially trained with the extreme learning algorithm; the connection weights are then updated by reverse tuning. If the network loss function or the iteration count after updating reaches the target value, training ends; otherwise the number of hidden-layer nodes and the iteration count are incremented by 1 and the above steps are repeated.
Fifth step: the remaining 10% of the data serve as the test set to verify the training effect of the error model. According to the result, the extreme learning algorithm based on autonomous reverse tuning is improved until a satisfactory magnetic compass error-model compensation accuracy is obtained.
For the nonlinear error present in the magnetic compass, an implicit error model is established and an extreme learning algorithm with autonomous reverse tuning is proposed. The initial weights of each layer are not produced completely at random, which not only greatly reduces the probability of training falling into a local minimum but also improves training efficiency; the autonomous reverse tuning further solves the problem of how to determine the optimal number of hidden nodes.
One embodiment of the invention has been described in detail above, but it is only a preferred embodiment of the invention and cannot be regarded as limiting the practical scope of the invention. All equal changes and improvements made within the scope of the present application shall still fall within the patent coverage of the invention.
Claims (10)
1. An application method of an extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation, comprising the following steps:
S1, establishing an implicit error model of the magnetic compass based on an extreme learning machine;
S2, determining the network parameters with the extreme learning algorithm;
S3, borrowing the reverse-tuning mechanism of deep learning, fine-tuning the above network parameters in reverse using the network residual;
S4, compensating the magnetic compass error with the trained error model (neural network).
2. The application method of the extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation according to claim 1, characterized in that: step S1 establishes the implicit error model of the magnetic compass and trains it, so as to compensate the nonlinear error present in magnetic compass measurements and improve the orientation accuracy of the magnetic compass.
3. The application method of the extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation according to claim 2, characterized in that: in the implicit error model of the magnetic compass, the heading angle α is defined as the angle between the projection of the carrier's forward direction onto the horizontal plane and the local meridian, measured clockwise with a value range of 0-360°; the magnetic heading angle φ can be computed from the three-axis magnetometer and accelerometer constituting the magnetic compass; with the magnetic declination denoted δ, the heading angle is
α = φ + δ (1).
4. The application method of the extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation according to claim 3, characterized in that: in the implicit error model of the magnetic compass, the three-axis magnetometer suffers from soft-iron interference and cross-axis effects, and the measured heading angle and the actual heading α satisfy a nonlinear error-function relation (2).
5. The application method of the extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation according to claim 1, characterized in that: step S3 fine-tunes the randomly initialized connection weights in reverse through the network residual, realizing autonomous optimization of the number of hidden-layer neurons.
6. The application method of the extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation according to claim 1, characterized in that magnetic compass error compensation in step S4 is divided into two steps:
(S401) training the error model;
(S402) error compensation during use.
7. The application method of the extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation according to claim 1, characterized in that the error-model training in step S401 comprises two stages:
(S701) randomly initializing the input connection weights w_ij and biases b_i, and computing the output connection weights β_ki by formula;
(S702) fine-tuning the connection weights in reverse according to the network residual, borrowing the reverse tuning of deep learning.
8. The application method of the extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation according to claim 7, characterized in that: during the error-model training, when N is odd the first stage (S701) is performed, and when N is even the second stage (S702) is performed.
9. The application method of the extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation according to claim 8, characterized in that: during the error-model training, when N is odd and the first stage is performed, the ELM network has no output-layer bias, and the input weights w_ij and hidden-layer biases b_i are generated at random and need no adjustment, so that only the output weights β_ki of the whole network remain to be determined;
assume the hidden layer has L neurons and the training set contains N distinct samples (x_i, y_i), where the input x_i = (x_i1, x_i2 … x_in) ∈ R^n and the output y_i = (y_i1, y_i2 … y_im) ∈ R^m; with a nonlinear activation function g(x), the N input samples drawn from the same continuous system can be approximated with zero error, so that
Σ_{i=1}^{L} β_i g(w_i·x_j + b_i) = y_j, j = 1, 2 … N (3)
where w_i = (w_i1, w_i2 … w_in) and x_j = (x_1j, x_2j … x_nj)^T; formula (3) can be rewritten as
Y = Hβ (4)
where H is the hidden-layer output matrix with entries H_ji = g(w_i·x_j + b_i);
setting the output of the ELM network equal to the sample labels, as shown in formula (5):
T = Y = Hβ (5)
in most cases the number of hidden neurons is far smaller than the number N of training samples, so solving formula (5) amounts to finding the solution that minimizes the loss function C
C = ||Hβ − T|| (6)
according to the minimum-norm criterion, formula (6) has the minimum-norm least-squares solution
β = H⁺T (7)
where H⁺ is the Moore–Penrose generalized inverse of the output matrix H.
10. The application method of the extreme learning algorithm with autonomous reverse tuning in magnetic compass error compensation according to claim 8, characterized in that: during the error-model training, when N is even, the second stage is performed, comprising the following sub-steps:
(S1001) with the connection weights w_ij, b_i and β_ki obtained in the first stage, the network output corresponding to the magnetic compass measurements is obtained by forward propagation;
(S1002) the deviation between the network output and the magnetic compass reference value is computed as the network residual err;
(S1003) the activation values of the three layers of the network are denoted a1, a2 and a3 respectively, and the network loss function is e.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710113294.2A CN106959121B (en) | 2017-02-28 | 2017-02-28 | Application method of self-contained reverse-optimization-based overrun learning algorithm in magnetic compass error compensation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106959121A true CN106959121A (en) | 2017-07-18 |
CN106959121B CN106959121B (en) | 2020-12-29 |
Family
ID=59469987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710113294.2A Expired - Fee Related CN106959121B (en) | 2017-02-28 | 2017-02-28 | Application method of self-contained reverse-optimization-based overrun learning algorithm in magnetic compass error compensation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106959121B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102297687B (en) * | 2011-05-13 | 2012-07-04 | 北京理工大学 | Calibrating method for electronic compass |
CN103793718A (en) * | 2013-12-11 | 2014-05-14 | 台州学院 | Deep study-based facial expression recognition method |
CN104931028A (en) * | 2015-06-30 | 2015-09-23 | 北京联合大学 | Triaxial magnetic electronic compass error compensation method based on depth learning |
CN106096728A (en) * | 2016-06-03 | 2016-11-09 | 南京航空航天大学 | A kind of dangerous matter sources recognition methods based on deep layer extreme learning machine |
Non-Patent Citations (4)
Title |
---|
FENG G. et al.: "Error minimized extreme learning machine with growth of hidden nodes and incremental learning", IEEE Transactions on Neural Networks * |
DING M. et al.: "A survey of deep belief network research", Industrial Control Computer * |
LIU Y. et al.: "Application of extreme learning machine in nonlinear error compensation of magnetic compass", Chinese Journal of Scientific Instrument * |
ZHANG B. et al.: "Neural-network-based state-of-charge estimation for lead-acid batteries in armored vehicles", Information Systems Engineering * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107389049A (en) * | 2017-08-10 | 2017-11-24 | 北京联合大学 | A kind of magnetic compass real-time error compensation method based on class Kalman's factor |
CN107389049B (en) * | 2017-08-10 | 2019-12-24 | 北京联合大学 | Magnetic compass error real-time compensation method based on quasi-Kalman factor |
CN109752568A (en) * | 2019-01-28 | 2019-05-14 | 南京理工大学 | Microelectromechanical systems accelerometer scaling method based on principal component analysis |
CN109752568B (en) * | 2019-01-28 | 2020-12-04 | 南京理工大学 | Method for calibrating accelerometer of micro-electro-mechanical system based on principal component analysis |
CN112284366A (en) * | 2020-10-26 | 2021-01-29 | 中北大学 | Method for correcting course angle error of polarized light compass based on TG-LSTM neural network |
CN112284366B (en) * | 2020-10-26 | 2022-04-12 | 中北大学 | Method for correcting course angle error of polarized light compass based on TG-LSTM neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104931028B (en) | A kind of three axle magneto-electronic compass error compensation methods based on deep learning | |
CN107389049B (en) | Magnetic compass error real-time compensation method based on quasi-Kalman factor | |
CN111156987B (en) | Inertia/astronomy combined navigation method based on residual compensation multi-rate CKF | |
CN103499345B (en) | A kind of Fiber Optic Gyroscope Temperature Drift compensation method based on wavelet analysis and BP neural network | |
CN106959121A (en) | A kind of application process of the learning algorithm in magnetic compass error compensation that transfinite of autonomous reversely tuning | |
CN109188026B (en) | Automatic calibration deep learning method suitable for MEMS accelerometer | |
CN106706003A (en) | Online calibration method for north-seeking rotation on basis of triaxial MEMS (Micro-Electromechanical System) gyroscope | |
CN107290801B (en) | One step bearing calibration of strapdown three axis magnetometer error based on functional-link direct type neural network and the field mould difference of two squares | |
CN113074752B (en) | Dynamic calibration method and system for vehicle-mounted geomagnetic sensor | |
CN103454677B (en) | Based on the earthquake data inversion method that population is combined with linear adder device | |
CN106052716A (en) | Method for calibrating gyro errors online based on star light information assistance in inertial system | |
CN106886047A (en) | A kind of method of receiver function and gravity Inversion CRUSTAL THICKNESS and ripple ratio | |
CN113109874B (en) | Wave impedance inversion method using neural network and neural network system | |
CN110728357A (en) | IMU data denoising method based on recurrent neural network | |
CN112595313A (en) | Vehicle-mounted navigation method and device based on machine learning and computer equipment | |
CN115617051B (en) | Vehicle control method, device, equipment and computer readable medium | |
Rhode | Robust and regularized algorithms for vehicle tractive force prediction and mass estimation | |
CN103644913B (en) | Unscented kalman nonlinear initial alignment method based on direct navigation model | |
CN104344835B (en) | A kind of inertial navigation moving alignment method based on suitching type Self Adaptive Control compass | |
CN109186630B (en) | MEMS (micro electro mechanical System) coarse alignment method and system based on improved threshold function wavelet denoising | |
CN112033438B (en) | Shaking base self-alignment method based on speed fitting | |
Gonzalez et al. | Time-delayed multiple linear regression for de-noising MEMS inertial sensors | |
CN111832399A (en) | Attention mechanism fused cross-domain road navigation mark registration algorithm | |
CN111738407B (en) | Clock error prediction method, device, medium and terminal based on deep learning | |
CN112985368A (en) | Rapid compass alignment method of underwater vehicle before launching of mobile carrying platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20201229 |