Multi-modal fusion localization method for an unmanned platform
Technical field
The present invention relates to the field of positioning technology, and in particular to a multi-modal fusion localization method for an unmanned platform.
Background art
In the development of mobile robots, localization is the fundamental technology on which all kinds of complex tasks are built; the effective execution of tasks such as navigation and planning relies on accurate positioning input. In recent years, with the development of the unmanned-systems field and of sensor technology, a variety of positioning strategies relying on sensors of different modalities have emerged. Because different types of sensors differ in their measurement information and localization algorithms, each single-module localization technique has its own strengths and weaknesses, and an intelligent vehicle platform needs to fuse multiple localization modules in complex scenes in order to output location information reliably and stably.
Most existing multi-module localization methods switch between modules according to empirically designed rules, which is essentially equivalent to running single-module localization in turns; once the conditions assumed by those rules no longer hold, the method is no longer reliable. Probability-based Bayesian fusion methods can fuse information in principle, but their basic precondition is that each localization module has a reliable error model and that the modules remain mutually consistent during fusion. Among existing single-module localization techniques, only a limited few, such as inertial navigation and dead reckoning, allow their error models to be estimated through calibration techniques and error propagation theory; for most environment-perception-based localization techniques, the error model is difficult to derive because of the nonlinear processing involved, and it is often scene-dependent. For example, laser-based localization is more accurate in feature-rich scenes.
The purpose of multi-module localization is to make the modules complement one another through fusion, but effective fusion means are still lacking, so multi-source positioning fusion remains a major open problem in the field of localization.
Summary of the invention
Embodiments of the present invention provide a multi-modal fusion localization method for an unmanned platform, so as to overcome the problems of the prior art.
To achieve the above objective, the present invention adopts the following technical solutions.
A multi-modal fusion localization method for an unmanned platform, comprising:
carrying multiple positioning systems on the unmanned platform, learning for each positioning system the neural network parameters that describe its error model, obtaining from the neural network parameters the error information matrix of each positioning system's localization algorithm output, and obtaining, based on the information matrix of each positioning system, the positioning result output by each positioning system;
inputting the positioning result and information matrix of each positioning system into an information filter, the information filter outputting the fused positioning result of the unmanned platform.
Preferably, the learning, for each positioning system, of the neural network parameters that describe its error model, and the obtaining of the error information matrix of the localization algorithm output from the neural network parameters, comprise:
collecting the input data of the localization algorithm of each positioning system and building a training dataset for each positioning system from the input data; designing a corresponding neural network for each positioning system, the neural network representing the mapping from the positioning scene data of that system to its error model; and, using the training dataset of each positioning system and the mapping from positioning scene data to the error model, learning for each positioning system the neural network parameters that describe its error model, and obtaining from those parameters the error information matrix of the localization algorithm output.
Preferably, when the positioning system is a laser odometer, the learning of the neural network parameters that describe its error model and the obtaining of the error information matrix of the localization algorithm output from the neural network parameters comprise:
for the two-dimensional localization problem, projecting a single frame of two-dimensional laser scan data onto the two-dimensional plane to obtain a single-frame occupancy grid map; inputting the grid map into a CNN, which outputs 6 independent elements; composing the 6 independent elements into a lower triangular matrix; and multiplying the lower triangular matrix by its transpose to obtain a positive semidefinite information matrix, which is taken as the information matrix of the laser odometer.
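As a minimal illustrative sketch of this construction (assuming a 3-DOF pose, so the information matrix is 3x3 and its lower triangular factor has exactly 6 free entries; the function name is hypothetical and not part of the invention), the 6 network outputs can be assembled as follows:

import numpy as np

def information_matrix_from_elements(elements):
    """Assemble a 3x3 positive semidefinite information matrix from the 6
    independent elements output by the CNN (the lower triangular factor)."""
    lower = np.zeros((3, 3))
    lower[np.tril_indices(3)] = elements   # fill the lower triangle row by row
    return lower @ lower.T                 # L @ L.T is positive semidefinite by construction

# Example: 6 raw network outputs for one laser frame -> information matrix
omega = information_matrix_from_elements(np.array([1.2, 0.1, 0.9, -0.05, 0.02, 0.7]))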
Preferably, the inputting of the positioning result and information matrix of each positioning system into the information filter and the outputting of the fused positioning result by the information filter comprise:
the information filter obtaining the information vector and information matrix of the fused positioning result of the previous frame, transforming the information vector and information matrix into a local coordinate system whose origin is the previous frame position, inputting into the information filter the local-coordinate-system quantities with the previous frame position as origin together with the positioning result and information matrix of each positioning system, and the information filter outputting the fused relative positioning of the current frame.
Preferably, the inputting of the positioning result and information matrix of each positioning system into the information filter and the outputting of the fused positioning result by the information filter comprise:
selecting a positioning system with a known error model as the fusion reference positioning system; for a positioning system to be learned, the fusion reference positioning system and the positioning system to be learned output Gaussian-noise-corrupted estimates of the same state, with the state relationship
u_t = x_t + ε_t,  z_t = x_t + δ_t,
wherein x_t denotes the relative positioning value with respect to the previous frame position, u_t is the relative positioning estimate output by the fusion reference positioning system, z_t is the relative positioning estimate output by the positioning system to be learned, ε_t is the Gaussian error of the relative positioning estimate of the fusion reference positioning system, R_t is the covariance matrix of the fusion reference positioning system, δ_t is the Gaussian error of the relative positioning estimate of the positioning system to be learned, and Q_t is the error covariance matrix of the positioning system to be learned;
given the two-dimensional localization input data z_t, u_t, R_t of the information filter at time t, the information filtering fusion steps comprise:
1. performing coordinate system transformation;
2. performing information matrix prediction;
3. performing information vector prediction;
4. performing information matrix update according to the observation;
5. performing information vector update according to the observation;
6. obtaining the fused positioning solution;
and outputting μ_t, Ω_t, ξ_t;
wherein, for the two-dimensional localization solution μ_t = (μ_{x,t}, μ_{y,t}, μ_{θ,t})^T, there is a corresponding two-dimensional localization coordinate transformation matrix, and ξ_t is the information vector in the information filter, with ξ_t = Ω_t μ_t.
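The filter equations themselves are not reproduced in this text. As a minimal sketch of the six steps above in the standard information-filter form, assuming the state relationship u_t = x_t + ε_t, z_t = x_t + δ_t expressed in the previous-frame local coordinate system and a 3-DOF pose, a single fusion step could be implemented as follows (all names are illustrative, not prescribed by the invention):

import numpy as np

def fuse_relative_pose(u_t, R_t, z_t, Q_t):
    """One information-filter fusion step for two estimates of the same
    frame-to-frame relative pose: u_t from the fusion reference system
    (covariance R_t) and z_t from the system to be learned (covariance Q_t)."""
    # Step 1 (coordinate system transformation) is assumed already done: all
    # inputs are expressed in the local frame whose origin is the previous pose.
    omega_pred = np.linalg.inv(R_t)        # Step 2: information matrix prediction
    xi_pred = omega_pred @ u_t             # Step 3: information vector prediction
    omega_obs = np.linalg.inv(Q_t)
    omega_t = omega_pred + omega_obs       # Step 4: information matrix update with the observation
    xi_t = xi_pred + omega_obs @ z_t       # Step 5: information vector update with the observation
    mu_t = np.linalg.solve(omega_t, xi_t)  # Step 6: fused solution, mu_t = Omega_t^-1 xi_t
    return mu_t, omega_t, xi_t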
As can be seen from the technical solutions provided by the above embodiments of the present invention, the method of the embodiments of the present invention can effectively fuse the outputs of multiple positioning modules and obtain an accurate and stable fused positioning result, laying a foundation for unmanned platforms such as mobile robots to execute other complex tasks. The method is highly extensible and can be applied to the fusion of many modules.
Additional aspects and advantages of the present invention will be set forth in part in the description that follows, will become apparent from the description, or may be learned by practice of the invention.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the implementation principle of a multi-modal fusion localization method for an unmanned platform provided by an embodiment of the present invention;
Fig. 2 is a process flow chart of a multi-modal fusion localization method for an unmanned platform provided by an embodiment of the present invention;
Fig. 3 is a diagram of the implementation principle of a method for learning the information matrix of a positioning system provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the processing flow of an information filter provided by an embodiment of the present invention;
Fig. 5 is a process flow chart of a scene-to-error mapping algorithm for a laser odometer provided by an embodiment of the present invention;
Fig. 6 is a process flow chart of an error model learning algorithm for the scene-to-error mapping of a laser odometer provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the present invention.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention indicates the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may also be present. In addition, "connected" or "coupled" as used herein may include wireless connection or coupling. The phrase "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by a person of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in common dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless defined as herein, are not to be interpreted in an idealized or overly formal sense.
To facilitate understanding of the embodiments of the present invention, several specific embodiments are further explained below with reference to the accompanying drawings, and none of the embodiments constitutes a limitation of the embodiments of the present invention.
Embodiment one
For the multi-module positioning fusion problem of mobile robots, the embodiment of the present invention proposes an end-to-end error model learning method based on machine learning. Within a Bayesian-filter fusion framework, the method can learn the error model of the output of each kind of positioning system and use that error model to perform positioning fusion; the fused positioning solution is accurate and reliable, and the positioning trajectory is continuous and jitter-free. Within the fusion framework, no complicated error model design or assumption is needed for each individual module, and every module is guaranteed to work independently. During learning, no continuous, high-precision positioning ground truth is required; a single learning update can be carried out using only coarse-level start-point and end-point information of a trajectory, so the method can be used for online learning on an unmanned platform.
The implementation principle of the multi-modal fusion localization method for a mobile robot provided by an embodiment of the present invention is shown schematically in Fig. 1, and the specific processing flow is shown in Fig. 2, comprising the following steps:
In the first step, assuming the mobile robot carries N positioning systems, the input data of the localization algorithm of each positioning system is collected in the usage scenario of the mobile robot, and a training dataset is built for each positioning system from the collected data.
In the second step, a positioning fusion reference is selected and the information matrix of each positioning system relative to the previous frame is learned frame by frame; the information matrix describes the error distribution of the positioning system. The implementation principle of the method for learning the information matrix of a positioning system provided by an embodiment of the present invention is shown in Fig. 3 and comprises the following processing: the input data of the localization algorithm of each positioning system is collected, and a training dataset is built for each positioning system from the input data. Because positioning error is generally related to the scene information contained in the algorithm input, for any positioning system X whose error model needs to be learned, a corresponding neural network is designed, and the mapping from the positioning scene data of that system to its error model is represented in the form of the neural network. Then, using the training dataset of each positioning system and the mapping from positioning scene data to the error model, the information matrix of each positioning system relative to the previous frame is learned.
The processing for learning the information matrix of each positioning system using the training dataset and the above mapping comprises:
1. A positioning system A with a known error model (such as an inertial navigation system) is selected as the fusion reference system. Let its data input be D_1; positioning system A outputs a relative positioning result u and an error covariance matrix R (the covariance matrix is the inverse of the information matrix).
2. For any positioning system X whose error model needs to be learned (let its data input be D_2, and let it output a positioning result z and an error covariance matrix Q), its error model neural network is initialized. The scene-related data D_2-scene that may influence its positioning error is chosen as the neural network input; for a visual odometer, for example, the real-time image may be selected as the input, and the output is the error information matrix (see Embodiment two).
3. For a training trajectory Traj of arbitrary length T whose start-point and end-point global position ground truth is known, the positioning input data D_{1,t}, D_{2,t} of frame t and the neural network input data D_{2-scene,t} are taken from the data. Under the neural network parameters Θ, the relative positioning results and error covariance matrices <u_t, R_t>, <z_t, Q_t> of the consecutive frames can be obtained (converted to information matrices where appropriate). The relative positioning results of the consecutive frames and their error covariance matrices are input into the fusion framework shown in Fig. 1, giving the fused relative positioning result and information matrix <μ_t, Ω_t> of each frame. Using coordinate system transformation, starting from the start-point position, the relative positionings μ_1, μ_2, ..., μ_T are accumulated, and the global position estimate under the current neural network parameters is obtained.
4. A loss function J is defined and its value on the training trajectory Traj is computed; the loss function can be designed as needed according to experiments, for example as the error between the accumulated global position estimate and the end-point global position ground truth. The neural network parameters are then updated using the gradient of J.
5. Multiple different training trajectories are loaded, and parameter learning is carried out by repeating steps 3-4. Learning terminates once the set maximum number of training iterations is reached. At that point the neural network parameters can be used: for any D_{2-scene,t}, the network maps out the corresponding information matrix.
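Assuming the per-frame fusion and the pose accumulation are implemented with differentiable operations (for example, a torch version of the fusion-step sketch given earlier), steps 3-5 can be sketched as the following training loop. Every name here (error_net, reference_system, learned_system, fuse_relative_pose, accumulate_pose, and the trajectory data layout) is a hypothetical placeholder rather than part of the invention:

import torch

def train_error_model(error_net, trajectories, reference_system, learned_system,
                      fuse_relative_pose, accumulate_pose, lr=1e-3, max_epochs=50):
    """End-to-end learning of the scene-to-information-matrix network, supervised
    only by the start-point and end-point global poses of each trajectory."""
    optimizer = torch.optim.Adam(error_net.parameters(), lr=lr)
    for _ in range(max_epochs):                       # step 5: repeat until the iteration limit
        for traj in trajectories:                     # each traj: T frames plus endpoint ground truth
            pose = traj.start_pose_true.clone()       # initialize from the start-point ground truth
            for frame in traj.frames:                 # step 3: fuse over the whole trajectory
                u_t, R_t = reference_system(frame.d1)      # reference system: relative pose + covariance
                z_t = learned_system(frame.d2)             # system to be learned: relative pose
                omega_x = error_net(frame.d2_scene)        # learned information matrix (Q_t^-1)
                mu_t, _, _ = fuse_relative_pose(u_t, R_t, z_t, torch.inverse(omega_x))
                pose = accumulate_pose(pose, mu_t)         # compound the fused relative pose
            loss = torch.sum((pose - traj.end_pose_true) ** 2)   # step 4: endpoint error as loss J
            optimizer.zero_grad()
            loss.backward()                           # backpropagate through all T fusion steps
            optimizer.step()                          # gradient update of the network parameters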
Then, based on the information matrix of each positioning system, the positioning result output by each positioning system is obtained.
In the third step, the positioning result and information matrix of each positioning system are input into the information filter to obtain the fused positioning result. The information filter uses the error model learning result of each positioning system to realize fused positioning and outputs the fused positioning result. During application, the information filter can be updated online, continuously refining the error model.
The processing flow of an information filter provided by an embodiment of the present invention is shown schematically in Fig. 4 and comprises the following processing:
The information filter obtains the information vector and information matrix of the fused positioning result of the previous frame, transforms the information vector and information matrix into the local coordinate system whose origin is the previous frame position, and inputs the local-frame quantities together with the positioning result and information matrix of each positioning system into the information filter; the information filter then outputs the fused relative positioning of the current frame.
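As an illustration of this coordinate handling, the following sketch (assuming a 3-DOF pose (x, y, θ); the function name and interface are hypothetical) re-expresses the previous fused pose, its information matrix and its information vector in a local coordinate system whose origin is the previous frame position:

import numpy as np

def to_local_frame(pose_prev_global, omega_prev_global):
    """Re-express the previous fused pose and its information matrix/vector in
    the local coordinate system whose origin is the previous frame position."""
    theta = pose_prev_global[2]
    c, s = np.cos(theta), np.sin(theta)
    # 3-DOF coordinate transformation matrix (rotation about the vertical axis)
    T = np.array([[ c,   s, 0.0],
                  [-s,   c, 0.0],
                  [0.0, 0.0, 1.0]])
    pose_local = np.zeros(3)                    # the previous pose becomes the local origin
    omega_local = T @ omega_prev_global @ T.T   # T is orthogonal, so Omega' = T Omega T^T
    xi_local = omega_local @ pose_local         # information vector in the local frame (zero here)
    return pose_local, omega_local, xi_local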
The information filter is a standard filter in the existing literature, and its specific working process is as follows. For the two positioning systems of the second step, i.e. the fusion reference positioning system and the positioning system to be learned, the state relationship is
u_t = x_t + ε_t,  z_t = x_t + δ_t,
wherein x_t denotes the relative positioning value with respect to the previous frame position, u_t is the relative positioning estimate output by the fusion reference positioning system, z_t is the relative positioning estimate output by the positioning system to be learned, ε_t and δ_t respectively denote the Gaussian errors of the relative positioning estimates of the two positioning systems, and the covariance matrices of the errors are R_t and Q_t respectively.
Given the two-dimensional positioning data inputs z_t, u_t, R_t obtained at time t, the information filtering fusion steps at that time are as follows:
1. performing coordinate system transformation;
2. performing information matrix prediction;
3. performing information vector prediction;
4. performing information matrix update according to the observation;
5. performing information vector update according to the observation;
6. obtaining the fused positioning solution;
and outputting μ_t, Ω_t, ξ_t.
Here, for the two-dimensional localization solution μ_t = (μ_{x,t}, μ_{y,t}, μ_{θ,t})^T, there is a corresponding two-dimensional localization (3-DOF) coordinate transformation matrix, and ξ_t is the information vector in the information filter, with ξ_t = Ω_t μ_t.
Embodiment two
This embodiment describes the technical details in depth, taking the learning of the error model of a laser odometer as an example. Because the error model of dead reckoning is stable and a relatively reliable error model can be obtained from initial sensor calibration according to error propagation theory, dead reckoning is selected as the fusion reference and is fused with the laser odometer.
The processing flow of the scene-to-error mapping algorithm for a laser odometer provided by an embodiment of the present invention is shown in Fig. 5. The specific processing is as follows: a single frame of two-dimensional laser scan data is projected onto the two-dimensional plane to obtain a single-frame occupancy grid map, and the grid map is input into a CNN, which outputs 6 independent elements. The 6 independent elements are then composed into a lower triangular matrix, the lower triangular matrix is multiplied by its transpose to obtain a positive semidefinite information matrix, and this positive semidefinite information matrix is taken as the information matrix of the laser odometer.
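A minimal sketch of such a network is given below, assuming a fixed-size single-channel occupancy grid as input; the layer sizes and the class name SceneErrorNet are illustrative choices, not specified by the invention:

import torch
import torch.nn as nn

class SceneErrorNet(nn.Module):
    """Minimal CNN mapping a single-frame occupancy grid map to the 6 independent
    elements of the lower triangular factor of a 3x3 information matrix."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 6)   # the 6 independent elements

    def forward(self, grid_map):
        # grid_map: (batch, 1, H, W) occupancy grid built from one laser frame
        elements = self.head(self.features(grid_map).flatten(1))
        # Assemble the lower triangular factor L and return Omega = L L^T
        batch = elements.shape[0]
        L = torch.zeros(batch, 3, 3, device=elements.device)
        idx = torch.tril_indices(3, 3, device=elements.device)
        L[:, idx[0], idx[1]] = elements
        return L @ L.transpose(1, 2)           # positive semidefinite information matrix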
Fig. 6 is a process flow chart of the error model learning algorithm for the scene-to-error mapping of the laser odometer provided by an embodiment of the present invention; the specific processing is as follows:
Step 1: initialize the error model neural network parameters, set the maximum number of iterations M, set the iteration count i = 0, and set the accumulation step count T;
Step 2: load the i-th training trajectory, and obtain the data required for T frames of positioning and the global position ground truth of the start and end of the trajectory;
Step 3: initialize the start-point global position estimate, initialize the relative positioning fusion algorithm, and set the accumulated positioning step count t = 0;
Step 4: compute the dead reckoning relative positioning result and information matrix;
Step 5: compute the laser odometer relative positioning result;
Step 6: compute the laser odometer information matrix according to the laser odometer scene-to-error mapping algorithm;
Step 7: compute the fused positioning solution of the laser odometer and dead reckoning using the relative positioning fusion algorithm;
Step 8: starting from the global position ground truth at time t = 0, obtain the accumulated global position of the current frame;
Step 9: judge whether the current accumulated step count t is less than the accumulation step count T; if so, return to Step 4; otherwise, proceed to Step 10;
Step 10: obtain the global position ground truth at time T and compute the target loss function;
Step 11: unfold the network computation graph over time, backpropagate through the T steps using the loss function value, and obtain the accumulated gradient;
Step 12: update the network parameters using the accumulated gradient;
Step 13: judge whether the iteration count i is less than the maximum number of iterations M; if so, return to Step 2; otherwise, the process ends.
In conclusion the method for the embodiment of the present invention can effectively merge the output of multimode positioning system, obtain accurate
Stable fusion positioning result, fills up the blank both at home and abroad in terms of multimode positions fusion, and for mobile robot etc., nobody is flat
Platform executes other complex tasks and provides the foundation.
The error model learning method proposed by the present invention is carried out entirely within a probabilistic fusion framework and solves the above significant problems of the prior art. Its advantages are: 1) it eliminates the work of designing and deriving complicated error models and enables end-to-end learning with positioning accuracy as the objective; 2) it is highly extensible and can be applied to the fusion of many modules; 3) for a multi-module-positioning unmanned platform carrying a pre-trained network, it offers the prospect of online learning; 4) the fusion effect is good, yielding higher positioning precision and a smooth positioning trajectory.
Those of ordinary skill in the art will appreciate that the drawings are schematic diagrams of one embodiment, and that the modules or processes in the drawings are not necessarily required for implementing the present invention.
The embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the device and system embodiments are described relatively simply because they are substantially similar to the method embodiments, and the relevant parts may refer to the description of the method embodiments. The device and system embodiments described above are merely schematic: the units described as separate parts may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.