CN106871904A - A machine-vision-based code-disc positioning correction method for mobile robots - Google Patents


Info

Publication number
CN106871904A
CN106871904A (application number CN201710119445.5A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710119445.5A
Other languages
Chinese (zh)
Inventor
崔明月
刘红钊
刘伟
赵金姬
蒋华龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanyang Normal University
Original Assignee
Nanyang Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanyang Normal University
Priority to CN201710119445.5A
Publication of CN106871904A
Legal status: Pending (Current)


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 — Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 — Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the field of robotics and discloses a machine-vision-based code-disc positioning correction method for mobile robots. First, an instantaneous discrete motion model of the robot is established from the photoelectric encoder discs (code discs) and used to track the robot's coordinates in real time. Next, a camera mounted on the robot body acquires images of two target points, from which a visual angle-measurement model of the robot is built. An extended Kalman filter is then set up to compute the robot's instantaneous displacement, and the statistical properties of the encoder discs yield the mean and variance of the position change. The partial derivatives in the observation model are replaced by finite differences to compute the bearing-angle observations of the target points at the robot's current position. Finally, an unscented Kalman filter model fuses these data to correct the positioning error of the photoelectric encoder discs.

Description

A machine-vision-based code-disc positioning correction method for mobile robots
Technical field
The present invention belongs to the field of robotics and relates in particular to mobile-robot localization and Kalman-filter information fusion; more specifically, it concerns a machine-vision-based code-disc positioning correction method for mobile robots.
Background technology
Autonomous localization is the basis of navigation and motion control for a mobile robot and a key technology for improving its autonomy. Odometry is the most common autonomous localization method for mobile robots. However, traditional odometry based on photoelectric encoder discs mounted on the robot's wheels suffers from many error sources, including inaccurate robot geometric parameters, pulse-counting errors of the code disc, wheel slip that makes the actual position differ from the computed one, and rough, uneven terrain. These errors accumulate, and after some time they exceed the allowable range and cause localization to fail. Visual odometry has therefore been proposed: by computing the optical flow field between consecutive frames, or the three-dimensional coordinates of matched local feature points, it estimates the motion parameters of the robot body.
Chinese patent CN101774170B discloses a nuclear power plant working robot and its control system, belonging to the field of robotics and automation equipment. The robot is a tracked mobile manipulator composed of a twin-track-driven mobile platform carrying a four-degree-of-freedom manipulator; it can move inside a nuclear power station and supports two control modes, manual remote control and autonomous control, with remote control over a wireless or wired link. Its control system is divided into an upper monitoring and planning system and a robot control system, which cooperate to control the robot's operation. However, it still does not solve the problem of self-correcting, and thereby reducing, the positioning error when target points are scarce.
Chinese patent CN102538781B discloses a motion-attitude estimation method for mobile robots that fuses machine vision and inertial navigation. Its steps are: synchronously acquire the mobile robot's binocular camera images and three-axis inertial data; extract and match features between consecutive frames to estimate the motion attitude; compute pitch and roll from the inertial unit; build a Kalman filter model to fuse the vision and inertial attitude estimates; adaptively adjust the filter parameters according to the estimated variance; and dead-reckon from the corrected attitude. That invention proposes a real-time extended-Kalman-filter attitude-estimation model that uses inertial measurements combined with the direction of gravitational acceleration as a complement, decouples the three attitude directions of the visual odometer, and corrects the accumulated attitude-estimation error; the filter parameters are adjusted with fuzzy logic according to the motion state, achieving adaptive filtering, reducing the influence of acceleration noise, and effectively improving the positioning accuracy and robustness of the visual odometer.
Visual odometry, however, has a limited range of application. In general it requires a large number of feature points, but in some scenes, such as the lunar surface, large weakly textured regions make feature extraction itself a problem. The matching accuracy of the feature points is another issue: with many feature points, the matching result may contain many noisy points, which can greatly reduce the reliability of the method.
The method of the present invention likewise introduces visual information, but it needs neither a large number of feature points nor their three-dimensional coordinates; it only needs the line-of-sight angles from the vehicle body to a few feature points, which we call a visual protractor. Using the change, over time, of the angle the robot observes between the same two feature points from different positions, this visual information is fused with the localization information of the photoelectric encoder discs by an adaptive unscented Kalman filter, correcting the positioning error of the encoder discs.
The content of the invention
In view of this, the purpose of the present invention is to address the deficiencies of the prior art by providing a machine-vision-based code-disc positioning correction method for mobile robots, in which the photoelectric encoder-disc estimate is corrected by combining it with visually measured angles, so that the error is reduced as far as possible even when only a few target points are available.
To achieve the above purpose, the present invention adopts the following technical scheme:
A machine-vision-based code-disc positioning correction method for mobile robots, characterized by comprising the following steps:
S1: establish the instantaneous discrete motion model of the robot from the photoelectric encoder discs and track the robot's coordinates in real time;
S2: acquire images of two target points with the camera mounted on the robot body and establish the robot's visual angle-measurement model;
S3: set up an extended Kalman filter to compute the robot's instantaneous displacement, and use the statistical properties of the encoder discs to obtain the mean and variance of the position change;
S4: replace the partial derivatives with finite differences and compute the observations of the target points at the robot's current position;
S5: set up an unscented Kalman filter model to fuse the data obtained in S1-S4: the encoder discs first give a rough position; at this position the target-point observations, i.e. the angles between this position and the target points, are estimated by differences, and the determined encoder-disc position is then corrected using the actually observed angles.
Further, the instantaneous discrete motion model of the robot in S1 is established as follows:
1) From the pulse frequencies of the photoelectric encoder discs mounted on the robot's left wheel (L) and right wheel (R), the linear velocities v_L, v_R of the left and right wheels are computed as

v_L = (f_L / n)·πd,  v_R = (f_R / n)·πd   (1)

2) The angular velocity of the robot is then

ω = (v_R − v_L) / L   (2)

3) The discrete motion equation of the robot is

x_{k+1} = x_k + [L(v_k^R + v_k^L) / (2(v_k^R − v_k^L))]·(cos θ_{k+1} − cos θ_k)
y_{k+1} = y_k + [L(v_k^R + v_k^L) / (2(v_k^R − v_k^L))]·(sin θ_{k+1} − sin θ_k)
θ_{k+1} = θ_k + [(v_k^R − v_k^L) / L]·Δt   (3)

The coordinate system at the midpoint of the driving-wheel axis in the robot's initial position is taken as the world coordinate system, and the robot's coordinates at time k are (x_k, y_k). In the formulas, f_L and f_R are the pulse frequencies of the encoder discs on the left and right driving wheels, d is the driving-wheel diameter, L is the track width between the robot's two driving wheels, n is the line count of the encoder disc, i.e. the number of pulses output per driving-wheel revolution, and Δt is the time step. When the two driving-wheel speeds differ, the robot's trajectory is a circular arc about a fixed centre.
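As an illustration, the odometry update of formulas (1)-(3) can be sketched in Python. This is a minimal sketch under stated assumptions, not the patent's implementation; it uses the common textbook convention in which the heading θ is measured from the x-axis, so the x-update pairs with the sine difference and the y-update with the cosine difference, whereas formula (3) as printed pairs them the other way (a different axis convention). All function names are illustrative.

```python
import math

def wheel_speeds(f_L, f_R, n, d):
    """Linear wheel speeds from encoder pulse frequencies (cf. formula (1)).

    f_L, f_R: pulse frequencies of the left/right encoder discs [Hz]
    n: encoder lines, i.e. pulses per wheel revolution
    d: wheel diameter [m]
    """
    v_L = f_L / n * math.pi * d
    v_R = f_R / n * math.pi * d
    return v_L, v_R

def pose_update(x, y, theta, v_L, v_R, L, dt):
    """One discrete odometry step of a differential-drive robot (cf. (2)-(3)).

    L: track width between the two driving wheels [m], dt: time step [s].
    When the wheel speeds differ, the robot follows a circular arc of signed
    radius R about the instantaneous centre of curvature.
    """
    omega = (v_R - v_L) / L                    # formula (2)
    theta_new = theta + omega * dt
    if abs(v_R - v_L) < 1e-9:                  # straight-line limit
        x_new = x + v_L * dt * math.cos(theta)
        y_new = y + v_L * dt * math.sin(theta)
    else:
        R = L * (v_R + v_L) / (2.0 * (v_R - v_L))   # signed turn radius
        x_new = x + R * (math.sin(theta_new) - math.sin(theta))
        y_new = y - R * (math.cos(theta_new) - math.cos(theta))
    return x_new, y_new, theta_new
```

For equal wheel speeds the update degenerates to straight-line motion; for opposite speeds it is a pure turn in place, both consistent with the arc model.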
Further, the visual angle-measurement model of the robot in S2 is established as follows: two target points A and B are chosen, and the angle ∠AOB between A, B and the optical centre O of the camera lens is measured. The cosine of ∠AOB is computed as

cos∠AOB = (‖OS_a‖² + ‖OS_b‖² − ‖S_aS_b‖²) / (2‖OS_a‖·‖OS_b‖) = (2f² + ‖S_oS_a‖² + ‖S_oS_b‖² − ‖S_aS_b‖²) / (2√(f² + ‖S_oS_a‖²)·√(f² + ‖S_oS_b‖²))   (4)

Images of the target points A and B are acquired by the camera mounted on the robot body; in the formula, f is the camera focal length, S_a and S_b are the projections of A and B onto the image plane, and S_o is the projection of the optical centre O onto the image plane.
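Formula (4) is straightforward to evaluate from pixel coordinates. The sketch below assumes the image coordinates of S_a and S_b are already expressed as offsets from the principal point S_o, in the same units as the focal length f (e.g. pixels with f in pixels); the function name is illustrative, not from the patent.

```python
import math

def cos_viewing_angle(f, sa, sb):
    """Cosine of the angle AOB subtended at the optical centre O by two
    image points sa, sb (2-D offsets from the principal point So), via the
    law of cosines applied to triangle O-Sa-Sb (cf. formula (4))."""
    d_a2 = sa[0]**2 + sa[1]**2                    # |SoSa|^2
    d_b2 = sb[0]**2 + sb[1]**2                    # |SoSb|^2
    d_ab2 = (sa[0] - sb[0])**2 + (sa[1] - sb[1])**2   # |SaSb|^2
    num = 2.0 * f * f + d_a2 + d_b2 - d_ab2
    den = 2.0 * math.sqrt(f * f + d_a2) * math.sqrt(f * f + d_b2)
    return num / den
```

Two points placed symmetrically at offset ±f from the principal point subtend a right angle at the optical centre, a quick sanity check of the formula.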
Further, the kinematic equation and visual observation equation of the robot computed in S3 by the extended Kalman filter are, respectively,

x_{k+1} = f(x_k, u_k) + w_k
z_k = h(x_k) + v_k   (5)

where f(x_k, u_k) and h(x_k) are the nonlinear motion model and measurement model of the robot at time k, w_k and v_k are mutually independent white Gaussian noises, x_k = [x, y, θ]^T, and

E[w_k w_j^T] = Q_k δ_{kj}, Q_k ≥ 0;  E[v_k v_j^T] = R_k δ_{kj}, R_k ≥ 0   (6)

The change of robot position u_k = [Δx_k, Δy_k]^T is obtained from formula (3), and the mean and variance of the robot's position change are obtained from the fused encoder-disc computation.
Further, the observation equation of the robot vision is z_k = h(x_k) + v_k, where z_k = [cos α1_k, cos α2_k, …, cos αn_k]^T is the vector of cosines of the angles between the vectors from the robot's position coordinates to the individual feature points, such as ∠AO₁B and ∠AO₂B in Fig. 2.

The true observation equation is

cos α_{AO₁B} = ⟨O₁A, O₁B⟩ / (‖O₁A‖·‖O₁B‖),  cos α_{AO₂B} = ⟨O₂A, O₂B⟩ / (‖O₂A‖·‖O₂B‖)   (7)

However, since the coordinates of A and B are unknown, this equation cannot be used directly for filtering. In the extended Kalman filter, the purpose of this formula is, at an estimated position, to estimate the observation z_{k+1|k} of that position, and then to correct the estimated position using the difference between this quantity and the actual observation z_{k+1} (obtained by direct camera measurement), so that the corrected position is, in the statistical sense, closer to the true position. At the same time, because the equation is clearly nonlinear, the recursion also needs its partial derivative with respect to x_k; without the feature-point coordinates this partial derivative has no computable true value and cannot be used in the computation. It is therefore replaced here by finite differences: a first-order Taylor expansion is made at the point x_k, and the observation z_{k+1|k} at the position x_{k+1|k} is estimated. The specific formulas are given in (8) and (9).
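The finite-difference surrogate of formulas (8)-(9) can be sketched as follows, assuming NumPy; the caller supplies the angle cosines and positions at steps k−1 and k, and the function names are illustrative only.

```python
import numpy as np

def difference_jacobian(cos_prev, cos_curr, pos_prev, pos_curr):
    """Finite-difference surrogate for the observation Jacobian (cf. (8)).

    cos_prev, cos_curr: arrays of cos(alpha_i) at steps k-1 and k
    pos_prev, pos_curr: (x, y) robot positions at steps k-1 and k
    Each row i holds the difference quotients of cos(alpha_i) with respect
    to x and y. Assumes a nonzero displacement along both axes.
    """
    dz = np.asarray(cos_curr, dtype=float) - np.asarray(cos_prev, dtype=float)
    dx = pos_curr[0] - pos_prev[0]
    dy = pos_curr[1] - pos_prev[1]
    return np.column_stack((dz / dx, dz / dy))

def predict_observation_change(H, u):
    """Predicted change of the angle observations for a displacement
    u = [dx, dy] (cf. (9))."""
    return H @ np.asarray(u, dtype=float)
```

Applying the predicted displacement u_k to the difference quotients reproduces the observed change in the cosines when the displacement equals the one used to form the quotients, which is the consistency the filter relies on.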
Further, the unscented Kalman filter in step S5 is a standard unscented Kalman filter, and the information-fusion method is: with formulas (8) and (9) and the initial-point information, filtering can be performed using the adaptive unscented Kalman filtering algorithm. An unscented Kalman filter with an adaptation mechanism is designed to further improve the estimation accuracy. The detailed procedure of the adaptive unscented Kalman filtering algorithm is as follows.
1) calculation procedure of standard Unscented kalman filtering device
(1) The initial state is defined as follows (k = 0):

x̄₀ = E[x₀],  P₀ = E[(x₀ − x̄₀)(x₀ − x̄₀)^T]   (10)

where x̄₀ is the expected value of the initial state and P₀ is the initial covariance.

The augmented state includes the original state, the parameters and the process noise, and is defined as

x̂₀ᵃ = [x̄₀^T 0 0]^T,  P₀ᵃ = diag(P₀, Q₀, R₀)   (11)
(2) For k = 1, 2, …, ∞:
(a) The sigma points are computed as

χ₀ = x̂_{k−1},  χ_i = x̂_{k−1} + (√((n+λ)P_{k−1}))_i,  χ_{n+i} = x̂_{k−1} − (√((n+λ)P_{k−1}))_i,  i = 1, …, n   (12)

(b) The prediction step (13) propagates the sigma points through the motion model and forms the predicted mean and covariance from the weighted sigma points, where Q_k is the covariance matrix of the process noise. The weights W^(m) and W^(c) are computed as

W₀^(m) = λ/(n+λ),  W₀^(c) = λ/(n+λ) + (1 − α² + β),  W_i^(m) = W_i^(c) = 1/(2(n+λ)),  i = 1, …, 2n   (14)

In formula (14), n is the dimension of the augmented state. The parameter α controls the spread of the sigma points; in particular, when the system is strongly nonlinear, choosing a suitably small value avoids sampling non-local effects. β is a non-negative weight that incorporates prior knowledge of the higher-order moments of the distribution; for a Gaussian prior the preferred choice is β = 2. To guarantee positive semi-definiteness of the covariance matrix, the adjustment parameter is chosen as κ ≥ 0. The remaining new parameter is defined as

λ = α²(n + κ) − n   (15)
(c) The update step then follows: the sigma points are propagated through the measurement model, the predicted observation, innovation covariance and cross covariance are formed, and the state and covariance are corrected with the Kalman gain. In these formulas, R_k is the covariance matrix of the measurement noise.
Further, in step S5 the information fusion additionally uses a covariance-matching technique: in the estimation process of the unscented Kalman filter, an adaptive noise-covariance correction mechanism is introduced to further improve the estimation accuracy. More precisely, based on the pose sequence of the mobile robot, online estimation of the process-noise covariance Q_k and the measurement-noise covariance R_k is considered; Q_k and R_k are therefore estimated and updated repeatedly according to formula (16).

In formula (16), the measured pose of the mobile robot appears, and C_k is defined as

C_k = (1/L) Σ_{i=k−L+1}^{k} e_i e_i^T   (17)

where e_k = x_k − x̂_k is the pose-estimation error of the mobile robot at step k, C_k is an approximation of the covariance matrix of the robot pose at step k, and L is the size of the sliding window used for covariance matching.
The present invention fuses the visual information with the photoelectric code-disc information by adaptive unscented Kalman filtering: the code disc itself first gives a rough position; at this position the observation, namely the angle between this position and the target points, is estimated by differences, and the determined code-disc position is then corrected using the actually observed angle.
In addition, aiming at the poor code-disc positioning accuracy of mobile robots on rough ground, the present invention measures angles visually and fuses this information with the code-disc positioning result by adaptive unscented Kalman filtering, and an experiment and apparatus were designed. Compared with traditional visual odometry, the method has three main advantages. First, it needs no large number of feature points, so it is applicable to weakly textured regions such as the lunar surface. Second, the visual measurements are accurate and reliable; their accuracy depends only on the image resolution. Third, the estimation accuracy of the adaptive unscented Kalman filter is far higher than that of the standard Kalman filter, the extended Kalman filter and the standard unscented Kalman filter. It is worth noting that the method cannot completely eliminate the accumulation of positioning error, but it is a very simple and effective way to improve the accuracy of a wheeled mobile robot system at low cost: through the synergy of the vision data and the code-disc data, the positioning error is computed and corrected in time, greatly reducing error accumulation.
Brief description of the drawings
Fig. 1 shows the principle of visual angle measurement;
Fig. 2 is a schematic diagram of trajectory correction using vision according to the invention;
Fig. 3 is the working principle diagram of the invention.
Specific embodiment
To make the purpose, technical scheme and advantages of the embodiments of the present invention clearer, the technical scheme of the embodiments is described clearly and completely below with reference to accompanying drawings 1-3. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the described embodiments fall within the scope of protection of the invention.
Embodiment one
1. Photoelectric encoder-disc positioning
Photoelectric encoder-disc positioning is realized from the readings of the code wheels on both sides of the robot and the difference between them. The robot's position at time k is denoted (x_k, y_k), where x, y are the coordinates of the midpoint of the axis between the two driving wheels. With the body coordinate frame at the robot's initial position taken as the world frame, the kinematic equation of the robot is established. When the two wheel speeds differ, the vehicle moves along a circle about a fixed centre. Let the track width of the two wheels be L, the number of encoder lines be n (the number of pulses output per wheel revolution) and the wheel diameter be d. From the pulse frequencies f_L and f_R of the left and right encoder discs, the linear velocities of the left and right wheels are

v_L = (f_L / n)·πd,  v_R = (f_R / n)·πd   (1)

The angular velocity of the robot is

ω = (v_R − v_L) / L   (2)

The discrete motion equation of the robot is

x_{k+1} = x_k + [L(v_k^R + v_k^L) / (2(v_k^R − v_k^L))]·(cos θ_{k+1} − cos θ_k)
y_{k+1} = y_k + [L(v_k^R + v_k^L) / (2(v_k^R − v_k^L))]·(sin θ_{k+1} − sin θ_k)
θ_{k+1} = θ_k + [(v_k^R − v_k^L) / L]·Δt   (3)
2. Principle of robot visual angle measurement
Robot visual angle measurement simply means installing a camera on the mobile-robot body: as long as the target points are matched accurately and the camera's pixel resolution is high enough, the measured angle value can be considered sufficiently precise. The basic principle of angle measurement with a camera is shown in Fig. 1, where O is the optical centre and A, B are the two target points; the so-called visual protractor images the two points A and B on the camera and measures the angle ∠AOB.
Let S_a and S_b be the projections of A and B onto the image plane, S_o the projection of the optical centre O onto the image plane, and f the camera focal length. The cosine of the angle ∠AOB is then computed as

cos∠AOB = (‖OS_a‖² + ‖OS_b‖² − ‖S_aS_b‖²) / (2‖OS_a‖·‖OS_b‖) = (2f² + ‖S_oS_a‖² + ‖S_oS_b‖² − ‖S_aS_b‖²) / (2√(f² + ‖S_oS_a‖²)·√(f² + ‖S_oS_b‖²))   (4)

Clearly, the accuracy of this cosine value depends only on the image resolution; angle measurement with a camera is therefore simple and reliable.
3. Filtering equations
Extended Kalman filtering requires the robot's kinematic equation, its observation equation and their error distributions. The nonlinear motion and observation equations of the robot are

x_{k+1} = f(x_k, u_k) + w_k
z_k = h(x_k) + v_k   (5)

where f(x_k, u_k) and h(x_k) are the robot's nonlinear motion model and measurement model, x_k = [x, y, θ]^T, and w_k and v_k are mutually independent white Gaussian noises.
The change of robot position u_k = [Δx_k, Δy_k]^T is obtained from formula (3), and the mean and variance of the position change are obtained from the statistical properties of the encoder discs.
4. Robot visual measurement information
The observation equation of the robot vision is z_k = h(x_k) + v_k, where z_k = [cos α1_k, cos α2_k, …, cos αn_k]^T is the vector of cosines of the angles between the vectors from the robot's position coordinates to the individual feature points, such as ∠AO₁B and ∠AO₂B in Fig. 2.
The true observation equation is

cos α_{AO₁B} = ⟨O₁A, O₁B⟩ / (‖O₁A‖·‖O₁B‖),  cos α_{AO₂B} = ⟨O₂A, O₂B⟩ / (‖O₂A‖·‖O₂B‖)   (7)

However, since the coordinates of A and B are unknown, this equation cannot be used directly for filtering. In the extended Kalman filter, its purpose is, at an estimated position, to estimate the observation z_{k+1|k} of that position and correct the estimated position by the difference between this quantity and the bearing-angle observation z_{k+1} measured directly by the camera, so that the corrected position is statistically closer to the true position. At the same time, because the equation is clearly nonlinear, the recursion also needs its partial derivative with respect to x_k; without the feature-point coordinates this partial derivative has no computable true value and cannot be used in the computation. It is therefore replaced by finite differences: a first-order Taylor expansion is made at x_k and the observation z_{k+1|k} at the position x_{k+1|k} is estimated, giving formulas (8) and (9).
5. Information fusion with the adaptive unscented Kalman filter
With formulas (8) and (9) and the initial-point information, filtering can be carried out using the adaptive unscented Kalman filtering algorithm. An unscented Kalman filter with an adaptation mechanism is designed to further improve the estimation accuracy; the detailed procedure of the adaptive unscented Kalman filtering algorithm is as follows.
1) calculation procedure of standard Unscented kalman filtering device
(1) The initial state is defined as follows (k = 0):

x̄₀ = E[x₀],  P₀ = E[(x₀ − x̄₀)(x₀ − x̄₀)^T]   (10)

where x̄₀ is the expected value of the initial state and P₀ is the initial covariance.

The augmented state includes the original state, the parameters and the process noise, and is defined as

x̂₀ᵃ = [x̄₀^T 0 0]^T,  P₀ᵃ = diag(P₀, Q₀, R₀)   (11)
(2) For k = 1, 2, …, ∞:
(a) The sigma points are computed as

χ₀ = x̂_{k−1},  χ_i = x̂_{k−1} + (√((n+λ)P_{k−1}))_i,  χ_{n+i} = x̂_{k−1} − (√((n+λ)P_{k−1}))_i,  i = 1, …, n   (12)

(b) The prediction step (13) propagates the sigma points through the motion model and forms the predicted mean and covariance from the weighted sigma points, where Q_k is the covariance matrix of the process noise. The weights W^(m) and W^(c) are computed as

W₀^(m) = λ/(n+λ),  W₀^(c) = λ/(n+λ) + (1 − α² + β),  W_i^(m) = W_i^(c) = 1/(2(n+λ)),  i = 1, …, 2n   (14)

In formula (14), n is the dimension of the augmented state. The parameter α controls the spread of the sigma points; in particular, when the system is strongly nonlinear, choosing a suitably small value avoids sampling non-local effects. β is a non-negative weight that incorporates prior knowledge of the higher-order moments of the distribution; for a Gaussian prior the preferred choice is β = 2. To guarantee positive semi-definiteness of the covariance matrix, the adjustment parameter is chosen as κ ≥ 0. The remaining new parameter is defined as

λ = α²(n + κ) − n   (15)
(c) The update step then follows: the sigma points are propagated through the measurement model, the predicted observation, innovation covariance and cross covariance are formed, and the state and covariance are corrected with the Kalman gain. In these formulas, R_k is the covariance matrix of the measurement noise.
2) Adaptive unscented Kalman filter
To further improve the estimation accuracy, a covariance-matching technique is used in the estimation process of the unscented Kalman filter, introducing an adaptive noise-covariance correction mechanism. More precisely, based on the pose sequence of the mobile robot, online estimation of the process-noise covariance Q_k and the measurement-noise covariance R_k is considered; Q_k and R_k are therefore estimated and updated repeatedly according to formula (16).
In formula (16), the measured pose of the mobile robot appears, and C_k is defined as

C_k = (1/L) Σ_{i=k−L+1}^{k} e_i e_i^T   (17)

where e_k = x_k − x̂_k is the pose-estimation error of the mobile robot at step k, C_k is an approximation of the covariance matrix of the robot pose at step k, and L is the size of the sliding window used for covariance matching.
Finally, it should be noted that the above embodiments merely illustrate, and do not limit, the technical scheme of the present invention. Any modification or equivalent replacement made by a person of ordinary skill in the art to the technical scheme of the invention that does not depart from its spirit and scope shall be covered by the claims of the present invention.

Claims (7)

1. A machine-vision-based code-disc positioning correction method for mobile robots, characterized by comprising the following steps:
S1: establish the instantaneous discrete motion model of the robot from the photoelectric encoder discs and track the robot's coordinates in real time;
S2: acquire images of two target points with the camera mounted on the robot body and establish the robot's visual angle-measurement model;
S3: set up an extended Kalman filter to compute the robot's instantaneous displacement, and use the statistical properties of the encoder discs to obtain the mean and variance of the position change;
S4: replace the partial derivatives with finite differences and compute the bearing-angle observations of the target points at the robot's current position;
S5: set up an unscented Kalman filter model to fuse the data obtained in S1-S4: the encoder discs first give a rough position; at this position the target-point observations, i.e. the angles between this position and the target points, are estimated by differences, and the determined encoder-disc position is then corrected using the actually observed angles.
2. The machine-vision-based mobile-robot code-disc positioning correction method of claim 1, characterized in that the instantaneous discrete motion model of the robot in S1 is established as follows:
1) From the pulse frequencies of the photoelectric encoder discs mounted on the robot's left and right wheels, the linear velocities v_L, v_R of the left and right wheels are computed as:
$$v_L = \frac{f_L}{n}\pi d,\qquad v_R = \frac{f_R}{n}\pi d \qquad (1)$$
2) The angular velocity of the robot is then:
$$\omega = \frac{v_R - v_L}{L} \qquad (2)$$
3) the discrete motion equation of robot is:
$$\begin{cases} x_{k+1} = x_k + \dfrac{L\,(v_k^R + v_k^L)}{2\,(v_k^R - v_k^L)}\,(\cos\theta_{k+1} - \cos\theta_k)\\[2ex] y_{k+1} = y_k + \dfrac{L\,(v_k^R + v_k^L)}{2\,(v_k^R - v_k^L)}\,(\sin\theta_{k+1} - \sin\theta_k)\\[2ex] \theta_{k+1} = \theta_k + \dfrac{v_k^R - v_k^L}{L}\,\Delta t \end{cases} \qquad (3)$$
The coordinate system at the midpoint of the driving-wheel axis in the robot's initial position is taken as the world coordinate system, and the robot's coordinates at time k are (x_k, y_k). In the formulas, f_L and f_R are the pulse frequencies of the encoder discs on the left and right driving wheels, d is the driving-wheel diameter, L is the track width between the robot's two driving wheels, n is the line count of the incremental encoder, i.e. the number of pulses the encoder disc outputs per driving-wheel revolution, and Δt is the time step; when the two driving-wheel speeds differ, the robot's trajectory is a circular arc about a fixed centre.
3. The machine-vision-based mobile-robot code-disc positioning correction method of claim 1, characterized in that the visual angle-measurement model of the robot in S2 is established as follows: two target points A and B are chosen, and the angle ∠AOB between A, B and the optical centre O of the camera lens is measured; the cosine of ∠AOB is computed as:
$$\cos\angle AOB = \frac{\|OS_a\|^2 + \|OS_b\|^2 - \|S_aS_b\|^2}{2\,\|OS_a\|\cdot\|OS_b\|} = \frac{2f^2 + \|S_oS_a\|^2 + \|S_oS_b\|^2 - \|S_aS_b\|^2}{2\sqrt{f^2 + \|S_oS_a\|^2}\,\sqrt{f^2 + \|S_oS_b\|^2}} \qquad (4)$$
Images of the target points A and B are acquired by the camera mounted on the robot body; in the formula, f is the camera focal length, S_a and S_b are the projections of A and B onto the image plane, and S_o is the projection of the optical centre O onto the image plane.
4. The machine-vision-based mobile-robot code-disc positioning correction method of claim 1, characterized in that the kinematic equation and visual observation equation of the robot computed in S3 by the adaptive unscented Kalman filter are, respectively:
$$\begin{cases} x_{k+1} = f(x_k, u_k) + w_k\\ z_k = h(x_k) + v_k \end{cases} \qquad (5)$$
where f(x_k, u_k) and h(x_k) are the nonlinear motion model and measurement model of the robot at time k, w_k and v_k are mutually independent white Gaussian noises, and x_k = [x, y, θ]^T,
$$E\!\left[w_k w_j^T\right] = Q_k\,\delta_{kj},\; Q_k \ge 0;\qquad E\!\left[v_k v_j^T\right] = R_k\,\delta_{kj},\; R_k \ge 0 \qquad (6)$$
The change of robot position u_k = [Δx_k, Δy_k]^T is obtained from formula (3), and the mean and variance of the robot's position change are obtained from the fused encoder-disc computation.
5. The machine-vision-based mobile-robot code-disc positioning correction method of claim 1, characterized in that the observation equation of the robot vision is z_k = h(x_k) + v_k, where z_k = [cos α1_k, cos α2_k, …, cos αn_k]^T is the vector of cosines of the angles between the vectors from the robot's position coordinates to the individual feature points;
The true observation equation is:
cos &angle; &alpha; AO 1 B = < O 1 A , O 1 B > | | O 1 A | | &CenterDot; | | O 1 B | | , cos &angle; &alpha; AO 2 B = < O 2 A , O 2 B > | | O 2 A | | &CenterDot; | | O 2 B | | - - - ( 7 )
Specific formula is as follows:
&part; z k &part; x k = cos &alpha; 1 k - cos &alpha; 1 k - 1 x k - x k - 1 cos &alpha; 1 k - cos &alpha; 1 k - 1 y k - y k - 1 cos &alpha; 2 k - cos &alpha; 2 k - 1 x k - x k - 1 cos &alpha; 2 k - cos &alpha; 2 k - 1 y k - y k - 1 . . . . . . cos&alpha;n k - cos&alpha;n k - 1 x k - x k - 1 cos&alpha;n k - cos&alpha;n k - 1 y k - y k - 1 - - - ( 8 )
z k + 1 | k = cos &alpha; 1 k - cos &alpha; 1 k - 1 x k - x k - 1 cos &alpha; 1 k - cos &alpha; 1 k - 1 y k - y k - 1 cos &alpha; 2 k - cos &alpha; 2 k - 1 x k - x k - 1 cos &alpha; 2 k - cos &alpha; 2 k - 1 y k - y k - 1 . . . . . . cos&alpha;n k - cos&alpha;n k - 1 x k - x k - 1 cos&alpha;n k - cos&alpha;n k - 1 y k - y k - 1 u k - - - ( 9 ) .
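Formulas (8) and (9) amount to a finite-difference Jacobian of the cosine observations with respect to position, applied to the encoder increment. A minimal sketch, with function names of our choosing:

```python
import numpy as np

def observation_jacobian(cos_k, cos_km1, pos_k, pos_km1):
    """Finite-difference approximation of ∂z/∂x as in formula (8).
    cos_k, cos_km1: length-n sequences of angle cosines at steps k and k-1;
    pos_k, pos_km1: (x, y) robot positions at those steps.  Assumes the
    robot actually moved in both x and y so the denominators are nonzero."""
    dx = pos_k[0] - pos_km1[0]
    dy = pos_k[1] - pos_km1[1]
    dcos = np.asarray(cos_k, float) - np.asarray(cos_km1, float)
    return np.column_stack((dcos / dx, dcos / dy))  # n x 2 matrix

def predict_observation(H, u):
    """z_{k+1|k} = (∂z/∂x) u_k as in formula (9), with u_k = [Δx, Δy]^T."""
    return H @ np.asarray(u, float)
```

For example, cosines changing by 0.1 over a step of (Δx, Δy) = (1, 2) give Jacobian columns of 0.1 and 0.05, and a predicted observation change of 0.2 per feature.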
6. The machine-vision-based mobile robot code-disc positioning correction method according to claim 1, characterized in that the unscented Kalman filter in S5 uses the standard unscented Kalman filter, and the information fusion method is: using the two pieces of information of formulas (8) and (9) together with the initial point, filtering can be performed with the adaptive unscented Kalman filtering algorithm; the detailed procedure of the adaptive unscented Kalman filtering algorithm is described as follows;

1) Calculation steps of the standard unscented Kalman filter

(1) The initial state is defined as follows (k = 0):

$$\bar x_0=E[x_0],\qquad P_0=E\left[(x_0-\bar x_0)(x_0-\bar x_0)^T\right]\qquad(10)$$

In the formula, $\bar x_0$ is the expected value of the initial state and $P_0$ is the initial covariance;

The augmented state includes the original state together with the process and measurement noise, and is defined as

$$\hat x_0^a=\begin{bmatrix}\bar x_0^T & 0 & 0\end{bmatrix}^T,\qquad P_0^a=\mathrm{diag}(P_0,\,Q_0,\,R_0)\qquad(11)$$

(2) For k = 1, 2, ..., ∞:

(a) The sigma points are calculated as

$$X_k=\begin{bmatrix}\hat x_k^a & \hat x_k^a+\gamma\sqrt{P_k^a} & \hat x_k^a-\gamma\sqrt{P_k^a}\end{bmatrix}\qquad(12)$$

(b) The prediction step is

$$\begin{cases}X_{k+1,k}^{*}=f(X_k,u_k)\\ \hat x_{k+1,k}^a=\sum_{i=0}^{2n}W_i^mX_{k+1,k}^{*(i)}\\ P_{k,k+1}^a=\sum_{i=0}^{2n}W_i^c\left[X_{k+1,k}^{*(i)}-\hat x_{k+1,k}^a\right]\left[X_{k+1,k}^{*(i)}-\hat x_{k+1,k}^a\right]^T+Q_k\\ X_{k+1,k}=\begin{bmatrix}\hat x_{k+1,k}^a & \hat x_{k+1,k}^a+\gamma\sqrt{P_{k,k+1}^a} & \hat x_{k+1,k}^a-\gamma\sqrt{P_{k,k+1}^a}\end{bmatrix}\\ z_{k+1,k}=h(X_{k+1,k})\\ \hat z_{k+1,k}=\sum_{i=0}^{2n}W_i^mz_{k+1,k}^{(i)}\end{cases}\qquad(13)$$

where $Q_k$ is the covariance matrix of the process noise, and the weights $W_i^m$ and $W_i^c$ are computed as

$$\begin{cases}W_0^m=\dfrac{\lambda}{n+\lambda}\\ W_0^c=\dfrac{\lambda}{n+\lambda}+(1-\alpha^2+\beta)\\ W_i^m=W_i^c=\dfrac{1}{2(n+\lambda)},\quad i=1,2,\ldots,2n\end{cases}\qquad(14)$$

In formula (14), n is the dimension of the augmented state; the parameter α controls the spread of the sigma points; β is a non-negative weight, and for a Gaussian prior model the parameter β is chosen as β = 2; to guarantee the positive semi-definiteness of the covariance matrix, the scaling parameter is chosen as κ ≥ 0; the remaining parameters are defined as follows:

$$\lambda=\alpha^2(n+\kappa)-n,\qquad \gamma=\sqrt{n+\lambda}\qquad(15)$$

(c) The update step is

$$\begin{cases}P_{yy}=\sum_{i=0}^{2n}W_i^c\left[z_{k+1,k}^{(i)}-\hat z_{k+1,k}\right]\left[z_{k+1,k}^{(i)}-\hat z_{k+1,k}\right]^T+R_k\\ P_{xy}=\sum_{i=0}^{2n}W_i^c\left[X_{k+1,k}^{(i)}-\hat x_{k+1,k}^a\right]\left[z_{k+1,k}^{(i)}-\hat z_{k+1,k}\right]^T\\ K_k=P_{xy}P_{yy}^{-1}\\ \hat x_{k+1}^a=\hat x_{k+1,k}^a+K_k\left(z_{k+1}-\hat z_{k+1,k}\right)\\ P_{k+1}^a=P_{k,k+1}^a-K_kP_{yy}K_k^T\end{cases}\qquad(16)$$

In the formula, $R_k$ is the covariance matrix of the measurement noise.
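The recursion of formulas (12) through (16) can be sketched compactly. This is a minimal, non-augmented version under our own class and parameter names: noise is treated as additive, so Q and R are added to the predicted covariances instead of being carried in the augmented state of formula (11), and the sigma points interleave the plus and minus columns (harmless, since those weights are equal):

```python
import numpy as np

class UnscentedKalmanFilter:
    """Minimal sketch of the standard UKF of formulas (12)-(16),
    additive-noise form; names are illustrative, not the patent's."""

    def __init__(self, f, h, Q, R, x0, P0, alpha=1.0, beta=2.0, kappa=0.0):
        self.f, self.h, self.Q, self.R = f, h, np.asarray(Q), np.asarray(R)
        self.x, self.P = np.asarray(x0, float), np.asarray(P0, float)
        n = self.n = self.x.size
        self.lam = alpha**2 * (n + kappa) - n                 # formula (15)
        self.gamma = np.sqrt(n + self.lam)
        self.Wm = np.full(2 * n + 1, 1.0 / (2 * (n + self.lam)))
        self.Wc = self.Wm.copy()                              # formula (14)
        self.Wm[0] = self.lam / (n + self.lam)
        self.Wc[0] = self.lam / (n + self.lam) + (1 - alpha**2 + beta)

    def _sigma_points(self, x, P):
        S = np.linalg.cholesky(P)          # matrix square root of P
        pts = [x]                          # formula (12)
        for i in range(self.n):
            pts.append(x + self.gamma * S[:, i])
            pts.append(x - self.gamma * S[:, i])
        return np.array(pts)

    def step(self, u, z):
        z = np.asarray(z, float)
        # --- prediction step, formula (13) ---
        X = self._sigma_points(self.x, self.P)
        Xp = np.array([self.f(s, u) for s in X])
        x_pred = self.Wm @ Xp
        P_pred = self.Q + sum(w * np.outer(d, d)
                              for w, d in zip(self.Wc, Xp - x_pred))
        Xs = self._sigma_points(x_pred, P_pred)   # redraw sigma points
        Zs = np.array([self.h(s) for s in Xs])
        z_pred = self.Wm @ Zs
        # --- update step, formula (16) ---
        Pyy = self.R + sum(w * np.outer(d, d)
                           for w, d in zip(self.Wc, Zs - z_pred))
        Pxy = sum(w * np.outer(dx, dz) for w, dx, dz
                  in zip(self.Wc, Xs - x_pred, Zs - z_pred))
        K = Pxy @ np.linalg.inv(Pyy)              # K_k = P_xy P_yy^-1
        self.x = x_pred + K @ (z - z_pred)
        self.P = P_pred - K @ Pyy @ K.T
        return self.x
```

With a linear motion model the filter is exact, which gives a convenient sanity check: propagating x + u and measuring the state directly should reproduce a noiseless measurement exactly.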
7. The machine-vision-based mobile robot code-disc positioning correction method according to claim 6, characterized in that the unscented Kalman filter in S5 uses the adaptive unscented Kalman filter, and the information fusion method is: Q_k and R_k are estimated and updated iteratively:

$$\begin{cases}Q_k=K_kC_k(K_k)^T\\ R_k=C_k+\sum_{i=0}^{2n}W_i^c\left[z_{k+1,k}^{(i)}-\hat z_{k+1,k}\right]\left[z_{k+1,k}^{(i)}-\hat z_{k+1,k}\right]^T\end{cases}\qquad(17)$$

In the formula, $z_{k+1}$ denotes the measured pose of the mobile robot, and $C_k$ is defined as follows:

$$C_k=\sum_{i=k-L+1}^{k}E_i(E_i)^T\qquad(18)$$

In the formula, $E_k$ is the pose estimation error of the mobile robot at step k, $C_k$ is the approximation of the pose covariance matrix of the robot at step k, and L is the size of the sliding window matched with the covariance.
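The adaptive update of formulas (17) and (18) maintains a sliding window of error outer products. A minimal sketch under our own names, taking E_k to be the filter innovation (the patent chunk does not spell out E_k's definition, so that choice is an assumption here):

```python
import numpy as np
from collections import deque

class NoiseAdapter:
    """Sliding-window estimate of C_k (formula (18)) and the resulting
    Q_k, R_k updates (formula (17)).  Window length L as in claim 7."""

    def __init__(self, L):
        self.window = deque(maxlen=L)  # oldest outer product drops out

    def update(self, error, K, S_z):
        """error: E_k (assumed here to be the innovation);
        K: Kalman gain K_k;
        S_z: the weighted sum of measurement-deviation outer products
             appearing in formula (17)'s second line."""
        e = np.atleast_1d(np.asarray(error, float))
        self.window.append(np.outer(e, e))
        C = sum(self.window)       # formula (18)
        Q = K @ C @ K.T            # formula (17), first line
        R = C + S_z                # formula (17), second line
        return Q, R
```

Once the window is full, each new error displaces the oldest one, so C_k tracks the recent error statistics rather than the whole history.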
CN201710119445.5A 2017-03-02 2017-03-02 A mobile robot code-disc positioning correction method based on machine vision Pending CN106871904A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710119445.5A CN106871904A (en) A mobile robot code-disc positioning correction method based on machine vision

Publications (1)

Publication Number Publication Date
CN106871904A true CN106871904A (en) 2017-06-20

Family

ID=59168338

Country Status (1)

Country Link
CN (1) CN106871904A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107224692A (en) * 2017-06-26 2017-10-03 福州大学 The wheeled autonomous aiming extinguishing method of fire-fighting robot
CN107671855A (en) * 2017-08-31 2018-02-09 广州泰行智能科技有限公司 A kind of construction method and device of the space coordinates based on mechanical arm
CN107797094A (en) * 2017-11-10 2018-03-13 南阳师范学院 A kind of mobile robot position and orientation estimation method based on RFID
CN109099921A (en) * 2018-08-03 2018-12-28 重庆电子工程职业学院 Robot compensates localization method and device
CN109270288A (en) * 2018-10-15 2019-01-25 中国航空工业集团公司洛阳电光设备研究所 A kind of axis angular rate estimation method based on position interpolation
CN109870167A (en) * 2018-12-25 2019-06-11 四川嘉垭汽车科技有限公司 Positioning and map creating method while the pilotless automobile of view-based access control model
CN111053498A (en) * 2018-10-17 2020-04-24 郑州雷动智能技术有限公司 Displacement compensation method of intelligent robot and application thereof
CN111750896A (en) * 2019-03-28 2020-10-09 杭州海康机器人技术有限公司 Holder calibration method and device, electronic equipment and storage medium
CN112405526A (en) * 2020-10-26 2021-02-26 北京市商汤科技开发有限公司 Robot positioning method and device, equipment and storage medium
CN112435279A (en) * 2019-08-26 2021-03-02 天津大学青岛海洋技术研究院 Optical flow conversion method based on bionic pulse type high-speed camera
CN112461237A (en) * 2020-11-26 2021-03-09 浙江同善人工智能技术有限公司 Multi-sensor fusion positioning algorithm applied to dynamic change scene
CN112558602A (en) * 2020-11-19 2021-03-26 许昌许继软件技术有限公司 Robot positioning method based on image characteristics
CN113485326A (en) * 2021-06-28 2021-10-08 南京深一科技有限公司 Autonomous mobile robot based on visual navigation
CN113989371A (en) * 2021-10-28 2022-01-28 山东大学 Modularized platform relative pose estimation system based on vision

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789233A (en) * 2012-06-12 2012-11-21 湖北三江航天红峰控制有限公司 Vision-based combined navigation robot and navigation method
CN104914865A (en) * 2015-05-29 2015-09-16 国网山东省电力公司电力科学研究院 Transformer station inspection tour robot positioning navigation system and method
CN106323294A (en) * 2016-11-04 2017-01-11 新疆大学 Positioning method and device for patrol robot of transformer substation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CUI MINGYUE, et al.: "An adaptive unscented Kalman filter-based adaptive tracking control for wheeled mobile robots with control constrains in the presence of wheel slipping", INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS *
CHEN WEI, et al.: "Track correction algorithm based on a visual protractor", ROBOT *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170620