CN109579825A - Robot positioning system and method based on binocular vision and convolutional neural networks


Info

Publication number
CN109579825A
CN109579825A (application CN201811416964.9A)
Authority
CN
China
Prior art keywords
binocular vision
neural networks
convolutional neural
mobile robot
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811416964.9A
Other languages
Chinese (zh)
Other versions
CN109579825B (en)
Inventor
马国军
倪朋
郑威
朱琎
李效龙
曾庆军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology filed Critical Jiangsu University of Science and Technology
Priority to CN201811416964.9A
Publication of CN109579825A
Application granted
Publication of CN109579825B
Legal status: Active (current)

Links

Classifications

    • G01C 21/005: Navigation; navigational instruments with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 11/30: Photogrammetry or videogrammetry; interpretation of pictures by triangulation
    • G01C 11/36: Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
    • G01C 21/165: Dead reckoning by integrating acceleration or speed, combined with non-inertial navigation instruments
    • G01C 21/20: Instruments for performing navigational calculations
    • G06F 18/24: Pattern recognition; classification techniques
    • G06N 3/045: Neural networks; combinations of networks
    • G06V 20/10: Scenes; terrestrial scenes

Abstract

The present invention discloses a robot positioning system and a localization method based on binocular vision and convolutional neural networks. The system comprises a mobile robot equipped with an inertial sensor module, a binocular vision module and a host computer, each connected to a sensor control module. The motion state of the mobile robot is obtained by the inertial sensor module; images of the surrounding environment are obtained by the binocular vision module, stereo matching is performed using a convolutional neural network, and the positions of landmarks are calculated. The sensor control module controls the operation of the inertial sensor module and the binocular vision module and receives their data, then passes the acquired data to the host computer for data processing to obtain the position of the mobile robot. The proposed method enables the mobile robot to localize itself efficiently.

Description

Robot positioning system and method based on binocular vision and convolutional neural networks
Technical field
The present invention relates to the field of robot simultaneous localization and mapping (SLAM), and more particularly to a robot positioning system and localization method based on binocular vision and convolutional neural networks.
Background technique
For an autonomous mobile robot, simultaneous localization and mapping (SLAM) technology provides a feasible scheme for achieving truly independent navigation in complex environments that are unknown or lack a prior map. SLAM builds a map of the surrounding environment in real time while simultaneously estimating the robot's own location within that map.
A mobile robot perceives its environment through sensors, chiefly laser sensors and vision sensors. Laser sensors are bulky, power-hungry and expensive, and are unsuitable for small robots. In addition, laser sensors struggle to extract corner or line features in highly cluttered or crowded scenes and suffer from perception drift; their perceptual resolution is low, making it difficult to associate observed features with known features. Vision sensors, by contrast, acquire rich information, are easy to mount and are inexpensive. They perform well in extracting and matching environmental landmark features and in representing and managing environmental maps, and vision can solve the data-association problem in SLAM very well. Compared with a stereo system, monocular vision acquires less image information, so its bearing measurements carry greater uncertainty and depth recovery is comparatively difficult, which restricts its application.
Binocular vision simulates the principle of human vision, using a computer to perceive distance passively. An object is observed from two or more viewpoints; from the images acquired at the different perspectives, the matching relationships between pixels of the images are used, via the triangulation principle, to compute the offset (disparity) between corresponding pixels and thereby recover the three-dimensional information of the object. Once the depth information of an object is obtained, the actual distance between the object and the camera, the three-dimensional size of the object, and the actual distance between two points can all be calculated.
A convolutional neural network is a special kind of artificial neural network that has become one of the most widely used tools in speech analysis and image recognition. By combining local receptive fields, shared weights, and spatial or temporal down-sampling, convolutional neural networks fully exploit properties contained in the data itself, such as locality, and optimize the network structure. The convolutional neural network structure is a special multilayer perceptron with a high degree of invariance to translation, scaling, tilting, rotation and other forms of deformation.
During simultaneous localization and mapping (SLAM) of a mobile robot, the extended Kalman filter (EKF) suffers from poor real-time performance, low accuracy, and susceptibility to interference.
Summary of the invention
The purpose of the present invention is to address the above problems and deficiencies of the prior art by proposing a robot positioning system and localization method based on binocular vision and convolutional neural networks. The present invention uses an augmented extended Kalman filtering algorithm to strengthen the robustness of robot localization and improve positioning accuracy.
To achieve the above object, the present invention adopts the following technical scheme:
A robot positioning system based on binocular vision and convolutional neural networks comprises a mobile robot equipped with an inertial sensor module, a binocular vision module and a host computer, each connected to a sensor control module. The motion state of the mobile robot is obtained by the inertial sensor module; images of the surrounding environment are obtained by the binocular vision module, and stereo matching is performed using a convolutional neural network to obtain the positions of landmarks. The sensor control module controls the operation of the inertial sensor module and the binocular vision module and receives their data, then passes the acquired data to the host computer for data processing to obtain the position of the mobile robot. The proposed method enables the mobile robot to localize itself efficiently.
Further, the binocular vision module described above consists of a binocular stereo camera, the inertial sensor module consists of an inertial sensor, the sensor control module consists of a single-chip microcomputer, and the host computer is a laptop.
In order to achieve the above object, the present invention also adopts the following technical solution.
A localization method for the robot positioning system based on binocular vision and convolutional neural networks comprises the following steps:
(1) The motion state of the mobile robot, including acceleration and angular velocity, is obtained by the inertial sensor;
(2) Images of the surroundings are obtained by the binocular stereo camera and used to calculate the positions of landmarks;
2.1 Obtain binocular images of the surrounding environment with the binocular camera.
2.2 Image rectification
The images obtained by a binocular camera rarely conform to a perfectly parallel-aligned binocular vision model, so a binocular rectification operation is needed: the images obtained by the binocular camera are re-projected so that they satisfy parallel alignment. Image rectification takes two images that in practice are not coplanar and row-aligned and corrects them into coplanar row alignment.
Binocular rectification uses the monocular intrinsic data obtained from camera calibration (focal length, imaging origin, distortion coefficients) and the relative position relationship of the two cameras (rotation matrix and translation vector) to eliminate distortion and perform row alignment on the left and right views separately, so that the imaging origins of the left and right views coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned. With the left and right views adjusted to this ideal, essentially parallel-aligned form, any point on one image and its corresponding point on the other image necessarily have the same row number, so a one-dimensional search along that row suffices to find the corresponding point.
Binocular rectification is usually realized with the standard binocular rectification algorithm, the Bouguet algorithm.
The principle of the Bouguet algorithm is mainly to minimize the re-projection distortion while maximizing the common observation area. Given the relative relationship of the two cameras, i.e. the rotation matrix R and translation vector T, the left and right cameras are each rotated by half, i.e. the rotation matrix R is decomposed into r_l and r_r; in this way the optical axes of the two cameras become parallel while each remains close to its original principal-axis pointing direction and vector. The image planes rotated in this way are only made coplanar; row alignment is not yet achieved. To achieve row alignment, i.e. to transform the epipoles to infinity and align the epipolar lines horizontally, a transformation matrix R_rect needs to be constructed; let:
$$R_{rect} = \begin{bmatrix} e_1^T \\ e_2^T \\ e_3^T \end{bmatrix}$$
where R_rect is the transformation matrix and e_1, e_2, e_3 are the three vectors composing it.
Since the matrix R_rect transforms the epipole of the left camera to infinity and makes the epipolar lines horizontal, the direction of vector e_1 is simply the in-plane direction between the principal points of the two cameras:
$$e_1 = \frac{T}{\|T\|}$$
Vector e_2 only needs to be orthogonal to vector e_1. Under normal circumstances the direction orthogonal to the principal optical axis is selected; it is obtained by the cross product of the principal optical axis direction (0, 0, 1) with vector e_1 and then normalized:
$$e_2 = \frac{1}{\sqrt{T_x^2 + T_y^2}}\,\begin{bmatrix} -T_y & T_x & 0 \end{bmatrix}^T$$
where T_x, T_y, T_z are obtained by decomposing the vector T.
The third vector e_3 is orthogonal to vectors e_1 and e_2 and is obtained by their cross product:
$$e_3 = e_1 \times e_2$$
The rectification matrices of the binocular camera can then be obtained from the half rotation matrices and the transformation matrix, giving the ideal parallel-optical-axis binocular vision model:
$$R_l = R_{rect}\, r_l, \qquad R_r = R_{rect}\, r_r$$
where R_rect is the transformation matrix, r_l and r_r are the half (synthesized) rotation matrices, and R_l and R_r are the overall rotation matrices.
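For illustration, OpenCV's cv2.stereoRectify implements this Bouguet rectification. The Python sketch below shows the typical call sequence; all calibration values here are placeholder assumptions, not values from the patent (real values come from a prior stereo calibration, e.g. cv2.stereoCalibrate on a checkerboard sequence):

```python
import cv2
import numpy as np

# Placeholder intrinsics/extrinsics (illustrative assumptions).
K_l = K_r = np.array([[700.0, 0.0, 320.0],
                      [0.0, 700.0, 240.0],
                      [0.0, 0.0, 1.0]])
dist_l = dist_r = np.zeros(5)            # distortion coefficients
R = np.eye(3)                            # rotation between the two cameras
T = np.array([[-0.12], [0.0], [0.0]])    # 12 cm baseline along x
size = (640, 480)

# Bouguet rectification: R1, R2 correspond to R_rect*r_l and R_rect*r_r
# in the text; P1, P2 are the new projection matrices, Q the
# disparity-to-depth mapping matrix.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K_l, dist_l, K_r, dist_r, size, R, T, alpha=0)

# Remap tables: applying them to the raw frames yields row-aligned images.
map1x, map1y = cv2.initUndistortRectifyMap(K_l, dist_l, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K_r, dist_r, R2, P2, size, cv2.CV_32FC1)
```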
2.3 Perform stereo matching with a convolutional neural network.
2.3.1 Compute the matching cost of each candidate disparity for the binocular image pair using a convolutional neural network.
The essence of stereo matching is to compare the degree of similarity of two pixels in the binocular image pair, and the matching cost is the mathematical expression of this similarity: the lower the matching cost between two pixels, the more similar they are and the higher the matching degree. Using a block-based image matching method, a convolutional neural network is used to find accurate correspondences between blocks: the images are divided into multiple blocks, the similarity of each pair of blocks is compared in an attempt to find corresponding pixels, and similarity scores of different degrees are then assigned according to the correspondences of the different blocks. The negative of the similarity is defined as the stereo matching cost, for use in the subsequent cost aggregation and disparity computation. The specific steps of binocular stereo matching cost computation based on the convolutional neural network method can be summarized as follows:
1) Extract the feature information of the left and right image blocks separately, each with a 4-layer convolutional network branch;
2) Concatenate the left and right feature information and make a classification decision on the features with a 4-layer fully connected network, where the loss function is the cross-entropy cost function:
$$-\big[t \log(s) + (1 - t)\log(1 - s)\big]$$
where s is the output of the similarity comparison network and t is the sample label: t = 1 when the input is a positive sample and t = 0 when the input is a negative sample;
3) Output the similar/dissimilar decision in the form of a similarity score, for use in the subsequent stereo matching;
4) Construct the cost function in inverse proportion to the similarity score to obtain the matching cost.
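A minimal PyTorch sketch of such a similarity network is shown below. Only the 4 convolutional layers, 4 fully connected layers and cross-entropy training are taken from the text; the 9x9 grayscale patch size, layer widths and sigmoid output are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MatchingCostNet(nn.Module):
    """Siamese matching-cost network: a shared 4-layer convolutional
    branch per image patch, with concatenated features scored by a
    4-layer fully connected decision head."""
    def __init__(self, n_feat=64):
        super().__init__()
        self.branch = nn.Sequential(                 # feature extraction
            nn.Conv2d(1, n_feat, 3), nn.ReLU(),
            nn.Conv2d(n_feat, n_feat, 3), nn.ReLU(),
            nn.Conv2d(n_feat, n_feat, 3), nn.ReLU(),
            nn.Conv2d(n_feat, n_feat, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(                   # similarity decision
            nn.Linear(2 * n_feat, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),         # s in (0, 1)
        )

    def forward(self, left_patch, right_patch):
        # A 9x9 patch shrinks to 1x1 after four unpadded 3x3 convs.
        f_l = self.branch(left_patch).flatten(1)
        f_r = self.branch(right_patch).flatten(1)
        return self.head(torch.cat([f_l, f_r], dim=1))

# Training minimizes -[t*log(s) + (1-t)*log(1-s)], i.e. nn.BCELoss(),
# with t = 1 for matching patch pairs and t = 0 otherwise.
```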
2.3.2 Cost aggregation
Selecting the optimal disparity from the matching cost of a single pixel is easily disturbed by image noise, making the algorithm unstable. In a local matching algorithm, the matching costs of all pixels within a neighborhood (the aggregation window) centered on the pixel to be matched are therefore summed into a statistic. For a single candidate disparity the aggregation window is a two-dimensional plane; over all candidate disparities in the disparity search range it is a three-dimensional volume.
Matching cost aggregation is equivalent to a convolution over the aggregation window on each candidate disparity plane of the disparity space, as shown below:
$$C_A(p, d) = \sum_{q \in N(p)} w(q, d)\, C_0(q, d)$$
where C_A(p, d) denotes the aggregated cost of reference point p at candidate disparity d, C_0(q, d) denotes the original matching cost of a point q in the aggregation window N(p) at candidate disparity d, and w(q, d) denotes the weight of point q in the aggregation window on the disparity-d plane. The simplest choice of window weights is averaging: assuming the aggregation window is a square window of fixed size 5 x 5, each window weight is set to 1/25.
2.3.3 Disparity computation
In disparity computation, local matching algorithms and global matching algorithms differ considerably. The emphasis of a local matching algorithm lies in the two steps of matching cost computation and cost aggregation; establishing the final disparity value is comparatively easy and generally uses the winner-take-all (WTA) strategy. The WTA strategy simply selects, within the whole disparity search range, the disparity d with the optimal matching cost as the final disparity of the point p to be matched:
$$d_p = \arg\min_{d \in D} C(p, d)$$
where d_p denotes the final disparity value of the point p to be matched and D denotes the disparity search range.
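The WTA selection reduces to an argmin over the disparity axis of the aggregated cost volume; a minimal sketch:

```python
import numpy as np

def wta_disparity(aggregated_costs):
    """Winner-take-all: per pixel, pick the disparity index with the
    lowest aggregated cost (costs are negated similarity scores)."""
    return np.argmin(aggregated_costs, axis=2)
```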
Among the four steps, disparity computation is the key step of a global matching algorithm. Most global matching algorithms first establish an energy function and then optimize it by continually adjusting the disparity values to obtain the final disparity:
$$E(d) = E_{data}(d) + \lambda E_{smooth}(d)$$
where E_data(d) is the data term, expressing the degree of similarity of two pixels at disparity d, and E_smooth(d) is the smoothness term, which encodes the smoothness assumption of the global matching algorithm; usually E_smooth(d) only measures the difference between the disparities of neighboring pixels.
2.3.4 Disparity refinement, to obtain the final disparity value
Both local and global matching algorithms directly produce integer disparity values distributed over the disparity search range, which meets the needs of applications with modest precision requirements. For applications with real precision requirements, however, such disparities often cannot give good results. On the basis of the initial disparity, mathematical tools such as sub-pixel enhancement can further refine the disparity to obtain sub-pixel-level disparity values. At the same time, methods such as the left-right consistency check and median filtering are used to remove mismatched points in the disparity map, yielding more accurate disparity values.
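A minimal sketch of the left-right consistency check and median filtering mentioned above, assuming two disparity maps computed with the left and right images as reference; the tolerance value and the zero-filling of rejected pixels are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def refine_disparity(disp_left, disp_right, tol=1.0):
    """Left-right consistency check followed by median filtering.
    A left pixel (r, c) with disparity d matches right column c - d;
    it survives only if the right disparity there agrees within tol."""
    h, w = disp_left.shape
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    matched = np.clip(cols - disp_left.astype(int), 0, w - 1)
    consistent = np.abs(disp_left - disp_right[rows, matched]) <= tol
    disp = np.where(consistent, disp_left, 0.0)   # zero out mismatches
    return median_filter(disp, size=3)            # suppress speckle
```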
2.4 Obtain the position of the landmark from the disparity value using the triangulation principle.
Suppose a spatial point P(X_c, Y_c, Z_c) projects into the left and right cameras at image coordinates p_l(u_l, v_l) and p_r(u_r, v_r), with image coordinates measured relative to each camera's principal point. Since the cameras are parallel-aligned, v = v_l = v_r. Triangle ΔPO_lO_r (through the two optical centers) is similar to triangle ΔPp_lp_r (through the two image points), which gives the following relation:
$$\frac{B - (u_l - u_r)}{B} = \frac{Z_c - f}{Z_c}$$
Simplifying gives:
$$Z_c = \frac{fB}{d}$$
where B is the baseline length, i.e. the distance between the optical centers of the two cameras; f is the focal length; and d = u_l - u_r is the disparity. The camera coordinates and image coordinates are related by:
$$u_l = \frac{f X_c}{Z_c}, \qquad v = \frac{f Y_c}{Z_c}$$
so the coordinate conversion relations of the parallel-optical-axis vision model are obtained as:
$$X_c = \frac{B\, u_l}{d}, \qquad Y_c = \frac{B\, v}{d}, \qquad Z_c = \frac{fB}{d}$$
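These conversion relations translate directly into code; a minimal sketch, assuming pixel coordinates already measured relative to the principal point and a nonzero disparity:

```python
def landmark_position(u_l, u_r, v, f, B):
    """Parallel-optical-axis triangulation: recover the 3-D point in
    the left camera frame from a matched pixel pair.
    f is the focal length in pixels, B the baseline length."""
    d = u_l - u_r                 # disparity
    Z = f * B / d                 # depth from similar triangles
    X = B * u_l / d
    Y = B * v / d
    return X, Y, Z
```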
(3) Perform augmented extended Kalman filter processing on the acquired data.
3.1 Prediction process
From the pose vector and covariance of the mobile robot at the previous moment, predict the state vector and covariance of the mobile robot at the current moment.
3.2 Observation process
Observe the positions of the various landmarks, and update the pose of the mobile robot and the system error-correction parameters using the Kalman gain and the innovation matrix.
3.3 Data association
Judge the observation result: if the observed landmark is a new landmark, enter the state augmentation stage; if the observed landmark is a known landmark, enter the update stage.
3.4 State augmentation process
When the mobile robot observes a new landmark, after verification, the position state of this newly observed landmark needs to be added to the system state vector.
3.5 Update stage
Update the new system state vector and covariance matrix with the augmentation model.
(4) Update the map: according to the result of the augmented extended Kalman filter, draw and update the position of the mobile robot.
The invention has the following advantages:
1. The present invention acquires images with a binocular camera, so the accuracy of the acquired images is high.
2. Stereo matching with the convolutional neural network method yields an accurate disparity map, and thus more accurate landmark positions, improving the accuracy of the mobile robot position estimate.
3. The present invention localizes the mobile robot with the augmented extended Kalman filter method, improving positioning accuracy.
Description of the drawings
Fig. 1 is the system schematic of the robot positioning system based on binocular vision and convolutional neural networks;
Fig. 2 is the method flow chart of the robot positioning system based on binocular vision and convolutional neural networks;
Fig. 3 is the stereo matching flow chart;
Fig. 4 is the landmark acquisition part flow chart;
Fig. 5 is the augmented extended Kalman filter part flow chart.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and embodiments.
Referring to Fig. 1, the system schematic of a robot positioning system based on binocular vision and convolutional neural networks: the system comprises a mobile robot equipped with an inertial sensor module, a binocular vision module and a host computer, each connected to a sensor control module.
The inertial sensor obtains the motion state of the mobile robot; the inertial sensor used may be an MPU6050 or MPU6500, and the inertial sensor module is connected to the sensor control module by a USB data cable. The inertial sensor integrates a three-axis gyroscope and a three-axis accelerometer; when the robot carrying the inertial sensor module moves, the chip detects and outputs the angular velocity and acceleration in real time along the detection axes and directions defined by the inertial sensor. The binocular vision module consists of a binocular camera, which may be a ZED binocular camera or a Bumblebee2 binocular camera; the binocular camera is connected to the controller module by a USB data cable.
The sensor control module uses an STM32 as its core processor and is connected to the computer through an RS232 serial port for real-time communication. The sensor control module controls the operation of the inertial sensor module and the binocular vision module and receives their data, then passes the acquired data to the host computer for data processing to obtain the position of the mobile robot.
The host computer is a laptop, which can draw and display the position of the mobile robot according to the result of the augmented extended Kalman filter. The mobile robot used here is a TurtleBot miniature mobile robot, which is small and highly flexible and can carry the inertial sensor module, the binocular vision module, the sensor control module and the laptop.
The localization method of the robot positioning system based on binocular vision and convolutional neural networks invented here is now described. Fig. 2 is the localization method flow chart of the robot positioning system based on binocular vision and convolutional neural networks; Fig. 3 is the stereo matching flow chart; Fig. 4 is the landmark acquisition part flow chart; Fig. 5 is the augmented extended Kalman filter part flow chart. The specific steps are as follows:
(1) Prediction
The kinematic model equation of the mobile robot is:
$$X_v(k) = f\big(X_v(k-1), u(k)\big) = \begin{bmatrix} x_{k-1} + V_c\, dt\, \cos(\gamma + \theta_{k-1}) \\ y_{k-1} + V_c\, dt\, \sin(\gamma + \theta_{k-1}) \\ \theta_{k-1} + V_c\, dt\, \sin(\gamma)/L \end{bmatrix}$$
where f(·) is the kinematic model equation of the mobile robot, dt is the control time step, and L is the wheelbase between the front and rear axles. At time k-1 the mobile robot pose vector is $X_v(k-1) = [x_{k-1}, y_{k-1}, \theta_{k-1}]^T$; the control vector of the mobile robot from time k-1 to k is $u(k) = [V_c, \gamma]^T$, where V_c is the velocity correction parameter and γ is the course (steering) angle.
Then at time k, the predictions of the state and covariance are respectively:
$$\hat X^-(k) = f\big(\hat X^+(k-1), u(k)\big)$$
$$P^-(k) = \nabla f_x\, P^+(k-1)\, \nabla f_x^T + \nabla f_u\, Q_k\, \nabla f_u^T$$
where $\hat X$ is the mobile robot pose vector and f(·) is the kinematic model equation of the mobile robot; $\nabla f_x$ and $\nabla f_u$ are the Jacobian determinants of the robot motion model with respect to the mobile robot pose vector $X_v(k)$ and the control vector $u(k)$; $Q_k$ is the error covariance of the process noise, where the process noise is assumed to be zero-mean Gaussian noise.
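A minimal sketch of this prediction step for the full SLAM state (vehicle pose plus landmark coordinates), assuming the bicycle-type motion model reconstructed above; the variable names and the state layout are illustrative assumptions:

```python
import numpy as np

def ekf_predict(x, P, u, Q, dt, L):
    """AEKF-SLAM prediction. x = [x, y, theta, m1x, m1y, ...],
    u = [Vc, gamma] (corrected velocity, course/steering angle),
    Q the 2x2 process-noise covariance on u."""
    Vc, g = u
    th = x[2]
    # Motion model acts on the vehicle part of the state only.
    x_pred = x.copy()
    x_pred[0] += Vc * dt * np.cos(g + th)
    x_pred[1] += Vc * dt * np.sin(g + th)
    x_pred[2] += Vc * dt * np.sin(g) / L

    # Jacobian w.r.t. the state (identity on the landmark block).
    n = x.size
    Fx = np.eye(n)
    Fx[0, 2] = -Vc * dt * np.sin(g + th)
    Fx[1, 2] =  Vc * dt * np.cos(g + th)
    # Jacobian w.r.t. the control vector [Vc, gamma].
    Fu = np.zeros((n, 2))
    Fu[0] = [dt * np.cos(g + th), -Vc * dt * np.sin(g + th)]
    Fu[1] = [dt * np.sin(g + th),  Vc * dt * np.cos(g + th)]
    Fu[2] = [dt * np.sin(g) / L,   Vc * dt * np.cos(g) / L]

    P_pred = Fx @ P @ Fx.T + Fu @ Q @ Fu.T
    return x_pred, P_pred
```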
(2) Observation
Assuming the landmarks in the environment are static, let $(x_i, y_i)$ be the position of the i-th landmark observed by the mobile robot. The sensors carried by the mobile robot return the distance and angle between the robot and the landmark, so the observation model may be expressed as:
$$z_i(k) = h\big(\hat X^-(k)\big) = \begin{bmatrix} \sqrt{(x_i - x)^2 + (y_i - y)^2} \\ \arctan\dfrac{y_i - y}{x_i - x} - \theta \end{bmatrix}$$
The innovation covariance is:
$$S(k) = \nabla h\, P^-(k)\, (\nabla h)^T + R_k$$
In the above formula, S(k) is the innovation matrix, ∇h is the Jacobian of the observation model with respect to the state $\hat X^-(k)$, and $P^-(k)$ is the predicted covariance matrix. It is now processed as follows:
$$S_{new}(k) = 0.5\,\big(S(k) + S(k)^T\big)$$
$$S_{Chol}(k) = \operatorname{chol}\big(S_{new}(k)\big)$$
The formula for $S_{new}(k)$ symmetrizes the innovation matrix, and the formula for $S_{Chol}(k)$ performs a Cholesky decomposition of the newly generated symmetrized innovation matrix. Numerically, this treatment is more stable during recursive operation than simply using the innovation matrix update.
The Kalman gain $K_k$ then becomes:
$$K_k = P^-(k)\,(\nabla h)^T\, S_{Chol}^{-1}(k)\,\big(S_{Chol}^{-1}(k)\big)^T$$
where $K_k$ is the Kalman gain and $P^-(k)$ is the covariance matrix.
The recursive update equations are:
$$\hat X^+(k) = \hat X^-(k) + K_k\,\big(z(k) - h(\hat X^-(k))\big)$$
$$P^+(k) = (I - K_k\,\nabla h)\, P^-(k)$$
In the above formulas, ∇h is the Jacobian of the observation model with respect to the state $\hat X^-(k)$, and $R_k$ is the error covariance of the observation noise, where the observation noise is assumed to be zero-mean Gaussian noise.
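A minimal sketch of this update with the symmetrize-then-Cholesky treatment; note that numpy.linalg.cholesky returns the lower-triangular factor L with S = L L^T, so S^{-1} = L^{-T} L^{-1}. The argument names are illustrative assumptions:

```python
import numpy as np

def ekf_update(x, P, z, z_pred, H, R):
    """AEKF-SLAM update step. z is the [range, bearing] measurement,
    z_pred its prediction h(x), H the observation Jacobian at x."""
    S = H @ P @ H.T + R                # innovation covariance S(k)
    S = 0.5 * (S + S.T)                # S_new(k): enforce symmetry
    L = np.linalg.cholesky(S)          # S_Chol(k), lower triangular
    L_inv = np.linalg.inv(L)
    K = P @ H.T @ L_inv.T @ L_inv      # gain, using S^-1 = L^-T L^-1
    x_new = x + K @ (z - z_pred)       # innovation-driven state update
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new
```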
(3) Data association. Judge the observation result: if the observed landmark is a new landmark, go to step (4); if the observed landmark is a known landmark, go to step (5).
(4) State augmentation
When the mobile robot observes a new landmark, after verification, the position state of this newly observed landmark needs to be added to the system state vector.
(5) Update the new system state vector and covariance matrix with the augmentation model. The position $(X_{new}, Y_{new})$ of the newly observed landmark may be expressed as:
$$\begin{bmatrix} X_{new} \\ Y_{new} \end{bmatrix} = g\big(\hat X(k), z(k)\big) = \begin{bmatrix} x + r\cos(\theta + \varphi) \\ y + r\sin(\theta + \varphi) \end{bmatrix}$$
In the above formula, $(X_{new}, Y_{new})$ denotes the newly observed landmark position and g(·) denotes the new-landmark model equation, with r and φ the observed range and bearing.
The system state vector $X^*(k)$ containing the new landmark is then:
$$X^*(k) = \begin{bmatrix} \hat X(k) \\ g\big(\hat X(k), z(k)\big) \end{bmatrix}$$
and the corresponding covariance matrix $P^*(k)$ is:
$$P^*(k) = \begin{bmatrix} P(k) & P(k)\,\nabla g_x^T \\ \nabla g_x\, P(k) & \nabla g_x\, P(k)\,\nabla g_x^T + \nabla g_z\, R_k\, \nabla g_z^T \end{bmatrix}$$
where $\nabla g_x$ and $\nabla g_z$ are the Jacobian determinants of g(·) with respect to the mobile robot pose vector and the observation vector.
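A minimal sketch of this state augmentation, assuming a range-bearing observation z = (r, φ); the Jacobians follow from g(·) above, and the function and variable names are illustrative assumptions:

```python
import numpy as np

def augment_state(x, P, z, R):
    """Append a newly observed landmark (range r, bearing phi) to the
    state vector and covariance, per the augmentation model above."""
    r, phi = z
    xr, yr, th = x[0], x[1], x[2]
    # g(): landmark position in the world frame.
    lm = np.array([xr + r * np.cos(th + phi),
                   yr + r * np.sin(th + phi)])

    n = x.size
    Gx = np.zeros((2, n))              # Jacobian of g w.r.t. the state
    Gx[:, 0:3] = [[1, 0, -r * np.sin(th + phi)],
                  [0, 1,  r * np.cos(th + phi)]]
    Gz = np.array([[np.cos(th + phi), -r * np.sin(th + phi)],
                   [np.sin(th + phi),  r * np.cos(th + phi)]])

    x_aug = np.concatenate([x, lm])
    P_aug = np.zeros((n + 2, n + 2))
    P_aug[:n, :n] = P
    P_aug[n:, :n] = Gx @ P
    P_aug[:n, n:] = P @ Gx.T
    P_aug[n:, n:] = Gx @ P @ Gx.T + Gz @ R @ Gz.T
    return x_aug, P_aug
```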
The above is only a specific embodiment of the present invention. Without departing from the spirit and essence of the invention, a person skilled in the art may make various corresponding equivalent changes in accordance with the present invention; any change made on the basis of the technical scheme according to the technical idea proposed by the invention shall fall within the protection scope of the appended claims of the present invention.

Claims (6)

1. A robot positioning system based on binocular vision and convolutional neural networks, characterized in that it comprises a mobile robot, the mobile robot being equipped with an inertial sensor module, a binocular vision module and a host computer, each connected to a sensor control module; the motion state of the mobile robot is obtained by the inertial sensor module; images of the surrounding environment are obtained by the binocular vision module, and stereo matching is performed using a convolutional neural network to obtain the positions of landmarks; the sensor control module controls the operation of the inertial sensor module and the binocular vision module and receives their data, then passes the acquired data to the host computer for data processing to obtain the position of the mobile robot.
2. The robot positioning system based on binocular vision and convolutional neural networks according to claim 1, characterized in that the binocular vision module consists of a binocular stereo camera, the inertial sensor module consists of an inertial sensor, the sensor control module consists of a single-chip microcomputer, and the host computer is a laptop.
3. A localization method for the robot positioning system based on binocular vision and convolutional neural networks according to claim 1, characterized by comprising the following steps:
(1) acquiring data by the inertial sensor and the binocular vision sensor: obtaining the acceleration and angular velocity of the mobile robot by the inertial sensor, and obtaining images around the mobile robot by the binocular vision sensor;
(2) performing stereo matching on the obtained images using a convolutional neural network to calculate the positions of landmarks;
(3) performing augmented extended Kalman filter processing according to the motion state of the mobile robot obtained by the inertial sensor and the obtained landmark positions, to obtain the position of the mobile robot;
(4) drawing a motion map according to the position changes of the mobile robot.
4. The localization method according to claim 3, characterized in that in step (2) stereo matching is performed on the obtained images by the convolutional neural network to obtain the positions of landmarks, the specific steps comprising:
(1) obtaining images of the environment around the robot using the binocular camera;
(2) rectifying the binocular images: re-projecting the images obtained by the binocular camera so that they satisfy parallel alignment;
(3) performing stereo matching on the acquired images with the convolutional neural network to obtain disparity values;
(4) calculating the positions of landmarks from the disparity values using the triangulation principle.
5. The localization method according to claim 4, characterized in that the specific method of performing stereo matching on the acquired images with the convolutional neural network in step (3) comprises:
(1) computing the matching cost of each candidate disparity for the binocular image pair using the convolutional neural network;
(2) aggregating the costs, i.e. aggregating the matching costs within a support window by summation and averaging to obtain the accumulated cost of a point on the image at a specific disparity;
(3) computing the disparity: within the disparity search range, selecting the point with the optimal accumulated cost as the corresponding match point, whose corresponding disparity is the required disparity;
(4) refining the disparity: optimizing the obtained disparity image to obtain the final disparity values.
6. The localization method according to claim 3, characterized in that the augmented extended Kalman filter processing according to the acquired data in step (3) specifically comprises:
(1) a prediction process: predicting the state matrix and covariance matrix of the robot at the current moment from the state matrix and covariance matrix of the robot at the previous moment;
(2) an observation process: observing the landmarks of the surrounding environment, and updating the state of the robot and the system error-correction parameters by the Kalman gain and the innovation matrix;
(3) data association: judging the relationship between the observed landmark and the known landmarks according to the observation result; if the observed landmark is a new landmark, going to step (4); if the observed landmark is a known landmark, going to step (5);
(4) a state augmentation process: if a new landmark is observed, adding the position of the newly observed landmark to the system state vector;
(5) an update process: if the observed landmark is a known landmark, updating the system observation matrix and covariance.
Application CN201811416964.9A, priority date 2018-11-26, filing date 2018-11-26: Robot positioning system and method based on binocular vision and convolutional neural network. Active; granted as CN109579825B (en).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811416964.9A CN109579825B (en) 2018-11-26 2018-11-26 Robot positioning system and method based on binocular vision and convolutional neural network


Publications (2)

Publication Number / Publication Date
CN109579825A (en): 2019-04-05
CN109579825B (en): 2022-08-19

Family

ID=65924557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811416964.9A Active CN109579825B (en) 2018-11-26 2018-11-26 Robot positioning system and method based on binocular vision and convolutional neural network

Country Status (1)

Country Link
CN (1) CN109579825B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102042835A (en) * 2010-11-05 2011-05-04 中国海洋大学 Autonomous underwater vehicle combined navigation system
CN102288176A (en) * 2011-07-07 2011-12-21 中国矿业大学(北京) Coal mine disaster relief robot navigation system based on information integration and method
CN103983263A (en) * 2014-05-30 2014-08-13 东南大学 Inertia/visual integrated navigation method adopting iterated extended Kalman filter and neural network
US20180012125A1 (en) * 2016-07-09 2018-01-11 Doxel, Inc. Monitoring construction of a structure
CN108648161A (en) * 2018-05-16 2018-10-12 江苏科技大学 The binocular vision obstacle detection system and method for asymmetric nuclear convolutional neural networks
CN108734143A (en) * 2018-05-28 2018-11-02 江苏迪伦智能科技有限公司 A kind of transmission line of electricity online test method based on binocular vision of crusing robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TOMÈ, D. et al.: "Deep Convolutional Neural Networks for pedestrian detection", Signal Processing: Image Communication *
HU, Ying et al.: "Road obstacle detection with convolutional neural network binocular vision", Computer Engineering and Design *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110118556A (en) * 2019-04-12 2019-08-13 浙江工业大学 A kind of robot localization method and device based on covariance mixing together SLAM
WO2020211655A1 (en) * 2019-04-17 2020-10-22 北京迈格威科技有限公司 Laser coarse registration method, device, mobile terminal and storage medium
CN110519582A (en) * 2019-08-16 2019-11-29 哈尔滨工程大学 A kind of crusing robot data collection system and collecting method
CN112904359A (en) * 2019-11-19 2021-06-04 沃尔沃汽车公司 Velocity estimation based on remote laser detection and measurement
CN112904359B (en) * 2019-11-19 2024-04-09 沃尔沃汽车公司 Speed estimation based on remote laser detection and measurement
CN111174781A (en) * 2019-12-31 2020-05-19 同济大学 Inertial navigation positioning method based on wearable device combined target detection
CN111174781B (en) * 2019-12-31 2022-03-04 同济大学 Inertial navigation positioning method based on wearable device combined target detection
CN111325794B (en) * 2020-02-23 2023-05-26 哈尔滨工业大学 Visual simultaneous localization and map construction method based on depth convolution self-encoder
CN111325794A (en) * 2020-02-23 2020-06-23 哈尔滨工业大学 Visual simultaneous localization and map construction method based on depth convolution self-encoder
CN111504331A (en) * 2020-04-29 2020-08-07 杭州环峻科技有限公司 Method and device for positioning panoramic intelligent vehicle from coarse to fine
CN111913484A (en) * 2020-07-30 2020-11-10 杭州电子科技大学 Path planning method of transformer substation inspection robot in unknown environment
CN111913484B (en) * 2020-07-30 2022-08-12 杭州电子科技大学 Path planning method of transformer substation inspection robot in unknown environment
CN114862955B (en) * 2022-07-07 2022-09-02 诺伯特智能装备(山东)有限公司 Rapid visual positioning method for industrial robot
CN114862955A (en) * 2022-07-07 2022-08-05 诺伯特智能装备(山东)有限公司 Rapid visual positioning method for industrial robot

Also Published As

Publication number Publication date
CN109579825B (en) 2022-08-19

Similar Documents

Publication Publication Date Title
CN109579825A (en) Robot positioning system and method based on binocular vision and convolutional neural networks
CN109993113B (en) Pose estimation method based on RGB-D and IMU information fusion
CN110125928B (en) Binocular inertial navigation SLAM system for performing feature matching based on front and rear frames
CN106679648B (en) Visual inertia combination SLAM method based on genetic algorithm
CN109029433A (en) Join outside the calibration of view-based access control model and inertial navigation fusion SLAM on a kind of mobile platform and the method for timing
CN107590827A (en) A kind of indoor mobile robot vision SLAM methods based on Kinect
CN109544636A (en) A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN108090958A (en) A kind of robot synchronously positions and map constructing method and system
CN111288989B (en) Visual positioning method for small unmanned aerial vehicle
CN113706626B (en) Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction
CN104281148A (en) Mobile robot autonomous navigation method based on binocular stereoscopic vision
CN110322507A (en) A method of based on depth re-projection and Space Consistency characteristic matching
CN113223045A (en) Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN111998862A (en) Dense binocular SLAM method based on BNN
CN114529576A (en) RGBD and IMU hybrid tracking registration method based on sliding window optimization
CN110749308B (en) SLAM-oriented outdoor positioning method using consumer-grade GPS and 2.5D building models
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
CN116128966A (en) Semantic positioning method based on environmental object
CN114234967B (en) Six-foot robot positioning method based on multi-sensor fusion
CN116147618A (en) Real-time state sensing method and system suitable for dynamic environment
Kang et al. A Visual SLAM Algorithm Based on Dynamic Feature Point Filtering
Hu et al. M³LVI: a multi-feature, multi-metric, multi-loop, LiDAR-visual-inertial odometry via smoothing and mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant