CN207351462U - Real-time binocular visual positioning system based on GPU-SIFT - Google Patents

Real-time binocular visual positioning system based on GPU-SIFT

Info

Publication number
CN207351462U
CN207351462U (application number CN201720318565.3U)
Authority
CN
China
Prior art keywords
camera
real
match point
video
visual positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201720318565.3U
Other languages
Chinese (zh)
Inventor
罗斌
张云
林国华
刘军
赵青
王伟
陈警
张良培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Chang'e Medical Anti-Aging Robot Co., Ltd.
Original Assignee
Wuhan Chang'e Medical Anti-Aging Robot Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Chang'e Medical Anti-Aging Robot Co., Ltd.
Priority to CN201720318565.3U priority Critical patent/CN207351462U/en
Application granted granted Critical
Publication of CN207351462U publication Critical patent/CN207351462U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The utility model relates to a real-time binocular visual positioning system based on GPU-SIFT, including: a three-dimensional image video acquisition module, which uses a parallel binocular camera to capture left-eye and right-eye images while a robot or mobile platform is moving; a corresponding match point acquisition module, which uses feature point matching to obtain corresponding match points between consecutive frames of the video shot during motion; a camera displacement computing module, which computes the displacement of the camera from the match points' coordinate changes in image space or from established three-dimensional coordinates; and a binocular visual positioning module, which, after obtaining the camera's position and rotation angle at each moment, performs real-time binocular visual positioning of the robot or mobile platform. The utility model uses GPU-SIFT to accelerate the SIFT feature matching process and combines it with binocular visual positioning, achieving real-time visual positioning of a robot or mobile platform with high positioning accuracy, good scalability, strong practicality, and strong environmental adaptability.

Description

Real-time binocular visual positioning system based on GPU-SIFT
Technical field
The utility model relates to the field of robot visual positioning and navigation technology, and specifically to a real-time binocular visual odometry system based on GPU-SIFT.
Background technology
With the continuous development of robotics and computer vision, cameras are increasingly used for robot visual positioning and navigation. Robot localization is mainly performed with wheel encoders (code discs), sonar, IMU, GPS, BeiDou, laser scanners, RGB-D cameras, or binocular cameras. A wheel encoder converts the number of motor rotations into wheel rotations and derives the robot's travelled distance from them, but this method has large errors on sand, on grass, or when the wheels slip, so positioning is inaccurate. Sonar positioning relies on ultrasonic emission and return-signal analysis to judge obstacles for positioning and navigation, but sonar resolution is low and its signal is noisy, so positioning is easily disturbed. IMU-based robot positioning suffers from accumulated error, and in long-duration, long-distance positioning and navigation it usually needs correction to remain accurate. Satellite positioning with GPS or BeiDou often has poor precision; obtaining high-accuracy satellite positioning is usually costly and hard to realise, and GPS or BeiDou positioning works only in environments with good outdoor satellite signal, being helpless indoors or where the satellite signal is poor. Laser scanners possess high-precision positioning ability in almost any environment, but they are expensive, produce large data volumes, require complex processing, and consume considerable power. Single-line laser positioning is more common at present, but its application environment is limited: it suits only planar environments and cannot be used on undulating terrain. RGB-D cameras can capture obstacle and image information, but because infrared laser emission intensity is limited by the environment, they are basically restricted to indoor use and their effective range is limited. A single ordinary camera can only achieve relative positioning, and its positioning accuracy is severely restricted; a parallel binocular camera, however, can achieve absolute positioning, with accuracy that in suitable conditions can reach that of laser positioning, and it can be used under ordinary ambient illumination. Nevertheless, binocular vision positioning is computationally complex and intensive, making real-time positioning requirements hard to meet; to reach a real-time visual positioning effect, relatively simple image processing algorithms are typically used, especially in visual odometry.
Visual odometry uses only the visual information obtained by a camera mounted on a moving vehicle or robot to localize that vehicle or robot: during operation, the on-board camera photographs the surrounding scene, and the vehicle's or robot's operating state and environment information are extracted from the video to position the moving body in real time. In visual odometry, most of the time is consumed by the image matching stage, and within image matching roughly 80% of the time is spent on feature extraction and feature description. To reduce the time cost, real-time visual odometry systems essentially all use simple local features and descriptors; Harris, FAST, CenSurE, and simple edge feature points are the common choices. These simple feature descriptions, however, cannot achieve scale and rotation invariance, yet scale and rotation changes commonly occur while the camera moves, so such simple features struggle to produce accurate image matches and hence higher-precision visual positioning. SIFT features were designed precisely to solve scale and rotation invariance; they cope well with scale and rotation changes in images, enable accurate image matching, and yield higher-precision visual positioning. However, SIFT feature extraction and description are time-consuming, making real-time image matching difficult. Using the GPU to accelerate SIFT feature extraction, description, and matching (GPU-SIFT) significantly speeds up the SIFT matching process and achieves real-time SIFT feature matching. The utility model combines GPU-SIFT with binocular visual positioning to realise a real-time visual odometry system for real-time positioning and navigation of robots.
Utility model content
To overcome the above drawbacks in the prior art, the utility model provides a real-time binocular visual positioning system based on GPU-SIFT, which accelerates the SIFT feature matching process to real-time matching speed and, combined with binocular visual positioning, realises a real-time visual odometry system for real-time visual positioning and navigation of a robot or mobile platform.
To solve the above problems, the real-time binocular visual positioning system based on GPU-SIFT provided by the utility model includes the following hardware modules:
A three-dimensional image video acquisition module, which uses a parallel binocular camera to capture left-eye and right-eye images while the robot or mobile platform is moving;
A corresponding match point acquisition module, which uses feature point matching to obtain corresponding match points between consecutive frames of the video shot during motion;
A camera displacement computing module, which computes the displacement of the camera from the match points' coordinate changes in image space or from established three-dimensional coordinates;
A binocular visual positioning module, which, after obtaining the camera's position and rotation angle at each moment, applies Kalman filtering to obtain the camera's route of travel over the whole process, for real-time binocular visual positioning of the robot or mobile platform.
The output terminal of the three-dimensional image video acquisition module connects to the input terminal of the corresponding match point acquisition module; the output terminal of the corresponding match point acquisition module connects to the input terminal of the camera displacement computing module; and the output terminal of the camera displacement computing module connects to the input terminal of the binocular visual positioning module.
In the above technical solution, the corresponding match point acquisition module includes a hardware module in which the GPU-SIFT feature matching algorithm is solidified.
The corresponding programs are solidified in the above hardware modules to perform the described functions; these standalone hardware modules can be purchased commercially in bulk and connected according to the conception of the utility model to fulfil its function and purpose.
Compared with the prior art, the utility model has the following beneficial effects and advantages:
The real-time binocular visual positioning system based on GPU-SIFT proposed by the utility model uses GPU-SIFT for SIFT feature matching, reaching real-time matching speed, and combines it with binocular visual positioning to realise real-time visual positioning of a robot or mobile platform, obtaining high positioning accuracy, good scalability, strong practicality, and strong environmental adaptability.
Brief description of the drawings
Fig. 1 is a schematic diagram of the image-space auxiliary coordinate system in the utility model.
Fig. 2 is a diagram of the triangulation principle in the utility model.
Numbering in the figures: 1, left camera; 2, right camera.
Embodiment
The utility model is described in further detail below with reference to the drawings and specific embodiments:
In this embodiment, the real-time binocular visual positioning system based on GPU-SIFT provided by the utility model includes the following hardware modules:
A three-dimensional image video acquisition module, which uses a parallel binocular camera to capture left-eye and right-eye images while the robot or mobile platform is moving;
A corresponding match point acquisition module, which uses feature point matching to obtain corresponding match points between consecutive frames of the video shot during motion;
A camera displacement computing module, which computes the displacement of the camera from the match points' coordinate changes in image space or from established three-dimensional coordinates;
A binocular visual positioning module, which, after obtaining the camera's position and rotation angle at each moment, applies Kalman filtering to obtain the camera's route of travel over the whole process, for real-time binocular visual positioning of the robot or mobile platform.
The output terminal of the three-dimensional image video acquisition module connects to the input terminal of the corresponding match point acquisition module; the output terminal of the corresponding match point acquisition module connects to the input terminal of the camera displacement computing module; and the output terminal of the camera displacement computing module connects to the input terminal of the binocular visual positioning module.
The corresponding match point acquisition module includes a hardware module in which the GPU-SIFT feature matching algorithm is solidified.
The corresponding programs are solidified in the above hardware modules to perform the described functions; these standalone hardware modules can be purchased commercially in bulk and connected according to the conception of the utility model to fulfil its function and purpose.
The corresponding match point acquisition module performs the following functions:
Extract the SIFT features of the four images of the two binocular frames (the left and right images of each frame) and generate SIFT feature descriptors; match the SIFT features of the first frame's left camera image and right camera image to obtain the stereo match points (PL1, PR1); match the SIFT features of the second frame's left camera image and right camera image to obtain the stereo match points (PL2, PR2); match the SIFT features of the first frame's left camera image and the second frame's left camera image to obtain the temporal match points (LL1, LL2); find the feature points that appear both in the left camera temporal match points LL1 and in the first frame's left camera stereo match points PL1, and take them as the final match points of the first frame's left camera image; obtain the match points of the second frame's left camera image in the same way; from the left camera match points so obtained, find the corresponding right camera match points through the stereo match point pairs, and likewise find the second frame's right camera match points. This completes the matching of the two frames and four images.
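The four-image consistency chaining described above can be sketched as follows — a minimal Python illustration, not the solidified hardware logic, assuming matches are represented as dictionaries from keypoint index to keypoint index (all function and variable names are illustrative):

```python
def chain_matches(stereo1, stereo2, temporal):
    """Keep only points matched consistently across all four images.

    stereo1:  frame-1 matches, left-1 keypoint index -> right-1 keypoint index
    stereo2:  frame-2 matches, left-2 keypoint index -> right-2 keypoint index
    temporal: frame-to-frame matches, left-1 index -> left-2 index
    Returns a list of (left1, right1, left2, right2) index quadruples.
    """
    quads = []
    for l1, l2 in temporal.items():
        # A point survives only if it has a stereo match in both frames
        # and a temporal match between the two left images.
        if l1 in stereo1 and l2 in stereo2:
            quads.append((l1, stereo1[l1], l2, stereo2[l2]))
    return quads
```

Only points that survive both stereo matchings and the frame-to-frame matching are kept, which mirrors the filtering the module performs before triangulation.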
The camera displacement computing module performs the following functions:
Using an image-space auxiliary coordinate system, obtain the corresponding match points in the four images of the two consecutive frames and compute, by triangulation, the three-dimensional coordinate point Pi of each synchronised match point in the image-space auxiliary coordinate system. The image-space auxiliary coordinate system S-XYZ, shown in Fig. 1, takes the centre point of the left camera's back end surface as the coordinate origin; the X axis lies on the line connecting the centre points of the back end surfaces of the left and right cameras, and the Z axis lies on the central axis of the left camera. The triangulation principle is shown in Fig. 2; from the similarity of triangles S1 and S2, and of S1' and S2', in Fig. 2, the following formulas are obtained:

Z = f·d / (xl − xr),  X = xl·Z / f,  Y = yl·Z / f

where (xl, yl) and (xr, yr) are the coordinates of the match point in the left and right images of the same frame relative to the image centre, d is the baseline of the binocular camera, and f is the camera focal length;
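A minimal sketch of the triangulation step, assuming an ideal parallel stereo pair and coordinates already expressed relative to the image centres (the function and parameter names are illustrative, not part of the utility model):

```python
def triangulate(xl, yl, xr, yr, d, f):
    """Recover (X, Y, Z) in the image-space auxiliary frame from one stereo match.

    (xl, yl), (xr, yr): match-point coordinates relative to the left/right
    image centres, in pixels; d: baseline; f: focal length in pixel units.
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or mismatched pair")
    Z = f * d / disparity  # depth along the left camera's central axis
    X = xl * Z / f         # lateral offset in the baseline direction
    Y = yl * Z / f         # vertical offset
    return X, Y, Z
```

With a 0.1 m baseline and 500 px focal length, a 5 px disparity yields a depth of 10 m, matching the formula Z = f·d/(xl − xr).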
Substitute the obtained three-dimensional coordinates Pi into the motion equation Pi = R·Pi′ + T and solve it to obtain the degree-of-freedom parameters of the left and right cameras, namely T(Tx, Ty, Tz) and R(Rx, Ry, Rz);
Using the RANSAC method, randomly select three coordinate points Pi each time to solve for R and T, then substitute all points into the error formula E(R, T) = Σi ‖Pi − (R·Pi′ + T)‖² to compute the error value E(R, T);
Count the number of points whose E(R, T) value is below a certain threshold; after several rounds of selection, take the group with the largest number of points below the threshold as the final calculation result. This largely avoids interference from points with large matching errors and improves the accuracy of the solution;
Substitute the final calculation result into the motion equation Pi = R·Pi′ + T to obtain the camera's motion equation and thereby estimate the camera's displacement.
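The RANSAC consensus step described above can be sketched as follows. Hypothesis generation — solving R and T from three randomly chosen point pairs — is omitted; the sketch only shows how candidate (R, T) pairs are scored by inlier count under the error formula, and all names are illustrative:

```python
def apply_motion(R, T, p):
    """Apply the motion equation P = R*P' + T (R: 3x3 nested list; T, p: length-3)."""
    return [sum(R[i][j] * p[j] for j in range(3)) + T[i] for i in range(3)]

def count_inliers(R, T, pts_prev, pts_curr, thresh):
    """Count correspondences whose transfer error |Pi - (R*Pi' + T)| is below thresh."""
    n = 0
    for p_prev, p_curr in zip(pts_prev, pts_curr):
        q = apply_motion(R, T, p_prev)
        err = sum((a - b) ** 2 for a, b in zip(p_curr, q)) ** 0.5
        if err < thresh:
            n += 1
    return n

def best_hypothesis(hypotheses, pts_prev, pts_curr, thresh):
    """Consensus step: keep the (R, T) candidate with the most inliers."""
    return max(hypotheses,
               key=lambda h: count_inliers(h[0], h[1], pts_prev, pts_curr, thresh))
```

Scoring every candidate against all points and keeping the largest consensus set is what suppresses the influence of grossly mismatched points.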
The binocular visual positioning module performs the following functions:
Taking each plotted point as a vector point whose direction is the cumulative sum of the rotation angles of the preceding frames, each subsequent point is obtained by translating by T along the current heading, and the rotation is updated by multiplying the previous frame's direction by the rotation matrix R. The points of the path are recovered according to the formula

Pi = Pi−1 + Ti, 2 ≤ i ≤ N,

where Po, the position coordinate of the camera in the XOZ plane at the initial moment, is set to (0, 0); Pi is the position coordinate of the camera in the XOZ plane at moment i; and Ti is the translation distance along the current heading at moment i.
Because of factors such as limited feature-point extraction precision, vehicle shake during travel, and changes in scene lighting, the motion estimation results contain error, and occasionally the computed R and T contain accidental jump errors that make the final path inaccurate or discontinuous. Therefore, after the motion between each pair of adjacent frames has been estimated, the motion estimation results generally need to be smoothed. The constraint usually applied is the continuity of the rotation and translation velocity, or acceleration, during vehicle motion: constrained estimates with large errors are replaced by the neighbourhood mean or median, while more sophisticated approaches use Kalman filtering or extended Kalman filtering for smoothing, so that the resulting path is continuous and smooth. Here, to minimise the time consumption of the whole process, the simpler former filtering approach is used: R and T values with large errors are rejected and replaced with the neighbourhood median, and the smoothing effect is satisfactory.
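The path accumulation and the simple neighbourhood-median smoothing chosen here can be sketched as follows. This is a hedged illustration: the exact smoothing window and rejection threshold are not specified in the utility model, so those below are assumptions, and all names are illustrative:

```python
import math
import statistics

def integrate_path(steps):
    """Accumulate per-frame motion into camera positions in the XOZ plane.

    steps: sequence of (heading_change_rad, forward_distance) per frame.
    Returns the list of (x, z) positions, starting from Po = (0, 0).
    """
    x, z, heading = 0.0, 0.0, 0.0
    path = [(x, z)]
    for dtheta, dist in steps:
        heading += dtheta              # cumulative sum of rotation angles
        x += dist * math.sin(heading)  # translate along the current heading
        z += dist * math.cos(heading)
        path.append((x, z))
    return path

def reject_jumps(values, window=1, factor=3.0):
    """Replace estimates that jump far from the neighbourhood median
    with that median (the simple alternative to Kalman filtering)."""
    out = list(values)
    for i in range(window, len(values) - window):
        nbhd = values[i - window:i] + values[i + 1:i + 1 + window]
        med = statistics.median(nbhd)
        if abs(values[i] - med) > factor * max(abs(med), 1e-9):
            out[i] = med
    return out
```

Rejecting an outlier translation before integration keeps a single bad (R, T) estimate from displacing every later point of the recovered path.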
Finally, it is noted that the above embodiments are merely intended to describe the technical solution of the utility model, not to limit it. Although the utility model has been described in detail with reference to preferred embodiments, those of ordinary skill in the art will understand that the technical solution of the utility model may be modified or equivalently replaced without departing from the purpose and scope of the technical solution of the utility model, and all such modifications should be covered by the claims of the utility model.

Claims (2)

1. A real-time binocular visual positioning system based on GPU-SIFT, characterised in that it includes:
a three-dimensional image video acquisition module, which uses a parallel binocular camera to capture left-eye and right-eye images while a robot or mobile platform is moving;
a corresponding match point acquisition module, which uses feature point matching to obtain corresponding match points between consecutive frames of the video shot during motion;
a camera displacement computing module, which computes the displacement of the camera from the match points' coordinate changes in image space or from established three-dimensional coordinates;
a binocular visual positioning module, which, after obtaining the camera's position and rotation angle at each moment, applies Kalman filtering to obtain the camera's route of travel over the whole process, for real-time binocular visual positioning of the robot or mobile platform;
wherein the output terminal of the three-dimensional image video acquisition module connects to the input terminal of the corresponding match point acquisition module, the output terminal of the corresponding match point acquisition module connects to the input terminal of the camera displacement computing module, and the output terminal of the camera displacement computing module connects to the input terminal of the binocular visual positioning module.
2. The real-time binocular visual positioning system based on GPU-SIFT according to claim 1, wherein the corresponding match point acquisition module includes a hardware module in which the GPU-SIFT feature matching algorithm is solidified.
CN201720318565.3U 2017-03-29 2017-03-29 Real-time binocular visual positioning system based on GPU-SIFT Active CN207351462U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201720318565.3U CN207351462U (en) 2017-03-29 2017-03-29 Real-time binocular visual positioning system based on GPU-SIFT

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201720318565.3U CN207351462U (en) 2017-03-29 2017-03-29 Real-time binocular visual positioning system based on GPU-SIFT

Publications (1)

Publication Number Publication Date
CN207351462U true CN207351462U (en) 2018-05-11

Family

ID=62361797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201720318565.3U Active CN207351462U (en) 2017-03-29 2017-03-29 Real-time binocular visual positioning system based on GPU-SIFT

Country Status (1)

Country Link
CN (1) CN207351462U (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109084778A (en) * 2018-09-19 2018-12-25 大连维德智能视觉技术创新中心有限公司 A kind of navigation system and air navigation aid based on binocular vision and pathfinding edge technology
CN109084778B (en) * 2018-09-19 2022-11-25 大连维德智能视觉技术创新中心有限公司 Navigation system and navigation method based on binocular vision and road edge finding technology
CN111238477A (en) * 2019-03-25 2020-06-05 武汉珈鹰智能科技有限公司 Method and device for positioning unmanned aerial vehicle in chimney

Similar Documents

Publication Publication Date Title
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
Heng et al. Project autovision: Localization and 3d scene perception for an autonomous vehicle with a multi-camera system
CN107945220B (en) Binocular vision-based reconstruction method
CN106931962A (en) A kind of real-time binocular visual positioning method based on GPU SIFT
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
CN105300403B (en) A kind of vehicle mileage calculating method based on binocular vision
CN108398139B (en) Dynamic environment vision mileometer method fusing fisheye image and depth image
CN112304307A (en) Positioning method and device based on multi-sensor fusion and storage medium
CN107590827A (en) A kind of indoor mobile robot vision SLAM methods based on Kinect
CN107292965A (en) A kind of mutual occlusion processing method based on depth image data stream
CN103325108A (en) Method for designing monocular vision odometer with light stream method and feature point matching method integrated
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
CN104318561A (en) Method for detecting vehicle motion information based on integration of binocular stereoscopic vision and optical flow
CN110070598B (en) Mobile terminal for 3D scanning reconstruction and 3D scanning reconstruction method thereof
CN109100730A (en) A kind of fast run-up drawing method of more vehicle collaborations
Honegger et al. Embedded real-time multi-baseline stereo
CN113781562B (en) Lane line virtual-real registration and self-vehicle positioning method based on road model
CN104281148A (en) Mobile robot autonomous navigation method based on binocular stereoscopic vision
WO2024045632A1 (en) Binocular vision and imu-based underwater scene three-dimensional reconstruction method, and device
CN113223045A (en) Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation
CN207351462U (en) Real-time binocular visual positioning system based on GPU-SIFT
CN114638897B (en) Multi-camera system initialization method, system and device based on non-overlapping views
CN113450334B (en) Overwater target detection method, electronic equipment and storage medium
Yang et al. Vision-based intelligent vehicle road recognition and obstacle detection method
CN113503873A (en) Multi-sensor fusion visual positioning method

Legal Events

Date Code Title Description
GR01 Patent grant