CN106931962A - A kind of real-time binocular visual positioning method based on GPU SIFT - Google Patents

A kind of real-time binocular visual positioning method based on GPU SIFT Download PDF

Info

Publication number
CN106931962A
CN106931962A
Authority
CN
China
Prior art keywords
sub
point
sift
frame
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710197839.2A
Other languages
Chinese (zh)
Inventor
罗斌
张云
林国华
刘军
赵青
王伟
陈警
张良培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Chang'e Medical Anti-Aging Robot Ltd By Share Ltd
Wuhan University WHU
Original Assignee
Wuhan Chang'e Medical Anti-Aging Robot Ltd By Share Ltd
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Chang'e Medical Anti-Aging Robot Ltd By Share Ltd and Wuhan University WHU
Priority to CN201710197839.2A priority Critical patent/CN106931962A/en
Publication of CN106931962A publication Critical patent/CN106931962A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The present invention relates to a real-time binocular visual positioning method based on GPU-SIFT, comprising the following steps: Step 1, use a parallel binocular camera to capture stereo video of the left-eye and right-eye images while the robot or mobile platform moves; Step 2, use feature-point matching to obtain corresponding match points between consecutive frames of the video captured during motion; Step 3, solve the motion equation from the image-space coordinate changes of the match points, or from their reconstructed three-dimensional coordinates, to estimate the camera displacement; Step 4, after obtaining the camera position and rotation angle at each moment, combine Kalman filtering to recover the camera trajectory over the whole process, thereby achieving real-time binocular visual positioning of the robot or mobile platform. The present invention uses GPU-SIFT to accelerate the SIFT feature matching process and, combined with binocular visual positioning, achieves real-time visual positioning of a robot or mobile platform with high positioning accuracy, good scalability, strong practicality and strong environmental adaptability.

Description

A real-time binocular visual positioning method based on GPU-SIFT
Technical field:
The present invention relates to the field of robot visual localization and navigation, and specifically to a real-time binocular visual odometry system based on GPU-SIFT.
Background technology:
With the continuing development of robotics and computer vision, cameras are increasingly used for robot visual positioning and navigation. Robot localization methods mainly include wheel encoders (code discs), sonar, IMU, GPS, BeiDou, laser scanners, RGB-D cameras and binocular-camera vision positioning. A wheel encoder converts the number of motor revolutions into wheel revolutions and derives the robot's travelled distance from them, but this positioning method suffers large errors on sand, grass or when the wheels slip, making positioning inaccurate. Sonar positioning judges obstacles for positioning and navigation by analysing the emitted and returned signals of ultrasonic sensors, but sonar has low resolution and noisy signals, which easily interfere with positioning. IMU-based robot positioning accumulates error, so long-duration, long-distance positioning and navigation usually require correction to stay accurate. Satellite positioning with GPS or BeiDou often has poor precision; high-accuracy positioning is expensive and hard to obtain, and GPS or BeiDou positioning is only suitable for outdoor environments with good satellite signals — it is helpless indoors or where the satellite signal is poor. Although laser scanners offer high-precision positioning in almost any environment, they are costly, produce large data volumes, require complex processing and consume considerable power. Single-line laser positioning is more commonly used at present, but its applicable environments are limited: it works only in planar environments and cannot be used on undulating terrain. Although RGB-D cameras can acquire obstacle and image information for positioning, the infrared laser emission power is limited by the environment, so they are essentially restricted to indoor use with limited range. A single ordinary camera can only achieve relative positioning, and its positioning accuracy is severely limited, whereas a parallel binocular camera can perform absolute positioning, in some circumstances reaching the accuracy of laser positioning, and can be used under ordinary lighting conditions. However, binocular-camera vision positioning involves high computational complexity and a large computational load, making real-time positioning requirements hard to meet; to reach real-time visual positioning, simpler image-processing algorithms are therefore often used, especially in visual odometry.
Visual odometry localizes a vehicle or robot using only the visual information obtained by a camera mounted on it: from the images or video of the surrounding scene captured by the on-board camera during motion, the motion state and environment information of the body are extracted, positioning the moving body or robot in real time. To make visual odometry real-time, note that most of the time is consumed in the image matching part, and within image matching about 80% of the time is spent on feature extraction and feature description. Therefore, to reduce the time cost of visual odometry, the real-time positioning function is almost always realized with simple local features and descriptors. Common choices are Harris, FAST, CenSurE and simple edge feature points, but such simple descriptors can hardly achieve scale and rotation invariance, while scale and rotation changes are pervasive during camera motion; consequently these simple features struggle to achieve accurate image matching and hence higher-precision visual positioning. SIFT features, by contrast, are designed precisely for scale and rotation invariance: they cope well with scale and rotation changes of the image, enable accurate image matching, and yield higher-precision visual positioning. However, SIFT feature extraction and description are too time-consuming for real-time image matching. GPU-SIFT, which accelerates SIFT feature extraction, description and matching on a GPU, speeds up the SIFT matching process significantly and achieves real-time SIFT feature matching. The present invention uses GPU-SIFT together with binocular visual positioning to realize a real-time visual odometry system for real-time robot positioning and navigation.
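The matching stage described above — the part GPU-SIFT accelerates — boils down to nearest-neighbour search over SIFT descriptors with Lowe's ratio test. As a rough illustration (a plain CPU NumPy sketch, not the patent's GPU implementation; the function name and the 0.8 ratio are illustrative choices), the test can be written as:

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Match SIFT-style descriptors with Lowe's ratio test.

    For each descriptor in desc1, find its two nearest neighbours in
    desc2 (Euclidean distance) and accept the match only when the
    closest is sufficiently better than the second closest.
    Returns a list of (index_in_desc1, index_in_desc2) pairs.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every candidate
        j1, j2 = np.argsort(dists)[:2]              # two nearest neighbours
        if dists[j1] < ratio * dists[j2]:           # ratio test: reject ambiguous matches
            matches.append((i, int(j1)))
    return matches
```

On real 128-dimensional SIFT descriptors this brute-force distance computation dominates the runtime, which is exactly why moving extraction, description and matching onto the GPU pays off.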
The content of the invention:
To overcome the above drawbacks of the prior art, the present invention provides a real-time binocular visual positioning method based on GPU-SIFT, which accelerates the SIFT feature matching process to real-time matching speed and, combined with binocular visual positioning, realizes a real-time visual odometry system for real-time visual positioning and navigation of robots or mobile platforms.
To solve the above problems, the real-time binocular visual positioning method based on GPU-SIFT proposed by the present invention comprises the following steps:
Step 1: use a parallel binocular camera to capture stereo video of the left-eye and right-eye images while the robot or mobile platform moves;
Step 2: use the method of feature-point matching to obtain corresponding match points between consecutive frames of the video captured during motion;
Step 3: solve the motion equation from the image-space coordinate changes of the match points, or from their reconstructed three-dimensional coordinates, to estimate the displacement of the camera;
Step 4: after obtaining the camera position and rotation angle at each moment, combine Kalman filtering to recover the camera trajectory over the whole process, thereby achieving real-time binocular visual positioning of the robot or mobile platform.
In the above technical solution, the feature-point matching in Step 2 uses the GPU-SIFT feature matching algorithm, where GPU-SIFT refers to the scale-invariant feature transform accelerated by a graphics processing unit.
In the above technical solution, the feature-point matching in Step 2 specifically comprises the following sub-steps:
Sub-step S21: extract the SIFT features of the four left and right images of the two binocular frames, and generate SIFT descriptors for the SIFT features;
Sub-step S22: match the SIFT features of the first-frame left camera image and right camera image to obtain the stereo match points (PL1, PR1);
Sub-step S23: match the SIFT features of the second-frame left camera image and right camera image to obtain the stereo match points (PL2, PR2);
Sub-step S24: match the SIFT features of the first-frame left camera image and the second-frame left camera image to obtain (LL1, LL2);
Sub-step S25: find the feature points that are identical between the first-frame left camera image match points LL1 obtained in sub-step S24 and the first-frame left camera image match points PL1 obtained in sub-step S22; these serve as the final match points of the first-frame left camera image. The match points of the second-frame left camera image are obtained in the same way;
Sub-step S26: using the left camera image match points obtained in sub-step S25, find the corresponding right camera image match points through the match pairs of sub-step S22; the second-frame right camera image match points are found in the same way, which completes the matching of the four images of the two frames.
In the above technical solution, Step 3 specifically comprises the following sub-steps:
Sub-step S31: establish an image-space auxiliary coordinate system; from the corresponding match points obtained in the four images of the two consecutive frames, compute by the triangulation formulas the three-dimensional coordinate point Pi of each corresponding match point at the same moment in the image-space auxiliary coordinate system;
Sub-step S32: substitute the obtained three-dimensional coordinates Pi into the motion equation Pi = R·Pi′ + T and solve it; the degree-of-freedom parameters of the left and right cameras are T(Tx, Ty, Tz) and R(Rx, Ry, Rz) respectively;
Sub-step S33: using the RANSAC method, randomly select three coordinate points Pi each time, fit R and T, and substitute all points into the error formula E(R, T) = Σi ‖Pi − (R·Pi′ + T)‖² for evaluation;
Sub-step S34: count the number of points whose error value E(R, T) is below a given threshold; after several random selections, take the group of results with the most such points (inliers) as the final calculation result;
Sub-step S35: substitute the final calculation result into the motion equation Pi = R·Pi′ + T to obtain the motion equation of the camera and thereby estimate the displacement of the camera.
In the above technical solution, Step 4 specifically comprises: treat each plotted point as a vector point whose direction is the accumulation of the rotation angles of the previous frames; the next point is obtained by translating the current point by T along the current direction, which determines its coordinates, and the rotation angle is obtained by multiplying the direction of the previous frame by the rotation matrix R. The path is recovered point by point according to the formula P0 = (0, 0), Pi = Pi−1 + Ri·Ti, where Po is the position coordinate of the camera in the XOZ plane at the initial moment, set to (0, 0); Pi is the position coordinate of the camera in the XOZ plane at moment i; and Ti is the translation distance at moment i along the current direction.
Compared with the prior art, the present invention has the following features and advantages:
The real-time binocular visual positioning method based on GPU-SIFT proposed by the present invention uses GPU-SIFT to accelerate the SIFT feature matching process to real-time matching speed and, combined with binocular visual positioning, achieves real-time visual positioning of a robot or mobile platform with high positioning accuracy, good scalability, strong practicality and strong environmental adaptability.
Brief description of the drawings
Fig. 1 is a schematic diagram of the image-space auxiliary coordinate system in the present invention.
Fig. 2 is a schematic diagram of the triangulation principle in the present invention.
Reference numerals in the figures: 1, left camera; 2, right camera.
Specific embodiment
The present invention is described in further detail below with reference to the drawings and a specific embodiment:
In this embodiment, the real-time binocular visual positioning method based on GPU-SIFT proposed by the present invention comprises the following steps:
Step 1: use a parallel binocular camera to capture stereo video of the left-eye and right-eye images while the robot or mobile platform moves;
Step 2: use the GPU-SIFT feature matching algorithm to obtain corresponding match points between consecutive frames of the video captured during motion;
Step 3: solve the motion equation from the image-space coordinate changes of the match points, or from their reconstructed three-dimensional coordinates, to estimate the displacement of the camera;
Step 4: after obtaining the camera position and rotation angle at each moment, combine Kalman filtering to recover the camera trajectory over the whole process, thereby achieving real-time binocular visual positioning of the robot or mobile platform.
The feature-point matching in Step 2 specifically comprises the following sub-steps:
Sub-step S21: extract the SIFT features of the four left and right images of the two binocular frames, and generate SIFT descriptors for the SIFT features;
Sub-step S22: match the SIFT features of the first-frame left camera image and right camera image to obtain the stereo match points (PL1, PR1);
Sub-step S23: match the SIFT features of the second-frame left camera image and right camera image to obtain the stereo match points (PL2, PR2);
Sub-step S24: match the SIFT features of the first-frame left camera image and the second-frame left camera image to obtain (LL1, LL2);
Sub-step S25: find the feature points that are identical between the first-frame left camera image match points LL1 obtained in sub-step S24 and the first-frame left camera image match points PL1 obtained in sub-step S22; these serve as the final match points of the first-frame left camera image. The match points of the second-frame left camera image are obtained in the same way;
Sub-step S26: using the left camera image match points obtained in sub-step S25, find the corresponding right camera image match points through the match pairs of sub-step S22; the second-frame right camera image match points are found in the same way, which completes the matching of the four images of the two frames.
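The six sub-steps above amount to intersecting three pair-wise match sets so that only features visible in all four images survive. A minimal sketch of that bookkeeping (treating each match set as a dict from feature index to feature index; the function name and data layout are illustrative, not from the patent):

```python
def fuse_matches(stereo1, stereo2, temporal):
    """Combine the three pair-wise match sets of sub-steps S22-S24.

    stereo1:  {left1_idx: right1_idx}  - frame-1 stereo matches (PL1, PR1)
    stereo2:  {left2_idx: right2_idx}  - frame-2 stereo matches (PL2, PR2)
    temporal: {left1_idx: left2_idx}   - left-camera frame-1 -> frame-2 (LL1, LL2)

    Returns quadruples (l1, r1, l2, r2): features seen in all four images.
    """
    quads = []
    for l1, l2 in temporal.items():
        # S25: keep only left-image points that also appear in the stereo sets
        if l1 in stereo1 and l2 in stereo2:
            # S26: pull in the corresponding right-image match points
            quads.append((l1, stereo1[l1], l2, stereo2[l2]))
    return quads
```

Each returned quadruple is one feature tracked across both cameras and both frames, which is exactly what the triangulation of Step 3 consumes.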
Step 3 specifically comprises the following sub-steps:
Sub-step S31: use an image-space auxiliary coordinate system; from the corresponding match points obtained in the four images of the two consecutive frames, compute by triangulation the three-dimensional coordinate point Pi of each corresponding match point at the same moment in the image-space auxiliary coordinate system. The image-space auxiliary coordinate system S-XYZ used here is shown in Fig. 1: its coordinate origin is the centre point of the rear end surface of the left camera, the X axis lies on the line joining the two rear-end-surface centre points of the left and right cameras, and the Z axis lies on the central axis of the left camera. The principle of triangulation is shown in Fig. 2; from the similarity of triangle S1 to S2 and of S1′ to S2′ in Fig. 2, the following computing formulas are obtained:
Z = f·d / (xl − xr), X = xl·Z / f, Y = yl·Z / f
where (xl, yl) and (xr, yr) are the coordinates of the same-frame left and right image match points relative to the image centre, d is the baseline of the binocular camera, and f is the camera focal length;
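Under the stated variables, the triangulation step is the standard similar-triangles depth recovery for a rectified parallel rig. It can be sketched as follows (variable names follow the text; the helper function itself is illustrative):

```python
def triangulate(xl, yl, xr, d, f):
    """Stereo triangulation for a rectified parallel binocular rig.

    (xl, yl) and (xr, yr) are matched image coordinates relative to the
    image centre (yr equals yl on a rectified rig, so it is not needed);
    d is the baseline between the cameras; f is the focal length, in the
    same units as the image coordinates.
    Returns (X, Y, Z) in the image-space auxiliary frame S-XYZ.
    """
    disparity = xl - xr        # horizontal shift of the point between the two views
    Z = f * d / disparity      # depth from similar triangles
    X = xl * Z / f             # back-project the left-image x coordinate
    Y = yl * Z / f             # back-project the left-image y coordinate
    return X, Y, Z
```

For example, with f = 500 (pixels), baseline d = 0.1 and a disparity of 10 pixels, the point lies at depth Z = 500 × 0.1 / 10 = 5 in baseline units.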
Sub-step S32: substitute the obtained three-dimensional coordinates Pi into the motion equation Pi = R·Pi′ + T and solve it; the degree-of-freedom parameters of the left and right cameras are T(Tx, Ty, Tz) and R(Rx, Ry, Rz) respectively;
Sub-step S33: using the RANSAC method, randomly select three coordinate points Pi each time, fit R and T, and substitute all points into the error formula E(R, T) = Σi ‖Pi − (R·Pi′ + T)‖² to calculate the error value E(R, T);
Sub-step S34: count the number of points whose error value E(R, T) is below a given threshold; after several random selections, take the group of results with the most such points (inliers) as the final calculation result, which largely avoids the interference of points with large matching errors and improves the calculation accuracy;
Sub-step S35: substitute the final calculation result into the motion equation Pi = R·Pi′ + T to obtain the motion equation of the camera and thereby estimate the displacement of the camera.
Step 4 specifically comprises: treat each plotted point as a vector point whose direction is the accumulation of the rotation angles of the previous frames; the next point is obtained by translating the current point by T along the current direction, which determines its coordinates, and the rotation angle is obtained by multiplying the direction of the previous frame by the rotation matrix R. The path is recovered point by point according to the formula P0 = (0, 0), Pi = Pi−1 + Ri·Ti, where Po is the position coordinate of the camera in the XOZ plane at the initial moment, set to (0, 0); Pi is the position coordinate of the camera in the XOZ plane at moment i; and Ti is the translation distance at moment i along the current direction.
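The path recovery of Step 4 — start at P0 = (0, 0) in the XOZ plane, rotate the heading by each frame's rotation, then advance by each frame's translation — is plain dead reckoning. A minimal sketch (the planar yaw/distance parameterisation is a simplification of the full per-frame R, T; names are illustrative):

```python
import math

def integrate_path(motions):
    """Dead-reckon the XOZ-plane path of Step 4.

    motions: per-frame (yaw_increment, forward_distance) pairs, with yaw
    in radians - a planar simplification of the per-frame R and T.
    Starts at P0 = (0, 0); each step accumulates the frame's rotation
    into the heading and advances T along the current heading.
    """
    x, z, heading = 0.0, 0.0, 0.0
    path = [(x, z)]
    for yaw, t in motions:
        heading += yaw                  # cumulative rotation of previous frames
        x += t * math.sin(heading)      # advance along the current direction
        z += t * math.cos(heading)
        path.append((x, z))
    return path
```

Driving straight for one unit and then turning 90° and driving one more unit, for instance, ends at (1, 1) in the XOZ plane.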
Because of factors such as the limited precision of feature-point extraction, vibration during vehicle motion and changes in scene illumination, the motion estimation results contain errors, and the computed R and T occasionally exhibit jump errors, making the final plotted path inaccurate and discontinuous. Therefore, after the motion estimation between each pair of adjacent frames, the motion estimation results generally need to be smoothed. The restrictive condition usually exploited is the continuity of the rotation and translation velocity or acceleration during vehicle motion: estimates with large errors are replaced by the neighbourhood mean or median, while more sophisticated approaches smooth with a Kalman filter or extended Kalman filter, so that the resulting path is continuous and smooth. Here, to minimise the time cost of the whole process, the simpler former filtering is used: R and T values with large errors are rejected and replaced with the neighbourhood median, and the smoothing effect is quite satisfactory.
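The simple smoothing chosen here — reject per-frame R, T estimates with large errors and replace them with a neighbourhood median — can be sketched for a single motion parameter as follows (the MAD-based jump test and the window size are illustrative choices, not specified in the patent):

```python
import numpy as np

def reject_jumps(values, k=3.0):
    """Replace jump-error motion estimates with the neighbourhood median.

    values: 1-D sequence of one motion parameter over time (e.g. the
    per-frame forward translation). A sample further than k times the
    median absolute deviation (MAD) from the global median is treated
    as a jump error and replaced by the median of its immediate
    neighbours.
    """
    v = np.asarray(values, dtype=float).copy()
    med = np.median(v)
    mad = np.median(np.abs(v - med)) or 1e-9       # avoid a zero scale
    for i in range(len(v)):
        if abs(v[i] - med) > k * mad:              # flagged as a jump error
            lo, hi = max(0, i - 1), min(len(v), i + 2)
            neigh = np.delete(v[lo:hi], i - lo)    # neighbours excluding v[i]
            v[i] = np.median(neigh)
    return v
```

A Kalman filter would weigh each estimate against a motion model instead; the median replacement trades some smoothness for near-zero computational cost, in line with the real-time goal.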
Finally, it should be noted that the above embodiment merely illustrates rather than limits the technical solution of the present invention. Although the present invention has been described in detail with reference to a preferred embodiment, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solution of the present invention without departing from its spirit and scope, all of which shall be covered by the claims of the present invention.

Claims (5)

1. A real-time binocular visual positioning method based on GPU-SIFT, characterised by comprising the following steps:
Step 1: use a parallel binocular camera to capture stereo video of the left-eye and right-eye images while the robot or mobile platform moves;
Step 2: use the method of feature-point matching to obtain corresponding match points between consecutive frames of the video captured during motion;
Step 3: solve the motion equation from the image-space coordinate changes of the match points, or from their reconstructed three-dimensional coordinates, to estimate the displacement of the camera;
Step 4: after obtaining the camera position and rotation angle at each moment, combine Kalman filtering to recover the camera trajectory over the whole process, thereby achieving real-time binocular visual positioning of the robot or mobile platform.
2. The real-time binocular visual positioning method based on GPU-SIFT according to claim 1, characterised in that the feature-point matching in Step 2 uses the GPU-SIFT feature matching algorithm.
3. The real-time binocular visual positioning method based on GPU-SIFT according to claim 2, characterised in that the feature-point matching in Step 2 specifically comprises the following sub-steps:
Sub-step S21: extract the SIFT features of the four left and right images of the two binocular frames, and generate SIFT descriptors for the SIFT features;
Sub-step S22: match the SIFT features of the first-frame left camera image and right camera image to obtain the stereo match points (PL1, PR1);
Sub-step S23: match the SIFT features of the second-frame left camera image and right camera image to obtain the stereo match points (PL2, PR2);
Sub-step S24: match the SIFT features of the first-frame left camera image and the second-frame left camera image to obtain (LL1, LL2);
Sub-step S25: find the feature points that are identical between the first-frame left camera image match points LL1 obtained in sub-step S24 and the first-frame left camera image match points PL1 obtained in sub-step S22; these serve as the final match points of the first-frame left camera image. The match points of the second-frame left camera image are obtained in the same way;
Sub-step S26: using the left camera image match points obtained in sub-step S25, find the corresponding right camera image match points through the match pairs of sub-step S22; the second-frame right camera image match points are found in the same way, which completes the matching of the four images of the two frames.
4. The real-time binocular visual positioning method based on GPU-SIFT according to claim 3, characterised in that Step 3 specifically comprises the following sub-steps:
Sub-step S31: establish an image-space auxiliary coordinate system; from the corresponding match points obtained in the four images of the two consecutive frames, compute by the triangulation formulas the three-dimensional coordinate point Pi of each corresponding match point at the same moment in the image-space auxiliary coordinate system;
Sub-step S32: substitute the obtained three-dimensional coordinates Pi into the motion equation Pi = R·Pi′ + T and solve it; the degree-of-freedom parameters of the left and right cameras are T(Tx, Ty, Tz) and R(Rx, Ry, Rz) respectively;
Sub-step S33: using the RANSAC method, randomly select three coordinate points Pi each time, fit R and T, and substitute all points into the error formula E(R, T) = Σi ‖Pi − (R·Pi′ + T)‖² for evaluation;
Sub-step S34: count the number of points whose error value E(R, T) is below a given threshold; after several random selections, take the group of results with the most such points as the final calculation result;
Sub-step S35: substitute the final calculation result into the motion equation Pi = R·Pi′ + T to obtain the motion equation of the camera and thereby estimate the displacement of the camera.
5. The real-time binocular visual positioning method based on GPU-SIFT according to claim 4, characterised in that Step 4 specifically comprises: treat each plotted point as a vector point whose direction is the accumulation of the rotation angles of the previous frames; the next point is obtained by translating the current point by T along the current direction, which determines its coordinates, and the rotation angle is obtained by multiplying the direction of the previous frame by the rotation matrix R;
the path is recovered point by point according to the formula P0 = (0, 0), Pi = Pi−1 + Ri·Ti, where Po is the position coordinate of the camera in the XOZ plane at the initial moment, set to (0, 0); Pi is the position coordinate of the camera in the XOZ plane at moment i; and Ti is the translation distance at moment i along the current direction.
CN201710197839.2A 2017-03-29 2017-03-29 A kind of real-time binocular visual positioning method based on GPU SIFT Pending CN106931962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710197839.2A CN106931962A (en) 2017-03-29 2017-03-29 A kind of real-time binocular visual positioning method based on GPU SIFT

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710197839.2A CN106931962A (en) 2017-03-29 2017-03-29 A kind of real-time binocular visual positioning method based on GPU SIFT

Publications (1)

Publication Number Publication Date
CN106931962A true CN106931962A (en) 2017-07-07

Family

ID=59425636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710197839.2A Pending CN106931962A (en) 2017-03-29 2017-03-29 A kind of real-time binocular visual positioning method based on GPU SIFT

Country Status (1)

Country Link
CN (1) CN106931962A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107498559A (en) * 2017-09-26 2017-12-22 珠海市微半导体有限公司 Vision-based robot turning detection method and chip
CN108734175A (en) * 2018-04-28 2018-11-02 北京猎户星空科技有限公司 A kind of extracting method of characteristics of image, device and electronic equipment
CN109084778A (en) * 2018-09-19 2018-12-25 大连维德智能视觉技术创新中心有限公司 A kind of navigation system and air navigation aid based on binocular vision and pathfinding edge technology
CN109459023A (en) * 2018-09-18 2019-03-12 武汉三体机器人有限公司 A kind of ancillary terrestrial robot navigation method and device based on unmanned plane vision SLAM
CN109470216A (en) * 2018-11-19 2019-03-15 国网四川省电力公司电力科学研究院 Robot binocular vision characteristic point positioning method
CN109543694A (en) * 2018-09-28 2019-03-29 天津大学 A kind of visual synchronization positioning and map constructing method based on the sparse strategy of point feature
CN109741372A (en) * 2019-01-10 2019-05-10 哈尔滨工业大学 A kind of odometer method for estimating based on binocular vision
CN111768437A (en) * 2020-06-30 2020-10-13 中国矿业大学 Image stereo matching method and device for mine inspection robot

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102221884A (en) * 2011-06-15 2011-10-19 山东大学 Visual tele-existence device based on real-time calibration of camera and working method thereof
CN104915965A (en) * 2014-03-14 2015-09-16 华为技术有限公司 Camera tracking method and device
CN105469405A (en) * 2015-11-26 2016-04-06 清华大学 Visual ranging-based simultaneous localization and map construction method
CN106408531A (en) * 2016-09-09 2017-02-15 四川大学 GPU acceleration-based hierarchical adaptive three-dimensional reconstruction method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102221884A (en) * 2011-06-15 2011-10-19 山东大学 Visual tele-existence device based on real-time calibration of camera and working method thereof
CN104915965A (en) * 2014-03-14 2015-09-16 华为技术有限公司 Camera tracking method and device
US20160379375A1 (en) * 2014-03-14 2016-12-29 Huawei Technologies Co., Ltd. Camera Tracking Method and Apparatus
CN105469405A (en) * 2015-11-26 2016-04-06 清华大学 Visual ranging-based simultaneous localization and map construction method
CN106408531A (en) * 2016-09-09 2017-02-15 四川大学 GPU acceleration-based hierarchical adaptive three-dimensional reconstruction method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
吴功伟 et al.: "Binocular Visual Odometry Based on Disparity Space", Chinese Journal of Sensors and Actuators (传感技术学报), vol. 20, no. 6, pages 1432-1436 *
申镇: "Research on Vehicle Motion Estimation Technology Based on Binocular Vision", Wanfang academic dissertations, pages 1-71 *
邢龙龙: "Research on Stereo Visual Odometry Based on SURF Feature Points", Proceedings of the 2013 Annual Conference of the Beijing Society of Automotive Engineering, pages 269-277 *
马玉娇 et al.: "Stereo Visual Odometry Based on the Least Median of Squares Theorem", Computer Engineering and Applications, vol. 46, no. 11, pages 60-62 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107498559A (en) * 2017-09-26 2017-12-22 珠海市微半导体有限公司 Vision-based robot turning detection method and chip
CN108734175A (en) * 2018-04-28 2018-11-02 北京猎户星空科技有限公司 A kind of extracting method of characteristics of image, device and electronic equipment
CN109459023A (en) * 2018-09-18 2019-03-12 武汉三体机器人有限公司 A kind of ancillary terrestrial robot navigation method and device based on unmanned plane vision SLAM
CN109459023B (en) * 2018-09-18 2021-07-16 武汉三体机器人有限公司 Unmanned aerial vehicle vision SLAM-based auxiliary ground robot navigation method and device
CN109084778A (en) * 2018-09-19 2018-12-25 大连维德智能视觉技术创新中心有限公司 A kind of navigation system and air navigation aid based on binocular vision and pathfinding edge technology
CN109084778B (en) * 2018-09-19 2022-11-25 大连维德智能视觉技术创新中心有限公司 Navigation system and navigation method based on binocular vision and road edge finding technology
CN109543694A (en) * 2018-09-28 2019-03-29 天津大学 A kind of visual synchronization positioning and map constructing method based on the sparse strategy of point feature
CN109470216A (en) * 2018-11-19 2019-03-15 国网四川省电力公司电力科学研究院 Robot binocular vision characteristic point positioning method
CN109741372A (en) * 2019-01-10 2019-05-10 哈尔滨工业大学 A kind of odometer method for estimating based on binocular vision
CN111768437A (en) * 2020-06-30 2020-10-13 中国矿业大学 Image stereo matching method and device for mine inspection robot
CN111768437B (en) * 2020-06-30 2023-09-05 中国矿业大学 Image stereo matching method and device for mine inspection robot

Similar Documents

Publication Publication Date Title
CN106931962A (en) A kind of real-time binocular visual positioning method based on GPU SIFT
Heng et al. Project autovision: Localization and 3d scene perception for an autonomous vehicle with a multi-camera system
CN107945220B (en) Binocular vision-based reconstruction method
CN110582798B (en) System and method for virtual enhanced vision simultaneous localization and mapping
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN107590827A (en) A kind of indoor mobile robot vision SLAM methods based on Kinect
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
CN105300403B (en) A kind of vehicle mileage calculating method based on binocular vision
CN112304307A (en) Positioning method and device based on multi-sensor fusion and storage medium
CN108398139B (en) Dynamic environment vision mileometer method fusing fisheye image and depth image
CN108534782B (en) Binocular vision system-based landmark map vehicle instant positioning method
WO2021196941A1 (en) Method and apparatus for detecting three-dimensional target
WO2021138989A1 (en) Depth estimation acceleration method for multiband stereo camera
CN103325108A (en) Method for designing monocular vision odometer with light stream method and feature point matching method integrated
CN108519102B (en) Binocular vision mileage calculation method based on secondary projection
CN104318561A (en) Method for detecting vehicle motion information based on integration of binocular stereoscopic vision and optical flow
CN113781562B (en) Lane line virtual-real registration and self-vehicle positioning method based on road model
CN104281148A (en) Mobile robot autonomous navigation method based on binocular stereoscopic vision
CN114001733B (en) Map-based consistent efficient visual inertial positioning algorithm
CN111263960A (en) Apparatus and method for updating high definition map for autonomous driving
CN105844692A (en) Binocular stereoscopic vision based 3D reconstruction device, method, system and UAV
CN115406447B (en) Autonomous positioning method of quad-rotor unmanned aerial vehicle based on visual inertia in rejection environment
CN110349249B (en) Real-time dense reconstruction method and system based on RGB-D data
CN207351462U (en) Real-time binocular visual positioning system based on GPU-SIFT

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination