CN107945220A - Binocular vision-based reconstruction method - Google Patents

Binocular vision-based reconstruction method

Info

Publication number
CN107945220A
Authority
CN
China
Prior art keywords
camera
obtains
grid
point cloud
right view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711248693.6A
Other languages
Chinese (zh)
Other versions
CN107945220B (en)
Inventor
蔡超
马昱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201711248693.6A priority Critical patent/CN107945220B/en
Publication of CN107945220A publication Critical patent/CN107945220A/en
Application granted granted Critical
Publication of CN107945220B publication Critical patent/CN107945220B/en
Expired - Fee Related
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Image registration using feature-based methods
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a binocular vision-based reconstruction method, comprising: rectifying the left and right views of the input image using the camera parameters, and performing stereo matching on the rectified left and right views to obtain the disparity map of the left and right views; performing Kalman filtering on the camera pose estimate obtained from the input image sequence and the camera attitude information supplied by a gyroscope to obtain the optimal attitude estimate, and from it the optimal estimate of the camera trajectory; obtaining three-dimensional point cloud data from the disparity map of the left and right views and the optimal estimate of the camera trajectory; registering the three-dimensional point cloud data with a grid method, then fusing the registered point cloud data in a spatial grid to obtain panoramic point cloud data. The invention reduces the error of camera pose estimation during three-dimensional reconstruction and improves the accuracy of the three-dimensional point cloud data.

Description

Binocular vision-based reconstruction method
Technical field
The invention belongs to the field of computer vision, and more particularly relates to a binocular vision-based reconstruction method.
Background art
As a popular research direction in computer vision, three-dimensional reconstruction has been widely applied in fields such as industry, medical treatment, robot navigation, and autonomous driving.
Three-dimensional reconstruction methods are broadly divided into geometry-based methods and image-based methods. Traditional geometry-based reconstruction builds three-dimensional models manually with computer-aided design software; the resulting scene models are refined and detailed, with good interactivity and free viewpoints, but they lack realism, and reconstructing a large-scale scene requires a great deal of time and effort. Image-based reconstruction recovers three-dimensional scene information from cues such as grayscale in one or more two-dimensional images and stereo disparity, computed by principles such as triangulation; it frees scene reconstruction from heavy manual labor and is the main research direction of current three-dimensional reconstruction.
At present, three-dimensional reconstruction from image sequences mainly takes two forms: monocular stereo vision and binocular stereo vision. Monocular stereo vision reconstructs an object's surface from multiple images of a non-planar surface, but this method is strongly affected by illumination. Binocular stereo vision resembles the human visual perception process: the same scene is observed from two viewpoints at different positions, giving two images of the scene from different viewpoints, and the imaging offset between the two images, computed by image matching and the triangulation principle, is used to analyze the positions of objects in the three-dimensional environment. The present invention is primarily based on a binocular stereo vision system.
Real-time three-dimensional reconstruction methods fall into two major classes: range-based methods and vision-based methods. Range-based methods acquire distance information with laser radar and reconstruct the three-dimensional environment by building, from the known depth map, descriptions of the surfaces and objects in the scene using numerical approximation. These methods are limited by the hardware, however: inexpensive, readily available laser rangefinders have a short measuring range, mostly within 5-6 meters, which is unsuitable for practical conditions; moreover they usually emit a single-point laser, so reconstructing three-dimensional information is slow and real-time performance is poor. High-performance laser rangefinders are costly and their software stacks present high barriers, so they lack generality. Vision-based real-time three-dimensional reconstruction works from the disparity map of the left and right frame images; the equipment is easy to obtain and the algorithms are relatively mature.
Dense point cloud data can reproduce a model's structure more precisely, but excessive redundant points waste large amounts of storage, consume much time and computation during point cloud registration and stitching, and increase the difficulty of surface reconstruction. The dense point cloud therefore needs to be reduced. Alexa et al. decide whether to keep a data point by computing its influence on the moving least squares surface after the point's deletion, but this method is computationally complex and inefficient. The two most common approaches to real-time environment reconstruction at present are the following:
SLAM, which reconstructs the surroundings with a mobile robot and uses the environment reconstruction to compute the robot's own position; however, to guarantee computational efficiency it can only generate a sparse point cloud, whereas three-dimensional surface reconstruction needs a dense one.
Geiger et al. proposed a real-time three-dimensional reconstruction method based on image sequences, built on binocular stereo matching and a classical visual odometry algorithm; but because the accumulated error of the camera pose estimate keeps growing, large-scale environment reconstruction results drift severely.
It can be seen from the above that the prior art suffers from the technical problems that the error of camera pose estimation during three-dimensional reconstruction is large and the accuracy of the three-dimensional point cloud data is poor.
Summary of the invention
In view of the above shortcomings or improvement needs of the prior art, the present invention provides a binocular vision-based reconstruction method, thereby solving the technical problems of the prior art that the error of camera pose estimation during three-dimensional reconstruction is large and the accuracy of the three-dimensional point cloud data is poor.
To achieve the above object, the present invention provides a binocular vision-based reconstruction method, comprising:
(1) rectifying the left and right views of the input image using the camera parameters, and performing stereo matching on the rectified left and right views to obtain the disparity map of the left and right views;
(2) performing Kalman filtering on the camera pose estimate obtained from the input image sequence and the camera attitude information supplied by a gyroscope to obtain the optimal attitude estimate, and from it the optimal estimate of the camera trajectory;
(3) obtaining three-dimensional point cloud data from the disparity map of the left and right views and the optimal estimate of the camera trajectory; registering the three-dimensional point cloud data with a grid method, then fusing the registered point cloud data in a spatial grid to obtain panoramic point cloud data.
Further, the camera parameters include monocular intrinsics and binocular extrinsics; the monocular intrinsics are the respective intrinsics of the left and right cameras, obtained by calibrating each camera independently, and the binocular extrinsics are the extrinsics between the left and right cameras, obtained by binocular calibration.
Further, step (1) comprises:
(1-1) using the camera parameters to remove distortion from the left and right views of the input image and to row-align them, so that the imaging origins of the two views coincide, yielding the rectified left and right views;
(1-2) computing, for the rectified left and right views, the cascaded responses of horizontal and vertical filters to obtain the feature vectors of the two views; matching twice with the Manhattan distance between feature vectors, from left to right and from right to left; and taking the robustly matched pixels as support points, yielding a sparse disparity map;
(1-3) triangulating the support points in the left-view coordinate space to form triangle regions; using the known disparities of the triangle vertices to build a prior probability for the disparities of the other pixels; matching the feature vectors of the left and right views under this prior to obtain the right-view observations, namely candidate correspondences and a similarity measure; and estimating the disparities of the pixels other than the support points by maximum a posteriori estimation, yielding the dense disparity map, which is the disparity map of the left and right views.
Further, step (2) is implemented as follows:
The gyroscope is fixed midway between the left and right cameras, and the gyroscope coordinate system is transformed into the left camera coordinate system; camera position information is then obtained from the input image sequence, and a first velocity is derived from it; a second velocity is obtained from the acceleration supplied by the gyroscope; Kalman filtering of the first and second velocities gives the optimal velocity estimate; the camera displacement is then obtained from the input image sequence, and an estimated displacement is obtained from the optimal velocity estimate; Kalman filtering of the camera displacement and the estimated displacement gives the optimal displacement estimate; Kalman filtering of the camera pose estimate obtained from the input image sequence and the camera attitude information supplied by the gyroscope gives the optimal attitude estimate; and Kalman filtering of the optimal displacement estimate and the optimal attitude estimate yields the optimal estimate of the camera trajectory.
Further, the point cloud data registration is implemented as follows:
Suppose two groups of three-dimensional point cloud data P1 and P2 are obtained. A reference frame is established at the initial camera position of P1. From the initial camera attitudes corresponding to the two point cloud groups, the initial rotation-translation matrix between the two coordinate systems is obtained. P2 is first rigidly transformed with the initial rotation-translation matrix to obtain P2', and the initial point-to-grid-ID relation is obtained. With the initial grid IDs restricting the matching range, the points of P1 and P2' are matched to obtain a set of point pairs, from which a new rotation-translation matrix is computed; P2' is transformed with the new rotation-translation matrix to obtain P2'', and the new point-to-grid-ID relation is obtained. The sum of squared distances from P2' to P2'' is computed, and either the absolute difference between two successive sums or the new point-to-grid-ID relation serves as the convergence criterion. The iteration continues until the absolute difference falls below a threshold or the grid-ID relation no longer changes, yielding the target rotation-translation matrix, with which the coordinate system of the three-dimensional point cloud data in P2 is transformed into the reference frame, completing the registration of the two groups of point cloud data.
Further, the point cloud data fusion is implemented as follows:
The coordinate space of the three-dimensional point cloud data is divided into N equal grid cells, which are then traversed. If a grid cell contains no points, nothing is done; if a grid cell contains a single point, the RGB value of that point is used as the RGB value of the cell; if a grid cell contains two or more points, the average RGB of all points in the cell is used as the RGB value of the cell, completing the point cloud data fusion.
In general, compared with the prior art, the above technical scheme conceived by the present invention achieves the following beneficial effects:
(1) The present invention performs Kalman filtering on the camera pose estimate obtained from the input image sequence and the camera attitude information supplied by the gyroscope to obtain the optimal attitude estimate, and from it the optimal estimate of the camera trajectory. Using the attitude information supplied by the gyroscope to correct the camera pose estimation error reduces the error of camera pose estimation during three-dimensional reconstruction.
(2) Addressing the point cloud overlap encountered during large-scale environment reconstruction, the present invention registers the three-dimensional point cloud data with a grid method and fuses the point clouds in a spatial grid, which further reduces the camera pose estimation error during three-dimensional reconstruction and improves the accuracy of the three-dimensional point cloud data. The three-dimensional environment reconstruction method of the present invention is fast and accurate, and can be used in projects such as robot navigation and fine-grained UAV mission planning, replacing work that would be laborious or infeasible by hand.
Brief description of the drawings
Fig. 1 is a flowchart of the binocular vision-based reconstruction method provided by an embodiment of the present invention;
Fig. 2 shows the correction result of the optimal camera trajectory estimate provided by an embodiment of the present invention;
Fig. 3(a) is the simulation result of the first local three-dimensional point cloud provided by an embodiment of the present invention;
Fig. 3(b) is the simulation result of the second local three-dimensional point cloud provided by an embodiment of the present invention;
Fig. 3(c) is the simulation result of the third local three-dimensional point cloud provided by an embodiment of the present invention;
Fig. 4 is the simulation result of the fused panoramic point cloud provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present invention and are not intended to limit it. In addition, the technical features involved in the embodiments of the invention described below may be combined with each other as long as they do not conflict.
As shown in Fig. 1, the binocular vision-based reconstruction method includes the following steps:
Step 1: fix the two cameras on the experimental platform, then adjust the focal length and aperture of each camera so that the cameras capture images of optimal quality. Fix the gyroscope midway between the left and right cameras, so that the gyroscope's motion attitude can later serve as an approximate substitute for the motion attitude of the binocular camera.
Step 2: shoot 12 groups of chessboard pictures from different angles and solve for the intrinsics of the left and right cameras. Binocular calibration must determine not only the fixed intrinsics of each camera but also, through calibration, the relative position between the two cameras, i.e. the 3D translation t and rotation R of the right camera relative to the left. OpenCV provides the binocular calibration routine cvStereoCalibrate, but that function easily produces large image distortion in use, so the present invention generally takes the monocular-first, binocular-second approach: first calibrate the left and right cameras independently to obtain the respective monocular intrinsics, then perform binocular calibration to obtain the binocular extrinsics between the two cameras.
Step 3: perform binocular rectification on the input images. Using the monocular intrinsics and binocular extrinsics obtained from calibration, remove distortion from the left and right views and row-align them, so that the imaging origins of the two views coincide, the optical axes of the left and right cameras are parallel, the left and right image planes are coplanar, and the epipolar lines are row-aligned, ensuring that matching points lie on the same row of the left and right views.
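The calibration and rectification of steps 2 and 3 map directly onto OpenCV's API. Below is a minimal sketch of the monocular-first, binocular-second procedure, assuming the chessboard corners from the 12 captures are already collected in objpoints, imgpoints_l, and imgpoints_r, that (w, h) is the image size, and that left_img/right_img are an input pair; all variable names are illustrative:

```python
import cv2

# Monocular calibration first: intrinsics K and distortion D per camera.
_, K1, D1, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, (w, h), None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, (w, h), None, None)

# Binocular calibration second, intrinsics held fixed, recovering the
# rotation R and translation t of the right camera relative to the left.
_, K1, D1, K2, D2, R, t, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r, K1, D1, K2, D2, (w, h),
    flags=cv2.CALIB_FIX_INTRINSIC)

# Rectification: optical axes parallel, image planes coplanar, epipolar
# lines row-aligned, so matching points lie on the same image row.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, t)
m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
left_rect = cv2.remap(left_img, m1x, m1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_img, m2x, m2y, cv2.INTER_LINEAR)
```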
Step 4: obtain the disparity map from the rectified left and right views. This part uses the efficient large-scale stereo matching algorithm proposed by Geiger et al., a Bayesian method that can compute accurate disparity maps of high-resolution images in real time. It builds a disparity prior by triangulation over a set of support points to remove matching ambiguity, and improves search efficiency by restricting the search region. The specific implementation is as follows:
The first step: compute the cascaded responses of 3x3 Sobel filters in the horizontal and vertical directions over 9x9 windows, obtaining the 50-dimensional (2x5x5) feature vectors of the left and right views (a sketch follows the fifth step). Match twice with the Manhattan distance between vectors, from left to right and from right to left, and take the robustly matched points as support points, yielding a sparse disparity map. Take the left view as the reference image; its set of observations consists of the support-point pixel coordinates and their corresponding feature vectors.
The second step: triangulate the support-point set of the left view to obtain triangle regions, and compute the accurate disparities of each triangle's three vertices.
The third step: interpolate within each triangulated plane from the known vertex disparities to obtain the prior probability of the other pixels' disparities, expressed as a piecewise linear function with a Gaussian profile.
The fourth step: using the feature vectors of the left and right views, compute the image likelihood on the basis of the disparity prior; the likelihood function follows a constrained Laplace distribution, yielding the right-view observations (candidate correspondences and their likelihoods).
The fifth step: compute the disparities of all pixels other than the support points by maximum a posteriori (MAP) estimation, obtaining the dense disparity map.
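As an illustration of the first step, the following numpy/OpenCV sketch computes a 50-dimensional Sobel descriptor per pixel and a Manhattan-distance match with a left-right consistency check; the 2x5x5 sampling layout is one plausible reading of the scheme, not the exact layout of Geiger et al.'s implementation:

```python
import cv2
import numpy as np

def sobel_descriptors(gray):
    """Per-pixel 50-dim descriptor: horizontal and vertical 3x3 Sobel
    responses sampled on a 5x5 grid (stride 2) inside a 9x9 window.
    Borders wrap around via np.roll and should be masked in practice."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    h, w = gray.shape
    desc = np.zeros((h, w, 50), np.float32)
    i = 0
    for dy in range(-4, 5, 2):            # 5x5 sampling grid in 9x9 window
        for dx in range(-4, 5, 2):
            desc[..., i] = np.roll(gx, (-dy, -dx), axis=(0, 1))
            desc[..., 25 + i] = np.roll(gy, (-dy, -dx), axis=(0, 1))
            i += 1
    return desc

def support_disparity(dl, dr, y, x, max_disp=64):
    """Manhattan-distance match for left pixel (y, x), kept only if the
    right-to-left match maps back to x (left-right consistency)."""
    lo = max(0, x - max_disp)
    cost = np.abs(dr[y, lo:x + 1] - dl[y, x]).sum(axis=1)
    xr = lo + int(np.argmin(cost))
    hi = min(dl.shape[1], xr + max_disp + 1)
    back = np.abs(dl[y, xr:hi] - dr[y, xr]).sum(axis=1)
    return x - xr if xr + int(np.argmin(back)) == x else None
```

For experimentation, an off-the-shelf dense matcher such as OpenCV's StereoSGBM can also stand in for the full pipeline of steps two through five, though it is not the algorithm used by this embodiment.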
Step 5: perform Kalman filtering on the camera pose estimate obtained from the input image sequence and the camera attitude information supplied by the gyroscope to obtain the optimal attitude estimate, and from it the optimal estimate of the camera trajectory. The specific implementation is as follows:
The first step: obtain the three-dimensional coordinates of the matched points from binocular stereo matching and the camera intrinsics. Using the RANSAC algorithm, randomly select 3 of the n groups of matched points and compute the pose with the Gauss-Newton method; judge the accuracy of the pose by the inliers and accept or reject it accordingly. Repeat this process until an accurate pose is obtained.
The second step: take the parameters supplied by the gyroscope, the three-axis acceleration and the three axis angles. With the initial yaw angle as the reference, correct the yaw angle, i.e. transform the gyroscope coordinate system so that it coincides with the space coordinate system, convert the three-axis acceleration values in the gyroscope frame into the three-axis acceleration in the space frame, and remove the effect of gravity. The specific implementation is as follows:
Let the yaw, roll, and pitch angles of the camera be ψ, γ, θ respectively. Define the space coordinate system as the n frame and the gyroscope coordinate system as the b frame. The conversion from the n frame to the b frame is represented by the 3x3 attitude matrix Tn2b; since it converts between two orthonormal bases in space, Tn2b is an orthogonal matrix. Tn2b can be decomposed into the product of three 3x3 matrices: the rotation matrix Tψ of the yaw angle ψ, the rotation matrix Tθ of the pitch angle θ, and the rotation matrix Tγ of the roll angle γ:
Tψ = [ cos ψ, sin ψ, 0; -sin ψ, cos ψ, 0; 0, 0, 1 ] (1)
Tγ = [ cos γ, 0, -sin γ; 0, 1, 0; sin γ, 0, cos γ ] (2)
Tθ = [ 1, 0, 0; 0, cos θ, sin θ; 0, -sin θ, cos θ ] (3)
Multiplying the three matrices above gives the attitude matrix Tn2b converting the space coordinate system n into the gyroscope coordinate system; the order of multiplication is determined by the rotation order of the attitude angles. The gyroscope rotates about the Z axis, then the Y axis, then the X axis, so the attitude matrix is expressed as
Tn2b = Tθ · Tγ · Tψ (4)
The angle output about the X axis is denoted the pitch angle θ, the angle output about the Y axis the roll angle γ, and the angle output about the Z axis the yaw angle ψ. Computing the output angles according to formula (4) yields the attitude matrix Tn2b, which contains the complete attitude information of the gyroscope and the binocular camera.
Inverting the attitude matrix Tn2b gives the conversion Tb2n from the gyroscope coordinate system to the space coordinate system; since Tn2b is orthogonal,
Tb2n = Tn2b^(-1) = Tn2b^T (5)
so a vector expressed in the b frame converts as
vn = Tb2n · vb (6)
Differentiating both sides of formula (6) at the same time gives
an = Tb2n · ab (7)
In formula (7), the acceleration on the left side of the equation is the acceleration of the gyroscope and binocular vision system in the three-dimensional space coordinate system; it does not change with the attitude of the gyroscope and is related only to the motion state of the system itself. With the system at rest, the acceleration vector in the three-dimensional space coordinate system is an = (0, 0, g)^T, where g is the gravitational acceleration.
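A small numpy sketch of this second step, assuming the elementary rotation matrices (1)-(3) above with angles in radians and taking g = 9.81 m/s^2; the function names are illustrative:

```python
import numpy as np

def attitude_matrix(yaw, roll, pitch):
    """T_n2b for the rotation order Z (yaw), Y (roll), X (pitch) used in
    the text: T_n2b = T_theta * T_gamma * T_psi, formula (4)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    T_psi = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])      # (1)
    T_gamma = np.array([[cr, 0, -sr], [0, 1, 0], [sr, 0, cr]])    # (2)
    T_theta = np.array([[1, 0, 0], [0, cp, sp], [0, -sp, cp]])    # (3)
    return T_theta @ T_gamma @ T_psi                              # (4)

def space_acceleration(acc_b, yaw, roll, pitch, g=9.81):
    """Formula (7): rotate the body-frame accelerometer reading into the
    space frame via T_b2n = T_n2b^T, then remove gravity on the z axis
    (at rest the space-frame reading is (0, 0, g))."""
    T_b2n = attitude_matrix(yaw, roll, pitch).T                   # (5)
    acc_n = T_b2n @ np.asarray(acc_b, dtype=float)
    acc_n[2] -= g
    return acc_n
```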
The third step: differentiate the accurate camera poses computed by the visual odometry to obtain the first velocity V1; integrate the three-axis acceleration obtained from the gyroscope to obtain the second velocity V2; then fuse the velocity groups V1 and V2 with Kalman filtering to obtain the corrected optimal velocity estimate V. The specific implementation is as follows:
First the system state is predicted: with the acceleration obtained from the gyroscope as the control input, the velocity at the current instant is predicted from the optimal velocity estimate of the previous instant. The sum of the posterior variance of the previous instant and the control variance serves as the prediction variance at the current instant; we choose a control variance of 0.5. The prediction equations are as follows:
State equation, formula (8):
Vpre=Vpost+a*t (8)
System variance estimate, formula (9):
Ppre=Ppost+Q (9)
where Vpost is the velocity at the previous instant, Vpre the predicted velocity at the current instant, a the current acceleration obtained from the gyroscope, and t the time difference between two successive gyroscope acceleration readings; Ppost is the posterior variance of the previous instant, Ppre the prediction variance at the current instant, and Q the control variance.
Next, the velocity computed by the visual odometry serves as the measurement and is combined with the prediction above to obtain the optimal estimate of the current velocity state. The estimation equations are as follows.
Kalman gain, formula (10):
K=R*Ppre/(Ppre+0.5) (10)
Optimal velocity estimate, formula (11):
Vbest=Vpre+K*(Vget-Vpre) (11)
Posterior variance update, formula (12):
Ppost=(1-K)*Ppre (12)
where K is the Kalman gain, determined by the prediction variance and the measurement variance at the current instant; R is the measurement variance, taken here as the matching rate of the corresponding image feature points; and Vget is the velocity measured by the visual odometry.
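Formulas (8)-(12) amount to one predict-correct step of a scalar Kalman filter, applied per axis. A minimal sketch, with argument names chosen for readability:

```python
def fuse_velocity(v_post, p_post, accel, dt, v_meas, r_meas, q=0.5):
    """One Kalman step on the velocity: predict with the gyroscope
    acceleration as the control input, then correct with the
    visual-odometry velocity (Vget) as the measurement."""
    v_pre = v_post + accel * dt            # (8)  state prediction
    p_pre = p_post + q                     # (9)  variance prediction, Q=0.5
    k = r_meas * p_pre / (p_pre + 0.5)     # (10) gain, R = feature match rate
    v_best = v_pre + k * (v_meas - v_pre)  # (11) optimal velocity estimate
    p_post = (1 - k) * p_pre               # (12) posterior variance
    return v_best, p_post
```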
The fourth step: with the optimal estimate of the velocity at each instant computed in the previous step as the control input, estimate the displacement state at the current instant. The sum of the posterior variance of the previous instant and the control variance serves as the prediction variance at the current instant; we choose a control variance of 1. The prediction equations are as follows.
State equation, formula (13):
Dpre=Dpost+Vbest*t (13)
System variance estimate, formula (14):
Ppre=Ppost+Q (14)
where Dpost is the displacement at the previous instant, Dpre the predicted displacement at the current instant, Vbest the optimal velocity estimate obtained in the previous step, and t the time difference between two successive gyroscope acceleration readings; Ppost is the posterior variance of the previous instant, Ppre the prediction variance at the current instant, and Q the control variance.
Next, the camera displacement computed by the visual odometry serves as the measurement and is combined with the prediction above to obtain the best estimate of the camera displacement. The estimation equations are as follows.
Kalman gain, formula (15):
K=R*Ppre/(Ppre+0.05) (15)
Optimal displacement estimate, formula (16):
Dbest=Dpre+K*(Dget-Dpre) (16)
Posterior variance update, formula (17):
Ppost=(1-K)*Ppre (17)
where Dget is the displacement measured by the visual odometry.
It is worth noting that the displacement filtering here depends on the image feature matching rate R. Once consecutive abnormal points are detected, i.e. R stays below the threshold 0.02, the second Kalman filtering step that fuses the predicted displacement with the displacement computed from the images is no longer performed; instead the predicted displacement of the current instant is used directly as the optimal displacement estimate and the next step proceeds. This prevents the camera attitude correction from being disturbed when the displacement computed by the visual odometry has a large error.
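The displacement stage of formulas (13)-(17), including the fall-back to the pure prediction when R drops below 0.02, can be sketched the same way (the bookkeeping that counts consecutive abnormal points is omitted for brevity):

```python
def fuse_displacement(d_post, p_post, v_best, dt, d_meas, r_meas, q=1.0):
    """One Kalman step on the displacement, driven by the fused velocity;
    if the image match rate is below 0.02 the measurement is skipped and
    the prediction is returned unchanged."""
    d_pre = d_post + v_best * dt           # (13) predict from fused velocity
    p_pre = p_post + q                     # (14) variance prediction, Q=1
    if r_meas < 0.02:                      # unreliable visual odometry
        return d_pre, p_pre
    k = r_meas * p_pre / (p_pre + 0.05)    # (15) gain
    d_best = d_pre + k * (d_meas - d_pre)  # (16) optimal displacement
    p_post = (1 - k) * p_pre               # (17) posterior variance
    return d_best, p_post
```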
The fifth step: filter the attitude matrix, formula (18):
T=Tn2b+R(C-Tn2b) (18)
where T is the best estimate of the camera motion attitude, Tn2b the attitude matrix computed from the angles supplied by the gyroscope, C the attitude matrix computed from the images, and R the image matching success rate.
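As a one-line sketch of formula (18), with an illustrative function name:

```python
import numpy as np

def fuse_attitude(T_n2b, C, R):
    """Formula (18): blend the gyroscope attitude matrix T_n2b toward the
    image-derived attitude matrix C by the image match success rate R."""
    return np.asarray(T_n2b) + R * (np.asarray(C) - np.asarray(T_n2b))
```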
As shown in Fig. 2, the correction result of the optimal camera trajectory estimate of the present invention: the solid line is the actual trajectory, the dashed line is the camera pose estimate obtained from the input image sequence, and the dash-dot line is the corrected optimal estimate of the camera trajectory. It can thus be seen that the present invention uses the camera attitude information supplied by the gyroscope to correct the camera pose estimation error, reducing the error of camera pose estimation during three-dimensional reconstruction.
Step 6: three-dimensional point cloud reconstruction. Obtain the three-dimensional point cloud data from the disparity map and the optimal estimate of the camera trajectory.
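A minimal sketch of step 6, assuming the reprojection matrix Q returned by the rectification of step 3, the rectified left image left_rect, the disparity map of step 4, and a fused camera pose (R_w, t_w) from step 5:

```python
import cv2
import numpy as np

pts_cam = cv2.reprojectImageTo3D(disparity.astype(np.float32), Q)  # HxWx3
rgb = cv2.cvtColor(left_rect, cv2.COLOR_BGR2RGB)
mask = disparity > 0                       # keep pixels with valid disparity
pts = pts_cam[mask]                        # Nx3 points in the camera frame
colors = rgb[mask]                         # matching Nx3 RGB values
pts_world = pts @ R_w.T + t_w              # camera frame -> world frame
```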
Step 7: point cloud data fusion. Divide the coordinate space containing the point cloud at equal intervals to form a grid; the grid scale is adjusted according to the density of the point cloud data. With the grid fixed, traverse the grid cells to remove redundancy from the point cloud data, as follows:
If a grid cell contains no points, skip it;
If a grid cell contains a single point, use that point's information for the cell;
If a grid cell contains two or more points, use the center coordinate of the cell as the coordinate of the cell's point, and the average RGB of all points in the cell as the cell's RGB value.
Note that, to preserve point cloud precision, the grid scale should not be chosen too large.
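A minimal numpy sketch of this grid fusion, assuming points is an Nx3 coordinate array and colors the matching Nx3 RGB array; the cell size is a free parameter, per the note above:

```python
import numpy as np

def grid_fuse(points, colors, cell=0.05):
    """Grid fusion of step 7: a cell with one point keeps it unchanged; a
    cell with several points keeps the cell center with the mean RGB."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    fused_pts, fused_rgb = [], []
    for c in range(counts.size):
        members = np.where(inv == c)[0]
        if counts[c] == 1:                       # single point: keep as-is
            fused_pts.append(points[members[0]])
            fused_rgb.append(colors[members[0]])
        else:                                    # several: use cell center
            fused_pts.append((keys[members[0]] + 0.5) * cell)
            fused_rgb.append(colors[members].mean(axis=0))
    return np.array(fused_pts), np.array(fused_rgb)
```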
Step 8: synthesis of the panoramic point cloud data. The point cloud data registration part is implemented as follows:
The first step: obtain the initial attitudes from the camera pose estimates of the sections where the two groups of point cloud data were acquired; denote the two point clouds P1 and P2, and take the initial camera coordinate system of P1 as the reference frame.
The second step: from the initial attitudes of the two groups of point cloud data, compute the rotation-translation matrix [R|t] between the two coordinate systems and the correspondence between points and grid IDs.
The third step: compute the new point set P2' with the rotation-translation matrix [R|t].
The fourth step: improve the classic ICP algorithm with the grid relations and determine the final [R|t] matrix (see the sketch after step e). The specific implementation is as follows:
a. For every point in P1, find its nearest point in P2', forming a point pair, and collect all such pairs of the two point sets. When matching, with each point of P1 as the reference, we search for its corresponding point in the corresponding grid cell of P2' and the 26 surrounding cells, obtaining the set of point pairs.
b. Compute the new rotation-translation matrix [R|t] from the point pairs, and transform the coordinates of the P2' cloud with it, obtaining P2''.
c. If the point-to-grid-ID correspondences of the point sets P2' and P2'' do not change, stop the iteration.
d. Compute the sum of squared distances from P2' to P2'', and use the absolute difference between two successive sums as the convergence criterion; if it is below the threshold, the iteration has converged and stops.
e. Update P2' with P2'' and repeat steps a-e.
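A simplified numpy sketch of steps a-e: the correspondence search is limited to the 27 grid cells around each point of P1 (the grid-ID constraint), the pairwise rigid transform is solved in closed form by SVD, and iteration stops when two successive squared-distance sums agree to within a tolerance (the grid-ID-unchanged stopping test of step c and the accumulation of the total [R|t] are omitted). R0 and t0 are the initial rotation and translation derived from the camera attitudes; cell, iters, and tol are free choices:

```python
import numpy as np
from collections import defaultdict

def grid_icp(P1, P2, R0, t0, cell=0.1, iters=30, tol=1e-6):
    """Grid-constrained ICP: returns P2 registered into P1's frame."""
    P2 = P2 @ R0.T + t0                     # initial rigid transform: P2'
    prev = None
    for _ in range(iters):
        cells = defaultdict(list)           # hash P2' points into the grid
        for j, key in enumerate(np.floor(P2 / cell).astype(int)):
            cells[tuple(key)].append(j)
        pairs = []
        for i, p in enumerate(P1):          # search the 3x3x3 neighborhood
            cx, cy, cz = np.floor(p / cell).astype(int)
            cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    for dz in (-1, 0, 1)
                    for j in cells.get((cx + dx, cy + dy, cz + dz), [])]
            if cand:
                d = np.linalg.norm(P2[cand] - p, axis=1)
                pairs.append((i, cand[int(np.argmin(d))]))
        if not pairs:
            break
        dst = P1[[i for i, _ in pairs]]
        src = P2[[j for _, j in pairs]]
        mu_s, mu_d = src.mean(0), dst.mean(0)   # Kabsch closed-form [R|t]
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
        if np.linalg.det(Vt.T @ U.T) < 0:       # guard against reflection
            Vt[-1] *= -1
        Rk = Vt.T @ U.T
        tk = mu_d - Rk @ mu_s
        P2_new = P2 @ Rk.T + tk                 # this iteration's P2''
        err = ((P2_new - P2) ** 2).sum()        # squared distance P2'->P2''
        P2 = P2_new
        if prev is not None and abs(err - prev) < tol:
            break
        prev = err
    return P2
```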
The point cloud data fusion part is implemented as described in step 7.
As shown in Fig. 3, Fig. 3(a) is the simulation result of the first local three-dimensional point cloud, Fig. 3(b) the simulation result of the second local three-dimensional point cloud, and Fig. 3(c) the simulation result of the third local three-dimensional point cloud. The method of the present invention then registers the first, second, and third local three-dimensional point clouds with the grid method and fuses the registered point clouds in the spatial grid; Fig. 4 shows the simulation result of the fused panoramic point cloud. It can be seen that the present invention registers the three-dimensional point cloud data with the grid method and fuses the point clouds in the spatial grid, thereby reducing the error of camera pose estimation during three-dimensional reconstruction and improving the accuracy of the three-dimensional point cloud data.
It will be readily understood by those skilled in the art that the above are merely preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall all fall within its scope of protection.

Claims (6)

  1. A binocular vision-based reconstruction method, characterized by comprising:
    (1) rectifying the left and right views of the input image using the camera parameters, and performing stereo matching on the rectified left and right views to obtain the disparity map of the left and right views;
    (2) performing Kalman filtering on the camera pose estimate obtained from the input image sequence and the camera attitude information supplied by a gyroscope to obtain the optimal attitude estimate, and from it the optimal estimate of the camera trajectory;
    (3) obtaining three-dimensional point cloud data from the disparity map of the left and right views and the optimal estimate of the camera trajectory; registering the three-dimensional point cloud data with a grid method, then fusing the registered point cloud data in a spatial grid to obtain panoramic point cloud data.
  2. The binocular vision-based reconstruction method of claim 1, characterized in that the camera parameters include monocular intrinsics and binocular extrinsics; the monocular intrinsics are the respective intrinsics of the left and right cameras, obtained by calibrating each camera independently, and the binocular extrinsics are the extrinsics between the left and right cameras, obtained by binocular calibration.
  3. The binocular vision-based reconstruction method of claim 1 or 2, characterized in that step (1) comprises:
    (1-1) using the camera parameters to remove distortion from the left and right views of the input image and to row-align them, so that the imaging origins of the two views coincide, yielding the rectified left and right views;
    (1-2) computing, for the rectified left and right views, the cascaded responses of horizontal and vertical filters to obtain the feature vectors of the two views; matching twice with the Manhattan distance between feature vectors, from left to right and from right to left; and taking the robustly matched pixels as support points, yielding a sparse disparity map;
    (1-3) triangulating the support points in the left-view coordinate space to form triangle regions; using the known disparities of the triangle vertices to build a prior probability for the disparities of the other pixels; matching the feature vectors of the left and right views under this prior to obtain the right-view observations, namely candidate correspondences and a similarity measure; and estimating the disparities of the pixels other than the support points by maximum a posteriori estimation, yielding the dense disparity map, which is the disparity map of the left and right views.
  4. The binocular vision-based reconstruction method of claim 1 or 2, characterized in that step (2) is implemented as follows:
    the gyroscope is fixed midway between the left and right cameras, and the gyroscope coordinate system is transformed into the left camera coordinate system; camera position information is then obtained from the input image sequence, and a first velocity is derived from it; a second velocity is obtained from the acceleration supplied by the gyroscope; Kalman filtering of the first velocity and the second velocity gives the optimal velocity estimate; the camera displacement is then obtained from the input image sequence, and an estimated displacement is obtained from the optimal velocity estimate; Kalman filtering of the camera displacement and the estimated displacement gives the optimal displacement estimate; Kalman filtering of the camera pose estimate obtained from the input image sequence and the camera attitude information supplied by the gyroscope gives the optimal attitude estimate; and Kalman filtering of the optimal displacement estimate and the optimal attitude estimate gives the optimal estimate of the camera trajectory.
  5. The binocular vision-based reconstruction method of claim 1 or 2, characterized in that the point cloud data registration is implemented as follows:
    suppose two groups of three-dimensional point cloud data P1 and P2 are obtained; a reference frame is established at the initial camera position of P1; from the initial camera attitudes corresponding to the two point cloud groups, the initial rotation-translation matrix between the two coordinate systems is obtained; P2 is first rigidly transformed with the initial rotation-translation matrix to obtain P2', and the initial point-to-grid-ID relation is obtained; with the initial grid IDs restricting the matching range, the points of P1 and P2' are matched to obtain a set of point pairs, from which a new rotation-translation matrix is obtained; P2' is transformed with the new rotation-translation matrix to obtain P2'', and the new point-to-grid-ID relation is obtained; the sum of squared distances from P2' to P2'' is computed, and either the absolute difference of two successive sums or the new point-to-grid-ID relation serves as the convergence criterion; the iteration continues until the absolute difference is below a threshold or the grid-ID relation no longer changes, yielding the target rotation-translation matrix, with which the coordinate system of the three-dimensional point cloud data in P2 is transformed into the reference frame, completing the registration of the two groups of point cloud data.
  6. The binocular vision-based reconstruction method of claim 1 or 2, characterized in that the point cloud data fusion is implemented as follows:
    the coordinate space of the three-dimensional point cloud data is divided into N equal grid cells, which are then traversed; if a grid cell contains no points, nothing is done; if a grid cell contains a single point, the RGB value of that point is used as the RGB value of the cell; if a grid cell contains two or more points, the average RGB of all points in the cell is used as the RGB value of the cell, completing the point cloud data fusion.
CN201711248693.6A 2017-11-30 2017-11-30 Binocular vision-based reconstruction method Expired - Fee Related CN107945220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711248693.6A CN107945220B (en) 2017-11-30 2017-11-30 Binocular vision-based reconstruction method


Publications (2)

Publication Number Publication Date
CN107945220A true CN107945220A (en) 2018-04-20
CN107945220B CN107945220B (en) 2020-07-10

Family

ID=61948269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711248693.6A Expired - Fee Related CN107945220B (en) 2017-11-30 2017-11-30 Binocular vision-based reconstruction method

Country Status (1)

Country Link
CN (1) CN107945220B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101645170A (en) * 2009-09-03 2010-02-10 北京信息科技大学 Precise registration method of multilook point cloud
CN102435172A (en) * 2011-09-02 2012-05-02 北京邮电大学 Visual locating system of spherical robot and visual locating method thereof
CN103868460A (en) * 2014-03-13 2014-06-18 桂林电子科技大学 Parallax optimization algorithm-based binocular stereo vision automatic measurement method
CN105654483A (en) * 2015-12-30 2016-06-08 四川川大智胜软件股份有限公司 Three-dimensional point cloud full-automatic registration method
CN106679648A (en) * 2016-12-08 2017-05-17 东南大学 Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 A kind of three dimensional point cloud autoegistration method
CN106525003A (en) * 2016-12-16 2017-03-22 深圳市未来感知科技有限公司 Method for measuring attitude on basis of binocular vision
CN106803267A (en) * 2017-01-10 2017-06-06 西安电子科技大学 Indoor scene three-dimensional rebuilding method based on Kinect
CN106920276A (en) * 2017-02-23 2017-07-04 华中科技大学 A kind of three-dimensional rebuilding method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
夏凌楠 et al., "Robot localization based on inertial sensors and visual odometry", Chinese Journal of Scientific Instrument *
安雁艳, "Research on 3D image stitching algorithms", China Masters' Theses Full-text Database, Information Science and Technology *
黄欢欢 et al., "Automatic point cloud registration with rotation-invariant feature descriptors", Journal of Heilongjiang University of Science and Technology *

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665499A (en) * 2018-05-04 2018-10-16 北京航空航天大学 A kind of low coverage aircraft pose measuring method based on parallax method
CN108955685A (en) * 2018-05-04 2018-12-07 北京航空航天大学 A kind of tanker aircraft tapered sleeve pose measuring method based on stereoscopic vision
CN108665499B (en) * 2018-05-04 2021-08-10 北京航空航天大学 Near distance airplane pose measuring method based on parallax method
CN108955685B (en) * 2018-05-04 2021-11-26 北京航空航天大学 Refueling aircraft taper sleeve pose measuring method based on stereoscopic vision
CN109084785A (en) * 2018-07-25 2018-12-25 吉林大学 More vehicle co-locateds and map constructing method, device, equipment and storage medium
CN109247914A (en) * 2018-08-29 2019-01-22 百度在线网络技术(北京)有限公司 Illness data capture method and device
CN109345587A (en) * 2018-09-01 2019-02-15 哈尔滨工程大学 A kind of mixing vision positioning method based on panorama and monocular vision
CN109345587B (en) * 2018-09-01 2022-02-22 哈尔滨工程大学 Hybrid vision positioning method based on panoramic vision and monocular vision
CN109509226A (en) * 2018-11-27 2019-03-22 广东工业大学 Three dimensional point cloud method for registering, device, equipment and readable storage medium storing program for executing
CN109509226B (en) * 2018-11-27 2023-03-28 广东工业大学 Three-dimensional point cloud data registration method, device and equipment and readable storage medium
WO2020113423A1 (en) * 2018-12-04 2020-06-11 深圳市大疆创新科技有限公司 Target scene three-dimensional reconstruction method and system, and unmanned aerial vehicle
CN109727277A (en) * 2018-12-28 2019-05-07 江苏瑞尔医疗科技有限公司 The body surface of multi-view stereo vision puts position tracking
CN109813278A (en) * 2019-02-26 2019-05-28 北京中科慧眼科技有限公司 Ranging model modification method, distance measuring method, device and automated driving system
CN109813278B (en) * 2019-02-26 2021-09-17 北京中科慧眼科技有限公司 Ranging model correction method, ranging method and device and automatic driving system
CN109949372A (en) * 2019-03-18 2019-06-28 北京智行者科技有限公司 A kind of laser radar and vision combined calibrating method
EP3716103A3 (en) * 2019-03-29 2020-12-02 Ricoh Company, Ltd. Method and apparatus for determining transformation matrix, and non-transitory computer-readable recording medium
CN110009675A (en) * 2019-04-03 2019-07-12 北京市商汤科技开发有限公司 Generate method, apparatus, medium and the equipment of disparity map
CN110176032A (en) * 2019-04-28 2019-08-27 暗物智能科技(广州)有限公司 A kind of three-dimensional rebuilding method and device
CN110176032B (en) * 2019-04-28 2021-02-26 暗物智能科技(广州)有限公司 Three-dimensional reconstruction method and device
CN110097623A (en) * 2019-04-30 2019-08-06 北京控制工程研究所 A kind of non-homogeneous image data information method for amalgamation processing and system
CN110160524A (en) * 2019-05-23 2019-08-23 深圳市道通智能航空技术有限公司 A kind of the sensing data acquisition methods and device of inertial navigation system
CN110310331B (en) * 2019-06-18 2023-04-14 哈尔滨工程大学 Pose estimation method based on combination of linear features and point cloud features
CN110310331A (en) * 2019-06-18 2019-10-08 哈尔滨工程大学 A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature
CN110609570A (en) * 2019-07-23 2019-12-24 中国南方电网有限责任公司超高压输电公司天生桥局 Autonomous obstacle avoidance inspection method based on unmanned aerial vehicle
CN110599586A (en) * 2019-08-06 2019-12-20 湖北亿咖通科技有限公司 Semi-dense scene reconstruction method and device, electronic equipment and storage medium
CN110706269B (en) * 2019-08-30 2021-03-19 武汉斌果科技有限公司 Binocular vision SLAM-based dynamic scene dense modeling method
CN110706269A (en) * 2019-08-30 2020-01-17 武汉斌果科技有限公司 Binocular vision SLAM-based dynamic scene dense modeling method
CN110675418B (en) * 2019-09-26 2023-04-18 深圳市唯特视科技有限公司 Target track optimization method based on DS evidence theory
CN110675418A (en) * 2019-09-26 2020-01-10 深圳市唯特视科技有限公司 Target track optimization method based on DS evidence theory
CN110782412B (en) * 2019-10-28 2022-01-28 深圳市商汤科技有限公司 Image processing method and device, processor, electronic device and storage medium
CN110782412A (en) * 2019-10-28 2020-02-11 深圳市商汤科技有限公司 Image processing method and device, processor, electronic device and storage medium
CN111127436B (en) * 2019-12-25 2023-10-20 北京深测科技有限公司 Displacement detection early warning method for bridge
CN111127436A (en) * 2019-12-25 2020-05-08 北京深测科技有限公司 Displacement detection early warning method for bridge
CN112002016A (en) * 2020-08-28 2020-11-27 中国科学院自动化研究所 Continuous curved surface reconstruction method, system and device based on binocular vision
CN112002016B (en) * 2020-08-28 2024-01-26 中国科学院自动化研究所 Continuous curved surface reconstruction method, system and device based on binocular vision
CN113055598A (en) * 2021-03-25 2021-06-29 浙江商汤科技开发有限公司 Orientation data compensation method and device, electronic equipment and readable storage medium
CN113222891A (en) * 2021-04-01 2021-08-06 东方电气集团东方锅炉股份有限公司 Line laser-based binocular vision three-dimensional measurement method for rotating object
CN113222891B (en) * 2021-04-01 2023-12-22 东方电气集团东方锅炉股份有限公司 Line laser-based binocular vision three-dimensional measurement method for rotating object
CN113393382B (en) * 2021-08-16 2021-11-09 四川省人工智能研究院(宜宾) Binocular picture super-resolution reconstruction method based on multi-dimensional parallax prior
CN113393382A (en) * 2021-08-16 2021-09-14 四川省人工智能研究院(宜宾) Binocular picture super-resolution reconstruction method based on multi-dimensional parallax prior
CN113902781A (en) * 2021-10-18 2022-01-07 深圳追一科技有限公司 Three-dimensional face reconstruction method, device, equipment and medium
CN113674275A (en) * 2021-10-21 2021-11-19 北京中科慧眼科技有限公司 Dense disparity map-based road surface unevenness detection method and system and intelligent terminal
CN113706622A (en) * 2021-10-29 2021-11-26 北京中科慧眼科技有限公司 Road surface fitting method and system based on binocular stereo vision and intelligent terminal
CN113936043B (en) * 2021-12-20 2022-03-18 北京中科慧眼科技有限公司 Binocular stereo vision based motion estimation method, electronic device, vehicle, and medium
CN113936043A (en) * 2021-12-20 2022-01-14 北京中科慧眼科技有限公司 Binocular stereo vision based motion estimation method, electronic device, vehicle, and medium
WO2024045632A1 (en) * 2022-08-31 2024-03-07 华南理工大学 Binocular vision and imu-based underwater scene three-dimensional reconstruction method, and device
CN115690219A (en) * 2023-01-03 2023-02-03 山东矩阵软件工程股份有限公司 Method and system for detecting three-dimensional information of running train in complex environment

Also Published As

Publication number Publication date
CN107945220B (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN107945220A (en) A kind of method for reconstructing based on binocular vision
CN111126269B (en) Three-dimensional target detection method, device and storage medium
US20170310892A1 (en) Method of 3d panoramic mosaicing of a scene
CN111563921B (en) Underwater point cloud acquisition method based on binocular camera
US9171225B2 (en) Device, method, and recording medium for detecting and removing mistracked points in visual odometry systems
US20220292711A1 (en) Pose estimation method and device, related equipment and storage medium
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
CN106920276B Three-dimensional reconstruction method and system
CN106780729A Batch three-dimensional reconstruction method for UAV image sequences
CN104363438B Panoramic stereo video production method
WO2019164498A1 (en) Methods, devices and computer program products for global bundle adjustment of 3d images
CN103106688A Indoor three-dimensional scene reconstruction method based on a two-layer rectification method
Oniga et al. Curb detection for driving assistance systems: A cubic spline-based approach
Heng et al. Real-time photo-realistic 3d mapping for micro aerial vehicles
CN111274943A (en) Detection method, detection device, electronic equipment and storage medium
CN111612728A (en) 3D point cloud densification method and device based on binocular RGB image
JP2023505891A (en) Methods for measuring environmental topography
CN108917753A Aircraft position determination method based on structure from motion
CN109115184A Cooperative measurement method and system based on a non-cooperative target
CN106683163A (en) Imaging method and system used in video monitoring
CN116128966A (en) Semantic positioning method based on environmental object
CN113034571A (en) Object three-dimensional size measuring method based on vision-inertia
Buck et al. Capturing uncertainty in monocular depth estimation: Towards fuzzy voxel maps
Barth et al. Vehicle tracking at urban intersections using dense stereo
Gomez-Balderas et al. Vision-based autonomous hovering for a miniature quad-rotor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200710

Termination date: 20201130