CN105184857A - Scale factor determination method in monocular vision reconstruction based on dot structured-light ranging - Google Patents


Info

Publication number: CN105184857A (application CN201510580648.5A; granted as CN105184857B)
Authority: CN (China)
Prior art keywords: point, coordinate, dimensional, image, laser
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN105184857B (en)
Inventors: 李秀智, 秦宝岭, 贾松敏, 杨爱林
Current and original assignee: Beijing University of Technology
Application filed by Beijing University of Technology; publication of CN105184857A; application granted; publication of CN105184857B.

Landscapes

  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a method for determining the scale factor in monocular vision reconstruction based on dot structured-light ranging. The method comprises: locating the spot centroid, fitting the spatial line of the laser beam with RANSAC outlier rejection, computing the three-dimensional spatial coordinates of the spot, and computing the scale factor. Whereas most traditional monocular three-dimensional reconstruction methods based on image sequences can only achieve reconstruction up to a projective or affine scale, the invention provides a Euclidean three-dimensional reconstruction method in which dot structured light assists monocular vision, so that the scale of the scene reconstructed from the image sequence is consistent with that of the real-world scene. The method has the following technical characteristics: (1) structured-light active vision is introduced into monocular reconstruction to achieve Euclidean reconstruction; (2) the spot is located by the centroid method; (3) a RANSAC rejection mechanism is added to the fitting of the laser-ray equation; (4) the spatial point of the spot is located by back-projection optimization; (5) the Euclidean reconstruction is not restricted to any particular monocular reconstruction method.

Description

Scale factor determination method in monocular vision reconstruction based on structured-light ranging
Technical field
The invention belongs to the field of computer vision and relates to a method that uses structured light to realize Euclidean three-dimensional reconstruction based on an image sequence.
Background technology
Vision is the most important means of human perception: nearly 80% of the information people receive from the outside world comes through the eyes. Precisely because vision matters so much to humans, and with the rapid development of digital computers, enabling computers to see and to process visual information has become a very attractive research topic. This has led to the emergence and development of the discipline of computer vision.
A very important part of computer vision research is the acquisition and analysis of dynamic image sequences to extract useful information. Dynamic images capture moving objects or scenes; they are functions not only of spatial position but also of time, and therefore provide richer information than a single image. If, in a sequence of images of a scene, the gray level or color of at least some pixels changes between adjacent frames, the sequence is called a dynamic image sequence.
With the rapid development of computer vision research in recent years, recovering the three-dimensional structure of the real world from the two-dimensional information in image sequences has become one of the hot and important research directions, and researchers at home and abroad have proposed a number of fairly effective solutions. Among these, monocular three-dimensional reconstruction based on image sequences has become the principal class of methods, owing to its few reconstruction constraints, the small amount of prior information required, and its suitability for large-scale scene reconstruction. An important theoretical model in multi-view geometry is the stratification (hierarchy) theory, which defines a hierarchy of levels from the real scene to its reconstructed model; the transformations involved are mainly projective, affine, similarity, and Euclidean transformations. However, most traditional monocular reconstruction methods based on image sequences can only achieve reconstruction up to a projective or affine scale: the reconstructed result differs from the real-world scene by a scale factor, which limits its practical application. In view of this, the present invention proposes a Euclidean three-dimensional reconstruction method in which structured light assists monocular vision, so that the scale of the scene reconstructed from the image sequence is consistent with the real-world scene, while also improving the robustness and accuracy of the three-dimensional reconstruction algorithm.
According to the beam pattern projected by the optical projector, structured-light modes can be divided into dot, line, multi-line, and grid patterns. A three-dimensional scene obtained by monocular vision alone cannot achieve a true Euclidean reconstruction, so the present invention combines dot structured light with a monocular camera. The beam emitted by the laser projects a light spot onto the object surface, and the spot forms a two-dimensional image point on the camera's image plane through the lens. The camera's line of sight and the laser beam intersect at the spot in space; this intersection uniquely determines the spatial position of the spot in a known world coordinate system, from which the spatial scale factor can be obtained and the Euclidean reconstruction effect achieved.
The present invention uses dot structured light to assist monocular stereo vision in achieving true-scale Euclidean reconstruction of a real scene; the schematic is shown in Figure 1. First, the calibration target (an 8 × 11 black-and-white checkerboard) and the laser are fixed, the target is placed within the camera's field of view, and a picture is taken (target position 1 in the figure); the laser is then switched on so that the beam falls on the target and another picture is taken (target position 2). Moving the target and repeating these operations yields multiple images (target positions 3, 4, 5, and so on), and the input images are pre-processed. Next, frame differencing is used to extract the laser spot, and the centroid method is used to obtain the coordinates of the spot centroid in the image. From the homography between the target and the image, the three-dimensional coordinates of all spot points in the camera coordinate system are obtained, and the ray equation l_1 of the laser beam is fitted from these spatial points. Finally, the intersection of the ray l_2, which passes from the camera's optical center through the spot centroid, with the fitted laser ray l_1 gives the true three-dimensional position of the spot (because of errors, l_1 and l_2 are in practice skew lines, so the midpoint of their common perpendicular is used), from which the scale factor is obtained.
Summary of the invention
Starting from dot structured light, the present invention proposes a monocular Euclidean three-dimensional reconstruction method assisted by structured light, comprising spot-centroid localization, spatial line fitting with RANSAC outlier rejection, computation of the three-dimensional spatial coordinates of the spot, and computation of the scale factor.
The technical solution adopted by the present invention is a scale factor determination method in monocular vision reconstruction based on structured-light ranging; the overall flow of the scale factor computation is shown in Figure 2. The whole calibration is unified in the camera coordinate system. The camera intrinsics are calibrated with the camera calibration method based on a 2D planar target; when calibrating the structured-light system parameters, the relative position of the camera and the laser is kept fixed. The laser spot is then projected onto the plane of the checkerboard target and an image is taken; the laser is switched off and another image is taken. Repeating this step for arbitrary target placements yields a group of data images, and background subtraction gives the spot information corresponding to each target position, as shown in Figure 3. The corner information on the target gives the extrinsic parameter matrix of the target plane for each placement; combined with the centroid coordinates of the laser spot in each calibration image, this yields the spatial coordinates of the spot centroid for each placement. A Levenberg-Marquardt fit of the centroid coordinates collected over the placements then gives the spatial equation of the laser beam. The spatial coordinates of the laser spot on the image plane are obtained from the calibrated sensor intrinsics. In theory, the line through the image point and the camera's optical center should coincide exactly with the laser beam at the spot; in practice, calibration and computation errors make the two lines skew, so the midpoint of their common perpendicular is taken as the actual spatial coordinate of the laser spot. The true scale factor is then the ratio of the spot's actual spatial coordinates, obtained from the structured-light calibration, to the spot's spatial coordinates obtained by the three-dimensional reconstruction algorithm, thereby achieving the Euclidean reconstruction effect.
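The per-placement spot coordinate described above amounts to a ray-plane intersection: given the camera intrinsics and the target-plane extrinsics recovered from the checkerboard corners, the camera ray through the spot centroid is intersected with the target plane. A minimal NumPy sketch under those assumptions (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def spot_on_target(uv, K, R, t):
    """Camera-frame 3-D coordinate of the laser spot on the target plane.

    K: 3x3 camera intrinsics; (R, t): target extrinsics recovered from
    the checkerboard corners (e.g. via a PnP solver); uv: spot-centroid
    pixel coordinates. The viewing ray through uv is intersected with
    the target plane (Z = 0 in the target frame).
    """
    uv1 = np.array([uv[0], uv[1], 1.0])
    ray = np.linalg.inv(K) @ uv1        # viewing-ray direction, camera frame
    n = R[:, 2]                         # target-plane normal in camera frame
    # the plane passes through t:  n . (s * ray - t) = 0
    s = (n @ t) / (n @ ray)
    return s * ray
```

Collecting this point over many target placements gives the 3-D centroid samples that the Levenberg-Marquardt line fit consumes.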
(1) Spot localization by the centroid method
Spot images are common in image processing, and the spot centroid is one of their key features. In fields such as vision measurement and satellite navigation, fast and accurate localization of the spot center is an important research topic at home and abroad.
Center-localization methods for dot-shaped laser spots fall into two broad classes: gray-level based and edge based. Gray-level methods use the gray-level distribution of the target and suit spots of smaller radius with uniform intensity distribution; edge methods use the edge-shape information of the target and suit spots of larger radius. Small spots are therefore usually localized with gray-level methods, of which three are common: the centroid method, the Hessian-matrix method, and Gaussian fitting. The centroid method is simple to implement, fast, and offers adequate positioning accuracy, so it is adopted here; the concrete implementation is elaborated below.
(2) Laser-ray equation fitting with a RANSAC rejection mechanism
The steps above yield many groups of spot-centroid coordinates, from which the spatial laser-ray equation is fitted; the present invention uses the spatial line fitting function in OpenCV, which gives the ray equation accurately and conveniently. However, because of errors in steps such as the centroid computation, some spot-centroid coordinates may carry large errors, causing the fitted line equation to deviate considerably from the true one. To make the fitted spatial line equation as accurate as possible, the present invention uses RANSAC rejection to discard points with large errors; the result is shown in Figure 4 and is described in detail in the embodiment.
(3) Spot spatial-point localization with back-projection optimization
After the laser-ray equation is obtained by the steps above, the camera's line of sight and the laser beam should intersect at the spot in space. However, because of measurement errors in the image-plane coordinates, noise, camera distortion, and other factors, the two lines do not intersect exactly, so the intersection problem becomes that of finding the midpoint of the common perpendicular segment of two skew lines. The present invention takes the spatial point obtained by the skew-line common-perpendicular midpoint algorithm as the three-dimensional point corresponding to the image spot, and further proposes an iterative back-projection procedure to optimize the computed point; the concrete implementation is elaborated below.
Compared with the prior art, the present invention has the following beneficial effects.
(1) Introducing structured-light active vision into monocular reconstruction to achieve Euclidean reconstruction
The present invention is not a traditional structured-light reconstruction; rather, it introduces a true scale factor through structured light on top of a monocular-vision three-dimensional reconstruction, thereby achieving true-scale Euclidean reconstruction. First, a fast and accurate monocular reconstruction algorithm based on optical-flow feedback builds the three-dimensional model of the environment. However, the scene reconstructed from the image sequence and the real scene differ by a true scale factor, so the Euclidean reconstruction effect is not yet achieved. The true scale factor is therefore obtained as the ratio of the spot's actual spatial coordinates from the structured-light calibration, denoted [X, Y, Z], to the spot's spatial coordinates from the reconstruction algorithm, denoted [X_0, Y_0, Z_0]; that is, λ = [X, Y, Z]/[X_0, Y_0, Z_0]. With λ, the Euclidean three-dimensional reconstruction of monocular vision is realized.
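The ratio λ = [X, Y, Z]/[X_0, Y_0, Z_0] can be read as a single scalar fitted over the three components; the least-squares reading below is an assumption, not stated in the patent, and the function name is illustrative:

```python
import numpy as np

def scale_factor(p_metric, p_recon):
    """Scale factor lambda between the structured-light coordinates
    [X, Y, Z] and the reconstructed coordinates [X0, Y0, Z0].

    Returns the scalar l minimizing ||p_metric - l * p_recon||,
    which for noise-free data equals the component-wise ratio.
    """
    a = np.asarray(p_metric, dtype=float)
    b = np.asarray(p_recon, dtype=float)
    return float((a @ b) / (b @ b))

print(scale_factor([2.0, 4.0, 6.0], [1.0, 2.0, 3.0]))  # -> 2.0
```

With several spot correspondences, the same least-squares scalar can be computed over the stacked coordinates to average out noise.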
(2) Euclidean reconstruction unrestricted by the choice of monocular reconstruction method
Scene reconstruction from multi-view images has long been a research hotspot in computer vision: multi-view stereo reconstruction recovers a real three-dimensional scene from a series of images of the scene taken at different positions. The mainstream dense reconstruction methods today include voxel-based methods, methods based on polygonal-mesh deformation, methods based on multi-view depth maps, and patch-expansion methods. However, the results of all these methods differ from the real scene by a scale factor and cannot achieve Euclidean reconstruction of the real scene. The monocular Euclidean reconstruction method based on structured light proposed by the present invention is not restricted by the reconstruction method: as long as the real spatial coordinates of the spot projected onto the target are determined from the structured light, and the correspondence between the image spot coordinates and the real spatial point coordinates is established, the scale factor can be obtained. Three-dimensional reconstruction with any other method is therefore unaffected, and Euclidean three-dimensional reconstruction can always be achieved.
Accompanying drawing explanation
Fig. 1: schematic of the laser-ray computation.
Fig. 2: overall flow chart.
Fig. 3: spot-centroid computation.
Fig. 4: line fitting with RANSAC rejection.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
The present invention specifically comprises the following steps.
(1) Centroid localization of the dot-shaped laser spot
First, the image is pre-processed by filtering or threshold selection, and the image spot is then localized. After all image pixels have been processed, the accumulated centroid parameters of each spot give its row-column centroid coordinates by the following first-moment centroid formula:
$$X_c = \frac{\sum_{y=1}^{m}\sum_{x=1}^{n} I(x,y)\times x}{\sum_{y=1}^{m}\sum_{x=1}^{n} I(x,y)}, \qquad Y_c = \frac{\sum_{y=1}^{m}\sum_{x=1}^{n} I(x,y)\times y}{\sum_{y=1}^{m}\sum_{x=1}^{n} I(x,y)}$$
In the formula above, I(x, y) is the gray value of the input image pixel, x and y are the column and row coordinates of that pixel, and X_c, Y_c are the column and row coordinates of the spot centroid.
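The first-moment formula above can be sketched directly in NumPy; this is a hypothetical helper that assumes the spot has already been segmented from the background (e.g. by the frame differencing described earlier):

```python
import numpy as np

def spot_centroid(gray):
    """First-moment (intensity-weighted) centroid of a spot image.

    gray: 2-D array of gray values I(x, y); returns (X_c, Y_c) as the
    column and row coordinates of the centroid, matching the formula.
    """
    gray = np.asarray(gray, dtype=float)
    total = gray.sum()
    if total == 0:
        raise ValueError("no intensity in the spot image")
    rows, cols = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    x_c = (gray * cols).sum() / total
    y_c = (gray * rows).sum() / total
    return x_c, y_c

# all intensity concentrated at column 1, row 1 of a 3x3 patch
patch = np.zeros((3, 3)); patch[1, 1] = 9.0
x_c, y_c = spot_centroid(patch)   # both equal 1.0
```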
(2) Laser-ray equation fitting with a RANSAC rejection mechanism
RANSAC achieves its goal by repeatedly selecting a random subset of the data. The selected subset is hypothesized to consist of inliers and is verified by the following procedure; the model is described as follows:
1) From all spot centroids I, choose n points as inliers and estimate the spatial line equation L: Ax + By + Cz + D = 0 from them by least squares; that is, all unknown parameters (A, B, C, D) of the equation are computed from the hypothesized inliers.
2) Test all remaining spot centroids (there are I − n of them) against the model from step 1). A spot centroid with coordinates (x′, y′, z′) fits the estimated spatial line model L, and is also considered an inlier, if |Ax′ + By′ + Cz′ + D| < σ, where σ is a preset threshold.
3) If enough spot centroids are classified as inliers of the hypothesis, i.e. there are enough spot centroids (say m points) satisfying |Ax′ + By′ + Cz′ + D| < σ, then the estimated model L is considered reasonable.
4) The model is then re-estimated from the m points that satisfy the condition, since it was originally estimated only from the initial hypothesized inliers.
5) Finally, the model is evaluated by the error of the inliers with respect to it.
This process is repeated a fixed number of times t; each generated model is either rejected because it has too few inliers or selected because it is better than the existing model.
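Steps 1)-5) can be sketched as follows. The patent fits the line with OpenCV's spatial line fitting plus RANSAC; the sketch below uses a point-direction parameterization and the perpendicular point-to-line distance as the rejection test, an assumed equivalent of the |Ax′ + By′ + Cz′ + D| < σ criterion (all names are illustrative):

```python
import numpy as np

def ransac_line_3d(points, sigma=0.01, iters=200, seed=0):
    """RANSAC fit of a 3-D line to spot-centroid points.

    Returns (p0, d, inliers): a point on the line, its unit direction,
    and the boolean inlier mask. Inlier test: perpendicular distance
    to the candidate line below sigma.
    """
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        d /= norm
        # perpendicular distance of every point to the candidate line
        diff = pts - pts[i]
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < sigma
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # re-estimate from all inliers by least squares (SVD), as in step 4)
    inl = pts[best_inliers]
    p0 = inl.mean(axis=0)
    _, _, vt = np.linalg.svd(inl - p0)
    return p0, vt[0], best_inliers
```

The final SVD re-estimate plays the role of step 4): the principal direction of the centered inlier cloud is the least-squares line direction.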
(3) Computing the three-dimensional spatial coordinates of the spot
Because of error sources, the ray from the camera's optical center through the spot centroid and the ray emitted by the laser do not in general intersect exactly. The skew-line common-perpendicular midpoint algorithm is therefore adopted: in practice, the midpoint of the common perpendicular segment between the two lines is used as the spatial coordinate of the laser spot, which gives the three-dimensional spatial point corresponding to the image coordinates of the spot centroid. The concrete mathematical model is as follows. Let the ray O_1P_1 from the camera's optical center have direction vector V_1, and let the fitted laser ray O_2P_2 in the camera coordinate system have direction vector V_2; then the direction vector of their common perpendicular is V = V_1 × V_2. If the common perpendicular meets O_1P_1 and O_2P_2 at M_1(x_1, y_1, z_1) and M_2(x_2, y_2, z_2) respectively, the three-dimensional coordinate of the laser spot is M(x, y, z) with x = (x_1 + x_2)/2, y = (y_1 + y_2)/2, z = (z_1 + z_2)/2.
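The common-perpendicular midpoint M can be computed in closed form; a sketch under the notation above (O_1, V_1 for the camera ray, O_2, V_2 for the fitted laser line; the function name is illustrative):

```python
import numpy as np

def skew_midpoint(o1, v1, o2, v2):
    """Midpoint of the common perpendicular between two skew lines.

    Line 1: o1 + s*v1 (camera ray through the spot centroid);
    line 2: o2 + t*v2 (fitted laser beam). Solves for the closest
    points M1, M2 and returns their midpoint M, as in the text.
    """
    o1, v1, o2, v2 = (np.asarray(a, dtype=float) for a in (o1, v1, o2, v2))
    w = o1 - o2
    a, b, c = v1 @ v1, v1 @ v2, v2 @ v2
    d, e = v1 @ w, v2 @ w
    denom = a * c - b * b          # zero only for parallel lines
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    m1 = o1 + s * v1               # closest point on the camera ray
    m2 = o2 + t * v2               # closest point on the laser line
    return (m1 + m2) / 2.0

# x-axis vs. a line parallel to the y-axis through (0, 0, 2):
# closest points are (0,0,0) and (0,0,2), so the midpoint is (0,0,1)
m = skew_midpoint([0, 0, 0], [1, 0, 0], [0, 0, 2], [0, 1, 0])
```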
An iterative back-projection procedure is further proposed here to optimize the computed spatial point. The concrete method is as follows: first, the spatial point [x, y, z] corresponding to the image coordinates [u, v] of the spot centroid is obtained by the skew-line common-perpendicular midpoint algorithm; this spatial point is then back-projected to the image, i.e., the intersection [u′, v′] of the line through the spatial point and the camera's optical center with the image plane is computed. The midpoint [(u + u′)/2, (v + v′)/2] of the original centroid and its back-projection is taken as the new spot centroid, and the spatial point is recomputed by the common-perpendicular midpoint algorithm. This process is repeated, each time computing the distance δ between the spatial point M(x*, y*, z*) and the fitted laser line l (with line equation Ax + By + Cz + D = 0), using the formula δ = |Ax* + By* + Cz* + D| / √(A² + B² + C²); when δ falls below a set threshold, the iteration terminates.
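The stopping quantity δ, the distance from the current estimate M(x*, y*, z*) to the fitted laser line, can be sketched as follows; the point-direction form of the line used here is an assumed equivalent of the equation-based formula in the text:

```python
import numpy as np

def point_line_distance(p, p0, d):
    """Perpendicular distance delta from point p to the line p0 + t*d.

    Used as the convergence test of the back-projection iteration:
    stop once delta falls below a set threshold.
    """
    p, p0, d = (np.asarray(a, dtype=float) for a in (p, p0, d))
    d = d / np.linalg.norm(d)
    diff = p - p0
    return np.linalg.norm(diff - (diff @ d) * d)

# point (3, 4, 0) against the z-axis: distance 5
print(point_line_distance([3, 4, 0], [0, 0, 0], [0, 0, 1]))  # -> 5.0
```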
(4) Realizing Euclidean three-dimensional reconstruction
Dense reconstruction methods include voxel-based methods, methods based on polygonal-mesh deformation, methods based on multi-view depth maps, and patch-expansion methods. The first three are suitable only for reconstructing a single 3D solid and cannot meet the technical needs of robot navigation, virtual reality, and the like. Patch-expansion methods are more accurate and can reconstruct large scenes such as buildings, but the connectivity between patches — that is, the completeness of the reconstruction — is hard to guarantee.
To address these problems, a monocular large-scene reconstruction method driven by scene-flow feedback is adopted here. The concrete steps are as follows. Multi-view images of the target scene are collected with a hand-held, freely moving camera; optical-flow fields between adjacent frames establish pixel correspondences running through the views, and the five-point algorithm is used to solve the Euclidean transformations between views. The central view is chosen as the reference frame, the world coordinate system (O_w-X_wY_wZ_w) is established, and the sparse three-dimensional coordinates of the corresponding image points are solved. An initial mesh of patches is generated on the basis of the sparse reconstruction and fed back to each comparison-frame viewpoint; the feedback error is evaluated quantitatively through the optical-flow field, and the model is deformed according to the deviation at each view. Because the optical-flow field contains the motion-vector-field information of objects in space, optical-flow/scene-flow analysis can effectively correct the initial polygonal mesh. Once the scene adjusted by the optical-flow/scene-flow analysis is obtained, the spatial scale factor obtained in the steps above yields the Euclidean reconstruction of the three-dimensional scene.
The above is only a preferred embodiment of the present invention and is not intended to limit its scope of protection; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (2)

1. A scale factor determination method in monocular vision reconstruction based on structured-light ranging, characterized in that the scale factor is obtained by the following process: the whole calibration is unified in the camera coordinate system; the camera intrinsics are calibrated with the camera calibration method based on a 2D planar target, and when calibrating the structured-light system parameters, the relative position of the camera and the laser is kept fixed; the laser spot is then projected onto the plane of the checkerboard target and an image is taken, the laser is switched off and another image is taken, and repeating this step for arbitrary target placements yields a group of data images, from which background subtraction gives the spot information corresponding to each target position; the corner information on the target gives the extrinsic parameter matrix of the target plane for each placement, and combined with the centroid coordinates of the laser spot in each calibration image this yields the spatial coordinates of the spot centroid for each placement; a Levenberg-Marquardt fit of the centroid coordinates collected over the placements then gives the spatial equation of the laser beam; the spatial coordinates of the laser spot on the image plane are obtained from the calibrated sensor intrinsics; in theory the line through the image point and the camera's optical center should coincide exactly with the laser beam at the spot, but because of calibration and computation errors the two lines are skew, so the midpoint of their common perpendicular is taken as the actual spatial coordinate of the laser spot; the true scale factor is then the ratio of the spot's actual spatial coordinates, obtained from the structured-light calibration, to the spot's spatial coordinates obtained by the three-dimensional reconstruction algorithm, thereby achieving the Euclidean reconstruction effect.
2. The scale factor determination method in monocular vision reconstruction based on structured-light ranging according to claim 1, characterized in that the method specifically comprises the following steps:
(1) Centroid localization of the dot-shaped laser spot
First, the image is pre-processed by filtering or threshold selection, and the image spot is then localized; after all image pixels have been processed, the accumulated centroid parameters of each spot give its row-column centroid coordinates by the following first-moment centroid formula:
$$X_c = \frac{\sum_{y=1}^{m}\sum_{x=1}^{n} I(x,y)\times x}{\sum_{y=1}^{m}\sum_{x=1}^{n} I(x,y)}, \qquad Y_c = \frac{\sum_{y=1}^{m}\sum_{x=1}^{n} I(x,y)\times y}{\sum_{y=1}^{m}\sum_{x=1}^{n} I(x,y)}$$
In the formula above, I(x, y) is the gray value of the input image pixel, x and y are the column and row coordinates of that pixel, and X_c, Y_c are the column and row coordinates of the spot centroid;
(2) Laser-ray equation fitting with a RANSAC rejection mechanism
RANSAC achieves its goal by repeatedly selecting a random subset of the data; the selected subset is hypothesized to consist of inliers and is verified by the following procedure, the model being described as follows:
1) From all spot centroids I, choose n points as inliers and estimate the spatial line equation L: Ax + By + Cz + D = 0 from them by least squares; that is, all unknown parameters (A, B, C, D) of the equation are computed from the hypothesized inliers;
2) Test all remaining spot centroids (there are I − n of them) against the model from step 1); a spot centroid with coordinates (x′, y′, z′) fits the estimated spatial line model L, and is also considered an inlier, if |Ax′ + By′ + Cz′ + D| < σ, where σ is a preset threshold;
3) If enough spot centroids are classified as inliers of the hypothesis, i.e. there are enough spot centroids (say m points) satisfying |Ax′ + By′ + Cz′ + D| < σ, then the estimated model L is considered reasonable;
4) The model is then re-estimated from the m points that satisfy the condition, since it was originally estimated only from the initial hypothesized inliers;
5) Finally, the model is evaluated by the error of the inliers with respect to it;
This process is repeated a fixed number of times t; each generated model is either rejected because it has too few inliers or selected because it is better than the existing model;
(3) Computing the three-dimensional spatial coordinates of the spot
Because of error sources, the ray from the camera's optical center through the spot centroid and the ray emitted by the laser do not in general intersect exactly; the skew-line common-perpendicular midpoint algorithm is therefore adopted, and in practice the midpoint of the common perpendicular segment between the two lines is used as the spatial coordinate of the laser spot, which gives the three-dimensional spatial point corresponding to the image coordinates of the spot centroid; the concrete mathematical model is as follows: let the ray O_1P_1 from the camera's optical center have direction vector V_1, and let the fitted laser ray O_2P_2 in the camera coordinate system have direction vector V_2; then the direction vector of their common perpendicular is V = V_1 × V_2; if the common perpendicular meets O_1P_1 and O_2P_2 at M_1(x_1, y_1, z_1) and M_2(x_2, y_2, z_2) respectively, the three-dimensional coordinate of the laser spot is M(x, y, z) with x = (x_1 + x_2)/2, y = (y_1 + y_2)/2, z = (z_1 + z_2)/2;
An iterative back-projection procedure is further proposed to optimize the computed spatial point, as follows: first, the spatial point [x, y, z] corresponding to the image coordinates [u, v] of the spot centroid is obtained by the skew-line common-perpendicular midpoint algorithm; this spatial point is then back-projected to the image, i.e., the intersection [u′, v′] of the line through the spatial point and the camera's optical center with the image plane is computed; the midpoint [(u + u′)/2, (v + v′)/2] of the original centroid and its back-projection is taken as the new spot centroid, and the spatial point is recomputed by the common-perpendicular midpoint algorithm; this process is repeated, each time computing the distance δ between the spatial point M(x*, y*, z*) and the fitted laser line l (with line equation Ax + By + Cz + D = 0), using the formula δ = |Ax* + By* + Cz* + D| / √(A² + B² + C²); when δ falls below a set threshold, the iteration terminates;
(4) Realization of Euclidean three-dimensional reconstruction
The method adopts a monocular-vision large-scene three-dimensional reconstruction approach driven by scene-flow feedback. The concrete steps are as follows: multi-view images of the target scene are collected with a hand-held, freely moving camera; the optical-flow fields between consecutive frames establish pixel correspondences running through the multiple views, and the five-point algorithm is then used to solve the Euclidean transformations between views. The central view is chosen as the reference frame, the world coordinate system (Ow-XwYwZw) is established there, and the sparse three-dimensional coordinates of the corresponding image points are solved. An initial mesh of patches is generated on the basis of the sparse reconstruction and fed back to each comparison-frame viewpoint; the feedback error is evaluated quantitatively through the optical-flow field, and the deviation of each view image drives the deformation of the model. Because the optical-flow vector field contains the motion vector field of the objects in space, optical-flow/scene-flow analysis can effectively refine the initial polygonal mesh. Once the scene adjusted by this analysis is obtained, applying the spatial scale factor obtained in the steps above yields the Euclidean reconstruction of the three-dimensional scene.
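The final metric upgrade amounts to multiplying the up-to-scale reconstruction by the scale factor. A schematic sketch, assuming the reference camera centre is the origin of the reconstruction (the function and parameter names are ours):

```python
import numpy as np

def euclidean_upgrade(points, cam_centers, spot_metric, spot_up_to_scale):
    """Upgrade an up-to-scale monocular reconstruction to metric scale.

    spot_metric:      laser-spot 3-D position measured by structured-light ranging
    spot_up_to_scale: the same spot in the up-to-scale reconstruction
    (both expressed in the reference frame, camera centre at the origin)
    """
    # scale factor = ratio of metric to reconstructed distance of the spot
    s = np.linalg.norm(spot_metric) / np.linalg.norm(spot_up_to_scale)
    # every scene point and every camera centre is scaled by the same factor
    return s, points * s, cam_centers * s
```

For instance, if the structured-light measurement places the spot 2 m away while the reconstruction places it at unit distance, s = 2 and every point and camera centre is doubled.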
CN201510580648.5A 2015-09-13 2015-09-13 Scale factor determination method in monocular vision reconstruction based on structured light ranging Active CN105184857B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510580648.5A CN105184857B (en) 2015-09-13 2015-09-13 Scale factor determination method in monocular vision reconstruction based on structured light ranging

Publications (2)

Publication Number Publication Date
CN105184857A true CN105184857A (en) 2015-12-23
CN105184857B CN105184857B (en) 2018-05-25

Family

ID=54906907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510580648.5A Active CN105184857B (en) Scale factor determination method in monocular vision reconstruction based on structured light ranging

Country Status (1)

Country Link
CN (1) CN105184857B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678785A (en) * 2016-02-01 2016-06-15 西安交通大学 Method for calibrating posture relation of laser and camera
CN105806318A (en) * 2016-03-09 2016-07-27 大连理工大学 Visual measurement method for space three-dimensional information based on motion time quantity
CN106204535A (en) * 2016-06-24 2016-12-07 天津清研智束科技有限公司 A kind of scaling method of high energy beam spot
CN106441151A (en) * 2016-09-30 2017-02-22 中国科学院光电技术研究所 Three-dimensional object European space reconstruction measurement system based on vision and active optics fusion
CN108010125A (en) * 2017-12-28 2018-05-08 中国科学院西安光学精密机械研究所 True scale three-dimensional reconstruction system and method based on line-structured light and image information
CN108447067A (en) * 2018-03-19 2018-08-24 哈尔滨工业大学 It is a kind of that the visible images sea horizon detection method being fitted with RANSAC is cut out based on energy seam
CN108535097A (en) * 2018-04-20 2018-09-14 大连理工大学 A kind of method of triaxial test sample cylindrical distortion measurement of full field
CN108680182A (en) * 2017-12-01 2018-10-19 深圳市沃特沃德股份有限公司 Measure the method and system of vision sweeping robot odometer penalty coefficient
CN109410325A (en) * 2018-11-01 2019-03-01 中国矿业大学(北京) A kind of pipeline inner wall three-dimensional reconstruction algorithm based on monocular image sequence
WO2019041349A1 (en) * 2017-09-04 2019-03-07 大连理工大学 Three-dimensional visual information measuring method based on rotating lens
CN109816724A (en) * 2018-12-04 2019-05-28 中国科学院自动化研究所 Three-dimensional feature extracting method and device based on machine vision
WO2020063987A1 (en) * 2018-09-30 2020-04-02 先临三维科技股份有限公司 Three-dimensional scanning method and apparatus and storage medium and processor
CN112588621A (en) * 2020-11-30 2021-04-02 山东农业大学 Agricultural product sorting method and system based on visual servo
CN117782030A (en) * 2023-11-24 2024-03-29 北京天数智芯半导体科技有限公司 Distance measurement method and device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070057946A1 (en) * 2003-07-24 2007-03-15 Dan Albeck Method and system for the three-dimensional surface reconstruction of an object
CN101667303A (en) * 2009-09-29 2010-03-10 浙江工业大学 Three-dimensional reconstruction method based on coding structured light
CN102175261A (en) * 2011-01-10 2011-09-07 深圳大学 Visual measuring system based on self-adapting targets and calibrating method thereof
CN103411553A (en) * 2013-08-13 2013-11-27 天津大学 Fast calibration method of multiple line structured light visual sensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANG Jie et al.: "Dot-structured-light dynamic attitude-angle measurement system", Infrared and Laser Engineering (《红外与激光工程》) *

Also Published As

Publication number Publication date
CN105184857B (en) 2018-05-25

Similar Documents

Publication Publication Date Title
CN105184857A (en) Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging
CN110458939B (en) Indoor scene modeling method based on visual angle generation
CN107945268B (en) A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
Kumar et al. Monocular fisheye camera depth estimation using sparse lidar supervision
CN107679537B (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matching
CN111325794A (en) Visual simultaneous localization and map construction method based on depth convolution self-encoder
CN109003325A (en) A kind of method of three-dimensional reconstruction, medium, device and calculate equipment
CN110335337A (en) A method of based on the end-to-end semi-supervised visual odometry for generating confrontation network
CN109816704A (en) The 3 D information obtaining method and device of object
CN108416840A (en) A kind of dense method for reconstructing of three-dimensional scenic based on monocular camera
CN112150575A (en) Scene data acquisition method, model training method, device and computer equipment
CN109671120A (en) A kind of monocular SLAM initial method and system based on wheel type encoder
CN106803267A (en) Indoor scene three-dimensional rebuilding method based on Kinect
CN107240129A (en) Object and indoor small scene based on RGB D camera datas recover and modeling method
CN106780592A (en) Kinect depth reconstruction algorithms based on camera motion and image light and shade
CN106447725B (en) Spatial target posture method of estimation based on the matching of profile point composite character
CN106091984A (en) A kind of three dimensional point cloud acquisition methods based on line laser
CN107170037A (en) A kind of real-time three-dimensional point cloud method for reconstructing and system based on multiple-camera
CN105469389B (en) A kind of grid ball target for vision sensor calibration and corresponding scaling method
CN103400409A (en) 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
CN110246124A (en) Target size measurement method and system based on deep learning
CN105046743A (en) Super-high-resolution three dimensional reconstruction method based on global variation technology
CN113362457B (en) Stereoscopic vision measurement method and system based on speckle structured light
CN111524233A (en) Three-dimensional reconstruction method for dynamic target of static scene
CN106780546A (en) The personal identification method of the motion blur encoded point based on convolutional neural networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant