CN110310331A - Pose estimation method based on linear features combined with point cloud features - Google Patents

Pose estimation method based on linear features combined with point cloud features

Info

Publication number
CN110310331A
CN110310331A (application CN201910526419.3A)
Authority
CN
China
Prior art keywords
straight line
line
point
mark
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910526419.3A
Other languages
Chinese (zh)
Other versions
CN110310331B (en)
Inventor
苏丽
刘钲
张智
夏桂华
秦绪杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201910526419.3A
Publication of CN110310331A
Application granted
Publication of CN110310331B
Status: Active


Classifications

    • G06T 7/13: Image analysis; segmentation; edge detection
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/33: Image registration using feature-based methods
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pose estimation method based on linear features combined with point cloud features, comprising: (1) straight-line extraction fused with prior knowledge; (2) line-segment matching in binocular images; (3) three-dimensional reconstruction of the linear features; (4) pose calculation. The point cloud of the invention is derived from edge features, which have good interference resistance and accurate localization ability; replacing line matching with point cloud matching exploits the robustness of point cloud matching, so that even if a segment changes length or breaks, it can still be matched effectively after being converted into a point cloud. The number of points is limited and the covered region is a finite set of line segments in space, which greatly reduces the point count and increases matching speed; at the same time, the points come from the object edges that contribute most to localization, so positioning accuracy is not noticeably degraded. The line extraction and matching process requires no dense depth-field information, and accuracy is guaranteed for objects of both complex and simple texture.

Description

Pose estimation method based on linear features combined with point cloud features
Technical field
The present invention relates to a pose estimation method, in particular to a pose estimation method based on linear features combined with point cloud features, and belongs to the technical field of image processing.
Background technique
Pose estimation is an important problem in photogrammetry, computer vision, computer graphics, robotics, and related disciplines. It is a key problem to be solved in many engineering practices and theoretical studies, such as visual servo systems, mobile robot navigation and positioning, object recognition and tracking, virtual reality, camera self-calibration, and robot hand-eye calibration.
A simple approach is to extract several corner features of the object directly and compute the pose of the object relative to the vision system from the positions of the feature points in the visual coordinate system and on the object itself. However, the number of corner features on an object is limited, which affects solution accuracy; the solution may even fail because there are too few features. In recent years, pose estimation methods based on point cloud features have become popular. They can localize objects of complex structure and have good robustness, but they require the image acquisition device to obtain high-precision, dense three-dimensional point clouds, involve complex operations such as point cloud segmentation between the measured object and the environment and point cloud stitching, and are time-consuming because of the large number of points.
Some methods localize using the object's linear features. Linear features are more distinct than corner features and more resistant to interference, and such methods are simple, computationally light, and require no measurement of image depth. However, during matching, linear features cannot be located as precisely as point features (particularly evident in stereo vision); when occlusion or illumination causes the extracted line length to change or the line to break, it is difficult to guarantee an accurate reconstruction of the object's three-dimensional pose. Most existing methods focus on the extraction and matching of linear features, and the final positioning accuracy and reliability are not thoroughly discussed.
The present invention combines corner features, linear features, and point cloud matching to design a method that is computationally light, highly reliable, and accepts ordinary image input (no dense depth information). Specifically, based on linear features and a binocular vision system, a three-dimensional line-segment set is reconstructed by matching the left and right views; the line-segment set is then "scattered" into a three-dimensional point set by discrete sampling, the pose of the object is estimated by point cloud matching, and the pose computed beforehand from corner features is substituted as the initial condition of the matching.
Summary of the invention
In view of the above prior art, the technical problem to be solved by the present invention is to provide a pose estimation method based on linear features combined with point cloud features that is computationally light, highly reliable, accepts ordinary image input, and requires no dense depth information.
To solve the above technical problem, the pose estimation method based on linear features combined with point cloud features of the present invention comprises the following steps:
Step 1: straight-line extraction and merging fused with prior knowledge;
Step 2: segment matching in binocular images:
Let the straight-line sets in the left and right images be L^l = {l_1^l, l_2^l, …} and L^r = {l_1^r, l_2^r, …}. Take a straight line l_i^l in the left image and evaluate it in turn, from l_1^r to l_n^r, against each straight line l_j^r in the right image:

Mark_i = f(l_i^l, l_j^r)

where Mark is the score of a pair of matched segments, initially Mark_final = Mark_0 = 100; Mark_i is the score at a given stage; and f denotes the specified scoring conditions, including the line-angle constraint, the horizontal constraint, and the left-right disparity constraint;
Take one straight line from each of the left and right image line sets, l_i^l (l denotes the left image) and l_j^r (r denotes the right image), and substitute them into the line-angle constraint, the horizontal constraint, and the left-right disparity constraint;
Through the above steps the final value of Mark is obtained. Compare Mark_i with Mark_{i-1} and set Mark_final = max(Mark_i, Mark_{i-1}), while recording the indices k_tempL and j_tempR of the corresponding lines in the left and right images. After the k-th line of the left image has been traversed against all lines in the right image, the right-image line numbered j_tempR in turn traverses all lines in the left image, executing the same scoring formula;
The iteration finally yields the maximum Mark value and the corresponding i_tempR. If i_tempL = i_tempR, the left-image line numbered i_tempL and the right-image line numbered j_tempR are matched successfully and stored in the matched-line set M; otherwise the match fails. The above steps are repeated iteratively until the algorithm terminates, finally obtaining the matched-line set M;
Step 3: three-dimensional reconstruction of the linear features
3.1: solving for the space line
Let the image-line equation be ax + by + c = 0. By the central projection principle, a straight line on the image corresponds to a plane in space; let the space-plane equation be:
AX+BY+CZ+D=0
From the central imaging model of a point:

λ·[x, y, 1]^T = M·[X, Y, Z, 1]^T, M = [m_11 m_12 m_13 m_14; m_21 m_22 m_23 m_24; m_31 m_32 m_33 m_34]

where m_11 through m_34 are the entries of the camera projection matrix formed by the product of the intrinsic matrix and the extrinsic parameters. Substituting into ax + by + c = 0 and comparing coefficients gives:

A = a·m_11 + b·m_21 + c·m_31
B = a·m_12 + b·m_22 + c·m_32
C = a·m_13 + b·m_23 + c·m_33
D = a·m_14 + b·m_24 + c·m_34
Solving simultaneously the two coplanarity equations determined by a matched pair of image lines, their intersection gives the space line corresponding to the image lines. Let the space line L_EF have corresponding image lines l_ef^l and l_ef^r; the equation of the space line is the simultaneous system:

A_l·X + B_l·Y + C_l·Z + D_l = 0
A_r·X + B_r·Y + C_r·Z + D_r = 0
Using the parametric form of the space line, substitute an arbitrary x = x_0 into the above equations to solve for the coordinates (X_0, Y_0, Z_0) of a point on the space line;
The normal vectors of the two planes, n_1 = (A_l, B_l, C_l) and n_2 = (A_r, B_r, C_r), are known, so the direction vector of the line is s = (s_x, s_y, s_z) = n_1 × n_2;
The equation of the space line is:

(X - X_0)/s_x = (Y - Y_0)/s_y = (Z - Z_0)/s_z
3.2: solving for the space line segment
The optical center O_l of the left view and the two endpoints of the segment e_l f_l projected on the left imaging plane form two space lines, O_l e_l and O_l f_l. Compute the intersections of the space line l_EF with O_l e_l and with O_l f_l (if two lines are skew, take the point at the shortest distance between them), denoted E_left and F_left respectively. The intersections E_right and F_right are found in the right view in the same way. To preserve as much of the line information as possible, among the resulting four points {E_left, E_right, F_left, F_right}, the two points spanning the greatest length are taken as the start and end of the restored segment, realizing the restoration of the space segment EF. Computing each space line in turn yields the set of space segments N = {L_1, L_2, …, L_n}.
Step 4: pose calculation
4.1 Select the left image as the two-dimensional plane for coarse matching: from step 3, obtain the segment set, compute the pairwise intersections between segments, and collect the intersections into M_pnp = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)};
The measured three-dimensional points in the established real coordinate system are stored in the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), …, (X_n, Y_n, Z_n)}. Arrange all points of M_pnp in order so that their ordering is consistent with that of the corresponding points in G_3dpot, and substitute them into the camera imaging model:

λ·[x_i, y_i, 1]^T = K·[R | T]·[X_i, Y_i, Z_i, 1]^T, i ∈ [1, n]

where K is the known camera intrinsic matrix and λ is the scale factor of the imaging model. Substituting the points of M_pnp and G_3dpot into the formula and applying the EPnP algorithm finally yields the coarse-matching rotation R_pnp and translation T_pnp;
4.2 Obtain the point cloud: split and sample the space lines according to a set threshold, finally obtaining point cloud data based on the linear features. For a given line in space, θ is the angle of the line with the positive x-axis, len is the length of the line, and k is the number of point-cloud points on the line; the discrete point set of the line is P_i = {p_1, p_2, …, p_k};
Thus the three-dimensional lines obtained in step 3 are discretized in turn by the above algorithm into the point set P; the manually established template library is treated in the same way to form the point set Q;
4.3 Take the result of 4.1 as the initial rotation matrix and translation vector, i.e., R = R_pnp, T = T_pnp. Update the data point set: apply the translation and rotation obtained in 4.1 to P to get the new transformed point set, and compute the error

E = (1/N)·Σ_i ||q_i - (R·p_i + T)||^2, p_i ∈ P, q_i ∈ Q.

If the change of the estimated error between iterations is less than a given threshold, i.e., |E_k - E_{k-1}| < ε, the computation finishes; otherwise the above iterative process is repeated, obtaining the rotation R_icp and translation T_icp;
Finally, the rotation between the camera coordinate system and the physical coordinate system is R = R_pnp·R_icp and the translation is T = T_pnp + T_icp.
The invention also includes:
1. The straight-line extraction and merging fused with prior knowledge of step 1 comprises:
1.1: Number the lines l_1, l_2, …, l_b; the start coordinates of the lines are (x_begin1, y_begin1), (x_begin2, y_begin2), …, (x_beginb, y_beginb), and the end coordinates are (x_end1, y_end1), (x_end2, y_end2), …, (x_endb, y_endb). The distances from the origin O to the lines l_1, l_2, …, l_b are denoted d_1, d_2, …, d_b, and the angles of l_1, l_2, …, l_b with the positive direction of the image x-axis are θ_1, θ_2, …, θ_b;
1.2: Group the lines of identical slope, denoted {group_1, group_2, …, group_m}, where m is the number of groups, determined by the number of distinct values among θ_1, θ_2, …, θ_b. If group_i contains more than one line, let l_ij and l_ik denote the j-th and k-th lines of the i-th group, and compute the relative distance between each pair of distinct lines l_ij and l_ik in group_i:

Δd = d_lij - d_lik

where d_lij and d_lik are the shortest distances from the origin to the lines l_ij and l_ik respectively. A merging-distance threshold d is set; if Δd < d, then l_ij and l_ik are assigned to the set group_s and the segments in the set are merged;
1.3: Repeat the above steps until group_s contains a single line; the line merging then terminates.
2. The line-angle constraint: set a threshold T_angle on the difference of the angles of the two lines. If |θ_i^l - θ_j^r| < T_angle, where θ_i^l and θ_j^r are the angles of l_i^l and l_j^r with the positive direction of the image x-axis, the pair of lines satisfies the angle constraint; otherwise the score Mark is zeroed.
3. The horizontal constraint: Dy is the horizontal-constraint value and Dy_max the horizontal-constraint threshold. Let l_i^l have start coordinates (x_st1, y_st1) and end coordinates (x_end1, y_end1), and l_j^r have start coordinates (x_str, y_str) and end coordinates (x_endr, y_endr); Dy satisfies:

Dy = (|y_st1 - y_str| + |y_end1 - y_endr|)/2.

If Dy < Dy_max, the two lines satisfy the horizontal constraint and the score is deducted as Mark_Dy = Mark - Dy·α, where α is a weight; if Dy > Dy_max, the score Mark is zeroed.
4. The left-right disparity constraint:
Dx is the horizontal disparity and Dx_max the horizontal-disparity threshold; Dx satisfies:

Dx = (|x_st1 - x_str| + |x_end1 - x_endr|)/2.

If Dx < Dx_max, the two lines satisfy the left-right constraint; otherwise the score Mark is zeroed.
Beneficial effects of the present invention:
1) The point cloud is derived from edge features, which inherently have good interference resistance and accurate localization ability; replacing line matching with point cloud matching exploits the robustness of point cloud matching (even if a segment changes length or breaks, it can still be matched effectively after being converted into a point cloud);
2) The number of points is limited: the covered region is not a three-dimensional surface but a finite set of line segments in space, which greatly reduces the point count and increases matching speed; at the same time, the points come from the object edges that contribute most to localization, so positioning accuracy is not noticeably degraded;
3) The line extraction and matching process requires no dense depth-field information, and accuracy is guaranteed for objects of both complex and simple texture.
Detailed description of the invention
Fig. 1(a) Schematic diagram of merging lines of identical slope
Fig. 1(b) Schematic diagram of merging lines whose slopes differ by θ
Fig. 2 Schematic diagram of three-dimensional space-line reconstruction
Fig. 3 Schematic diagram of point cloud acquisition
Fig. 4 Flow chart of the pose estimation algorithm based on linear features combined with point cloud features
Specific embodiment
The present invention is described further below with reference to Fig. 1 (a) to Fig. 4:
The steps of the invention are as follows:
Step 1: extract the straight lines of the object. Edge extraction must be performed on the image before the algorithm runs. Since the result after edge detection includes not only the straight edges of the target object but also other interfering features (such as the edges of a desk), the interference must be filtered out. Identification through prior knowledge and through the color information of the examined object filters out the interference of other linear features. Experiments also show that the result of line detection does not guarantee that each edge corresponds to a unique segment (which would affect subsequent pose estimation), so a post-processing method for segment fusion is given.
Step 2: match the left-right line pairs. The fourfold constraint of line angle, horizontal constraint, left-right disparity constraint, and length-difference matching accurately yields the matched pairs of left and right lines.
Step 3: three-dimensional reconstruction of the linear features. Let the object line in space be EF, whose two images observed by the two cameras are e_l f_l and e_r f_r. By the pinhole imaging model and the central projection principle, the space line EF is the intersection of the plane S_1 formed by O_l and e_l f_l and the plane S_2 formed by O_r and e_r f_r. The intersection of the two planes formed by the projection lines in the two camera imaging planes and the respective optical centers determines the space line.
Step 4: estimate the position and attitude of the object. First, the three-dimensional lines obtained in the previous step are cut and scattered according to a set threshold to generate a series of three-dimensional point sets. A complete three-dimensional point-set model is likewise established in the manually specified physical coordinate system.
The 2D-to-3D EPnP pose estimation algorithm is then used for the coarse estimate that serves as initialization; for two point clouds of high similarity, this method adjusts the coordinate systems to be approximately consistent.
Finally, iterative computation is carried out with the iterative closest point method: the coarse estimate obtained in the previous step is taken as the initialization condition, and a relatively precise position and attitude is obtained through repeated iteration.
Embodiment:
1. Straight-line extraction fused with prior knowledge
The invention uses Hough-transform segment feature extraction as the basic method for extracting the object's straight lines. Edge extraction must be performed on the image before the Hough transform. Suppose the cuboid in the figure is the object to be observed. Since the result after edge detection includes not only the straight edges of the target object but also other interfering features (such as the edges of a desk), the interference must be filtered out. When a vision system actually searches for the target object's location, certain features of the object can usually be predicted (such as a preliminary position range, color, and size); based on such known constraints, this section gives a method of filtering out interference (even with this strategy the interference sometimes cannot be filtered out completely, but the point cloud matching strategy used later still works effectively under a certain amount of interference; see the experimental results below). In addition, research shows that the result of Hough line detection cannot guarantee that each edge corresponds to a unique segment (which would affect subsequent pose estimation), so the invention gives a post-processing method for segment fusion.
1.1) Edge extraction and interference filtering
In the edge extraction algorithm, to guarantee that the extracted edges (especially the straight contours) are clear and continuous without interruption, the following criteria should usually be met: high signal-to-noise ratio, high localization precision, and single edge response. Therefore the invention uses the Canny edge detection algorithm to detect the object's edges.
After the edge contour information is obtained, the edge information of the table edge and other objects interferes with the detection of the object's line features. By prior knowledge, the initial range of the object's edge contour in the picture and its color range are both known. First, within the range selected by the prior contour, the invention uses a color image contour detection algorithm based on color space: a component field is introduced to represent the cylindrical volume formed by the H, S, and V components of the color space, the color-difference measure is improved, the components are detected on this basis, the edge image of the color range is finally obtained, and the resulting image is binarized. The specific region of the object to be detected in the image is thereby selected.
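A minimal OpenCV sketch of this step, combining a Canny edge map with a prior HSV color mask so that only edges inside the object's known color range survive; the HSV bounds and Canny thresholds here are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def object_edges(img_bgr, hsv_lo=(100, 80, 60), hsv_hi=(130, 255, 255)):
    """Canny edges restricted to the prior color range of the object."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo, np.uint8), np.array(hsv_hi, np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                 # high-SNR, single-response edges
    return cv2.bitwise_and(edges, edges, mask=mask)  # keep edges in the color region
```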
1.2) Segment extraction and repeated-segment merging
Straight lines are extracted by the Hough-transform method; the extraction result during line detection is shown in Fig. 2(a). Because of the shape of the target, straight edges of different lengths cause the line detection to produce duplicate lines, so a step of merging repeated lines follows line detection.
For the needs of subsequent matching, segments belonging to the same line are now merged. The main strategy of segment merging: the angle between segments is less than the threshold T_5 (2 degrees in the experiments), the gap between segment endpoints (minimum distance) is less than the threshold T_6 (half of the smaller of the two segment lengths in the experiments), and the gray-level distribution on both sides of the segments is consistent; such segments are merged into a fitted segment, the fitting error is then computed, and a merged segment whose error is less than the threshold T_7 (5 pixels in the experiments) is kept as final. The specific method is as follows:
1) Number the lines l_1, l_2, …, l_b, with start coordinates (x_st1, y_st1), (x_st2, y_st2), …, (x_stb, y_stb) and end coordinates (x_end1, y_end1), (x_end2, y_end2), …, (x_endb, y_endb). The distances from the origin O to the lines are d_1, d_2, …, d_b, and the angles of the corresponding lines with the horizontal direction of the picture are θ_1, θ_2, …, θ_b.
2) Group the lines of identical slope into {group_1, group_2, …, group_m}, where the value of m is determined by the number of distinct values among θ_1, θ_2, …, θ_b. If group_i contains more than one line, compute the relative distance δd = d_lij - d_lik between each pair of lines in group_i. A merging-distance threshold d is set; if δd < d, then l_ij and l_ik will be merged.
3) Denote the line obtained after merging l_uv. Compute the angles at the two ends of the original pair of lines, i.e., the angles between the corresponding endpoint vectors, from which the size of θ is obtained. If θ < π/2, choose P as the base point; the point W is the intersection of the line through P with l_rs, giving the midpoint U of segment PW. V is obtained by the same method.
When the merging of identical-slope segments is completed, segments whose slopes differ within θ and whose origin distances differ within δ are merged next. Starting from step 1 of the method above, the lines whose slope differences lie within the set threshold range are grouped; in the invention the threshold is set to θ = ±3°. The other steps proceed as in step 3 of the process above.
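A NumPy sketch of the grouping-and-merging strategy of 1.2, assuming each segment is given as a pair of endpoint arrays; the 3° angle tolerance matches the text, the origin-distance threshold is illustrative, and the endpoint-fitting step is simplified to keeping the extreme endpoints along the shared line direction.

```python
import numpy as np

def merge_segments(segments, ang_tol=np.deg2rad(3), dist_tol=5.0):
    """Merge segments lying on (nearly) the same line: similar angle theta
    and similar origin distance d, as in steps 1.1-1.3."""
    def angle(p1, p2):
        return np.arctan2(p2[1] - p1[1], p2[0] - p1[0]) % np.pi

    def origin_dist(p1, p2):
        # distance from the origin O to the line through p1, p2
        return abs(p1[0] * p2[1] - p2[0] * p1[1]) / np.linalg.norm(p2 - p1)

    merged, used = [], [False] * len(segments)
    for i, (p1, p2) in enumerate(segments):
        if used[i]:
            continue
        th, d = angle(p1, p2), origin_dist(p1, p2)
        cluster = [p1, p2]
        for j in range(i + 1, len(segments)):
            q1, q2 = segments[j]
            if (not used[j] and abs(angle(q1, q2) - th) < ang_tol
                    and abs(origin_dist(q1, q2) - d) < dist_tol):
                cluster += [q1, q2]
                used[j] = True
        pts = np.array(cluster, float)
        u = np.array([np.cos(th), np.sin(th)])  # shared line direction
        t = pts @ u
        merged.append((pts[np.argmin(t)], pts[np.argmax(t)]))  # extreme endpoints
    return merged
```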
2. Segment matching in binocular images
Step 1 yields stable line sets in the left and right images, L^l = {l_1^l, l_2^l, …} and L^r = {l_1^r, l_2^r, …}. Take a line l_i^l in the left image and evaluate it in turn, from l_1^r to l_n^r, against each line l_j^r in the right image:

Mark_i = f(l_i^l, l_j^r)

where Mark is the score of a pair of matched segments, initially Mark_final = Mark_0 = 100, and f is the scoring condition constituted by the fourfold constraint of line angle, horizontal constraint, left-right disparity constraint, and length-difference matching.
1) Difference of line angles: if |θ_i^l - θ_j^r| < T_angle (T_angle = 20° in the invention), the pair of left and right lines satisfies the angle constraint; otherwise the score Mark is zeroed.
2) The approximately horizontal epipolar lines establish the horizontal constraint. Let l_i^l have start coordinates (x_st1, y_st1) and end coordinates (x_end1, y_end1), and l_j^r have start coordinates (x_str, y_str) and end coordinates (x_endr, y_endr); then

Dy = (|y_st1 - y_str| + |y_end1 - y_endr|)/2.

If Dy < Dy_max (Dy_max = 100 in the invention), the two lines satisfy the horizontal constraint and the score is deducted as Mark_Dy = Mark - Dy·α (α is a weight; the invention takes the value 0.2). If Dy > Dy_max, the score Mark is zeroed.
3) The horizontal disparity constraint is established from the left-right overlap region:

Dx = (|x_st1 - x_str| + |x_end1 - x_endr|)/2.

If Dx < Dx_max (Dx_max takes different values for scenes of different sizes; Dx_max = 240 in the invention), the two lines satisfy the left-right constraint; otherwise the score Mark is zeroed.
4) Length-difference matching: the difference between the lengths of the two segments is penalized in the same way, after which the final value of Mark is obtained.
Compare Mark_i with Mark_{i-1} and set Mark_final = max(Mark_i, Mark_{i-1}), while recording the indices k_tempL and j_tempR of the corresponding lines in the left and right images. After the k-th line of the left image has been traversed against all lines in the right image, the right-image line numbered j_tempR in turn traverses all lines in the left image, executing the same scoring formula.
The iteration finally yields the maximum Mark value and the corresponding i_tempR. If i_tempL = i_tempR, the left-image line numbered i_tempL and the right-image line numbered j_tempR are matched successfully and stored in the matched-line set M; otherwise the match fails. The above steps are repeated iteratively until the algorithm terminates, finally obtaining M.
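A sketch of this scoring and mutual-best-match loop, assuming rectified images and segments given as endpoint pairs. The exact forms of the Dy and Dx terms are assumptions consistent with the constraints above, and the length-difference term is omitted for brevity.

```python
import numpy as np

def mark(seg_l, seg_r, t_angle=np.deg2rad(20), dy_max=100.0, dx_max=240.0, alpha=0.2):
    """Score one left/right segment pair: initial Mark = 100, zeroed when a
    constraint fails, deducted by the vertical offset Dy otherwise."""
    (l1, l2), (r1, r2) = seg_l, seg_r
    th_l = np.arctan2(l2[1] - l1[1], l2[0] - l1[0]) % np.pi
    th_r = np.arctan2(r2[1] - r1[1], r2[0] - r1[0]) % np.pi
    if abs(th_l - th_r) >= t_angle:                      # angle constraint
        return 0.0
    dy = (abs(l1[1] - r1[1]) + abs(l2[1] - r2[1])) / 2   # horizontal constraint
    if dy >= dy_max:
        return 0.0
    dx = (abs(l1[0] - r1[0]) + abs(l2[0] - r2[0])) / 2   # disparity constraint
    if dx >= dx_max:
        return 0.0
    return 100.0 - alpha * dy

def match_lines(segs_l, segs_r):
    """Keep (i, j) only when the best match holds in both directions,
    mirroring the i_tempL == i_tempR check."""
    best_r = [max(range(len(segs_r)), key=lambda j: mark(sl, segs_r[j]))
              for sl in segs_l]
    best_l = [max(range(len(segs_l)), key=lambda i: mark(segs_l[i], sr))
              for sr in segs_r]
    return [(i, j) for i, j in enumerate(best_r)
            if best_l[j] == i and mark(segs_l[i], segs_r[j]) > 0]
```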
3. Three-dimensional reconstruction of the linear features
As shown in Fig. 2, the artificial solid in space carries a line EF whose two images observed by the two cameras are e_l f_l and e_r f_r. By the pinhole imaging model and the central projection principle, the space line EF is the intersection of the plane S_1 formed by O_l and e_l f_l and the plane S_2 formed by O_r and e_r f_r. The intersection of the two planes formed by the projection lines in the two camera imaging planes and the respective optical centers determines the space line.
3.1) Solving for the space line
The matched-line set M was obtained in step 2.
Let the image-line equation be ax + by + c = 0. By the central projection principle, a straight line on the image corresponds to a plane in space; let the space-plane equation be:
AX+BY+CZ+D=0
From the central imaging model of a point:

λ·[x, y, 1]^T = M·[X, Y, Z, 1]^T, M = [m_11 m_12 m_13 m_14; m_21 m_22 m_23 m_24; m_31 m_32 m_33 m_34]

where m_11 through m_34 are the entries of the camera projection matrix formed by the product of the intrinsic matrix and the extrinsic parameters. Substituting into ax + by + c = 0 and comparing coefficients gives:

A = a·m_11 + b·m_21 + c·m_31
B = a·m_12 + b·m_22 + c·m_32
C = a·m_13 + b·m_23 + c·m_33
D = a·m_14 + b·m_24 + c·m_34
The space-plane equation is called the coplanarity equation of the object line, the optical center, and the image line. Solving simultaneously the two coplanarity equations determined by a matched pair of image lines, their intersection gives the space line corresponding to the image lines. Let the space line L_EF have corresponding image lines l_ef^l and l_ef^r; the equation of the space line is the simultaneous system:

A_l·X + B_l·Y + C_l·Z + D_l = 0
A_r·X + B_r·Y + C_r·Z + D_r = 0
The space-line equation herein uses the parametric form. Substituting an arbitrary x = x_0 into the above equations solves for the coordinates (X_0, Y_0, Z_0) of a point on the space line.
The normal vectors of the two planes, n_1 = (A_l, B_l, C_l) and n_2 = (A_r, B_r, C_r), are known, so the direction vector of the line is s = (s_x, s_y, s_z) = n_1 × n_2, and the equation of the space line is:

(X - X_0)/s_x = (Y - Y_0)/s_y = (Z - Z_0)/s_z
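A small NumPy sketch of 3.1: intersecting the two coplanarity planes into a point-plus-direction line. The (A, B, C, D) plane tuples are assumed to have been precomputed from the matched image lines as above.

```python
import numpy as np

def line_from_planes(plane_l, plane_r):
    """Intersect the two coplanarity planes (A, B, C, D) of a matched line
    pair into a 3D line given as (point X0, unit direction s) (section 3.1)."""
    n1, d1 = np.asarray(plane_l[:3], float), float(plane_l[3])
    n2, d2 = np.asarray(plane_r[:3], float), float(plane_r[3])
    s = np.cross(n1, n2)                 # direction = n1 x n2
    if np.linalg.norm(s) < 1e-9:
        raise ValueError("degenerate pair: planes are parallel")
    # one point satisfying both A*X + B*Y + C*Z + D = 0 (minimum-norm solution)
    A = np.vstack([n1, n2])
    p0 = np.linalg.lstsq(A, -np.array([d1, d2]), rcond=None)[0]
    return p0, s / np.linalg.norm(s)
```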
3.2) Solving for the space line segment
The optical center O_l of the left view and the two endpoints of the segment e_l f_l projected on the left imaging plane form two space lines, O_l e_l and O_l f_l. Compute the intersections of the space line l_EF with O_l e_l and with O_l f_l (if two lines are skew, take the point at the shortest distance between them), denoted E_left and F_left respectively. The intersections E_right and F_right are found in the right view in the same way. To preserve as much of the line information as possible, among the resulting four points {E_left, E_right, F_left, F_right}, the two points spanning the greatest length are taken as the start and end of the restored segment, realizing the restoration of the space segment EF. Computing each space line in turn yields the set of space segments N = {L_1, L_2, …, L_n}.
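Back-projected rays rarely intersect exactly, so the "point at the shortest distance" rule can be realized with the standard closest-point computation for two 3D lines; a sketch, with a helper for the farthest-pair endpoint selection:

```python
import numpy as np

def closest_point(p1, s1, p2, s2):
    """Point on line (p1 + t*s1) nearest to line (p2 + u*s2); used for
    l_EF against the rays O_l e_l and O_l f_l in section 3.2."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    w = np.asarray(p1, float) - np.asarray(p2, float)
    a, b, c = s1 @ s1, s1 @ s2, s2 @ s2
    d, e = s1 @ w, s2 @ w
    denom = a * c - b * b                # ~0 when the lines are parallel
    t = (b * e - c * d) / denom if abs(denom) > 1e-12 else 0.0
    return np.asarray(p1, float) + t * s1

def restore_segment(candidates):
    """Among {E_left, E_right, F_left, F_right}, keep the farthest pair."""
    pts = [np.asarray(p, float) for p in candidates]
    pairs = [(i, j) for i in range(len(pts)) for j in range(i + 1, len(pts))]
    i, j = max(pairs, key=lambda ij: np.linalg.norm(pts[ij[0]] - pts[ij[1]]))
    return pts[i], pts[j]
```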
4. Pose calculation
The position-and-attitude estimation algorithm of the invention works mainly through the inverse mapping principle of binocular imaging: the lines detected in the left and right views are matched on the two-dimensional plane, and the three-dimensional lines are then restored by inverse mapping. The three-dimensional lines are cut and scattered according to a set threshold to generate a series of three-dimensional point sets, and a complete three-dimensional surface point set is likewise established in the manually specified physical coordinate system. Finally, iterative computation with the iterative closest point algorithm yields a relatively precise position and attitude.
Before the two point sets are processed by the iterative closest point algorithm, a coarse estimate equivalent to initialization is carried out using the 2D-to-3D EPnP pose estimation algorithm; for two point clouds of high similarity, this method adjusts the coordinate systems to be approximately consistent.
4.1) Implementation of the coarse matching algorithm
Select the left image as the two-dimensional plane of the EPnP algorithm. In step 1 the segment set is obtained; the pairwise intersections between segments in the set are computed, giving M_pnp = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}.
Ideally, in planar line detection three lines would meet at a single point. In practice line detection has errors, so the three lines intersect pairwise at unequal points, and the approximate intersection points must then be merged.
The measured three-dimensional points in the established real coordinate system are stored in the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), …, (X_n, Y_n, Z_n)}. All points in the set M_pnp are arranged in order so that their ordering is consistent with that of the corresponding points in G_3dpot, giving M_2dpot = {(x_1, y_1), (x_2, y_2), …, (x_m, y_m)}. Substituting into the camera imaging model:

λ·[x_i, y_i, 1]^T = K·[R | T]·[X_i, Y_i, Z_i, 1]^T, i ∈ [1, n]

where K is the camera intrinsic matrix, assumed known in the invention. Substituting the points into the formula and applying the EPnP algorithm finally yields the coarse-matching rotation R_pnp and translation T_pnp.
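OpenCV's EPnP solver can perform this 2D-3D coarse alignment directly; a sketch, assuming the 2D intersections and 3D model points are already in corresponding order (EPnP needs at least four correspondences):

```python
import cv2
import numpy as np

def coarse_pose(m_2d, g_3d, K):
    """Coarse pose (R_pnp, T_pnp) from ordered 2D intersections M_2dpot and
    measured 3D points G_3dpot, using OpenCV's EPnP solver."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(g_3d, np.float64),  # (n, 3) model points
        np.asarray(m_2d, np.float64),  # (n, 2) image points, same order
        np.asarray(K, np.float64),     # 3x3 intrinsic matrix
        None,                          # no lens distortion assumed here
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        raise RuntimeError("EPnP failed")
    R_pnp, _ = cv2.Rodrigues(rvec)     # rotation vector -> 3x3 matrix
    return R_pnp, tvec.reshape(3)
```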
4.2) Point cloud acquisition
The method used in the invention to obtain the point cloud splits and samples the space lines according to a set threshold, finally obtaining point cloud data based on the linear features. For a given line in space, θ is the angle of the line with the positive x-axis, len is the length of the line, and k is the number of point-cloud points on the line; the discrete point set of the line is P_i = {p_1, p_2, …, p_k}.
All the space feature lines were three-dimensionally reconstructed in step 3; the resulting three-dimensional lines are discretized in turn by the above algorithm into the point set P. The manually established template library is treated in the same way to form the point set Q.
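A sketch of this scattering step, assuming uniform spacing along each segment plays the role of the splitting threshold:

```python
import numpy as np

def sample_segment(a, b, step=1.0):
    """Discretize one 3D segment into evenly spaced points; 'step' is the
    assumed sampling threshold (uniform spacing)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    k = max(int(np.linalg.norm(b - a) / step), 1)   # number of cloud points
    t = np.linspace(0.0, 1.0, k + 1)[:, None]
    return a + t * (b - a)

def segments_to_cloud(segments, step=1.0):
    """Scatter the segment set N = {L1, ..., Ln} into the point set P."""
    return np.vstack([sample_segment(a, b, step) for a, b in segments])
```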
4.3) Implementation of the fine matching algorithm
Initialize R and T with the result of 4.1, i.e., R = R_pnp, T = T_pnp. Update the data point set: apply the translation and rotation obtained in the previous step to P to get the new transformed point set, and compute the error

E = (1/N)·Σ_i ||q_i - (R·p_i + T)||^2, p_i ∈ P, q_i ∈ Q.

If the change of the estimated error between iterations is less than a given threshold, i.e., |E_k - E_{k-1}| < ε, the computation finishes; otherwise the above iterative process is repeated, obtaining the rotation R_icp and translation T_icp.
Finally, the rotation between the camera coordinate system and the physical coordinate system is R = R_pnp·R_icp and the translation is T = T_pnp + T_icp.
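A point-to-point ICP sketch of this refinement, using SciPy's cKDTree for the nearest-neighbor step and the standard SVD alignment; the stopping rule mirrors the error-difference criterion above, and the final composition follows the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(P, Q, R_pnp, T_pnp, max_iter=50, eps=1e-6):
    """Point-to-point ICP of 4.3: start from the EPnP pose, stop when the
    error change falls below eps; returns (R_icp, T_icp)."""
    P0 = P @ R_pnp.T + T_pnp            # apply the coarse pose first
    R, T = np.eye(3), np.zeros(3)
    tree, prev_err = cKDTree(Q), np.inf
    for _ in range(max_iter):
        Pk = P0 @ R.T + T
        dist, idx = tree.query(Pk)      # nearest template points q_i
        err = np.mean(dist ** 2)        # E = (1/N) * sum ||q_i - (R p_i + T)||^2
        if abs(prev_err - err) < eps:
            break
        prev_err = err
        Qm = Q[idx]
        # SVD (Umeyama) step aligning Pk onto its matched points
        muP, muQ = Pk.mean(0), Qm.mean(0)
        U, _, Vt = np.linalg.svd((Pk - muP).T @ (Qm - muQ))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        R, T = dR @ R, dR @ T + (muQ - dR @ muP)
    return R, T

# Final pose, composed as in the text: R = R_pnp @ R_icp, T = T_pnp + T_icp
```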

Claims (5)

1. A pose estimation method based on linear features combined with point cloud features, characterized by comprising the following steps:
Step 1: straight-line extraction and merging fused with prior knowledge;
Step 2: segment matching in binocular images:
Let the straight-line sets in the left and right images be L^l = {l_1^l, l_2^l, …} and L^r = {l_1^r, l_2^r, …}. Take a straight line l_i^l in the left image and evaluate it in turn, from l_1^r to l_n^r, against each straight line l_j^r in the right image:

Mark_i = f(l_i^l, l_j^r)

where Mark is the score of a pair of matched segments, initially Mark_final = Mark_0 = 100; Mark_i is the score at a given stage; and f denotes the specified scoring conditions, including the line-angle constraint, the horizontal constraint, and the left-right disparity constraint;
Take one straight line from each of the left and right image line sets, l_i^l (l denotes the left image) and l_j^r (r denotes the right image), and substitute them into the line-angle constraint, the horizontal constraint, and the left-right disparity constraint;
Through the above steps the final value of Mark is obtained. Compare Mark_i with Mark_{i-1} and set Mark_final = max(Mark_i, Mark_{i-1}), while recording the indices k_tempL and j_tempR of the corresponding lines in the left and right images. After the k-th line of the left image has been traversed against all lines in the right image, the right-image line numbered j_tempR in turn traverses all lines in the left image, executing the same scoring formula;
The iteration finally yields the maximum Mark value and the corresponding i_tempR. If i_tempL = i_tempR, the left-image line numbered i_tempL and the right-image line numbered j_tempR are matched successfully and stored in the matched-line set M; otherwise the match fails. The above steps are repeated iteratively until the algorithm terminates, finally obtaining the matched-line set M;
Step 3: three-dimensional reconstruction of the linear features
3.1: solving for the space line
Let the image-line equation be ax + by + c = 0. By the central projection principle, a straight line on the image corresponds to a plane in space; let the space-plane equation be:
AX+BY+CZ+D=0
From the central imaging model of a point:

λ·[x, y, 1]^T = M·[X, Y, Z, 1]^T, M = [m_11 m_12 m_13 m_14; m_21 m_22 m_23 m_24; m_31 m_32 m_33 m_34]

where m_11 through m_34 are the entries of the camera projection matrix formed by the product of the intrinsic matrix and the extrinsic parameters. Substituting into ax + by + c = 0 and comparing coefficients gives:

A = a·m_11 + b·m_21 + c·m_31
B = a·m_12 + b·m_22 + c·m_32
C = a·m_13 + b·m_23 + c·m_33
D = a·m_14 + b·m_24 + c·m_34
Solving simultaneously the two coplanarity equations determined by a matched pair of image lines, their intersection gives the space line corresponding to the image lines. Let the space line L_EF have corresponding image lines l_ef^l and l_ef^r; the equation of the space line is the simultaneous system:

A_l·X + B_l·Y + C_l·Z + D_l = 0
A_r·X + B_r·Y + C_r·Z + D_r = 0
Using the parametric form of the space line, substitute an arbitrary x = x_0 into the above equations to solve for the coordinates (X_0, Y_0, Z_0) of a point on the space line;
The normal vectors of the two planes, n_1 = (A_l, B_l, C_l) and n_2 = (A_r, B_r, C_r), are known, so the direction vector of the line is s = (s_x, s_y, s_z) = n_1 × n_2;
The equation of the space line is:

(X - X_0)/s_x = (Y - Y_0)/s_y = (Z - Z_0)/s_z
3.2: solving for the space line segment
The optical center O_l of the left view and the two endpoints of the segment e_l f_l projected on the left imaging plane form two space lines, O_l e_l and O_l f_l. Compute the intersections of the space line l_EF with O_l e_l and with O_l f_l (if two lines are skew, take the point at the shortest distance between them), denoted E_left and F_left respectively. The intersections E_right and F_right are found in the right view in the same way. To preserve as much of the line information as possible, among the resulting four points {E_left, E_right, F_left, F_right}, the two points spanning the greatest length are taken as the start and end of the restored segment, realizing the restoration of the space segment EF. Computing each space line in turn yields the set of space segments N = {L_1, L_2, …, L_n}.
Step 4: pose calculation
4.1 Select the left image as the two-dimensional plane for coarse matching: from step 3, obtain the segment set, compute the pairwise intersections between segments, and collect the intersections into M_pnp = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)};
The measured three-dimensional points in the established real coordinate system are stored in the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), …, (X_n, Y_n, Z_n)}. Arrange all points of M_pnp in order so that their ordering is consistent with that of the corresponding points in G_3dpot, and substitute them into the camera imaging model:

λ·[x_i, y_i, 1]^T = K·[R | T]·[X_i, Y_i, Z_i, 1]^T, i ∈ [1, n]

where K is the known camera intrinsic matrix and λ is the scale factor of the imaging model. Substituting the points of M_pnp and G_3dpot into the formula and applying the EPnP algorithm finally yields the coarse-matching rotation R_pnp and translation T_pnp;
4.2 Obtain the point cloud: split and sample the space lines according to a set threshold, finally obtaining point cloud data based on the linear features. For a given line in space, θ is the angle of the line with the positive x-axis, len is the length of the line, and k is the number of point-cloud points on the line; the discrete point set of the line is P_i = {p_1, p_2, …, p_k};
Thus the three-dimensional lines obtained in step 3 are discretized in turn by the above algorithm into the point set P; the manually established template library is treated in the same way to form the point set Q;
4.3 Take the result of 4.1 as the initial rotation matrix and translation vector, i.e., R = R_pnp, T = T_pnp. Update the data point set: apply the translation and rotation obtained in 4.1 to P to get the new transformed point set, and compute the error

E = (1/N)·Σ_i ||q_i - (R·p_i + T)||^2, p_i ∈ P, q_i ∈ Q.

If the change of the estimated error between iterations is less than a given threshold, i.e., |E_k - E_{k-1}| < ε, the computation finishes; otherwise the above iterative process is repeated, obtaining the rotation R_icp and translation T_icp;
Finally, the rotation between the camera coordinate system and the physical coordinate system is R = R_pnp·R_icp and the translation is T = T_pnp + T_icp.
2. The pose estimation method based on linear features combined with point cloud features according to claim 1, characterized in that the straight-line extraction and merging fused with prior knowledge of step 1 comprises:
1.1: Number the lines l_1, l_2, …, l_b; the start coordinates of the lines are (x_begin1, y_begin1), (x_begin2, y_begin2), …, (x_beginb, y_beginb), and the end coordinates are (x_end1, y_end1), (x_end2, y_end2), …, (x_endb, y_endb); the distances from the origin O to the lines l_1, l_2, …, l_b are denoted d_1, d_2, …, d_b, and the angles of l_1, l_2, …, l_b with the positive direction of the image x-axis are θ_1, θ_2, …, θ_b;
1.2: Group the lines of identical slope, denoted {group_1, group_2, …, group_m}, where m is the number of groups, determined by the number of distinct values among θ_1, θ_2, …, θ_b; if group_i contains more than one line, let l_ij and l_ik denote the j-th and k-th lines of the i-th group, and compute the relative distance between each pair of distinct lines l_ij and l_ik in group_i:

Δd = d_lij - d_lik

where d_lij and d_lik are the shortest distances from the origin to the lines l_ij and l_ik respectively; a merging-distance threshold d is set, and if Δd < d, then l_ij and l_ik are assigned to the set group_s and the segments in the set are merged;
1.3: Repeat the above steps until group_s contains a single line; the line merging then terminates.
3. The pose estimation method based on linear features combined with point cloud features according to claim 1, characterized in that the line-angle constraint is specifically: set a threshold T_angle on the difference of the angles of the two lines; if |θ_i^l - θ_j^r| < T_angle, where θ_i^l and θ_j^r are the angles of l_i^l and l_j^r with the positive direction of the image x-axis, the pair of lines satisfies the angle constraint; otherwise the score Mark is zeroed.
4. The pose estimation method based on the combination of line features and point cloud features according to claim 1, characterized in that the horizontal constraint is specifically: Dy is the horizontal constraint value and Dymax is the horizontal constraint threshold; let the left line have start coordinates (xst1, yst1) and end coordinates (xend1, yend1), and the right line have start coordinates (xstr, ystr) and end coordinates (xendr, yendr); Dy is computed from the y-coordinates of these endpoints as a measure of the vertical offset between the two lines. If Dy < Dymax, the two lines satisfy the horizontal constraint and a score deduction is applied: MarkDy = Mark − Dy·α, where α is a weight; if Dy > Dymax, the score Mark is zeroed.
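A minimal sketch of the horizontal constraint; the exact Dy formula does not survive in this text, so the mean vertical offset of corresponding endpoints stands in for it, and the threshold and weight values are assumptions:

    DY_MAX = 10.0   # illustrative value of Dymax (pixels)
    ALPHA = 0.5     # illustrative value of the weight alpha

    def horizontal_constraint(left, right, mark):
        """Apply MarkDy = Mark - Dy*alpha when Dy < Dymax; zero the score otherwise."""
        _, yst1, _, yend1 = left                 # left line as (x_st, y_st, x_end, y_end)
        _, ystr, _, yendr = right                # right line as (x_st, y_st, x_end, y_end)
        dy = 0.5 * (abs(yst1 - ystr) + abs(yend1 - yendr))   # assumed form of Dy
        return mark - dy * ALPHA if dy < DY_MAX else 0.0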
5. The pose estimation method based on the combination of line features and point cloud features according to claim 1, characterized in that the left-right view disparity constraint is specifically: Dx is the horizontal parallax and Dxmax is the horizontal parallax threshold; Dx is computed from the x-coordinates of the corresponding endpoints of the two lines as a measure of their horizontal disparity. If Dx < Dxmax, the two lines satisfy the left-right constraint; otherwise the score Mark is zeroed.
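A matching sketch of the disparity constraint, under the same assumption that the mean horizontal offset of corresponding endpoints stands in for the claimed Dx formula:

    DX_MAX = 100.0   # illustrative value of Dxmax (pixels)

    def disparity_constraint(left, right, mark):
        """Keep the score only while the horizontal parallax Dx stays below Dxmax."""
        xst1, _, xend1, _ = left
        xstr, _, xendr, _ = right
        dx = 0.5 * (abs(xst1 - xstr) + abs(xend1 - xendr))   # assumed form of Dx
        return mark if dx < DX_MAX else 0.0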
CN201910526419.3A 2019-06-18 2019-06-18 Pose estimation method based on combination of linear features and point cloud features Active CN110310331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910526419.3A CN110310331B (en) 2019-06-18 2019-06-18 Pose estimation method based on combination of linear features and point cloud features

Publications (2)

Publication Number Publication Date
CN110310331A 2019-10-08
CN110310331B 2023-04-14

Family

ID=68076179

Country Status (1)

Country Link
CN (1) CN110310331B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833761A (en) * 2010-04-20 2010-09-15 Nanjing University of Aeronautics and Astronautics Unmanned aerial vehicle (UAV) pose estimation method based on cooperative target feature lines
US20140267614A1 (en) * 2013-03-15 2014-09-18 Seiko Epson Corporation 2D/3D Localization and Pose Estimation of Harness Cables Using A Configurable Structure Representation for Robot Operations
CN106485690A (en) * 2015-08-25 2017-03-08 Nanjing University of Science and Technology Feature-based automatic registration and fusion method for point cloud data and optical images
CN105976353A (en) * 2016-04-14 2016-09-28 Nanjing University of Science and Technology Spatial non-cooperative target pose estimation method based on model and point cloud global matching
CN107945220A (en) * 2017-11-30 2018-04-20 Huazhong University of Science and Technology Reconstruction method based on binocular vision
CN108982901A (en) * 2018-06-14 2018-12-11 Harbin Institute of Technology Rotational speed measurement method for a uniformly rotating body
CN109166149A (en) * 2018-08-13 2019-01-08 Wuhan University Positioning and three-dimensional wireframe reconstruction method and system fusing a binocular camera and an IMU
CN109544599A (en) * 2018-11-22 2019-03-29 Sichuan University Three-dimensional point cloud registration method based on camera pose estimation
CN109373898A (en) * 2018-11-27 2019-02-22 Huazhong University of Science and Technology Pose estimation system and method for complex parts based on three-dimensional measurement point clouds
CN109801337A (en) * 2019-01-21 2019-05-24 Tongji University 6D pose estimation method based on instance segmentation network and iterative optimization

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
DIYI LIU: "Point Pair Feature-Based Pose Estimation with Multiple Edge Appearance Models (PPF-MEAM) for Robotic Bin Picking", https://www.mdpi.com/1424-8220/18/8/2719 *
GUO NINGBO: "A fusion method for TOF point cloud and digital photograph" *
REN XIANGHUA: "Indoor SLAM method based on lidar", China Master's Theses Full-text Database (Information Science and Technology) *
LYU QIANG ET AL.: "6-DOF pose estimation of indoor mobile robots based on point cloud registration", Journal of the Academy of Armored Force Engineering *
ZHANG MINGFANG: "Lidar-based pose estimation of distant moving vehicles", Journal of Highway and Transportation Research and Development *
ZHANG ZHI: "Real-time stereo matching algorithm for robot binocular images", Transactions of Beijing Institute of Technology *
SHI GUANGSHENG: "Research on Kinect-based 3D object model construction and pose estimation", China Master's Theses Full-text Database (Information Science and Technology) *
TONG LEI: "Research on part recognition and localization methods for robotic grasping", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111308481B (en) * 2020-02-21 2021-10-15 Shenzhen Silver Star Intelligent Technology Co., Ltd. Laser positioning method and device and mobile robot
CN111308481A (en) * 2020-02-21 2020-06-19 Shenzhen Silver Star Intelligent Technology Co., Ltd. Laser positioning method and device and mobile robot
CN111325796B (en) * 2020-02-28 2023-08-18 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for determining pose of vision equipment
CN111325796A (en) * 2020-02-28 2020-06-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for determining pose of vision device
CN111813882A (en) * 2020-06-18 2020-10-23 Zhejiang Dahua Technology Co., Ltd. Robot map construction method, device and storage medium
CN111813882B (en) * 2020-06-18 2024-05-14 Zhejiang Huaray Technology Co., Ltd. Robot map construction method, device and storage medium
CN112577500A (en) * 2020-11-27 2021-03-30 Beijing Megvii Technology Co., Ltd. Positioning and map construction method and device, robot and computer storage medium
WO2022110767A1 (en) * 2020-11-27 2022-06-02 Beijing Megvii Technology Co., Ltd. Localization and mapping method, apparatus, robot, and computer-readable storage medium
CN112720477A (en) * 2020-12-22 2021-04-30 Quanzhou Institute of Equipment Manufacturing Optimal grasping and recognition method for objects based on a local point cloud model
CN112720477B (en) * 2020-12-22 2024-01-30 Quanzhou Institute of Equipment Manufacturing Optimal grasping and recognition method for objects based on a local point cloud model
CN112862692A (en) * 2021-03-30 2021-05-28 China Coal Research Institute Image stitching method for underground coal mine roadways
CN116523984A (en) * 2023-07-05 2023-08-01 矽瞻科技(成都)有限公司 3D point cloud positioning and registering method, device and medium
CN116523984B (en) * 2023-07-05 2023-09-26 矽瞻科技(成都)有限公司 3D point cloud positioning and registering method, device and medium

Similar Documents

Publication Publication Date Title
CN110310331A (en) A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature
CN107392947B (en) 2D-3D image registration method based on contour coplanar four-point set
CN108369741B (en) Method and system for registration data
CN108921895B (en) Sensor relative pose estimation method
CN113178009B (en) Indoor three-dimensional reconstruction method utilizing point cloud segmentation and grid repair
CN106683173A (en) Method of improving density of three-dimensional reconstructed point cloud based on neighborhood block matching
CN106485690A Feature-based automatic registration and fusion method for point cloud data and optical images
CN109658444B (en) Regular three-dimensional color point cloud registration method based on multi-modal features
CN107492107B (en) Object identification and reconstruction method based on plane and space information fusion
WO2015179216A1 (en) Orthogonal and collaborative disparity decomposition
CN108428249A Initial pose estimation method based on optical flow tracking and dual geometric models
CN111640158A End-to-end extrinsic calibration method for camera and lidar based on correspondence masks
CN107886471B Method for removing redundant objects from photos based on a superpixel voting model
Yang et al. Stereo matching using epipolar distance transform
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN113642397B (en) Object length measurement method based on mobile phone video
CN109345570B (en) Multi-channel three-dimensional color point cloud registration method based on geometric shape
Kurazume et al. Mapping textures on 3D geometric model using reflectance image
CN104156933A (en) Image registering method based on optical flow field
Rodrigues et al. On the representation of rigid body transformations for accurate registration of free-form shapes
Shen et al. A 3D modeling method of indoor objects using Kinect sensor
CN111899293B Virtual-real occlusion handling method in AR applications
Kang et al. 3D urban reconstruction from wide area aerial surveillance video
Abzal et al. Development of an automatic map drawing system for ancient bas-reliefs
CN109919886A Image stitching method based on Shi-Tomasi corner matching and multi-resolution fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant