CN110310331A - Pose estimation method based on combining straight-line features with point cloud features - Google Patents

Pose estimation method based on combining straight-line features with point cloud features

Info

Publication number
CN110310331A
Authority
CN
China
Prior art keywords
straight line
point
mark
straight
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910526419.3A
Other languages
Chinese (zh)
Other versions
CN110310331B (en)
Inventor
苏丽
刘钲
张智
夏桂华
秦绪杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201910526419.3A priority Critical patent/CN110310331B/en
Publication of CN110310331A publication Critical patent/CN110310331A/en
Application granted granted Critical
Publication of CN110310331B publication Critical patent/CN110310331B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pose estimation method based on combining straight-line features with point cloud features, comprising: (1) straight-line extraction fusing prior knowledge; (2) line matching in binocular images; (3) three-dimensional reconstruction of the line features; (4) pose calculation. The point cloud of the invention comes from edge features, so it has good anti-interference ability and accurate localization ability; replacing line-segment matching with point cloud matching fully exploits the robustness advantage of point cloud matching, and even if a line segment changes in length or breaks, it can still be matched effectively after being converted to a point cloud. The number of points is limited and the covered area is a limited set of line segments in space rather than a dense surface, which greatly reduces the number of points used and increases the matching speed; at the same time, the points all come from object edges, which contribute most to localization, so the positioning accuracy is not noticeably reduced. The line extraction and matching process does not require dense depth-field information, and accuracy can be guaranteed for objects with either complex or simple textures.

Description

Pose estimation method based on combination of linear features and point cloud features
Technical Field
The invention relates to a pose estimation method, in particular to a pose estimation method based on combination of linear features and point cloud features, and belongs to the technical field of image processing.
Background
The pose estimation problem is an important problem in photogrammetry, computer vision, computer graphics, robotics and related fields, and is a core problem in many engineering applications and theoretical studies, such as navigation and positioning of visual servo systems and mobile robots, object recognition and tracking, virtual reality, camera self-calibration, and robot hand-eye calibration.
A simple method is to extract several corner features of the object directly and to compute the object's pose relative to the vision system from the positions of the feature point set in the vision coordinate system and on the object. However, the number of corner features on an object is limited, which affects the solution accuracy, and the solution may even fail when too few corner features are available. In recent years, pose estimation methods based on point cloud features have become popular; they can locate objects with complex shapes and are robust, but they require the acquisition equipment to provide high-precision, dense three-dimensional point cloud information, they involve complex operations such as segmenting and stitching the point cloud of the measured object against the environment, and the large number of points makes the computation time-consuming.
Some methods consider localization based on the straight-line features of an object. Line features are more salient than corner features, have stronger anti-interference capability, are simple to use, require little computation and need no depth measurement; however, line features cannot be localized as precisely as point features during matching (this is especially evident in stereo vision), and when the extracted lines change in length or break because of occlusion or illumination, accurate recovery of the object's three-dimensional pose is hard to guarantee. Most existing methods focus on line extraction and matching and do not discuss the final positioning accuracy and reliability in depth.
The invention combines the ideas of corner features, straight-line features and point cloud matching, and designs a method with a small computational load and high reliability that can work with ordinary image input (no dense depth information required). Its main idea is to take line features as the core: based on a binocular vision system, the line features are recovered into a set of three-dimensional line segments by left-right matching, the segment set is discretely sampled into a three-dimensional point set, and the object pose is then estimated by point cloud matching, with the pose computed from corner features used as the initial value before matching.
Disclosure of Invention
In view of the prior art, the invention aims to provide a pose estimation method based on combining straight-line features with point cloud features that requires little computation, is highly reliable, works with ordinary image input and does not need dense depth information.
In order to solve the technical problem, the invention provides a pose estimation method based on combination of straight line features and point cloud features, which comprises the following steps:
step 1: straight-line feature extraction and merging, fusing prior knowledge;
step 2: straight line matching in binocular images:
set of straight lines in left and right eye imagesAndtaking the straight line in the left picture fromToIn turn aligned with the line in the right eyeAnd (3) calculating according to the formula:
wherein, Mark is the score value of matching two straight lines, and the Mark at the beginningfinal=Mark0=100,MarkiF is a score value in a certain process, and f is a designated scoring condition, including a linear included angle, horizontal constraint and left-right eye parallax;
respectively taking two straight lines from the straight line set of the left eye image and the right eye image(l represents a left eye image)(r represents a right eye image) to be brought into a linear included angle constraint, a horizontal constraint and a left-right visual difference constraint;
through the steps, the final value of the Mark is obtained, and the Mark is comparediAnd Marki-1Value of (1), Markfinal=max(Marki,Marki-1) And simultaneously recording the number k of the straight line in the left and right image pictures corresponding to the score valuetempL,jtempR(ii) a After the k-th straight line of the left eye image traverses all straight lines in the right eye in sequence, the number j of the straight line in the right eye image picture is usedtempRSequentially traversing all straight lines in the left eye and executing a formula
Finally, iterating to obtain the maximum Mark value and corresponding itempRIf i istempL=itempRThe left eye image line is numbered itempLThe straight line of (a) and the straight line of the right eye image are numbered as jtempLSuccessfully matching and storing the matching into a matching straight line set M; otherwise, the matching fails, the steps are repeated, and the iteration is carried out in sequence until the algorithm is terminated, and finally the result is obtained
step 3: three-dimensional reconstruction of straight-line features
3.1 solving of spatial lines
Let the equation of an image line be ax + by + c = 0; by the central projection principle, a straight line on the image corresponds to a plane in space, whose equation is written as:
AX+BY+CZ+D=0
from the central imaging model for points, λ·[u, v, 1]^T = [m_11 m_12 m_13 m_14; m_21 m_22 m_23 m_24; m_31 m_32 m_33 m_34]·[X, Y, Z, 1]^T,
where m_11 to m_34 are the elements of the product of the camera intrinsic matrix with the extrinsic parameters; substituting this into the image line equation and comparing coefficients gives A = a·m_11 + b·m_21 + c·m_31, B = a·m_12 + b·m_22 + c·m_32, C = a·m_13 + b·m_23 + c·m_33, D = a·m_14 + b·m_24 + c·m_34;
solving simultaneously the two coplanarity equations determined by a matched pair of image lines, their intersection gives the corresponding space line; let the space line L_EF correspond to the image lines e_l f_l and e_r f_r; the space line is then given by the system of the two plane equations A_1·X + B_1·Y + C_1·Z + D_1 = 0 and A_2·X + B_2·Y + C_2·Z + D_2 = 0;
using the parametric form of a space line, take an arbitrary value X = X_0 and substitute it into the above equations to solve for the coordinates (X_0, Y_0, Z_0) of a point on the space line;
knowing the normal vectors n_1 = (A_1, B_1, C_1) and n_2 = (A_2, B_2, C_2) of the two planes, the direction vector of the line is determined as s = n_1 × n_2 = (s_x, s_y, s_z);
the equation of the space line is then (X − X_0)/s_x = (Y − Y_0)/s_y = (Z − Z_0)/s_z;
3.2 solving spatial line segments
In the left eye from the center of light point OlAnd a straight line segment projected onto the left eye imaging planeTwo ends respectively forming two space straight linesAndcalculating a spatial straight line lEFAndlEFandthe spatial intersections (if the two straight lines are not coplanar, the point having the shortest distance from the two straight lines) are respectively EleftAnd Fleft. In the same way, the intersection point E of the space straight lines can be obtained in the right eyerightAnd Fright. To ensure that the straight line information is retained to the maximum possible extent, at four points { E ] are obtainedleft,Eright,Fleft,FrightAnd taking two points with the largest length as the starting point and the end point of the restored line segment. Thereby realizing the reduction of the spatial straight line section EF. Sequentially calculating the obtained space straight lines to obtain a space straight line segment set N ═ L1,L2···Ln}。
step 4: pose calculation
4.1 Select the left image as the two-dimensional plane for coarse matching: from the line-segment set obtained in step 3, take the intersection point between every two line segments of the set, obtaining the set of intersections M_pnp = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)};
measure and establish the three-dimensional points in the real coordinate system and store them in the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), ..., (X_n, Y_n, Z_n)}; arrange all points of M_pnp in a fixed order consistent with the order of the corresponding points in G_3dpot, and substitute them into the camera imaging model formula:
λ·[x_i, y_i, 1]^T = K·[R | T]·[X_i, Y_i, Z_i, 1]^T, where i ∈ [1, n],
where K is the known camera intrinsic matrix and λ is the scale factor of the imaging model; substituting the points of M_pnp and G_3dpot into the formula and applying the EPnP algorithm finally gives the coarse-matching rotation R_pnp and translation T_pnp;
4.2 Point cloud acquisition: the space lines are segmented and sampled according to a certain threshold, finally giving point cloud data based on the line features; taking one space line, the specific method is as follows:
θ is the included angle between the line and the positive x-axis, len is the length of the line, and k is the number of point cloud points on the line; the line is discretized into a set of k evenly spaced points along its length;
the three-dimensional lines obtained in step 3 are discretized in turn into the point set P according to this algorithm, and the point set Q is formed from the manually established template library in the same way;
4.3 Set the result of 4.1 as the initial rotation matrix and translation vector, i.e. R = R_pnp, T = T_pnp; update the data point set: apply the translation and rotation obtained in 4.1 to P to obtain a new transformed point set, and compute the error d = (1/n)·Σ_i ||q_i − (R·p_i + T)||², where p_i ∈ P and q_i ∈ Q; if the difference between successive iteration errors is smaller than a given threshold, the calculation ends; if not, the iteration is repeated to obtain the rotation R_icp and translation T_icp;
finally, the rotation of the camera coordinate system with respect to the physical coordinate system is R = R_pnp·R_icp and the translation is T = T_pnp + T_icp.
The invention also includes:
1. Step 1, the extraction and merging of straight-line features fusing prior knowledge, comprises:
1.1: number the straight lines l_1, l_2, ..., l_b; the start-point coordinates of the lines are (x_begin1, y_begin1), (x_begin2, y_begin2), ..., (x_beginb, y_beginb) and the end-point coordinates are (x_end1, y_end1), (x_end2, y_end2), ..., (x_endb, y_endb); the distances from the origin O to the lines l_1, l_2, ..., l_b are recorded as d_1, d_2, ..., d_b, and the included angles of l_1, l_2, ..., l_b with the positive x-axis of the image are θ_1, θ_2, ..., θ_b;
1.2: group the lines with the same slope, denoted group_1, group_2, ..., group_m, where m is the number of groups, determined by the number of distinct values among θ_1, θ_2, ..., θ_b; if the number of lines in group_i is greater than 1, let l_ij and l_ik denote the j-th and k-th lines of the i-th group, and compute the relative distance between the two different lines l_ij and l_ik of group_i:
Δd = d_lij − d_lik
where d_lij and d_lik are the shortest distances from the origin to lines l_ij and l_ik respectively; set the merging-distance threshold to d; if Δd < d, l_ij and l_ik are placed into a common set group_s and the line segments in the set are merged;
1.3: proceed according to the above steps until the number of lines in group_s is 1, at which point the line merging is complete.
2. The line included-angle constraint is specifically: set a threshold T_angle for the difference of the angles of the two lines; if |θ_k^l − θ_j^r| < T_angle, where θ_k^l and θ_j^r are the angles of l_k^l and l_j^r with the positive x-axis of the image respectively, the right-image candidate line satisfies the angle constraint; otherwise the Mark value is set to zero.
3. The horizontal constraint is specifically: Dy is the horizontal constraint value and Dy_max the horizontal constraint threshold; let the start point of l_k^l be (x_st1, y_st1) and its end point (x_end1, y_end1), and the start point of l_j^r be (x_str, y_str) and its end point (x_endr, y_endr); Dy is computed from the vertical offsets between the corresponding endpoints of the two lines;
if Dy < Dy_max, the two lines satisfy the horizontal constraint and the score is deducted as Mark_Dy = Mark − Dy·α, where α is the weight; if Dy > Dy_max, the score Mark is set to zero.
4. The left-right disparity constraint is specifically:
Dx is the left-right disparity and Dx_max the left-right disparity threshold; Dx is computed from the horizontal offsets between the corresponding endpoints of the two lines;
if Dx < Dx_max, the two lines satisfy the left-right constraint; otherwise the score Mark is set to zero.
The invention has the beneficial effects that:
1) the point cloud comes from edge features, so it has good anti-interference capability and accurate localization capability, and replacing line-segment matching with point cloud matching fully exploits the robustness advantage of point cloud matching (even if a line segment changes in length or breaks, it can still be matched effectively after being converted to a point cloud);
2) the number of points is limited, and their coverage is not a three-dimensional surface but a limited set of line segments in space, which greatly reduces the number of points and improves the matching speed; at the same time the points all come from the object edges that contribute most to localization, so the positioning accuracy is not noticeably reduced;
3) the line feature extraction and matching process does not require dense depth-field information, and accuracy can be guaranteed for objects with either complex or simple textures.
Drawings
FIG. 1(a) is a schematic view of merging straight lines having the same slope
FIG. 1(b) is a schematic view showing the combination of straight lines having a slope difference of θ
FIG. 2 is a schematic diagram of spatial three-dimensional linear reconstruction
FIG. 3 is a schematic diagram of point cloud acquisition
FIG. 4 is a flow chart of a pose estimation algorithm based on the combination of linear features and point cloud features.
Detailed Description
The invention is further described below in conjunction with fig. 1(a) to 4:
the pose estimation problem is an important problem of the subjects of photogrammetry, computer vision, computer graphics, robots and the like, and is a core problem to be solved by many engineering practices and theoretical researches, such as navigation and positioning of a vision servo system and a mobile robot, object identification and tracking, virtual reality, camera self-calibration, robot hand-eye calibration and the like.
A simple method is to directly extract a plurality of angular point characteristics of an object, and calculate the pose of the object relative to a visual system according to the positions of a characteristic point set in the visual coordinate system and the position of the object, but the number of the angular point characteristics of the object is limited, so that the solving precision is influenced, and even the solving cannot be carried out due to the insufficient number of the angular point characteristics. In recent years, a pose estimation method based on point cloud characteristics is popular, the method can be used for positioning objects with complex shapes and has good robustness, but the method needs image acquisition equipment to acquire high-precision and compact three-dimensional point cloud information, complex operations such as point cloud segmentation and splicing between a measured object and the environment need to be realized, and the time consumption of an algorithm calculation process is caused by the large number of point clouds. The linear features are more obvious than the angular point features, the anti-interference capability is stronger, the method is simple, the calculated amount is small, the image depth does not need to be measured, but the linear features cannot be accurately positioned in the matching process (particularly obvious in stereoscopic vision) as the point features, when the phenomena of change of the length of the extracted linear, fracture and the like caused by shielding or illumination influence occur, the accurate recovery of the three-dimensional pose of an object is difficult to ensure, most of the existing methods put emphasis on the extraction and matching of the linear features, and the final positioning accuracy and reliability are not deeply discussed.
The invention considers the combination of the thought of the corner point characteristic, the straight line characteristic and the point cloud matching, and designs the method which has small calculated amount and high reliability and can adapt to the input of common images (without dense depth information). The method is mainly characterized in that linear features are used as main points, the linear features are restored into a three-dimensional line segment set through left-right eye matching based on a binocular vision system, the line segment set is scattered into a three-dimensional point set through discrete sampling, then the estimation of the position and the posture of an object is achieved through a point cloud matching method, and the result of the position and the posture of the object calculated through angular point features is used as an initial substitution condition before matching. Thereby realizing the estimation of the position and the attitude of the object.
The method comprises the following steps:
Step 1: extract the straight-line features of the object. Before the algorithm runs, the image edges must be extracted; since the result of edge detection contains not only the straight edges of the target object but also other interfering features (such as the edge of a table), the interference must be filtered out. The interference of other line features is filtered by exploiting prior knowledge and the color information of the object to be measured. Experiments also show that the result of line detection cannot guarantee that each edge corresponds to a unique line segment (which affects the subsequent pose estimation), so a post-processing method for fusing line segments is provided.
Step 2: match the left and right line pairs. The matched pairs of left and right lines are obtained accurately through four constraints: the line included angle, the horizontal constraint, the left-right disparity constraint and the length-difference match.
Step 3: three-dimensional reconstruction of the line features. Let EF be a straight line of the object in space, observed by the two cameras as the two image lines e_l f_l and e_r f_r. According to the pinhole imaging model and the central projection principle, the space line EF is the intersection of the plane S_1 formed by O_l and e_l f_l and the plane S_2 formed by O_r and e_r f_r. The projection lines on the imaging planes of the two cameras, together with the intersection of the two planes formed with the respective optical centers, therefore determine a space line.
Step 4: estimate the position and attitude of the object. First, the three-dimensional lines obtained in the previous step are cut and discretized according to a certain threshold, generating a series of three-dimensional point sets. A complete three-dimensional point-set model is also built in the manually defined physical coordinate system.
An initial rough estimate is then made using the 2D-to-3D EPnP pose estimation algorithm, which can bring the coordinate systems of two sufficiently similar point clouds into approximate agreement.
Finally, iterative computation is carried out with the iterative closest point method; taking the rough estimate obtained above as the initialization, the iteration is repeated to obtain a relatively precise pose.
Example:
1. Straight-line feature extraction fusing prior knowledge
The invention uses straight-line segment extraction by the Hough transform as the basic method for extracting the object's line features. The algorithm must extract the image edges before the Hough transform. Assume that a cuboid in the image is the object to be observed; since the result of edge detection contains not only the straight edges of the target object but also other interfering features (such as the edge of a table), the interference must be filtered out. When the vision system searches for the position of the target object in practice, some characteristics of the object (such as an initial position range, color and size) are generally known in advance, and this section gives a method for filtering interference based on such known constraints (even with this strategy the interference may not be removed completely, but the point cloud matching strategy adopted later still works effectively under a certain amount of interference, as detailed in the experimental results). In addition, the result of Hough line detection cannot guarantee that each edge corresponds to a unique line segment (which affects the subsequent pose estimation), so the invention provides a post-processing method for fusing line segments.
1.1) edge extraction and interference rejection
In edge extraction, to ensure that the extracted edges (especially the straight contours) are clear and continuous without breaks, the following criteria should generally be met: a high signal-to-noise ratio, high localization accuracy and a single response per edge. The invention therefore applies the Canny edge detection algorithm to detect the edges of the object.
In the obtained edge contour information, the edges of the table and of other objects interfere with the detection of the object's line features. From prior knowledge, the approximate region and the color range of the object's edge contour in the image are known. First, based on the prior contour-selection range, the invention uses a color-image contour detection algorithm based on the color space: a component field is introduced to represent the cylindrical volume formed by the H, S and V components of the color space, the color-difference measure is improved, the components are detected on this basis, an edge image within the color range is finally obtained, and the resulting image is binarized. The specific region of the object to be measured is then selected in the image.
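By way of illustration only, the following Python/OpenCV sketch shows one way such prior-knowledge filtering could be realized; the HSV bounds, region of interest and Canny thresholds below are illustrative assumptions, not values fixed by the invention.

    import cv2
    import numpy as np

    def object_edges(img_bgr, hsv_lo=(20, 80, 80), hsv_hi=(40, 255, 255),
                     roi=None, canny_lo=50, canny_hi=150):
        # hsv_lo / hsv_hi : assumed color range of the target object (prior knowledge)
        # roi             : (x, y, w, h) assumed rough position range of the object
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))   # color prior
        if roi is not None:                                           # position prior
            region = np.zeros_like(mask)
            x, y, w, h = roi
            region[y:y + h, x:x + w] = 255
            mask = cv2.bitwise_and(mask, region)
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, canny_lo, canny_hi)                   # Canny edge map
        return cv2.bitwise_and(edges, mask)                           # keep object edges only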
1.2) straight line segment extraction and repeated line segment merging
When line features are extracted with the Hough transform, the extraction result is determined by the shape of the object, as shown in FIG. 1(a). Because the lines have different lengths, an edge with a longer side may be detected as several repeated lines, so a line-merging step follows the line detection.
Line segments belonging to the same straight line are now merged for subsequent matching. The main strategy of segment merging is: merge segments whose included angle is smaller than a threshold T5 (2° in the experiments), whose gap length (the minimum distance between segment endpoints) is smaller than a threshold T6 (half of the smaller of the two segment lengths in the experiments) and whose gray-level distributions on both sides are consistent, to obtain a fitted segment; then compute the fitting error and keep as the final merged segment only those whose error is smaller than a threshold T7 (5 pixels in the experiments). The specific method is as follows:
1) Number the lines l_1, l_2, ..., l_b; the start-point coordinates of the lines are (x_st1, y_st1), (x_st2, y_st2), ..., (x_stb, y_stb) and the end-point coordinates are (x_end1, y_end1), (x_end2, y_end2), ..., (x_endb, y_endb). The distances from the origin O to the lines are d_1, d_2, ..., d_b, and the angles of the corresponding lines relative to the horizontal direction of the image are θ_1, θ_2, ..., θ_b.
2) Group the lines with the same slope into group_1, group_2, ..., group_m, where the value of m is determined by the number of distinct values among θ_1, θ_2, ..., θ_b. If the number of lines in group_i is greater than 1, compute the relative distance Δd = d_lij − d_lik between two different lines of group_i; set the merging-distance threshold to d; if Δd < d, then l_ij and l_ik are merged.
3) Denote the merged line by l_uv. Compute the included angle between the two original lines at their ends, i.e. compute the angle θ between the corresponding endpoint vectors from the usual two-vector angle formula cos θ = (u·v)/(|u||v|).
The magnitude of θ is thereby obtained. If θ < π/2, P is selected as the base point, W is the point where the line through P meets l_rs, and the midpoint U of the segment PW is obtained. V is obtained in the same way.
When the merging of segments with the same slope is finished, segments whose slope difference lies within θ and whose distances from the origin differ by Δ are fused next. As in step 1 of the preceding method, lines whose slope difference lies within a set threshold are grouped; in the invention this threshold is set to θ = 3°. The remaining steps follow step 3 of the preceding procedure. A sketch of the merging procedure is given below.
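A minimal sketch, under the assumption that each segment is stored as a pair of endpoints ((x1, y1), (x2, y2)), of the grouping-and-merging strategy described above; the angle and distance tolerances are illustrative.

    import math
    from collections import defaultdict

    def line_params(seg):
        # angle with the image x-axis and distance from the origin O to the line
        (x1, y1), (x2, y2) = seg
        theta = math.atan2(y2 - y1, x2 - x1) % math.pi
        d = abs(x1 * y2 - x2 * y1) / math.hypot(x2 - x1, y2 - y1)
        return theta, d

    def fit_segment(cluster):
        # merge collinear segments by keeping the two extreme endpoints
        pts = sorted(p for seg in cluster for p in seg)
        return (pts[0], pts[-1])

    def merge_segments(segments, angle_tol=math.radians(3), dist_tol=5.0):
        # group segments with (nearly) equal slope, then merge those whose
        # origin distances differ by less than dist_tol
        groups = defaultdict(list)
        for seg in segments:
            theta, d = line_params(seg)
            groups[round(theta / angle_tol)].append((d, seg))
        merged = []
        for members in groups.values():
            members.sort(key=lambda m: m[0])
            cluster = [members[0][1]]
            last_d = members[0][0]
            for d, seg in members[1:]:
                if d - last_d < dist_tol:
                    cluster.append(seg)
                else:
                    merged.append(fit_segment(cluster))
                    cluster = [seg]
                last_d = d
            merged.append(fit_segment(cluster))
        return merged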
2 Straight-line matching in binocular images
Using the results of step 1, stable straight-line sets are obtained in the left and right images, {l_1^l, ..., l_b^l} and {l_1^r, ..., l_c^r}. Take the lines of the left picture in turn from l_1^l to l_b^l and compare each with the lines of the right picture, computing the score according to the formula Mark_i = f(Mark_(i-1)):
wherein Mark is the score for matching two straight lines, and at the start Mark_final = Mark_0 = 100. f is the scoring condition, constrained by the line included angle, the horizontal constraint, the left-right disparity constraint and the length-difference match.
1) Line included angle: if the difference of the angles of the two lines satisfies |θ_k^l − θ_j^r| < T_angle (T_angle = 20° in the invention), the right-image candidate line satisfies the angle constraint. Otherwise the Mark value is set to zero.
2) The epipolar lines are approximately horizontal, which establishes the horizontal constraint. Let the start point of l_k^l be (x_st1, y_st1) and its end point (x_end1, y_end1), and the start point of l_j^r be (x_str, y_str) and its end point (x_endr, y_endr); Dy is computed from the vertical offsets between the corresponding endpoints.
If Dy < Dy_max (Dy_max = 100 in the invention), the two lines satisfy the horizontal constraint and the score is deducted as Mark_Dy = Mark − Dy·α (α is a weight, set to 0.2 in the invention). If Dy > Dy_max, the score Mark is set to zero.
3) The left-right disparity constraint is established through the overlapping region of the left and right views; Dx is computed from the horizontal offsets between the corresponding endpoints.
If Dx < Dx_max (Dx_max takes different values for scenes of different sizes; Dx_max = 240 in the invention), the two lines satisfy the left-right constraint. Otherwise the Mark value is set to zero.
4) Length-difference match: the score is further adjusted according to the difference in length of the two lines, and the final value of Mark is obtained.
Compare Mark_i with Mark_(i-1) and set Mark_final = max(Mark_i, Mark_(i-1)), at the same time recording the indices k_tempL and j_tempR of the corresponding lines in the left and right image pictures. After the k-th line of the left image has traversed all the lines of the right image in turn, take the right-image line numbered j_tempR, traverse all the lines of the left image in turn and execute the same formula.
Finally, iterate to obtain the maximum Mark value and the corresponding index i_tempR. If i_tempL = i_tempR, the left-image line numbered i_tempL and the right-image line numbered j_tempR are matched successfully and stored in the matched-line set M; otherwise the match fails. Repeat the above steps and iterate in turn until the algorithm terminates, finally obtaining the matched-line set M. A sketch of this matching is given below.
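The following Python sketch mirrors the scoring scheme above in simplified form: each candidate pair starts at Mark = 100 and is zeroed by the angle and disparity constraints or penalized by the horizontal constraint, and only mutual best matches are kept. The thresholds (20°, 100, 240, α = 0.2) come from the text, but the exact Dy, Dx and length-difference formulas are not reproduced here; the endpoint-offset sums used below are assumptions for illustration.

    import math

    def line_angle(seg):
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1) % math.pi

    def match_score(left, right, t_angle=math.radians(20),
                    dy_max=100.0, dx_max=240.0, alpha=0.2):
        # score a left/right line pair; 0 means a hard constraint failed
        mark = 100.0
        if abs(line_angle(left) - line_angle(right)) >= t_angle:   # 1) included angle
            return 0.0
        (lx1, ly1), (lx2, ly2) = left
        (rx1, ry1), (rx2, ry2) = right
        dy = abs(ly1 - ry1) + abs(ly2 - ry2)                       # 2) horizontal constraint
        if dy > dy_max:
            return 0.0
        mark -= dy * alpha
        dx = abs(lx1 - rx1) + abs(lx2 - rx2)                       # 3) left-right disparity
        if dx > dx_max:
            return 0.0
        return max(mark, 0.0)

    def mutual_best_matches(lines_l, lines_r):
        # keep a pair only if each line is the other's best-scoring partner
        best_r = [max(range(len(lines_r)), key=lambda j: match_score(l, lines_r[j]))
                  for l in lines_l]
        best_l = [max(range(len(lines_l)), key=lambda i: match_score(lines_l[i], r))
                  for r in lines_r]
        return [(i, j) for i, j in enumerate(best_r)
                if match_score(lines_l[i], lines_r[j]) > 0 and best_l[j] == i]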
3 Three-dimensional reconstruction of line features
As shown in FIG. 2, let there be a straight line EF in space, observed by the two cameras as the two image lines e_l f_l and e_r f_r. According to the pinhole imaging model and the central projection principle, the space line EF is the intersection of the plane S_1 formed by O_l and e_l f_l and the plane S_2 formed by O_r and e_r f_r. The projection lines on the imaging planes of the two cameras, together with the intersection of the two planes formed with the respective optical centers, therefore determine a space line.
3.1) solution of spatial lines
The set of matched line pairs M was obtained in step 2.
Let the equation of an image line be ax + by + c = 0; by the central projection principle, a straight line on the image corresponds to a plane in space, whose equation is written as:
AX+BY+CZ+D=0
From the central imaging model for points, λ·[u, v, 1]^T = [m_11 m_12 m_13 m_14; m_21 m_22 m_23 m_24; m_31 m_32 m_33 m_34]·[X, Y, Z, 1]^T,
where m_11 to m_34 are the elements of the product of the camera intrinsic matrix with the extrinsic parameters; substituting this into the image line equation and comparing coefficients gives A = a·m_11 + b·m_21 + c·m_31, B = a·m_12 + b·m_22 + c·m_32, C = a·m_13 + b·m_23 + c·m_33, D = a·m_14 + b·m_24 + c·m_34.
This space plane equation is called the coplanarity equation of the object line, the optical center and the image line. Solving simultaneously the two coplanarity equations determined by a matched pair of image lines, their intersection gives the corresponding space line. Let the space line L_EF correspond to the image lines e_l f_l and e_r f_r; the space line is then given by the system of the two plane equations A_1·X + B_1·Y + C_1·Z + D_1 = 0 and A_2·X + B_2·Y + C_2·Z + D_2 = 0.
The space line is expressed here in parametric form. Taking an arbitrary value X = X_0 and substituting it into the above equations, the coordinates (X_0, Y_0, Z_0) of a point on the space line are obtained.
Knowing the normal vectors n_1 = (A_1, B_1, C_1) and n_2 = (A_2, B_2, C_2) of the two planes, the direction vector of the line is determined as s = n_1 × n_2 = (s_x, s_y, s_z).
The equation of the space line is then (X − X_0)/s_x = (Y − Y_0)/s_y = (Z − Z_0)/s_z. A sketch of this construction is given below.
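A numerical sketch of this construction, assuming the 3×4 projection matrices M_l and M_r of the two cameras are known: each image line a·u + b·v + c = 0 is lifted to the plane (a, b, c)·M through the optical center, and the space line is recovered as a point plus the cross product of the two plane normals.

    import numpy as np

    def backproject_plane(line_abc, M):
        # plane A*X + B*Y + C*Z + D = 0 through the optical center and the
        # image line a*u + b*v + c = 0, for a 3x4 projection matrix M
        return np.asarray(line_abc, dtype=float) @ M          # (A, B, C, D)

    def intersect_planes(p1, p2):
        # point on, and direction of, the line where two planes meet
        n1, n2 = p1[:3], p2[:3]
        direction = np.cross(n1, n2)
        A = np.vstack([n1, n2, direction])                    # two planes + gauge row
        b = np.array([-p1[3], -p2[3], 0.0])
        point = np.linalg.solve(A, b)
        return point, direction / np.linalg.norm(direction)

    def reconstruct_line(line_left, line_right, M_left, M_right):
        # 3-D line corresponding to a matched pair of image lines
        S1 = backproject_plane(line_left, M_left)
        S2 = backproject_plane(line_right, M_right)
        return intersect_planes(S1, S2)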
3.2) solving the spatial line segment
In the left view, the optical center O_l and the two endpoints of the line segment e_l f_l projected onto the left imaging plane define two space rays O_l e_l and O_l f_l; compute the intersections of the space line l_EF with O_l e_l and with O_l f_l (if two lines are not coplanar, take the point at the shortest distance between them), denoted E_left and F_left respectively. In the same way, the intersection points E_right and F_right are obtained from the right view. To retain the line information as fully as possible, among the four points {E_left, E_right, F_left, F_right}, the two points giving the greatest length are taken as the start and end points of the recovered segment. This realizes the recovery of the spatial segment EF. Processing the obtained space lines in turn yields the set of spatial line segments N = {L_1, L_2, ..., L_n}. A sketch of the endpoint recovery is given below.
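A sketch of the endpoint recovery under the assumption that the reconstructed space line is given as a point p and direction v, and that each endpoint ray is given by an optical center O and a 3-D point e on the imaging plane; the closest-point formula handles the skew (non-coplanar) case mentioned above.

    import numpy as np

    def closest_point_on_line(p, v, q, w):
        # point on line (p + t*v) closest to line (q + s*w); works for skew lines
        v = v / np.linalg.norm(v)
        w = w / np.linalg.norm(w)
        r = q - p
        a, b, c = v @ v, v @ w, w @ w
        d, e = v @ r, w @ r
        denom = a * c - b * b
        t = (d * c - b * e) / denom if abs(denom) > 1e-12 else 0.0
        return p + t * v

    def recover_segment(p, v, rays):
        # rays: list of (O, e) pairs, one per projected segment endpoint
        candidates = np.array([closest_point_on_line(p, v, O, e - O) for O, e in rays])
        proj = candidates @ (v / np.linalg.norm(v))
        # keep the two candidates farthest apart along the line as the segment EF
        return candidates[np.argmin(proj)], candidates[np.argmax(proj)]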
4 pose calculation
The object pose estimation algorithm first matches the lines detected by the left and right cameras in the two-dimensional plane and restores them to three-dimensional lines by the inverse mapping principle of binocular imaging. The three-dimensional lines are then cut and discretized according to a certain threshold, generating a series of three-dimensional point sets. A complete three-dimensional point-set model is also built in the manually defined physical coordinate system. Finally, iterative computation with the iterative closest point algorithm yields a relatively accurate pose.
Before the iterative closest point algorithm is applied to the two point sets, a rough estimate equivalent to an initialization is made using the 2D-to-3D EPnP pose estimation algorithm, which can bring the coordinate systems of two sufficiently similar point clouds into approximate agreement.
4.1) implementation of the algorithm for the coarse matching of the targets
The left image is selected as the two-dimensional plane for the EPnP algorithm. From the line-segment set obtained in step 1, the intersection point of every pair of segments in the set is computed, giving M_pnp = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}.
Ideally, three detected lines of a planar corner intersect at the same point. In practice, line detection has errors, so three lines may intersect pairwise at slightly different points; in that case the nearby intersection points are merged, as sketched below.
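A sketch of how the pairwise intersections could be computed and nearby intersections merged; the representation of a line by two endpoints and the merge radius are assumptions for illustration.

    import math

    def intersect_2d(seg_a, seg_b):
        # intersection of the infinite lines through two segments, or None if parallel
        (x1, y1), (x2, y2) = seg_a
        (x3, y3), (x4, y4) = seg_b
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(den) < 1e-9:
            return None
        a = x1 * y2 - y1 * x2
        b = x3 * y4 - y3 * x4
        return ((a * (x3 - x4) - (x1 - x2) * b) / den,
                (a * (y3 - y4) - (y1 - y2) * b) / den)

    def intersections(segments, merge_radius=3.0):
        # all pairwise intersections, with nearby points averaged into one
        pts = []
        for i in range(len(segments)):
            for j in range(i + 1, len(segments)):
                p = intersect_2d(segments[i], segments[j])
                if p is not None:
                    pts.append(p)
        merged = []
        for p in pts:
            for k, q in enumerate(merged):
                if math.hypot(p[0] - q[0], p[1] - q[1]) < merge_radius:
                    merged[k] = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
                    break
            else:
                merged.append(p)
        return merged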
The three-dimensional points are measured and established in the real coordinate system and stored in the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), ..., (X_n, Y_n, Z_n)}. All points of the set M_pnp are arranged in a fixed order consistent with the order of the corresponding points in G_3dpot, giving M_2dpot = {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}, and substituted into the camera imaging model:
λ·[x_i, y_i, 1]^T = K·[R | T]·[X_i, Y_i, Z_i, 1]^T, where i ∈ [1, n],
where K is the camera intrinsic matrix, assumed known in the invention. Substituting the points into the formula and applying the EPnP algorithm finally yields the coarse-matching rotation R_pnp and translation T_pnp. A sketch of the coarse matching is given below.
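A sketch of the coarse matching using OpenCV's EPnP solver, assuming the ordered 2D-3D correspondences M_pnp and G_3dpot have been established as described above.

    import cv2
    import numpy as np

    def coarse_pose(pts_2d, pts_3d, K, dist_coeffs=None):
        # pts_2d : (n, 2) intersection points M_pnp in the left image
        # pts_3d : (n, 3) corresponding measured points G_3dpot
        # K      : (3, 3) camera intrinsic matrix (assumed known)
        pts_2d = np.asarray(pts_2d, dtype=np.float64).reshape(-1, 1, 2)
        pts_3d = np.asarray(pts_3d, dtype=np.float64).reshape(-1, 1, 3)
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)
        ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist_coeffs,
                                      flags=cv2.SOLVEPNP_EPNP)
        if not ok:
            raise RuntimeError("EPnP failed")
        R_pnp, _ = cv2.Rodrigues(rvec)      # rotation vector -> 3x3 rotation matrix
        return R_pnp, tvec.reshape(3)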
4.2) method for acquiring point cloud
The point cloud is acquired by segmenting and sampling the space lines according to a certain threshold, finally giving point cloud data based on the line features. Taking one space line, the specific method is as follows:
θ is the included angle between the line and the positive x-axis, len is the length of the line, and k is the number of point cloud points on the line; the line is discretized into a set of k evenly spaced points along its length.
All the spatial feature lines reconstructed in step 3 are discretized in turn into the point set P according to this algorithm, and the point set Q of the manually established template library is formed in the same way. A sketch of the discretization is given below.
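A sketch of the discretization step, assuming each reconstructed segment is stored as a pair of 3-D endpoints; the sampling step that determines k is an illustrative assumption.

    import numpy as np

    def sample_segment(p0, p1, step=0.01):
        # evenly spaced points along a 3-D segment; step is an assumed spacing
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        length = np.linalg.norm(p1 - p0)
        k = max(int(length / step), 1)              # number of points on the line
        t = np.linspace(0.0, 1.0, k + 1)
        return p0[None, :] + t[:, None] * (p1 - p0)[None, :]

    def segments_to_cloud(segments, step=0.01):
        # point set P (or Q) obtained by discretizing every segment in turn
        return np.vstack([sample_segment(p0, p1, step) for p0, p1 in segments])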
4.3) implementation of the Algorithm for the Fine matching of the targets
R and T are initialized with the results of 4.1, i.e. R = R_pnp, T = T_pnp. The data point set is then updated: the translation and rotation obtained in the previous step are applied to P to obtain a new transformed point set, and the error d = (1/n)·Σ_i ||q_i − (R·p_i + T)||² is computed, where p_i ∈ P and q_i ∈ Q. If the difference between successive iteration errors is less than a given threshold, the calculation ends; if not, the iteration is repeated to obtain the rotation R_icp and translation T_icp.
Finally, the rotation of the camera coordinate system with respect to the physical coordinate system is R = R_pnp·R_icp and the translation is T = T_pnp + T_icp. A sketch of this fine matching is given below.
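A minimal point-to-point ICP sketch of this fine matching, assuming the cloud P has already been transformed by the coarse result R_pnp, T_pnp; nearest neighbors come from scipy's cKDTree and the update uses the standard SVD solution. The library choice and stopping threshold are illustrative, and the final composition simply mirrors the formulas stated in the text.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_increment(P0, Q, max_iter=50, tol=1e-6):
        # incremental (R_icp, T_icp) aligning the coarsely aligned cloud P0 to Q
        R, T = np.eye(3), np.zeros(3)
        tree = cKDTree(Q)
        prev_err = np.inf
        for _ in range(max_iter):
            P_t = P0 @ R.T + T                       # transformed data points
            dist, idx = tree.query(P_t)              # closest template points q_i
            err = np.mean(dist ** 2)                 # d = (1/n) * sum ||q_i - (R p_i + T)||^2
            if abs(prev_err - err) < tol:            # error change below threshold: stop
                break
            prev_err = err
            Qm = Q[idx]
            mu_p, mu_q = P0.mean(axis=0), Qm.mean(axis=0)
            H = (P0 - mu_p).T @ (Qm - mu_q)          # cross-covariance for the rigid fit
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                 # avoid a reflection
                Vt[-1] *= -1
                R = Vt.T @ U.T
            T = mu_q - R @ mu_p
        return R, T

    def final_pose(R_pnp, T_pnp, R_icp, T_icp):
        # composed as stated in the text: R = R_pnp * R_icp, T = T_pnp + T_icp
        return R_pnp @ R_icp, T_pnp + T_icp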
The specific implementation mode of the invention also comprises:
the invention comprises the following steps:
(1) Straight-line feature extraction and merging fusing prior knowledge
The Canny edge detection algorithm is applied to detect the edges of the object. A color-image contour detection algorithm based on the HSI color space is used, and the Hough transform is applied to extract the line features; the straight-line segments are extracted and repeated segments are merged, specifically as follows:
1.1) Number the straight lines l_1, l_2, ..., l_b; the start-point coordinates of the lines are (x_begin1, y_begin1), (x_begin2, y_begin2), ..., (x_beginb, y_beginb) and the end-point coordinates are (x_end1, y_end1), (x_end2, y_end2), ..., (x_endb, y_endb). The distances from the origin O to the lines l_1, l_2, ..., l_b are recorded as d_1, d_2, ..., d_b, and the included angles of the corresponding lines with the positive x-axis of the image are θ_1, θ_2, ..., θ_b.
1.2) Group the lines with the same slope, denoted group_1, group_2, ..., group_m, where m is the number of groups, determined by the number of distinct values among θ_1, θ_2, ..., θ_b. If the number of lines in group_i is greater than 1, let l_ij and l_ik denote the j-th and k-th lines of the i-th group, and compute the relative distance between the two different lines l_ij and l_ik of group_i:
Δd = d_lij − d_lik
where d_lij and d_lik are the shortest distances from the origin to lines l_ij and l_ik respectively. Set the merging-distance threshold to d; if Δd < d, l_ij and l_ik are placed into a common set group_λ and the line segments in the set are merged.
1.3) Proceed according to the above steps until the number of lines in group_λ is 1, at which point the line merging is complete.
(2) Straight line matching in binocular images
Let the straight-line sets in the left and right images be {l_1^l, ..., l_b^l} and {l_1^r, ..., l_c^r}; take the lines of the left picture in turn from l_1^l to l_b^l and compare each with the lines of the right picture, computing the score according to the formula Mark_i = f(Mark_(i-1)):
wherein Mark is the score for matching two straight lines, and at the start Mark_final = Mark_0 = 100. Mark_i is the score at a given stage of the process, and f is the designated scoring condition, constrained by the line included angle, the horizontal constraint and the left-right disparity.
Take one straight line each from the left-image and right-image line sets, l_k^l and l_j^r (the superscript l denotes the left image and r the right image), and substitute them into the following constraints:
2.1) Line included-angle constraint. Set a threshold T_angle for the difference of the angles of the two lines; if |θ_k^l − θ_j^r| < T_angle (where θ_k^l and θ_j^r are the angles of the lines with the positive x-axis of the image; T_angle = 20° in the invention), the right-image candidate line satisfies the angle constraint. Otherwise the Mark value is set to zero.
2.2) The epipolar lines are approximately horizontal, which establishes the horizontal constraint. Let the start point of l_k^l be (x_st1, y_st1) and its end point (x_end1, y_end1), and the start point of l_j^r be (x_str, y_str) and its end point (x_endr, y_endr); Dy is computed from the vertical offsets between the corresponding endpoints.
If Dy < Dy_max (Dy_max is the horizontal constraint threshold, set to 100 in the invention), the two lines satisfy the horizontal constraint and the score is deducted as Mark_Dy = Mark − Dy·α (α is a weight, set to 0.2 in the invention). If Dy > Dy_max, the score Mark is set to zero.
2.3) The left-right disparity constraint is established in the overlapping region of the left and right views; Dx is computed from the horizontal offsets between the corresponding endpoints.
If Dx < Dx_max (Dx_max is the left-right disparity threshold; Dx_max = 240 in the invention), the two lines satisfy the left-right constraint. Otherwise the Mark value is set to zero.
Through the above steps the final value of Mark is obtained; compare Mark_i with Mark_(i-1) and set Mark_final = max(Mark_i, Mark_(i-1)), at the same time recording the indices k_tempL and j_tempR of the corresponding lines in the left and right image pictures. After the k-th line of the left image has traversed all the lines of the right image in turn, take the right-image line numbered j_tempR, traverse all the lines of the left image in turn and execute the same formula.
Finally, iterate to obtain the maximum Mark value and the corresponding index i_tempR. If i_tempL = i_tempR, the left-image line numbered i_tempL and the right-image line numbered j_tempR are matched successfully and stored in the matched-line set M; otherwise the match fails. Repeat the above steps and iterate in turn until the algorithm terminates, finally obtaining the matched-line set M.
(3) Three-dimensional reconstruction of straight line features
3.1) solution of spatial lines
Let the equation of an image line be ax + by + c = 0; by the central projection principle, a straight line on the image corresponds to a plane in space, whose equation is written as:
AX+BY+CZ+D=0
From the central imaging model for points, λ·[u, v, 1]^T = [m_11 m_12 m_13 m_14; m_21 m_22 m_23 m_24; m_31 m_32 m_33 m_34]·[X, Y, Z, 1]^T,
where m_11 to m_34 are the elements of the product of the camera intrinsic matrix with the extrinsic parameters; substituting this into the image line equation and comparing coefficients gives A = a·m_11 + b·m_21 + c·m_31, B = a·m_12 + b·m_22 + c·m_32, C = a·m_13 + b·m_23 + c·m_33, D = a·m_14 + b·m_24 + c·m_34.
This space plane equation is called the coplanarity equation of the object line, the optical center and the image line. Solving simultaneously the two coplanarity equations determined by a matched pair of image lines, their intersection gives the corresponding space line. Let the space line L_EF correspond to the image lines e_l f_l and e_r f_r; the space line is then given by the system of the two plane equations A_1·X + B_1·Y + C_1·Z + D_1 = 0 and A_2·X + B_2·Y + C_2·Z + D_2 = 0.
The space line is expressed here in parametric form. Taking an arbitrary value X = X_0 and substituting it into the above equations, the coordinates (X_0, Y_0, Z_0) of a point on the space line are obtained.
Knowing the normal vectors n_1 = (A_1, B_1, C_1) and n_2 = (A_2, B_2, C_2) of the two planes, the direction vector of the line is determined as s = n_1 × n_2 = (s_x, s_y, s_z).
The equation of the space line is then (X − X_0)/s_x = (Y − Y_0)/s_y = (Z − Z_0)/s_z.
3.2) solving the spatial line segment
In the left view, the optical center and the two endpoints of the line segment e_l f_l define two space rays O_l e_l and O_l f_l; compute the intersections of the space line l_EF with O_l e_l and with O_l f_l (if two lines are not coplanar, take the point at the shortest distance between them), denoted E_left and F_left respectively. In the same way, E_right and F_right are obtained from the right view. To retain the line information as fully as possible, among the four points {E_left, E_right, F_left, F_right}, the two points giving the greatest length are taken as the start and end points of the recovered segment. This realizes the recovery of the spatial segment EF. Processing the obtained space lines in turn yields the set of spatial line segments N = {L_1, L_2, ..., L_n}.
(4) Pose calculation
4.1) The left image is selected as the two-dimensional plane for coarse matching. From the line-segment set obtained in step (3), the intersection point of every pair of segments in the set is computed, giving the intersection set M_pnp = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}.
The three-dimensional points are measured and established in the real coordinate system and stored in the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), ..., (X_n, Y_n, Z_n)}. All points of the set M_pnp are arranged in a fixed order consistent with the order of the corresponding points in G_3dpot and substituted into the camera imaging model:
λ·[x_i, y_i, 1]^T = K·[R | T]·[X_i, Y_i, Z_i, 1]^T, where i ∈ [1, n],
where K is the camera intrinsic matrix, assumed known in the invention. Substituting the points into the formula and applying the EPnP algorithm finally yields the coarse-matching rotation R_pnp and translation T_pnp.
4.2) The point cloud is acquired by segmenting and sampling the space lines according to a certain threshold, finally giving point cloud data based on the line features. Taking one space line, the specific method is as follows:
θ is the included angle between the line and the positive x-axis, len is the length of the line, and k is the number of point cloud points on the line; the line is discretized into a set of k evenly spaced points along its length.
All the spatial feature lines reconstructed in step (3) are discretized in turn into the point set P according to this algorithm, and the point set Q of the manually established template library is formed in the same way.
4.3) Set the results of 4.1 as the initial rotation matrix and translation vector, i.e. R = R_pnp, T = T_pnp. The data point set is then updated: the translation and rotation obtained in the previous step are applied to P to obtain a new transformed point set, and the error d = (1/n)·Σ_i ||q_i − (R·p_i + T)||² is computed, where p_i ∈ P and q_i ∈ Q.
If the difference between successive iteration errors is less than a given threshold, the calculation ends; if not, the iteration is repeated to obtain the rotation R_icp and translation T_icp.
Finally, the rotation of the camera coordinate system with respect to the physical coordinate system is R = R_pnp·R_icp and the translation is T = T_pnp + T_icp.

Claims (5)

1. A pose estimation method based on combination of straight line features and point cloud features is characterized by comprising the following steps:
step 1: straight-line feature extraction and merging, fusing prior knowledge;
step 2: straight line matching in binocular images:
set of straight lines in left and right eye imagesAndtaking the straight line in the left picture fromToIn turn aligned with the line in the right eyeAnd (3) calculating according to the formula:
wherein, Mark is the score value of matching two straight lines, and the Mark at the beginningfinal=Mark0=100,MarkiF is a score value in a certain process, and f is a designated scoring condition, including a linear included angle, horizontal constraint and left-right eye parallax;
respectively taking two straight lines from the straight line set of the left eye image and the right eye image(l represents a left eye image)(r represents a right eye image) to be brought into a linear included angle constraint, a horizontal constraint and a left-right visual difference constraint;
through the steps, the final value of the Mark is obtained, and the Mark is comparediAnd Marki-1Value of (1), Markfinal=max(Marki,Marki-1) And simultaneously recording the number k of the straight line in the left and right image pictures corresponding to the score valuetempL,jtempR(ii) a After the k-th straight line of the left eye image traverses all straight lines in the right eye in sequence, the number j of the straight line in the right eye image picture is usedtempRGo to traverse all the straight lines in the left eye in sequence and execute the publicFormula (II)
Finally, iterating to obtain the maximum Mark value and corresponding itempRIf i istempL=itempRThe left eye image line is numbered itempLThe straight line of (a) and the straight line of the right eye image are numbered as jtempLSuccessfully matching and storing the matching into a matching straight line set M; otherwise, the matching fails, the steps are repeated, and the iteration is carried out in sequence until the algorithm is terminated, and finally the result is obtained
step 3: three-dimensional reconstruction of straight-line features
3.1 solving of spatial lines
Let the equation of an image line be ax + by + c = 0; by the central projection principle, a straight line on the image corresponds to a plane in space, whose equation is written as:
AX+BY+CZ+D=0
from the central imaging model for points, λ·[u, v, 1]^T = [m_11 m_12 m_13 m_14; m_21 m_22 m_23 m_24; m_31 m_32 m_33 m_34]·[X, Y, Z, 1]^T,
where m_11 to m_34 are the elements of the product of the camera intrinsic matrix with the extrinsic parameters; substituting this into the image line equation and comparing coefficients gives A = a·m_11 + b·m_21 + c·m_31, B = a·m_12 + b·m_22 + c·m_32, C = a·m_13 + b·m_23 + c·m_33, D = a·m_14 + b·m_24 + c·m_34;
solving simultaneously the two coplanarity equations determined by a matched pair of image lines, their intersection gives the corresponding space line; let the space line L_EF correspond to the image lines e_l f_l and e_r f_r; the space line is then given by the system of the two plane equations A_1·X + B_1·Y + C_1·Z + D_1 = 0 and A_2·X + B_2·Y + C_2·Z + D_2 = 0;
using the parametric form of a space line, take an arbitrary value X = X_0 and substitute it into the above equations to solve for the coordinates (X_0, Y_0, Z_0) of a point on the space line;
knowing the normal vectors n_1 = (A_1, B_1, C_1) and n_2 = (A_2, B_2, C_2) of the two planes, the direction vector of the line is determined as s = n_1 × n_2 = (s_x, s_y, s_z);
the equation of the space line is then (X − X_0)/s_x = (Y − Y_0)/s_y = (Z − Z_0)/s_z;
3.2 solving spatial line segments
In the left view, the optical center O_l and the two endpoints of the line segment e_l f_l projected onto the left imaging plane define two space rays O_l e_l and O_l f_l; compute the intersections of the space line l_EF with O_l e_l and with O_l f_l (if two lines are not coplanar, take the point at the shortest distance between them), denoted E_left and F_left respectively. In the same way, the intersection points E_right and F_right are obtained from the right view. To retain the line information as fully as possible, among the four points {E_left, E_right, F_left, F_right}, the two points giving the greatest length are taken as the start and end points of the recovered segment. This realizes the recovery of the spatial segment EF. Processing the obtained space lines in turn yields the set of spatial line segments N = {L_1, L_2, ..., L_n}.
step 4: pose calculation
4.1 select left destination image as coarse matched two-dimensional plane: in step 3, a line segment set is obtainedAcquiring the intersection point between every two line segments in the set, and acquiring the set of the intersection point as Mpnp={(x1,y1),(x2,y2)…(xn,yn)};
The corresponding three-dimensional points in the real coordinate system are measured and established and stored into the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), …, (X_n, Y_n, Z_n)}. All points in the set M_pnp are arranged in a fixed order so that the order corresponds to that of the point set G_3dpot, and they are substituted into the camera imaging model formula:
λ[x_i, y_i, 1]^T = K[R | T][X_i, Y_i, Z_i, 1]^T, where i ∈ [1, n]
where K is the known camera intrinsic parameter matrix and λ is the scale factor of the imaging model. The points of M_pnp and G_3dpot are substituted into the formula respectively, the EPnP algorithm is applied, and the coarse-matching rotation variable R_pnp and translation variable T_pnp are finally obtained.
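If OpenCV is available, the coarse EPnP stage admits a short sketch; the array names mirror M_pnp and G_3dpot, and lens distortion is assumed negligible:

```python
import numpy as np
import cv2

def coarse_pose_epnp(pts_2d, pts_3d, K):
    """Coarse pose from the 2D intersection points (M_pnp) and the measured
    3D points (G_3dpot), kept in the same order."""
    obj = np.asarray(pts_3d, dtype=np.float64).reshape(-1, 3)
    img = np.asarray(pts_2d, dtype=np.float64).reshape(-1, 2)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, None, flags=cv2.SOLVEPNP_EPNP)
    R_pnp, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 matrix
    return R_pnp, tvec                    # coarse R_pnp, T_pnp
```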
4.2 Point-cloud acquisition: the space straight lines are sampled in segments according to a certain threshold, finally yielding point-cloud data based on the straight-line features. Taking a certain straight line in space, the specific method is as follows:
θ is the included angle between the straight line and the positive direction of the x axis, len is the length of the straight line, k is the number of points sampled on the straight line, and the point set into which the straight line is discretized is:
The three-dimensional straight lines obtained in step 3 are discretized in sequence into the point set P according to this algorithm, and the point set Q is formed from the manually established template library by the same method;
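A sketch of the segment-to-point-cloud sampling of 4.2; it interpolates between the segment endpoints rather than using the angle/length parameterisation, which yields the same evenly spaced points (`spacing` plays the role of the sampling threshold and is an assumed parameter):

```python
import numpy as np

def segments_to_point_cloud(segments, spacing):
    """Discretise each 3D segment (start, end) into points at most `spacing`
    apart; returns an (N, 3) array covering the whole segment set."""
    points = []
    for start, end in segments:
        start, end = np.asarray(start, float), np.asarray(end, float)
        length = np.linalg.norm(end - start)
        k = max(int(np.ceil(length / spacing)), 1)   # number of points on the line
        for i in range(k + 1):
            points.append(start + (end - start) * i / k)
    return np.vstack(points)
```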
4.3 The result of 4.1 is set as the initial rotation matrix and translation vector, i.e. R = R_pnp, T = T_pnp. The data point set is updated: a new transformed point set is obtained by applying the translation and rotation parameters from 4.1 to P, and the error is calculated, where p_i ∈ P and q_i ∈ Q. If the difference between successive iterative estimation errors is less than a given threshold, the calculation terminates; otherwise the iteration process is repeated, finally yielding the rotation variable R_icp and translation variable T_icp;
Finally, the rotation variable of the camera coordinate system with respect to the physical coordinate system is R = R_pnp · R_icp, and the translation variable is T = T_pnp + T_icp.
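A sketch of the ICP refinement of 4.3, assuming the usual nearest-neighbour correspondences and mean squared point-to-point error (the exact error formula is not reproduced above). It starts from the EPnP pose and returns the total (R, T) directly, i.e. the composition with R_pnp and T_pnp is already folded into the result:

```python
import numpy as np

def icp_refine(P, Q, R_init, T_init, max_iter=50, tol=1e-6):
    """Refine (R, T) so that R @ p + T aligns the data set P with the
    template set Q, starting from the EPnP pose (brute-force matching)."""
    R, T = R_init.copy(), np.asarray(T_init, float).reshape(3)
    prev_err = np.inf
    for _ in range(max_iter):
        P_t = P @ R.T + T                                   # transform the data set
        # nearest neighbour in Q for every transformed point of P
        idx = np.argmin(((P_t[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)
        Q_m = Q[idx]
        err = np.mean(np.sum((Q_m - P_t) ** 2, axis=1))     # mean squared error
        if abs(prev_err - err) < tol:                       # error change < threshold
            break
        prev_err = err
        # closed-form (SVD/Kabsch) update between the matched sets
        mu_p, mu_q = P_t.mean(0), Q_m.mean(0)
        U, _, Vt = np.linalg.svd((P_t - mu_p).T @ (Q_m - mu_q))
        if np.linalg.det(Vt.T @ U.T) < 0:                   # avoid reflections
            Vt[-1] *= -1
        R_step = Vt.T @ U.T
        T_step = mu_q - R_step @ mu_p
        R, T = R_step @ R, R_step @ T + T_step              # compose with current pose
    return R, T
```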
2. The pose estimation method based on the combination of the straight line feature and the point cloud feature as claimed in claim 1, wherein: in step 1, the extraction and merging of the straight-line features fused with prior knowledge comprises:
1.1: straight line number l1,l2…lbThe coordinate value of the starting point of the straight line is (x)begin1,ybegin1),(xbegin2,ybegin2),…,(xbeginb,ybeginb) The coordinate value of the end point is (x)end1,yend1),(xend2,yend2),…,(xendb,yendb) (ii) a From the origin O to the respective lines l1,l2…lbIs recorded as d1,d2,…,db,l1,l2…lbThe included angles with the positive direction of the x axis of the image are theta12…θb
1.2: grouping the lines with the same slope, and recording as group1,group2,…,groupmWhere m is a group, each group having a value represented by θ12…θbDetermining the number of the differences; if groupiThe number of straight lines in (1) is greater than 1ijAnd likRespectively represent groupiThe jth straight line and the kth straight line in the ith group are calculatediMiddle two different straight lines lijAnd likRelative distance therebetween:
Δd = d_lij − d_lik
where d_lij and d_lik are respectively the shortest distances from the origin to the straight lines l_ij and l_ik. The merging-distance threshold is set to d; if Δd < d, then l_ij and l_ik are placed into a common set group_s and the straight-line segments in the set are merged;
1.3: according to the steps, until groupsThe number of straight lines in (1) is 1, and the merging of straight lines is completed.
3. The pose estimation method based on the combination of the straight line feature and the point cloud feature as claimed in claim 1, wherein the straight-line included-angle constraint specifically comprises: a threshold T_angle for the difference between the included angles of the two straight lines is set; if the difference between the included angles of the left-eye and right-eye straight lines with the positive direction of the image x axis is smaller than T_angle, the right-eye target straight line satisfies the angle constraint; otherwise the Mark value returns to zero.
4. The pose estimation method based on the combination of the straight line feature and the point cloud feature as claimed in claim 1, wherein the horizontal constraint is specifically: Dy is the horizontal constraint value and Dy_max is the horizontal constraint threshold. Let the left-eye straight line have start-point coordinates (x_st1, y_st1) and end-point coordinates (x_end1, y_end1), and the right-eye straight line have start-point coordinates (x_str, y_str) and end-point coordinates (x_endr, y_endr); then Dy satisfies:
where, if Dy < Dy_max, the two straight lines satisfy the horizontal constraint and a deduction is made from the score value: Mark = Mark − Dy·α, where α is the weight; if Dy > Dy_max, the score value Mark is zeroed.
5. The pose estimation method based on the combination of the straight line feature and the point cloud feature as claimed in claim 1, wherein: the left-right eye parallax constraint specifically comprises:
Dx is the left-right parallax and Dx_max is the left-right parallax threshold; Dx satisfies:
where, if Dx < Dx_max, the two straight lines satisfy the left-right parallax constraint; otherwise the score value Mark returns to zero.
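Claims 3-5 together define the Mark score consumed by the matching loop sketched after the matching step; a combined sketch is given below. The exact Dy and Dx formulas are not reproduced above, so mean endpoint differences are used as illustrative stand-ins, and the initial Mark value and all thresholds are hypothetical parameters:

```python
import numpy as np

def score_pair(left, right, mark_init=100.0, t_angle=np.deg2rad(5.0),
               dy_max=10.0, dx_max=120.0, alpha=1.0):
    """left, right: segments (x_start, y_start, x_end, y_end) in the left-eye
    and right-eye images. Returns the Mark value, zero if a constraint fails.
    Dy and Dx below are illustrative stand-ins for the patent's formulas."""
    ang = lambda s: np.arctan2(s[3] - s[1], s[2] - s[0]) % np.pi
    # included-angle constraint (claim 3)
    if abs(ang(left) - ang(right)) >= t_angle:
        return 0.0
    mark = mark_init
    # horizontal constraint (claim 4): mean vertical offset of the endpoints
    dy = 0.5 * (abs(left[1] - right[1]) + abs(left[3] - right[3]))
    if dy > dy_max:
        return 0.0
    mark -= dy * alpha                      # Mark = Mark - Dy * alpha
    # left-right parallax constraint (claim 5): mean horizontal disparity
    dx = 0.5 * (abs(left[0] - right[0]) + abs(left[2] - right[2]))
    if dx >= dx_max:
        return 0.0
    return mark
```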
CN201910526419.3A 2019-06-18 2019-06-18 Pose estimation method based on combination of linear features and point cloud features Active CN110310331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910526419.3A CN110310331B (en) 2019-06-18 2019-06-18 Pose estimation method based on combination of linear features and point cloud features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910526419.3A CN110310331B (en) 2019-06-18 2019-06-18 Pose estimation method based on combination of linear features and point cloud features

Publications (2)

Publication Number Publication Date
CN110310331A true CN110310331A (en) 2019-10-08
CN110310331B CN110310331B (en) 2023-04-14

Family

ID=68076179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910526419.3A Active CN110310331B (en) 2019-06-18 2019-06-18 Pose estimation method based on combination of linear features and point cloud features

Country Status (1)

Country Link
CN (1) CN110310331B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833761A (en) * 2010-04-20 2010-09-15 南京航空航天大学 Unmanned aerial vehicle (UAV) position and orientation estimation method based on cooperative target characteristic lines
US20140267614A1 (en) * 2013-03-15 2014-09-18 Seiko Epson Corporation 2D/3D Localization and Pose Estimation of Harness Cables Using A Configurable Structure Representation for Robot Operations
CN106485690A (en) * 2015-08-25 2017-03-08 南京理工大学 Cloud data based on a feature and the autoregistration fusion method of optical image
CN105976353A (en) * 2016-04-14 2016-09-28 南京理工大学 Spatial non-cooperative target pose estimation method based on model and point cloud global matching
CN107945220A (en) * 2017-11-30 2018-04-20 华中科技大学 A kind of method for reconstructing based on binocular vision
CN108982901A (en) * 2018-06-14 2018-12-11 哈尔滨工业大学 A kind of rotating speed measurement method of at the uniform velocity rotary body
CN109166149A (en) * 2018-08-13 2019-01-08 武汉大学 A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN109544599A (en) * 2018-11-22 2019-03-29 四川大学 A kind of three-dimensional point cloud method for registering based on the estimation of camera pose
CN109373898A (en) * 2018-11-27 2019-02-22 华中科技大学 A kind of complex parts pose estimating system and method based on three-dimensional measurement point cloud
CN109801337A (en) * 2019-01-21 2019-05-24 同济大学 A kind of 6D position and orientation estimation method of Case-based Reasoning segmentation network and iteration optimization

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
DIYI LIU: "Point Pair Feature-Based Pose Estimation with Multiple Edge Appearance Models (PPF-MEAM) for Robotic Bin Picking", 《HTTPS://WWW.MDPI.COM/1424-8220/18/8/2719》 *
GUO NINGBO: "A fusion method for TOF point cloud and digital photograph", 《A FUSION METHOD FOR TOF POINT CLOUD AND DIGITAL PHOTOGRAPH》 *
任祥华: "激光雷达室内SLAM方法", 《中国优秀硕士学位论文全文数据库 (信息科技辑)》 *
吕强等: "基于点云配准的室内移动机器人6自由度位姿估计", 《装甲兵工程学院学报》 *
张名芳: "基于激光雷达的远距离运动车辆位姿估计", 《公路交通科技》 *
张智: "机器人双目图像实时立体匹配算法", 《北京理工大学学报》 *
石广升: "基于Kinect的物体三维模型构建和姿态估计方法研究", 《中国优秀硕士学位论文全文数据库 (信息科技辑)》 *
童磊: "面向机器人抓取的零件识别与定位方法研究", 《中国优秀硕士学位论文全文数据库 (信息科技辑)》 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111308481B (en) * 2020-02-21 2021-10-15 深圳市银星智能科技股份有限公司 Laser positioning method and device and mobile robot
CN111308481A (en) * 2020-02-21 2020-06-19 深圳市银星智能科技股份有限公司 Laser positioning method and device and mobile robot
CN111325796B (en) * 2020-02-28 2023-08-18 北京百度网讯科技有限公司 Method and apparatus for determining pose of vision equipment
CN111325796A (en) * 2020-02-28 2020-06-23 北京百度网讯科技有限公司 Method and apparatus for determining pose of vision device
CN111813882A (en) * 2020-06-18 2020-10-23 浙江大华技术股份有限公司 Robot map construction method, device and storage medium
CN111813882B (en) * 2020-06-18 2024-05-14 浙江华睿科技股份有限公司 Robot map construction method, device and storage medium
CN112577500A (en) * 2020-11-27 2021-03-30 北京迈格威科技有限公司 Positioning and map construction method and device, robot and computer storage medium
WO2022110767A1 (en) * 2020-11-27 2022-06-02 北京迈格威科技有限公司 Localization and mapping method, apparatus, robot, and computer-readable storage medium
CN112720477A (en) * 2020-12-22 2021-04-30 泉州装备制造研究所 Object optimal grabbing and identifying method based on local point cloud model
CN112720477B (en) * 2020-12-22 2024-01-30 泉州装备制造研究所 Object optimal grabbing and identifying method based on local point cloud model
CN112862692A (en) * 2021-03-30 2021-05-28 煤炭科学研究总院 Image splicing method applied to underground coal mine roadway
CN116523984A (en) * 2023-07-05 2023-08-01 矽瞻科技(成都)有限公司 3D point cloud positioning and registering method, device and medium
CN116523984B (en) * 2023-07-05 2023-09-26 矽瞻科技(成都)有限公司 3D point cloud positioning and registering method, device and medium

Also Published As

Publication number Publication date
CN110310331B (en) 2023-04-14

Similar Documents

Publication Publication Date Title
CN110310331B (en) Pose estimation method based on combination of linear features and point cloud features
CN105913489B (en) A kind of indoor three-dimensional scenic reconstructing method using plane characteristic
CN107063228B (en) Target attitude calculation method based on binocular vision
CN111563921B (en) Underwater point cloud acquisition method based on binocular camera
CN107392947B (en) 2D-3D image registration method based on contour coplanar four-point set
Stamos et al. Integration of range and image sensing for photo-realistic 3D modeling
CN109615654B (en) Method for measuring corrosion depth and area of inner surface of drainage pipeline based on binocular vision
CN107588721A (en) The measuring method and system of a kind of more sizes of part based on binocular vision
CN107274483A (en) A kind of object dimensional model building method
CN107492107B (en) Object identification and reconstruction method based on plane and space information fusion
CN103971378A (en) Three-dimensional reconstruction method of panoramic image in mixed vision system
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN112381886A (en) Multi-camera-based three-dimensional scene reconstruction method, storage medium and electronic device
JP2017532695A (en) Method and system for scanning an object using an RGB-D sensor
CN113393439A (en) Forging defect detection method based on deep learning
CN110425983A (en) A kind of monocular vision three-dimensional reconstruction distance measuring method based on polarization multi-spectrum
CN113781562A (en) Lane line virtual and real registration and self-vehicle positioning method based on road model
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
CN114170284B (en) Multi-view point cloud registration method based on active landmark point projection assistance
CN113340201B (en) Three-dimensional measurement method based on RGBD camera
CN110517323A (en) 3 D positioning system and method based on manipulator one camera multi-vision visual
CN112712566B (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
CN113393413B (en) Water area measuring method and system based on monocular and binocular vision cooperation
Shen et al. A 3D modeling method of indoor objects using Kinect sensor
CN112991372B (en) 2D-3D camera external parameter calibration method based on polygon matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant