CN110310331B - Pose estimation method based on combination of linear features and point cloud features - Google Patents
- Publication number
- CN110310331B (application CN201910526419.3A)
- Authority
- CN
- China
- Prior art keywords
- straight line
- point
- image
- mark
- straight
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/13 — Image analysis; segmentation; edge detection
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10028 — Range image; depth image; 3D point clouds
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention discloses a pose estimation method based on the combination of straight-line features and point cloud features, comprising the following steps: (1) straight-line feature extraction fusing prior knowledge; (2) matching of straight lines in the binocular images; (3) three-dimensional reconstruction of the straight-line features; and (4) pose calculation. The point cloud of the invention comes from edge features, so it has good anti-interference capability and accurate localization ability; replacing line-segment matching with point cloud matching gives full play to the robustness advantage of point cloud matching: even if a line segment undergoes length change, breakage, or similar phenomena, it can still be matched effectively after being converted into a point cloud. The number of points is limited and their coverage is a finite set of line segments in space, so the point count is greatly reduced and the matching speed improved; yet because the points all come from object edges, which contribute most to localization, the positioning accuracy is not obviously degraded. The straight-line feature extraction and matching process does not need dense depth-field information, and accuracy can be guaranteed for objects with both complex and simple textures.
Description
Technical Field
The invention relates to a pose estimation method, in particular to a pose estimation method based on combination of linear features and point cloud features, and belongs to the technical field of image processing.
Background
The pose estimation problem is an important problem in the disciplines of photogrammetry, computer vision, computer graphics, robots and the like, and is a core problem to be solved by many engineering practices and theoretical researches, such as navigation and positioning of a vision servo system and a mobile robot, object identification and tracking, virtual reality, camera self-calibration, robot hand-eye calibration and the like.
A simple method is to directly extract several corner features of an object and calculate the pose of the object relative to the vision system from the positions of the feature point set in the vision coordinate system and on the object. However, the number of corner features on an object is limited, which affects the solution accuracy, and the solution may even fail when too few corner features are available. In recent years, pose estimation methods based on point cloud features have become popular; they can locate objects with complex shapes and have good robustness, but they require the image acquisition equipment to provide high-precision, dense three-dimensional point cloud information, demand complex operations such as point cloud segmentation and stitching between the measured object and the environment, and are time-consuming because of the large number of points.
Some methods locate the object based on its straight-line features. Straight-line features are more salient than corner features and have stronger anti-interference capability, and such methods are simple, computationally light, and do not require measuring image depth. However, straight-line features cannot be localized as accurately as point features during matching (particularly evident in stereo vision); when the extracted line changes in length or breaks because of occlusion or illumination, accurate recovery of the object's three-dimensional pose is hard to guarantee. Most existing methods emphasize the extraction and matching of straight-line features, without deeply discussing the final positioning accuracy and reliability.
The invention combines the ideas of corner features, straight-line features, and point cloud matching, and designs a method with small computational load and high reliability that can accept ordinary image input (without dense depth information). The method takes straight-line features as the core: based on a binocular vision system, it restores the straight-line features into a set of three-dimensional line segments through left-right matching, scatters the segment set into a three-dimensional point set through discrete sampling, and then estimates the position and attitude of the object through point cloud matching, using the pose computed from corner features as the initial condition before matching.
Disclosure of Invention
Aiming at the prior art, the invention aims to provide a pose estimation method based on combination of linear features and point cloud features, which has the advantages of small calculated amount, high reliability, capability of adapting to common image input and no need of dense depth information.
In order to solve the technical problem, the invention provides a pose estimation method based on combination of straight line features and point cloud features, which comprises the following steps:
step 1: straight-line feature extraction and merging, fusing prior knowledge;
step 2: straight line matching in binocular images:
Obtain the straight-line sets in the left and right eye images, recorded as L^l = {l_1^l, l_2^l, …} and L^r = {l_1^r, l_2^r, …}; take the k-th straight line in the left image and match it in turn against each straight line of the right eye, computing the score according to the formula Mark_i = f(Mark_{i-1});
where Mark is the score value for matching two straight lines, with Mark_final = Mark_0 = 100 initially; Mark_i is the score value at a given step, and f applies the specified scoring conditions, including the line included angle, the horizontal constraint, and the left-right disparity;
Take two straight lines respectively from the line sets of the left and right eye images, l_k^l (l denotes the left eye image) and l_j^r (r denotes the right eye image), and substitute them into the included-angle constraint, the horizontal constraint, and the left-right disparity constraint;
Through the above steps the final value of Mark is obtained; compare Mark_i with Mark_{i-1}, taking Mark_final = max(Mark_i, Mark_{i-1}), and at the same time record the numbers k_tempL and j_tempR of the lines in the left and right images corresponding to this score. After the k-th line of the left eye image has traversed all lines of the right eye in sequence, use the line numbered j_tempR in the right eye image to traverse all lines of the left eye in turn and execute the same formula;
Finally, iteration yields the maximum Mark value and the corresponding i_tempR; if k_tempL = i_tempR, the left-image line numbered k_tempL and the right-image line numbered j_tempR are matched successfully and stored in the matching line set M; otherwise the matching fails. The above steps are repeated and iterated in sequence until the algorithm terminates, finally yielding the complete matching line set M.
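The traversal and left-right consistency check of step 2 can be sketched as follows. This is an illustrative simplification that scores only the angle difference (the full method also applies the horizontal and disparity constraints), and all function and variable names are assumptions, not from the patent:

```python
def match_lines(left_angles, right_angles, t_angle=20.0):
    """Mutual-best line matching on a single angle-difference score.

    Each line is represented only by its angle (degrees) here.  A pair
    (k, j) is accepted only if j is the best right-image match for left
    line k AND k is the best left-image match for right line j -- the
    forward/backward traversal of step 2.
    """
    def score(a, b):
        diff = abs(a - b)
        return 100 - diff if diff < t_angle else 0  # Mark starts at 100

    matches = []
    for k, la in enumerate(left_angles):
        # forward pass: best right line for left line k
        j_best = max(range(len(right_angles)),
                     key=lambda j: score(la, right_angles[j]))
        # backward pass: best left line for that right line
        k_back = max(range(len(left_angles)),
                     key=lambda i: score(left_angles[i], right_angles[j_best]))
        if k_back == k and score(la, right_angles[j_best]) > 0:
            matches.append((k, j_best))
    return matches
```

With two lines per image at roughly 10° and 50°, the consistency check pairs them by angle and rejects any pairing whose angle difference exceeds the threshold.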
Step 3: three-dimensional reconstruction of straight-line features
3.1: solution of spatial lines
Let the image straight line equation be ax + by + c =0, and as can be known from the central projection principle, a straight line on the image corresponds to a plane in space, and let the spatial plane equation be:
AX+BY+CZ+D=0
the central imaging model from points has:
wherein m is 11 To m 34 For the product of the internal reference matrix imaged by the camera, the comparison equation can be derived:
The two coplanar-plane equations determined by the matched image lines are solved simultaneously, and their intersection gives the space line corresponding to the image lines. Let the space line L_EF correspond to the image lines e_l f_l and e_r f_r; the system of equations for the space line is then:

A_1 X + B_1 Y + C_1 Z + D_1 = 0
A_2 X + B_2 Y + C_2 Z + D_2 = 0
Adopting the parametric equation of a space line and substituting any X = X_0 into the above system, the coordinates (X_0, Y_0, Z_0) of a point on the space line are obtained;
Knowing the normal vectors of the two planes, n_1 = (A_1, B_1, C_1) and n_2 = (A_2, B_2, C_2), the direction vector of the line is determined as s = n_1 × n_2;
The parametric equation of the space line is then: (X, Y, Z) = (X_0, Y_0, Z_0) + t·s.
3.2 solving spatial line segments
In the left eye, the optical center O_l and the two endpoints of the line segment e_l f_l projected on the left imaging plane define two spatial rays. Calculate the intersections of the space line l_EF with these two rays (if the two lines are not coplanar, take the point with the shortest distance to both lines), denoted E_left and F_left respectively. In the same way, the intersection points E_right and F_right are obtained from the right eye. To retain the line information as completely as possible, among the four points {E_left, E_right, F_left, F_right}, the two points giving the greatest length are taken as the start and end points of the restored segment, thereby recovering the spatial line segment EF. Computing each obtained space line in turn yields the set of spatial line segments N = {L_1, L_2, …, L_n}.
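Step 3.2 hinges on one geometric primitive: the point nearest to two (possibly skew) 3D lines. A minimal pure-Python sketch of that primitive follows; using the midpoint of the common-perpendicular segment when the lines are not coplanar is an assumption consistent with "the point having the shortest distance from the two straight lines", and the function name is illustrative:

```python
def closest_point_between_lines(p1, d1, p2, d2):
    """Point nearest both 3D lines p1 + t*d1 and p2 + s*d2.

    If the lines intersect, this returns the intersection; if they are
    skew, it returns the midpoint of the shortest connecting segment.
    Assumes the lines are not parallel (denom != 0).
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]

    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero iff the lines are parallel
    t = (b * e - c * d) / denom      # parameter of closest point on line 1
    s = (a * e - b * d) / denom      # parameter of closest point on line 2
    q1 = add(p1, scale(d1, t))
    q2 = add(p2, scale(d2, s))
    return scale(add(q1, q2), 0.5)
```

Applying this with line 1 = l_EF and line 2 = the ray through the optical center and a segment endpoint yields E_left, F_left, E_right, and F_right directly.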
Step 4: pose calculation
4.1 Select the left eye image as the two-dimensional plane for coarse matching: from the line segment set obtained in step 3, obtain the intersection point between every two line segments in the set, and record the set of intersection points as M_pnp;
Measure and establish the three-dimensional points in the real coordinate system and store them in the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), …, (X_n, Y_n, Z_n)}; arrange all the points of the set M_pnp in a fixed order so that it stays consistent with the order of the corresponding point set G_3dpot, and substitute them into the camera imaging model formula:
where K is the known camera intrinsic parameter matrix and λ is the scale factor of the imaging model; substituting the points of M_pnp and G_3dpot into the formula and applying the EPnP algorithm finally yields the coarse-matching rotation R_pnp and translation T_pnp;
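The imaging-model substitution in step 4.1 can be illustrated with a plain pinhole projection. The sketch below implements the generic model λ·[u, v, 1]^T = K·[R|T]·[X, 1]^T in pure Python; it is not the EPnP solver itself, and the numeric K, R, T used in examples are illustrative, not values from the patent:

```python
def project(K, R, T, X):
    """Project a 3D point X through the pinhole model
    lambda * [u, v, 1]^T = K * [R|T] * [X, 1]^T.

    K, R are 3x3 row-major lists; T, X are length-3 lists.
    Returns the image coordinates (u, v).
    """
    # camera-frame coordinates: Xc = R*X + T
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + T[i] for i in range(3)]
    # apply intrinsics; the scale factor lambda is the homogeneous w
    uvw = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])
```

With an identity rotation, T = [0, 0, 5], and K = [[800, 0, 320], [0, 800, 240], [0, 0, 1]], the world origin projects to the principal point (320, 240); EPnP recovers R and T from a set of such (2D, 3D) correspondences.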
4.2 Point cloud acquisition: segment and sample the spatial lines according to a given threshold to obtain point cloud data based on the straight-line features. Taking one line in space, the specific method is:
where θ is the angle between the line and the positive x-axis direction, len is the length of the line, and k is the number of points sampled on the line; the point set scattered from the line is obtained by sampling at intervals of len/k along the line.
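The segmentation sampling just described can be sketched as follows, assuming the line is sampled at k equal intervals of len/k, giving k+1 points including both endpoints (the exact endpoint convention is not specified in the text, and the function name is illustrative):

```python
import math

def discretize_segment(x0, y0, theta, length, k):
    """Sample k+1 evenly spaced points along a 2D segment.

    (x0, y0): start point; theta: angle to the positive x-axis
    (radians); length: segment length; k: number of sampling
    intervals, so the step between points is length / k.
    """
    step = length / k
    return [(x0 + i * step * math.cos(theta),
             y0 + i * step * math.sin(theta)) for i in range(k + 1)]
```

A horizontal segment of length 10 sampled with k = 5 yields the six points (0, 0), (2, 0), …, (10, 0); applying this to every reconstructed segment produces the data point set P.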
Scatter the three-dimensional lines obtained in step 3 into point sets P in sequence according to this algorithm, and form the point set Q from the manually established template library by the same method;
4.3 Set the result of 4.1 as the initialized rotation matrix and translation vector, i.e. R = R_pnp, T = T_pnp; update the data point set, obtaining a new transformed point set by applying the translation and rotation parameters from 4.1 to P, and compute the error d = (1/n)·Σ ||q_i − (R·p_i + T)||², where p_i ∈ P and q_i ∈ Q. If the difference between successive iteration error estimates is less than a given threshold, i.e. |d_k − d_{k-1}| < ε, the calculation ends; otherwise, repeat the iterative process to obtain the rotation R_icp and translation T_icp;
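The error and stopping test of step 4.3 can be sketched as follows, shown in 2D for brevity; the mean-squared-error form and the names below are assumptions consistent with standard ICP practice, not the patent's exact formulation:

```python
def alignment_error(P, Q, R, T):
    """Mean squared distance between R*p + T and the paired q (2D case).

    P, Q: equal-length lists of (x, y) pairs with q_i paired to p_i;
    R: 2x2 rotation matrix (row-major lists); T: translation [tx, ty].
    """
    assert len(P) == len(Q)
    total = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        tx = R[0][0] * px + R[0][1] * py + T[0]
        ty = R[1][0] * px + R[1][1] * py + T[1]
        total += (qx - tx) ** 2 + (qy - ty) ** 2
    return total / len(P)

def converged(err_prev, err_curr, eps=1e-6):
    """Stop when successive error estimates differ by less than eps."""
    return abs(err_prev - err_curr) < eps
```

Each ICP iteration re-pairs the points, re-solves R and T, and recomputes this error until `converged` holds.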
Finally, the rotation of the camera coordinate system with respect to the physical coordinate system is R = R_pnp·R_icp and the translation is T = T_pnp + T_icp.
The invention also includes:
1. The extraction and merging of straight-line features fusing prior knowledge in step 1 comprises:
1.1: straight line number l 1 ,l 2 …l b The coordinate value of the starting point of the straight line is (x) begin1 ,y begin1 ),(x begin2 ,y begin2 ),…,(x beginb ,y beginb ) The coordinate value of the end point is (x) end1 ,y end1 ),(x end2 ,y end2 ),…,(x endb ,y endb ) (ii) a From the origin O to the respective lines l 1 ,l 2 …l b Is denoted as d 1 ,d 2 ,…,d b ,l 1 ,l 2 …l b The included angles with the positive direction of the x axis of the image are theta 1 ,θ 2 …θ b ;
1.2: grouping the lines with the same slope, and recording as group 1 ,group 2 ,…,group m Where m is a group, each group having a value represented by θ 1 ,θ 2 …θ b Determining the number of the differences; if group i The number of straight lines in (1) is greater than 1 ii And l ik Respectively represent group i The jth straight line and the kth straight line in the ith group are calculated i Middle two different straight lines l ij And l ik Relative distance therebetween:
Δd=d lij -d lik
wherein: d lij ,d lik Respectively from origin to line l ij And l ik The shortest distance of (c); setting a threshold value of the merging distance as d, and if delta d is less than d, then l ij And l ik Grouped into a set group s Merging the straight line segments in the set;
1.3: according to the steps, until group s The number of straight lines in (2) is 1, and the merging of straight lines ends.
2. The line included-angle constraint is specifically: set a threshold T_angle for the difference between the angles of the two lines; if |θ_k^l − θ_j^r| < T_angle, where θ_k^l and θ_j^r are the angles of l_k^l and l_j^r with the positive x-axis direction of the image respectively, then the right-eye candidate line satisfies the angle constraint; otherwise the Mark value is zeroed.
3. The horizontal constraint is specifically: let Dy be the horizontal constraint value and Dy_max the horizontal constraint threshold; let l_k^l have start point (x_st1, y_st1) and end point (x_end1, y_end1), and l_j^r have start point (x_str, y_str) and end point (x_endr, y_endr); Dy is computed from the vertical offsets between the corresponding endpoints. If Dy < Dy_max, the two lines satisfy the horizontal constraint and the score is deducted as Mark_Dy = Mark − Dy·α, where α is the weight; if Dy > Dy_max, the score Mark is zeroed.
4. The left-right disparity constraint is specifically: let Dx be the left-right disparity and Dx_max the left-right disparity threshold; Dx is computed from the horizontal offsets between the corresponding endpoints of l_k^l and l_j^r. If Dx < Dx_max, the two lines satisfy the left-right constraint; otherwise the score Mark is zeroed.
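The three score constraints above combine into a single scoring function. The sketch below uses midpoint offsets for Dy and Dx, which is an assumption (the patent's exact Dy and Dx formulas appear only in its drawings); the field names and default thresholds follow the values stated later in the embodiment:

```python
def score_pair(left, right, t_angle=20.0, dy_max=100.0, dx_max=240.0, alpha=0.2):
    """Score a candidate left/right line pair under the three constraints.

    Each line is a dict with 'theta' (degrees), 'y_mid' and 'x_mid'
    (midpoint coordinates) -- hypothetical field names.  Returns 0 if
    any hard constraint fails, else 100 minus the horizontal penalty.
    """
    mark = 100.0                                    # Mark_0 = 100
    if abs(left['theta'] - right['theta']) >= t_angle:
        return 0.0                                  # angle constraint
    dy = abs(left['y_mid'] - right['y_mid'])
    if dy > dy_max:
        return 0.0                                  # horizontal constraint
    mark -= dy * alpha                              # Mark_Dy = Mark - Dy*alpha
    dx = abs(left['x_mid'] - right['x_mid'])
    if dx >= dx_max:
        return 0.0                                  # left-right disparity constraint
    return mark
```

A nearly parallel pair with a 5-pixel vertical offset scores 99.0, while a pair whose angles differ by more than the threshold is rejected outright.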
The invention has the beneficial effects that:
1) The point cloud comes from edge features and therefore has good anti-interference capability and accurate localization; replacing line-segment matching with point cloud matching gives full play to the robustness advantage of point cloud matching (even if a line segment changes length or breaks, it can still be matched effectively after being converted into a point cloud);
2) The number of points is limited and their coverage is not a three-dimensional surface but a finite set of line segments in space, so the point count is greatly reduced and the matching speed improved; yet the points all come from object edges, which contribute most to localization, so the positioning accuracy is not obviously reduced;
3) The linear feature extraction and matching process does not need dense depth field information, and the precision can be guaranteed for objects with complex textures and simple textures.
Drawings
FIG. 1 (a) is a schematic view of merging straight lines having the same slope
FIG. 1 (b) is a schematic view of merging lines with slope difference θ
FIG. 2 is a schematic diagram of spatial three-dimensional linear reconstruction
FIG. 3 is a schematic diagram of point cloud acquisition
FIG. 4 is a flow chart of a pose estimation algorithm based on the combination of linear features and point cloud features.
Detailed Description
The invention is further described below in conjunction with fig. 1 (a) to 4:
The method comprises the following steps:
Step 2: matching of left and right eye line pairs. The matching pairs of left and right lines are accurately obtained through four constraints: the line included angle, the horizontal constraint, the left-right disparity constraint, and length-difference matching.
Step 3: three-dimensional reconstruction of the line features. Let one line of the object in space be EF, with the two images observed by the two cameras being e_l f_l and e_r f_r. According to the pinhole imaging model and the central projection principle, the space line EF is the intersection line of the plane S_1 formed by O_l and e_l f_l and the plane S_2 formed by O_r and e_r f_r. The projection lines on the imaging planes of the two cameras and the intersection of the two planes formed with the respective optical centers determine a space line.
Step 4: estimation of the object's position and attitude. The three-dimensional lines obtained in the previous step are cut and scattered according to a given threshold, generating a series of three-dimensional point sets. A complete three-dimensional point-set model is established in the artificially defined physical coordinate system.
An initial rough estimate is then computed using the 2D-to-3D EPnP pose estimation algorithm, which can bring the coordinate systems of two sufficiently similar point clouds into approximate agreement.
Finally, iterative computation is carried out with the iterative closest point (ICP) method: the rough estimate obtained in the previous step is used as the initialization, and repeated iteration yields a relatively precise position and attitude.
The embodiment is as follows:
1. Straight-line feature extraction fusing prior knowledge
The invention uses straight-line segment extraction by Hough transform as the basic method for extracting the object's line features. The algorithm must extract image edges before the Hough transform. Suppose the cuboid in the image is the object to be observed: the edge-detection result contains not only the straight edges of the target object but also other interfering features (such as the edge of a table). When a vision system actually searches for the position of a target object, some characteristics of the object (initial position range, color, size, and so on) can usually be predicted and used to filter interference; this section gives a method for filtering interference based on such known constraints. (Even with this strategy the interference sometimes cannot be filtered completely, but the point cloud matching strategy adopted later can still work effectively under a certain amount of interference; the experimental results are detailed below.) In addition, the result of Hough line detection cannot guarantee that each edge corresponds to a unique line segment (which affects the subsequent pose estimation), so the invention provides a post-processing method for line-segment fusion.
1.1) Edge extraction and interference filtering
For edge extraction, to ensure that the extracted edges (especially straight contours) are clear, continuous, and unbroken, the following criteria should generally be met: a high signal-to-noise ratio, high localization accuracy, and a single response per edge. Therefore the invention applies the Canny edge detection algorithm to detect the object's edges.
After the edge contour information is obtained, the edge information of the table edge and other objects interferes with the detection of the object's line features. From prior knowledge, the approximate range and color range of the object's edge contour in the picture are known. First, the invention uses a contour detection algorithm for color images based on the color space, restricted to the prior contour selection range: a component field is introduced to represent the volume of the cylinder formed by the H, S, and V components in the color space, the color-difference measurement is improved, the components are detected on this basis, and finally an edge image within the given color range is obtained and binarized. The specific region of the object to be detected is then selected in the image.
1.2) Straight-line segment extraction and merging of repeated segments
When straight-line features are extracted with the Hough transform, the extraction result is determined by the shape of the object, as shown in fig. 1(a). Because the lines have different lengths, repeated lines may be detected along the longer straight edges, so a line-merging step follows the line detection.
Straight segments belonging to the same line are merged for subsequent matching. The main strategy is: merge straight segments whose included angle is smaller than a threshold T_5 (2° in the experiments) and whose gap length (the minimum distance between segment endpoints) is smaller than a threshold T_6 (half of the shorter of the two segments in the experiments), with consistent gray-level distribution on both sides of the segments, to obtain a fitted segment; then compute the fitting error and keep the final merged segment if the error is smaller than a threshold T_7 (5 pixels in the experiments). The specific method is as follows:
1) Number the lines l_1, l_2, …, l_b, with start-point coordinates (x_st1, y_st1), (x_st2, y_st2), …, (x_stb, y_stb) and end-point coordinates (x_end1, y_end1), (x_end2, y_end2), …, (x_endb, y_endb). The distances from the origin O to the lines are d_1, d_2, …, d_b, and the angles of the corresponding lines with respect to the horizontal direction of the picture are θ_1, θ_2, …, θ_b.
2) Group the lines with the same slope into group_1, group_2, …, group_m, where the value of m is determined by the number of distinct values among θ_1, θ_2, …, θ_b. If the number of lines in a group_i is greater than 2, start the computation within group_i: the relative distance between two lines is Δd = d_lij − d_lik; set the merging-distance threshold to d, and if Δd < d, then l_ij and l_ik are merged.
3) Denote the merged line l_uv. Compute the included angle at the two ends of the original two lines, i.e. compute the angle θ between the corresponding endpoint vectors by the standard dot-product formula θ = arccos((v_1·v_2)/(|v_1||v_2|)). If θ < π/2, select P as the base point, let W be the point where the perpendicular through P meets l_rs, and take the midpoint U of segment PW; V is obtained in the same way.
When the merging of segments with the same slope is finished, continue to fuse segments whose slope difference is within θ and whose distance difference from the origin is within Δ. Following step 1 of the method above, lines whose slope difference lies within a set threshold range are grouped; in the invention the threshold is set to θ = ±3°. The other steps follow step 3 of the preceding process.
2. Straight-line matching in binocular images
The stable line sets in the left and right eye images are obtained from step 1 and recorded as L^l = {l_1^l, l_2^l, …} and L^r = {l_1^r, l_2^r, …}. Take the k-th line in the left image and match it in turn against each line of the right eye, computing the score by the formula Mark_i = f(Mark_{i-1}):
where Mark is the score for matching two lines, with Mark_final = Mark_0 = 100 initially; f is the scoring condition, constrained by the line included angle, the horizontal constraint, the left-right disparity constraint, and length-difference matching.
1) The difference of the line included angles: if |θ_k^l − θ_j^r| < T_angle (T_angle = 20° in the invention), the right-eye candidate line satisfies the angle constraint; otherwise the Mark value is zeroed.
2) The epipolar line approximates a horizontal constraint.Coordinate of starting point (x) st1 ,y st1 ) End point coordinate (x) end1 ,y end1 ),/>Start point coordinate (x) of (2) str ,y str ) End point coordinate (x) endr ,y endr )
If Dy < Dy_max (in the invention Dy_max = 100), the two straight lines satisfy the horizontal constraint and the score is deducted: Mark_Dy = Mark - Dy·α (α is a weight; the invention takes 0.2). If Dy > Dy_max, the score Mark is set to zero.
3) Left-right disparity constraints are established through the left- and right-eye overlapping regions, by the formula:
If Dx < Dx_max (Dx_max takes different values for scenes of different sizes; in the invention Dx_max = 240), the two straight lines satisfy the left-right disparity constraint. Otherwise the Mark value is set to zero.
4) Length-difference matching, by the formula:
Compare the values of Mark_i and Mark_{i-1}: Mark_final = max(Mark_i, Mark_{i-1}), and record the numbers k_tempL and j_tempR of the corresponding straight lines in the left and right images. After the k-th straight line of the left-eye image has traversed all straight lines in the right eye in turn, use the number j_tempR of the right-eye straight line to traverse all straight lines in the left eye and execute the formula.
Finally, the maximum Mark value and the corresponding i_tempL are obtained by iteration. If i_tempL = k_tempL, the left-eye straight line numbered k_tempL and the right-eye straight line numbered j_tempR are successfully matched and stored in the matched straight-line set M; otherwise the matching fails. Repeat the above steps, iterating in turn until the algorithm terminates, finally obtaining the matched straight-line set M.
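As an illustration of the scoring scheme above, the following is a minimal sketch, assuming lines are given as (x1, y1, x2, y2) endpoint tuples and using the thresholds stated in the text (T_angle = 20°, Dy_max = 100, Dx_max = 240, α = 0.2). The exact Dy and Dx formulas appear only as figures in the source, so the midpoint-based differences below are an assumption, and `best_match` is an illustrative helper, not the patent's exact procedure.

```python
import math

T_ANGLE = 20.0   # angle-difference threshold (degrees), from the text
DY_MAX = 100.0   # near-horizontal epipolar threshold, from the text
DX_MAX = 240.0   # left-right disparity threshold, from the text
ALPHA = 0.2      # weight for the Dy score deduction, from the text

def line_angle(l):
    """Angle of the segment with the image x-axis, folded into [0, 180)."""
    (x1, y1, x2, y2) = l
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def match_score(left, right):
    """Score one left/right line pair; 0 means the pair is rejected."""
    mark = 100.0                      # Mark_0 = 100
    # 1) included-angle constraint
    d_ang = abs(line_angle(left) - line_angle(right))
    d_ang = min(d_ang, 180.0 - d_ang)
    if d_ang > T_ANGLE:
        return 0.0
    # 2) near-horizontal epipolar constraint (assumed: midpoint y difference)
    dy = abs((left[1] + left[3]) / 2 - (right[1] + right[3]) / 2)
    if dy > DY_MAX:
        return 0.0
    mark -= dy * ALPHA                # Mark_Dy = Mark - Dy * alpha
    # 3) left-right disparity constraint (assumed: midpoint x difference)
    dx = abs((left[0] + left[2]) / 2 - (right[0] + right[2]) / 2)
    if dx > DX_MAX:
        return 0.0
    return mark

def best_match(left_line, right_lines):
    """Traverse all right-eye lines and keep the highest-scoring one."""
    scores = [match_score(left_line, r) for r in right_lines]
    j = max(range(len(scores)), key=lambda i: scores[i])
    return j, scores[j]
```

A full bidirectional check, as described above, would repeat `best_match` from the winning right-eye line back over the left-eye set and accept the pair only if it returns to the original left-eye line.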
3 Three-dimensional reconstruction of line features
As shown in FIG. 2, a straight line EF in the artificially set geometric space is observed by the two cameras as two images, e_l f_l and e_r f_r. According to the pinhole imaging model and the central projection principle, the spatial straight line EF is the intersection of plane S_1, formed by O_l and e_l f_l, and plane S_2, formed by O_r and e_r f_r. Hence the projection lines on the two cameras' imaging planes, together with the intersection of the two planes formed with the respective optical centers, determine the spatial straight line.
3.1 Solution of spatial lines
The set of matched line pairs M was obtained in step 2.
An image straight line equation is set as ax + by + c =0, and according to the central projection principle, one straight line on an image corresponds to one spatial plane, and the spatial plane equation is set as follows:
AX+BY+CZ+D=0
From the central imaging model of points:
where m_11 to m_34 are the elements of the product of the camera's intrinsic and extrinsic matrices; comparing coefficients gives:
the spatial plane equation is referred to as the coplanarity equation of the object line, the optical center and the image line. And simultaneously solving two coplanar equations determined by the image straight lines matched on the image, and intersecting to obtain a space straight line corresponding to the image straight lines. Setting a space straight line L EF Corresponding image line is The equation for listing the spatial lines is
The spatial straight-line equation here adopts the parametric form. Taking an arbitrary x = x_0 and substituting into equation (4), the coordinates (X_0, Y_0, Z_0) of a point on the spatial line can be solved.
Knowing the normal vectors of the two planes, the direction vector of the straight line is determined as the cross product of the two normals (the line lies in both planes, so it is perpendicular to both normals).
The equation for the spatial line is:
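The construction above (a point found by fixing x = x_0, a direction from the cross product of the two plane normals) can be sketched as follows. The `line_from_planes` helper is illustrative; it assumes the two planes are given as coefficient tuples (A, B, C, D) and that the 2x2 system in Y and Z is non-degenerate for the chosen x_0.

```python
import numpy as np

def line_from_planes(p1, p2, x0=0.0):
    """Intersect planes A1*X+B1*Y+C1*Z+D1=0 and A2*X+B2*Y+C2*Z+D2=0.

    Returns one point on the intersection line (found by fixing X = x0)
    and the line's direction vector n1 x n2.
    """
    n1, n2 = np.asarray(p1[:3], float), np.asarray(p2[:3], float)
    direction = np.cross(n1, n2)      # perpendicular to both normals
    # fix X = x0 and solve the remaining 2x2 system for Y and Z
    A = np.array([[p1[1], p1[2]], [p2[1], p2[2]]], float)
    b = -np.array([p1[0] * x0 + p1[3], p2[0] * x0 + p2[3]], float)
    y, z = np.linalg.solve(A, b)
    return np.array([x0, y, z]), direction
```

For example, the planes Z = 0 and Y = 0 intersect in the x-axis, and the returned direction is parallel to (1, 0, 0).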
3.2 Solving for a spatial line segment
In the left eye, the optical center O_l and the two endpoints of the straight-line segment projected onto the left imaging plane form two spatial straight lines. Compute the intersections of the spatial straight line l_EF with each of these two lines (if the two lines are not coplanar, take the point at the shortest distance between them), denoted E_left and F_left. In the same way, the intersection points E_right and F_right are obtained in the right eye. To retain straight-line information as far as possible, of the four points {E_left, E_right, F_left, F_right}, the two points giving the largest length are taken as the start point and end point of the restored line segment, thereby restoring the spatial line segment EF. Computing each obtained spatial straight line in turn yields the set of spatial line segments N = {L_1, L_2, …, L_n}.
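The "point at the shortest distance" between two skew lines used above is a standard computation; a minimal sketch under the assumption that each line is given as a point plus a direction vector (the `closest_point_on_line1` name is illustrative):

```python
import numpy as np

def closest_point_on_line1(p1, d1, p2, d2):
    """Point on line 1 (p1 + t*d1) nearest to line 2 (p2 + s*d2).

    If the lines intersect this is the intersection point; if they are
    skew it is the endpoint on line 1 of the common perpendicular.
    """
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    r = np.asarray(p1, float) - np.asarray(p2, float)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b        # zero only when the lines are parallel
    t = (b * e - c * d) / denom
    return np.asarray(p1, float) + t * d1
```

Applying this with l_EF as line 1 and each optical-center ray as line 2 yields E_left, F_left, E_right, F_right even when detection noise makes the lines non-coplanar.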
4 Pose calculation
The pose estimation algorithm matches the straight lines detected by the left and right eyes on the two-dimensional plane and, by the inverse mapping principle of binocular imaging, restores them to three-dimensional straight lines. The three-dimensional straight lines are then cut and discretized according to a certain threshold to generate a series of three-dimensional point sets. A complete three-dimensional point-set model is also built in the artificially set physical coordinate system. Finally, the iterative closest point algorithm is used to obtain a relatively accurate pose.
Before the iterative closest point algorithm is applied to the two point sets, a coarse estimation equivalent to initialization is carried out using the 2D-to-3D ePnP pose estimation algorithm, which brings the coordinate systems of two sufficiently similar point clouds into approximate agreement.
4.1 Implementation of an algorithm for coarse target matching
The left-eye image is selected as the two-dimensional plane for the ePnP algorithm. Using the line-segment set obtained in step 1, the intersection point of every pair of line segments in the set is acquired, giving M_pnp = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}.
Ideally, three straight lines on the plane intersect at a single point. In practice, however, line detection has errors, so the three lines may intersect pairwise at three unequal points; in that case the nearby points are merged.
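The merging of nearby intersection points can be sketched with a greedy clustering, assuming a pixel tolerance below which points count as "the same" intersection (the `merge_close_points` helper and the `tol` value are illustrative; the patent does not state its merging rule):

```python
import numpy as np

def merge_close_points(points, tol=5.0):
    """Greedy clustering: points closer than tol pixels to an existing
    cluster centroid are averaged into that cluster."""
    merged = []                               # list of (coord_sum, count)
    for p in map(np.asarray, points):
        for i, (s, n) in enumerate(merged):
            if np.linalg.norm(p - s / n) < tol:
                merged[i] = (s + p, n + 1)    # fold p into this cluster
                break
        else:
            merged.append((p.astype(float), 1))
    return [tuple(s / n) for s, n in merged]
```

Three pairwise intersections a few pixels apart collapse to their centroid, while genuinely distinct intersections survive.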
Three-dimensional points are measured and established in the real coordinate system and stored in the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), …, (X_n, Y_n, Z_n)}. All points in the point set M_pnp are arranged in a certain order, kept consistent with the order in G_3dpot: M_2dpot = {(x_1, y_1), (x_2, y_2), …, (x_m, y_m)}. Substitute into the camera imaging model formula:
where K is the camera intrinsic parameter matrix, assumed known in the invention. Substituting the points into the formula and applying the EPnP algorithm finally yields the coarse-matching rotation variable R_pnp and translation variable T_pnp.
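The camera imaging model the 2D-3D pairs are substituted into is the standard pinhole equation λ[u, v, 1]^T = K[R | t][X, Y, Z, 1]^T. A minimal numpy sketch of that model follows; the `project` helper and the sample intrinsics are illustrative rather than the patent's values, and in practice the coarse match would feed M_pnp and G_3dpot to an EPnP solver such as OpenCV's `cv2.solvePnP` with the `SOLVEPNP_EPNP` flag.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole model: lambda * [u, v, 1]^T = K [R | t] [X, 1]^T."""
    X = np.asarray(X, float)
    x_cam = R @ X + t            # world -> camera coordinates
    uvw = K @ x_cam              # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]      # divide out the scale lambda

# illustrative intrinsics: 800 px focal length, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = project(K, np.eye(3), np.zeros(3), [0.0, 0.0, 2.0])
```

A point 2 m straight ahead of this camera projects to the principal point, which is a quick sanity check on the model.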
4.2 Method for obtaining point clouds
Point clouds are acquired by segmenting and sampling the spatial straight lines according to a certain threshold, finally obtaining point cloud data based on the straight-line features. For a given spatial straight line, the specific method is:
where θ is the angle between the straight line and the positive x-axis, len is the length of the straight line, and k is the number of point-cloud points on the line. The point set into which the straight line is discretized is
All spatial feature straight lines are three-dimensionally reconstructed in step 3, and the resulting three-dimensional straight lines are discretized in turn into the point set P according to the above algorithm. The point set Q is formed in the same way from the manually established template library.
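Discretizing the reconstructed segments into the point sets P and Q can be sketched as uniform sampling along each segment; the helper names and the choice of k are illustrative, and the text's threshold-based choice of k per line is replaced here by a fixed k for simplicity:

```python
import numpy as np

def discretize_segment(start, end, k):
    """Sample k evenly spaced points along a 3D segment, endpoints included."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    ts = np.linspace(0.0, 1.0, k)
    return start + np.outer(ts, end - start)

def segments_to_cloud(segments, k=10):
    """Stack the samples of every (start, end) segment into one cloud."""
    return np.vstack([discretize_segment(s, e, k) for s, e in segments])
```

Applying `segments_to_cloud` to the set N of reconstructed segments yields P, and applying it to the template-library segments yields Q.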
4.3 Implementation of an algorithm for fine matching of targets
Initialize R and T with the results of 4.1 as the rotation matrix and translation vector, i.e. R = R_pnp, T = T_pnp. Update the data point set: apply the translation and rotation parameters obtained in the previous step to P to obtain a new transformed point set, and compute the error, where p_i ∈ P, q_i ∈ Q. If the difference between successive iteration errors is less than a given threshold, the calculation ends; otherwise the iteration is repeated, yielding the rotation variable R_icp and translation variable T_icp.
Finally, the rotation variable of the camera coordinate system relative to the physical coordinate system is R = R_pnp · R_icp and the translation variable is T = T_pnp + T_icp.
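The fine-matching loop described above (transform P, find nearest neighbours in Q, re-estimate the rigid transform, stop when the error change falls below a threshold) can be sketched as a point-to-point ICP with the SVD-based (Kabsch) transform estimate. This is a minimal brute-force version for small clouds, not the patent's implementation; the accumulation of (R, t) across iterations is an assumption.

```python
import numpy as np

def icp(P, Q, R0=np.eye(3), t0=np.zeros(3), iters=50, eps=1e-8):
    """Point-to-point ICP from an initial (R0, t0), e.g. the ePnP result.

    P: (n,3) source cloud, Q: (m,3) target cloud. Returns accumulated R, t.
    """
    R, t = R0.copy(), t0.copy()
    prev_err = np.inf
    for _ in range(iters):
        Pt = P @ R.T + t
        # nearest neighbour in Q for every transformed source point
        idx = np.argmin(((Pt[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)
        Qm = Q[idx]
        err = np.mean(np.linalg.norm(Pt - Qm, axis=1) ** 2)
        if abs(prev_err - err) < eps:     # error change below threshold
            break
        prev_err = err
        # best rigid transform Pt -> Qm via SVD (Kabsch)
        mu_p, mu_q = Pt.mean(0), Qm.mean(0)
        U, _, Vt = np.linalg.svd((Pt - mu_p).T @ (Qm - mu_q))
        Rd = Vt.T @ U.T
        if np.linalg.det(Rd) < 0:         # guard against reflections
            Vt[-1] *= -1
            Rd = Vt.T @ U.T
        td = mu_q - Rd @ mu_p
        R, t = Rd @ R, Rd @ t + td        # accumulate the increment
    return R, t
```

With a good initialization the nearest-neighbour assignment is correct from the first pass, which is exactly why the coarse ePnP step precedes ICP in the method above.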
The specific implementation mode of the invention also comprises:
the invention comprises the following steps:
(1) Linear feature extraction and merging fusing priori knowledge
The Canny edge detection algorithm is applied to detect the edges of the object. Using a color-image contour detection algorithm based on the HSI color space, the Hough transform is applied to extract straight-line features and obtain line segments, after which repeated line segments are merged, specifically as follows:
1.1) Number the straight lines l_1, l_2, …, l_b. The start-point coordinates are (x_begin1, y_begin1), (x_begin2, y_begin2), …, (x_beginb, y_beginb) and the end-point coordinates are (x_end1, y_end1), (x_end2, y_end2), …, (x_endb, y_endb). The shortest distances from the origin O to the straight lines l_1, l_2, …, l_b are recorded as d_1, d_2, …, d_b, and the angles between the corresponding lines and the positive x-axis of the image are θ_1, θ_2, …, θ_b.
1.2) Group the straight lines with the same slope, recorded as group_1, group_2, …, group_m, where m is determined by the number of distinct values among θ_1, θ_2, …, θ_b. If the number of straight lines in group_i is greater than 1, let l_ij and l_ik denote the j-th and k-th straight lines in group_i, and calculate the relative distance between the two different straight lines l_ij and l_ik in group_i:
Δd = d_lij - d_lik
where d_lij and d_lik are the shortest distances from the origin to lines l_ij and l_ik respectively. Set the merging-distance threshold as d; if Δd < d, l_ij and l_ik are grouped into the set group_λ and the straight-line segments within the set are merged.
1.3) Repeat the above steps until the number of straight lines in group_λ is 1; the merging of straight lines is then complete.
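The grouping and merging steps above can be sketched as follows, assuming segments come from a Hough detector as (x1, y1, x2, y2) tuples (e.g. OpenCV's `cv2.HoughLinesP`). The helper names and the merging rule "keep the two farthest endpoints" are illustrative readings of the steps, with the text's θ = ±3° slope tolerance.

```python
import math

def line_params(seg):
    """(x1,y1,x2,y2) -> (angle with x-axis in [0,180), distance from origin)."""
    x1, y1, x2, y2 = seg
    theta = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    # shortest distance from the origin O to the infinite line of the segment
    d = abs(x1 * (y2 - y1) - y1 * (x2 - x1)) / math.hypot(x2 - x1, y2 - y1)
    return theta, d

def group_and_merge(segments, theta_tol=3.0, d_tol=5.0):
    """Group segments whose angles differ by < theta_tol and whose origin
    distances differ by < d_tol, then merge each group into one segment
    spanning its two farthest endpoints."""
    groups = []
    for seg in segments:
        th, d = line_params(seg)
        for g in groups:
            th0, d0 = line_params(g[0])
            dth = min(abs(th - th0), 180.0 - abs(th - th0))
            if dth < theta_tol and abs(d - d0) < d_tol:
                g.append(seg)
                break
        else:
            groups.append([seg])
    merged = []
    for g in groups:
        pts = [(x, y) for s in g for (x, y) in ((s[0], s[1]), (s[2], s[3]))]
        best = max(((p, q) for p in pts for q in pts),
                   key=lambda pq: math.dist(pq[0], pq[1]))
        merged.append((*best[0], *best[1]))
    return merged
```

Two collinear fragments of one physical edge thus collapse into a single segment, while a perpendicular edge stays separate.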
(2) Straight line matching in binocular images
The straight-line sets in the left- and right-eye images are obtained. Take a straight line in the left image and match it, in turn, against each straight line in the right eye, calculating according to the formula:
where Mark is the score value for matching two straight lines; initially Mark_final = Mark_0 = 100. Mark_i is the score at a given stage, and f is the specified scoring condition, constrained by the straight-line included angle, the horizontal constraint, and the left-right disparity.
Take one straight line from each of the left- and right-eye line sets (l denotes the left-eye image, r the right-eye image) and substitute them into the following constraints:
2.1) Straight-line included-angle constraint. Set a threshold T_angle for the difference between the angles of the two straight lines, each measured from the positive x-axis of the image. If the difference satisfies the threshold (in the invention T_angle = 20°), the right-eye straight line satisfies the angle constraint; otherwise the Mark value is set to zero.
2.2) The epipolar line being approximately horizontal establishes the horizontal constraint. Let the left-eye line have start point (x_st1, y_st1) and end point (x_end1, y_end1), and the right-eye line have start point (x_str, y_str) and end point (x_endr, y_endr).
If Dy < Dy_max (Dy_max is the horizontal-constraint threshold; in the invention Dy_max = 100), the two straight lines satisfy the horizontal constraint and the score is deducted: Mark_Dy = Mark - Dy·α (α is a weight; the invention takes 0.2). If Dy > Dy_max, the score Mark is set to zero.
2.3) The left- and right-eye overlapping regions establish the left-right disparity constraint, by the formula:
If Dx < Dx_max (Dx_max is the left-right disparity threshold; in the invention Dx_max = 240), the two straight lines satisfy the left-right disparity constraint. Otherwise the Mark value is set to zero.
Through the above steps the final value of Mark is obtained. Compare the values of Mark_i and Mark_{i-1}: Mark_final = max(Mark_i, Mark_{i-1}), and record the numbers k_tempL and j_tempR of the corresponding straight lines in the left and right images. After the k-th straight line of the left-eye image has traversed all straight lines in the right eye in turn, use the number j_tempR of the right-eye straight line to traverse all straight lines in the left eye and execute the formula.
Finally, the maximum Mark value and the corresponding i_tempL are obtained by iteration. If i_tempL = k_tempL, the left-eye straight line numbered k_tempL and the right-eye straight line numbered j_tempR are successfully matched and stored in the matched straight-line set M; otherwise the matching fails. Repeat the above steps, iterating in turn until the algorithm terminates, finally obtaining the matched straight-line set M.
(3) Three-dimensional reconstruction of straight line features
3.1 Solution of spatial lines
Let the image straight line equation be ax + by + c =0, and as can be known from the central projection principle, a straight line on the image corresponds to a plane in space, and let the spatial plane equation be:
AX+BY+CZ+D=0
From the central imaging model of points:
where m_11 to m_34 are the elements of the product of the camera's intrinsic and extrinsic matrices; comparing coefficients gives:
the spatial plane equation is referred to as the coplanar equation for the object line, optical center, and image line. And (4) simultaneously solving two coplanar equations determined by the image straight lines matched on the image, and intersecting to obtain a space straight line corresponding to the image straight line. Setting a space straight line L EF Corresponding image line isThe equation for listing the spatial lines is
The parametric form of the spatial straight line is adopted. Taking an arbitrary x = x_0 and substituting into the above equation, the coordinates (X_0, Y_0, Z_0) of a point on the spatial line can be solved.
Knowing the normal vectors of the two planes, the direction vector of the straight line is determined as the cross product of the two normals.
The equation for the spatial line is:
3.2 Solving for spatial line segments
In the left eye, the optical center and the two endpoints of the projected straight-line segment form two straight lines. Find the intersections of l_EF with each of these two lines (if the two lines are not coplanar, take the point at the shortest distance between them), denoted E_left and F_left. In the same way, E_right and F_right are obtained in the right eye. To retain straight-line information as far as possible, of the four points {E_left, E_right, F_left, F_right}, the two points giving the largest length are taken as the start point and end point of the restored line segment, thereby restoring the spatial line segment EF. Computing each obtained spatial straight line in turn yields the set of spatial line segments N = {L_1, L_2, …, L_n}.
(4) Pose calculation
4.1) The left-eye image is selected as the coarse-matching two-dimensional plane. Using the line-segment set obtained in (3), the intersection point of every pair of line segments in the set is acquired, giving the intersection set M_pnp = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}.
Three-dimensional points are measured and established in the real coordinate system and stored in the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), …, (X_n, Y_n, Z_n)}. All points in M_pnp are arranged in a certain order, kept consistent with the order in G_3dpot. Substitute into the camera imaging model formula:
where K is the camera intrinsic parameter matrix, assumed known in the invention. Substituting the points into the formula and applying the EPnP algorithm finally yields the coarse-matching rotation variable R_pnp and translation variable T_pnp.
4.2) Point clouds are acquired by segmenting and sampling the spatial straight lines according to a certain threshold, finally obtaining point cloud data based on the straight-line features. For a given spatial straight line, the specific method is:
where θ is the angle between the straight line and the positive x-axis, len is the length of the straight line, and k is the number of point-cloud points on the line. The point set into which the straight line is discretized is
All spatial feature straight lines are three-dimensionally reconstructed in step 3, and the resulting three-dimensional straight lines are discretized in turn into the point set P according to the above algorithm. The point set Q is formed in the same way from the manually established template library.
4.3) Set the results of 4.1 as the initialized rotation matrix and translation vector, i.e. R = R_pnp, T = T_pnp. Update the data point set: apply the translation and rotation parameters obtained in the previous step to P to obtain a new transformed point set, and compute the error, where p_i ∈ P, q_i ∈ Q.
If the difference between successive iteration errors is less than a given threshold, the calculation ends; otherwise the iteration is repeated, yielding the rotation variable R_icp and translation variable T_icp.
Finally, the rotation variable of the camera coordinate system relative to the physical coordinate system is R = R_pnp · R_icp and the translation variable is T = T_pnp + T_icp.
Claims (4)
1. A pose estimation method based on combination of straight line features and point cloud features is characterized by comprising the following steps:
step 1: the method for extracting and combining the linear features fused with the prior knowledge comprises the following steps:
1.1: straight line number l 1 ,l 2 …l b The coordinate value of the starting point of the straight line is (x) begin1 ,y begin1 ),(x begin2 ,y begin2 ),…,(x beginb ,y beginb ) The coordinate value of the end point is (x) end1 ,y end1 ),(x end2 ,y end2 ),…,(x endb ,y endb ) (ii) a From the origin O to each line l 1 ,l 2 …l b Is recorded as d 1 ,d 2 ,…,d b ,l 1 ,l 2 …l b The included angles with the positive direction of the x axis of the image are theta 1 ,θ 2 …θ b ;
1.2: grouping the lines with the same slope, and recording as group 1 ,group 2 ,…,group m Wherein m is a group, each group having a value represented by θ 1 ,θ 2 …θ b Determining the number of the differences; if group i The number of straight lines in (1) is greater than 1 ij And l ik Respectively represent group i The jth straight line and the kth straight line in the ith group are calculated i Middle two different straight lines l ij And l ik Relative distance therebetween:
Δd=d lij -d lik
wherein: d lij ,d lik Respectively from origin to straight line l ij And l ik The shortest distance of (d); setting a threshold value of the merging distance as d, and if delta d is less than d, then l ij And l ik Grouped into a collective group s Merging the straight line segments in the set;
1.3: according to the steps, until group s The number of straight lines in (1) is 1, and the straight line combination is finished;
step 2: straight-line matching in binocular images:
the straight-line sets in the left- and right-eye images are obtained; take a straight line in the left image and match it, in turn, against each straight line in the right eye, calculating according to the formula:
where Mark is the score value for matching two straight lines; initially Mark_final = Mark_0 = 100; Mark_i is the score at a given stage, and f is the assigned scoring conditions, including the straight-line included angle, the horizontal constraint, and the left-right eye disparity;
take one straight line from each of the left- and right-eye line sets, where l denotes the left-eye image and r the right-eye image, and substitute them into the straight-line included-angle constraint, the horizontal constraint, and the left-right disparity constraint;
through the above steps the final value of Mark is obtained; compare the values of Mark_i and Mark_{i-1}: Mark_final = max(Mark_i, Mark_{i-1}), and record the numbers k_tempL and j_tempR of the corresponding straight lines in the left and right images; after the k-th straight line of the left-eye image has traversed all straight lines in the right eye in turn, use the number j_tempR of the right-eye straight line to traverse all straight lines in the left eye and execute the formula;
finally, the maximum Mark value and the corresponding i_tempL are obtained by iteration; if i_tempL = k_tempL, the left-eye straight line numbered k_tempL and the right-eye straight line numbered j_tempR are successfully matched and stored in the matched straight-line set M; otherwise the matching fails; repeat the above steps, iterating in turn until the algorithm terminates, finally obtaining the matched straight-line set M;
step 3: three-dimensional reconstruction of straight-line features
3.1 solving of spatial lines
Let the image straight line equation be ax + by + c =0, and as can be known from the central projection principle, a straight line on the image corresponds to a plane in space, and let the spatial plane equation be:
AX+BY+CZ+D=0
from the central imaging model of points:
where m_11 to m_34 are the elements of the product of the camera's intrinsic and extrinsic matrices; comparing coefficients gives:
solving simultaneously the two coplanarity equations determined by the matched image lines, their intersection gives the spatial straight line corresponding to the image lines; let a spatial straight line L_EF correspond to the image lines e_l f_l and e_r f_r; the equations of the spatial line are:
the parametric form of the spatial straight line is adopted; taking an arbitrary x = x_0 and substituting into the above equation, the coordinates (X_0, Y_0, Z_0) of a point on the spatial line are solved;
knowing the normal vectors of the two planes, the direction vector of the straight line is determined as the cross product of the two normals;
the equation of the spatial line is:
3.2 solving spatial line segments
in the left eye, the optical center O_l and the two endpoints of the straight-line segment projected onto the left imaging plane form two spatial straight lines; calculate the intersections of the spatial straight line l_EF with each of these two lines, denoted E_left and F_left; if the two lines are not coplanar, take the point at the shortest distance between them; the intersection points E_right and F_right are obtained in the right eye in the same way; to retain straight-line information as far as possible, of the four points {E_left, E_right, F_left, F_right}, the two points giving the largest length are taken as the start point and end point of the restored line segment, thereby restoring the spatial line segment EF; computing each obtained spatial straight line in turn yields the set of spatial line segments N = {L_1, L_2, …, L_n};
step 4: pose calculation
4.1 the left-eye image is selected as the coarse-matching two-dimensional plane: using the line-segment set obtained in step 3, the intersection point of every pair of line segments in the set is acquired, giving the intersection set M_pnp = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)};
three-dimensional points are measured and established in the real coordinate system and stored in the point set G_3dpot = {(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), …, (X_n, Y_n, Z_n)}; all points in M_pnp are arranged in a certain order, kept consistent with the order in the corresponding point set G_3dpot, and substituted into the camera imaging model formula:
where K is the known camera intrinsic parameter and λ is the scale factor of the imaging model; the points of M_pnp and G_3dpot are substituted into the formula respectively and the EPnP algorithm is applied, finally yielding the coarse-matching rotation variable R_pnp and translation variable T_pnp;
4.2 point cloud acquisition: the spatial straight lines are segmented and sampled according to a certain threshold, finally obtaining point cloud data based on the straight-line features; for a given spatial straight line, the specific method is:
where θ is the angle between the straight line and the positive x-axis, len is the length of the straight line, and k is the number of point-cloud points on the line; the point set into which the straight line is discretized is
discretize the three-dimensional straight lines obtained in step 3 into the point set P in turn according to the above algorithm, and form the point set Q from the manually established template library in the same way;
4.3 set the results of 4.1 as the initialized rotation matrix and translation vector, i.e. R = R_pnp, T = T_pnp; update the data point set: apply the translation and rotation parameters obtained in 4.1 to P to obtain a new transformed point set, and compute the error, where p_i ∈ P, q_i ∈ Q; if the difference between successive iteration errors is less than the given threshold, the calculation ends; otherwise the iteration is repeated, yielding the rotation variable R_icp and translation variable T_icp;
finally, the rotation variable of the camera coordinate system relative to the physical coordinate system is R = R_pnp · R_icp and the translation variable is T = T_pnp + T_icp.
2. The pose estimation method based on the combination of straight-line features and point cloud features according to claim 1, wherein the straight-line included-angle constraint is specifically: set a threshold T_angle for the difference between the angles of the two straight lines, each angle measured from the positive x-axis of the image; if the angle difference satisfies the threshold, the right-eye straight line satisfies the angle constraint; otherwise the Mark value is set to zero.
3. The pose estimation method based on the combination of straight-line features and point cloud features according to claim 1, wherein the horizontal constraint is specifically: Dy is the horizontal-constraint value and Dy_max is the horizontal-constraint threshold; let the left-eye line have start point (x_st1, y_st1) and end point (x_end1, y_end1), and the right-eye line have start point (x_str, y_str) and end point (x_endr, y_endr); Dy satisfies:
4. The pose estimation method based on the combination of straight-line features and point cloud features according to claim 1, wherein the left-right eye disparity constraint is specifically:
Dx is the left-right disparity and Dx_max is the left-right disparity threshold; Dx satisfies:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910526419.3A CN110310331B (en) | 2019-06-18 | 2019-06-18 | Pose estimation method based on combination of linear features and point cloud features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910526419.3A CN110310331B (en) | 2019-06-18 | 2019-06-18 | Pose estimation method based on combination of linear features and point cloud features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110310331A CN110310331A (en) | 2019-10-08 |
CN110310331B true CN110310331B (en) | 2023-04-14 |
Family
ID=68076179
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910526419.3A Active CN110310331B (en) | 2019-06-18 | 2019-06-18 | Pose estimation method based on combination of linear features and point cloud features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110310331B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111308481B (en) * | 2020-02-21 | 2021-10-15 | 深圳市银星智能科技股份有限公司 | Laser positioning method and device and mobile robot |
CN111325796B (en) * | 2020-02-28 | 2023-08-18 | 北京百度网讯科技有限公司 | Method and apparatus for determining pose of vision equipment |
CN111813882B (en) * | 2020-06-18 | 2024-05-14 | 浙江华睿科技股份有限公司 | Robot map construction method, device and storage medium |
CN112577500A (en) * | 2020-11-27 | 2021-03-30 | 北京迈格威科技有限公司 | Positioning and map construction method and device, robot and computer storage medium |
CN112720477B (en) * | 2020-12-22 | 2024-01-30 | 泉州装备制造研究所 | Object optimal grabbing and identifying method based on local point cloud model |
CN112862692A (en) * | 2021-03-30 | 2021-05-28 | 煤炭科学研究总院 | Image splicing method applied to underground coal mine roadway |
CN116523984B (en) * | 2023-07-05 | 2023-09-26 | 矽瞻科技(成都)有限公司 | 3D point cloud positioning and registering method, device and medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107945220A (en) * | 2017-11-30 | 2018-04-20 | 华中科技大学 | A kind of method for reconstructing based on binocular vision |
CN109544599A (en) * | 2018-11-22 | 2019-03-29 | 四川大学 | A kind of three-dimensional point cloud method for registering based on the estimation of camera pose |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101833761A (en) * | 2010-04-20 | 2010-09-15 | 南京航空航天大学 | Unmanned aerial vehicle (UAV) position and orientation estimation method based on cooperative target characteristic lines |
US9154773B2 (en) * | 2013-03-15 | 2015-10-06 | Seiko Epson Corporation | 2D/3D localization and pose estimation of harness cables using a configurable structure representation for robot operations |
CN106485690A (en) * | 2015-08-25 | 2017-03-08 | 南京理工大学 | Automatic registration and fusion method for point cloud data and optical images based on point features |
CN105976353B (en) * | 2016-04-14 | 2020-01-24 | 南京理工大学 | Spatial non-cooperative target pose estimation method based on model and point cloud global matching |
CN108982901B (en) * | 2018-06-14 | 2020-06-09 | 哈尔滨工业大学 | Method for measuring rotating speed of uniform-speed rotating body |
CN109166149B (en) * | 2018-08-13 | 2021-04-02 | 武汉大学 | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU |
CN109373898B (en) * | 2018-11-27 | 2020-07-10 | 华中科技大学 | Complex part pose estimation system and method based on three-dimensional measurement point cloud |
CN109801337B (en) * | 2019-01-21 | 2020-10-02 | 同济大学 | 6D pose estimation method based on instance segmentation network and iterative optimization |
- 2019-06-18: Application CN201910526419.3A filed in China (CN); patent granted as CN110310331B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN110310331A (en) | 2019-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110310331B (en) | Pose estimation method based on combination of linear features and point cloud features | |
KR102647351B1 (en) | Modeling method and modeling apparatus using 3d point cloud | |
CN107392947B (en) | 2D-3D image registration method based on contour coplanar four-point set | |
CN108369741B (en) | Method and system for registering data |
CN111563921B (en) | Underwater point cloud acquisition method based on binocular camera | |
CN109615654B (en) | Method for measuring corrosion depth and area of inner surface of drainage pipeline based on binocular vision | |
CN107588721A (en) | Method and system for measuring multiple part dimensions based on binocular vision |
CN107492107B (en) | Object identification and reconstruction method based on plane and space information fusion | |
EP2751777A1 (en) | Method for estimating a camera motion and for determining a three-dimensional model of a real environment | |
CN112381886A (en) | Multi-camera-based three-dimensional scene reconstruction method, storage medium and electronic device | |
CN113393524B (en) | Target pose estimation method combining deep learning and contour point cloud reconstruction | |
CN110425983A (en) | Monocular-vision three-dimensional reconstruction and distance measurement method based on polarization multispectral imaging |
CN104182968A (en) | Method for segmenting fuzzy moving targets by wide-baseline multi-array optical detection system | |
CN110675436A (en) | Laser radar and stereoscopic vision registration method based on 3D feature points | |
CN114170284B (en) | Multi-view point cloud registration method based on active landmark point projection assistance | |
CN117197333A (en) | Space target reconstruction and pose estimation method and system based on multi-view vision | |
CN113393413B (en) | Water area measuring method and system based on monocular and binocular vision cooperation | |
CN112001954B (en) | Underwater PCA-SIFT image matching method based on epipolar constraint |
CN117372647A (en) | Rapid construction method and system of three-dimensional model for building | |
CN112712566A (en) | Binocular stereo vision sensor measuring method based on structure parameter online correction | |
Kochi et al. | 3D modeling of architecture by edge-matching and integrating the point clouds of laser scanner and those of digital camera | |
Shen et al. | A 3D modeling method of indoor objects using Kinect sensor | |
CN113340201B (en) | Three-dimensional measurement method based on RGBD camera | |
CN112991372B (en) | 2D-3D camera external parameter calibration method based on polygon matching | |
CN109285210A (en) | Pipeline three-dimensional reconstruction method combining topological relations and epipolar constraints |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||