CN107622499B - Identification and space positioning method based on target two-dimensional contour model


Publication number
CN107622499B
Authority
CN
China
Prior art keywords
contour
contours
target
closed
edge point
Prior art date
Legal status
Active
Application number
CN201710734207.5A
Other languages
Chinese (zh)
Other versions
CN107622499A (en)
Inventor
凌乐
陈远强
魏清平
周东
刘丝丝
莫堃
董娜
于信宾
Current Assignee
Dongfang Electric Corp
Original Assignee
Dongfang Electric Corp
Priority date
Filing date
Publication date
Application filed by Dongfang Electric Corp
Priority to CN201710734207.5A
Publication of CN107622499A
Application granted
Publication of CN107622499B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a recognition and space positioning method based on a target two-dimensional contour model, relating to the technical field of image processing. The method addresses objects that have a two-dimensional planar contour: the recognition target is a contour feature on a plane of interest of the object, namely a closed contour formed by a certain number of connected straight line segments and/or arc segments and/or circles. The image may capture the target and its background from any angle and height, provided that all contours of the target remain visible and clear. The target is then recognized in the image from the contour information of its two-dimensional model, and the spatial position of the target relative to the camera is calculated.

Description

Identification and space positioning method based on target two-dimensional contour model
Technical Field
The invention belongs to the field of image processing, and mainly aims to search and identify a target with a specified shape from an image and calculate the spatial pose of the target relative to a camera.
Background
Template matching is the most common technique for identifying and locating a given object in an image: a template of the object is given, and the image is searched for the region most similar to that template. The simplest form of template matching looks for an object whose size and orientation are exactly consistent with the template; the template only needs to be translated pixel by pixel along the image coordinate system while the similarity is computed at each position, until the similarity exceeds an acceptance threshold and the target is considered found. In practical applications, however, the target and the template differ in size in the image because of different shooting distances, and the target may rotate about the surface normal, so this simple method is no longer applicable. In that situation the target is still undeformed, i.e. a circle remains a circle and a rectangle remains a rectangle, only scaled proportionally; the existing remedy is to discretize the scaling ratio and the rotation angle, apply affine transformations to the original template at a certain step size to generate many templates of different sizes and angles, and then repeat the template matching operation. This technique can meet the requirements for automatic identification and positioning of workpieces on a production line, because the height and angle of the camera relative to the line platform are fixed and an object usually only rotates about the normal. However, when a mechanical arm or robot must identify and locate a spatial target, the shooting angle is arbitrary: besides changes in the target's size and normal rotation angle in the image, projection deformation may occur. A circle may be projected into an ellipse, a rectangle becomes a parallelogram, and an irregular target composed of several straight lines and arcs takes on a form that is even harder to recognize, so the template matching method is no longer applicable. Matching under projection deformation is a difficult problem in image recognition, because the target must be identified from the deformed image and its current spatial position (relative to the camera) must also be calculated. The invention therefore designs an identification and spatial positioning method based on a target two-dimensional contour model, aimed at targets under projection deformation.
Disclosure of Invention
The method aims at objects that have a two-dimensional planar contour. The recognition target is a contour feature on a plane of interest of the object, namely a closed contour formed by a certain number of connected straight line segments and/or arc segments and/or circles. The image may capture the target and the background in which it lies from any angle and height, but all contours of the target must be visible and clear. The target is then recognized in the image from the contour information of its two-dimensional model, and the spatial position of the target relative to the camera is calculated.
The purpose of the invention is realized by the following technical scheme:
a target two-dimensional contour model-based identification and spatial positioning method is characterized by comprising the following steps:
step 1, preparing a two-dimensional model of the target: drawing the real two-dimensional model of the target in drawing software with millimeters as the length unit, and exporting the drawing as a file containing the image contour information, wherein the contour line of the two-dimensional model is closed and the contour consists of straight line segments, arc segments or circles;
step 2, reading the file containing the image contour information, parsing out the geometric primitives, searching for and describing the closed contours, judging and distinguishing the ambiguous and non-ambiguous contours among them, describing the connection order of straight lines and arcs within each closed contour, and thereby establishing the topological relation of the geometric primitives of the target two-dimensional contour;
step 3, collecting the target image, detecting the image edges with the Canny edge detection operator, traversing each contour, excluding non-closed contours, and checking the closed contours for single-pass and multi-pass edge points; using the checked single-pass and multi-pass edge points to separate intersecting contours; approximating the closed contours with polygons by recursively subdividing each closed contour with the Ramer algorithm until the maximum distance from all obtained line segments to the corresponding closed contour is smaller than a threshold Dmax; fitting and merging the straight line segments and/or arc segments, and computing the intersection coordinates of adjacent segments from the fitted straight-line and/or elliptical-arc equations, these intersections corresponding to the target two-dimensional contour of step 2 (an illustrative sketch of this step follows these steps);
and step 4, specifying a search contour, retrieving all closed contours generated in step 3, confirming the pending contours composed of the same line segments as the target template contour, performing a pose computation on the pending contours to complete confirmation, marking the successfully identified target in the image, and outputting the target position and projection matrix.
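For reference, a minimal sketch of the image-side processing in step 3 is given below, assuming OpenCV 4 and NumPy are available; the Canny thresholds are illustrative values, and cv2.approxPolyDP is used as an off-the-shelf implementation of the Ramer recursion with Dmax = 2, the value suggested later in the description. The invention's own closed-contour, single/multi-pass and intersection handling is more elaborate than this outline.

```python
import cv2
import numpy as np

def detect_and_approximate_contours(image_path, dmax=2.0):
    """Canny edge detection followed by Ramer polygon approximation (step 3)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)                 # pixel-level edge map

    # Candidate contours from the edge map; the invention's own closed-contour
    # checks and intersection separation are more detailed than this.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

    polygons = []
    for c in contours:
        if cv2.contourArea(c) < 10:                  # drop degenerate fragments
            continue
        # approxPolyDP implements the Ramer recursion: subdivide until the
        # maximum distance from the polygon to the contour is below dmax.
        poly = cv2.approxPolyDP(c, dmax, True)
        polygons.append(poly.reshape(-1, 2))
    return polygons
```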
In step 2, an ambiguous contour is a contour whose sequential descriptions contain identical descriptions; the specific method for establishing the topological relation of the geometric primitives of the target two-dimensional contour comprises:
parsing the primitive information from the segments of the file containing the image contour information, the primitive information comprising the start and end coordinates of Nl straight lines, the centers, radii and start and end coordinates of Na circular arcs, and the center coordinates and radii of Nc circles, and storing the information of all straight lines, arcs and circles;
searching for closed contours: starting from the start point of any straight line or arc, search for the next connected segment whose start point coincides with the end point of the current segment, and repeat this process until the search returns to the starting point, at which time one closed contour has been found;
removing the segments contained in the found closed contour from the original data and repeating the closed-contour search until all segments have been searched, giving N closed contours in total; adding the Nc circles in the target contour, which are themselves closed contours, the target two-dimensional model contains N + Nc closed contours in total; counting the number of straight lines NLi, the number of arcs NAi and the number of circles NCi in each closed contour, where the subscript i denotes the number of the closed contour;
describing the connection order of straight lines and arcs for each non-circular closed contour, the rule being: starting from any segment, traverse all segments of the contour in a fixed clockwise or anticlockwise direction until returning to the starting point, recording a straight line as 0 and an arc as 1; for each non-circular closed contour there are (NLi + NAi) such order descriptions, one obtained by taking each segment as the starting point, which completes the establishment of the topological relation.
In the step 2, a step of checking whether each closed contour is an ambiguous contour or a non-ambiguous contour is further included; an ambiguous contour is one whose sequential descriptions contain identical descriptions.
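The connection-order description and the ambiguity check can be illustrated with a short sketch; segment_types is an assumed representation (a list of 0/1 labels in traversal order), not a structure defined by the patent.

```python
def connection_orders(segment_types):
    """All (NLi + NAi) cyclic orders of a closed contour; 0 = line, 1 = arc."""
    n = len(segment_types)
    return [tuple(segment_types[i:] + segment_types[:i]) for i in range(n)]

def is_ambiguous(segment_types):
    """A contour is ambiguous when two of its cyclic orders coincide."""
    orders = connection_orders(segment_types)
    return len(set(orders)) < len(orders)

# FIG. 1 example from the description: the outer contour 1-0-0-0 is
# non-ambiguous, while the inner contour 1-0-1-0 repeats and is ambiguous.
assert not is_ambiguous([1, 0, 0, 0])
assert is_ambiguous([1, 0, 1, 0])
```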
In step 3, the method for excluding non-closed contours is: traverse all edge points; if there is only one other edge point within the three-neighborhood of a given edge point, that point is an 'unclosed point' and is deleted; then re-check the edge point adjacent to the deleted 'unclosed point' to see whether it now has only one other edge point in its three-neighborhood, and so on, thereby deleting one unclosed contour; this process is repeated, and after all edge points have been processed the remaining contours are all closed contours.
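A minimal sketch of this exclusion step, assuming edge_map is the binary Canny output; points with exactly one neighboring edge point in their three-neighborhood are peeled away repeatedly, as described above.

```python
import numpy as np

def remove_open_contours(edge_map):
    """edge_map: 2-D array, non-zero where Canny marked an edge point."""
    edges = (edge_map > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        ys, xs = np.nonzero(edges)
        for y, x in zip(ys, xs):
            window = edges[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            if window.sum() - 1 == 1:      # exactly one neighbor -> open end
                edges[y, x] = 0            # delete the unclosed point
                changed = True
    return edges                           # only closed contours remain
```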
In the step 3, the method for checking whether an edge point of a closed contour is a single-pass or multi-pass point is: count the non-adjacent edge points on the outermost ring of the five-neighborhood of the edge point, and check whether each non-adjacent edge point can reach the central edge point through three-neighborhood steps; if it can, it constitutes a channel of the central edge point; if it cannot, it is unrelated to the central edge point and is excluded directly; finally, if the number of channels of the central edge point is greater than 2, it is a multi-pass edge point, otherwise it is a single-pass edge point.
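One possible implementation of this check is sketched below; the window handling and the breadth-first reachability test are assumptions about how the stated rule could be realized, not the patent's own code.

```python
import numpy as np
from collections import deque

def count_channels(edges, y, x):
    """Count the channels of edge point (y, x): groups of non-adjacent edge
    points on the outermost ring of its 5x5 neighborhood that can reach the
    central edge point through 3x3-neighborhood (8-connected) steps."""
    h, w = edges.shape
    win = np.zeros((5, 5), dtype=np.uint8)
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w and edges[yy, xx]:
                win[dy + 2, dx + 2] = 1

    ring = [(r, c) for r in range(5) for c in range(5)
            if max(abs(r - 2), abs(c - 2)) == 2 and win[r, c]]

    # mutually adjacent ring pixels count together as one "non-adjacent edge point"
    groups, assigned = [], set()
    for p in ring:
        if p in assigned:
            continue
        group, stack = [], [p]
        assigned.add(p)
        while stack:
            cur = stack.pop()
            group.append(cur)
            for nb in ring:
                if nb not in assigned and abs(nb[0] - cur[0]) <= 1 and abs(nb[1] - cur[1]) <= 1:
                    assigned.add(nb)
                    stack.append(nb)
        groups.append(group)

    def reaches_center(start):
        seen, queue = set(start), deque(start)
        while queue:
            r, c = queue.popleft()
            if (r, c) == (2, 2):
                return True
            for nr in range(max(r - 1, 0), min(r + 2, 5)):
                for nc in range(max(c - 1, 0), min(c + 2, 5)):
                    if win[nr, nc] and (nr, nc) not in seen:
                        seen.add((nr, nc))
                        queue.append((nr, nc))
        return False

    return sum(1 for g in groups if reaches_center(g))
    # more than 2 channels -> multi-pass edge point, otherwise single-pass
```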
In the step 3, the method for separating intersecting contours is: starting from any edge point, if it is a single-pass edge point, select its channel and search for the next edge point; if it is a multi-pass edge point, search for the next edge point along each of its channels; if an already-searched edge point is reached again during the search, delete that route; continue until the search returns to the starting point, compare all paths that successfully return to the starting point, and take the shortest one as the minimal closed contour; repeat the above process to complete the separation of all intersecting contours.
In the step 3, the specific method for fitting and merging the straight line segments and/or arc segments is as follows:
elliptical-arc fitting: examine two adjacent edges of the polygon approximating a closed contour, fit an ellipse to the edge points covered by the two edges and compute the ellipse fitting error; if the error is smaller than the approximation error of the two edges with respect to the corresponding contour, replace the two straight edges with the elliptical arc, and repeat this process along the next adjacent edges until the elliptical arc no longer gives a better fit;
elliptical-arc merging: if several consecutive sections of a contour obtained in the Ramer recursive subdivision of each closed contour are fitted by elliptical arcs, compute the center distance between two adjacent elliptical arcs; if the center distance is smaller than a threshold Dcmax the arcs can be merged; after merging them into one elliptical arc, repeat the same operation with the next adjacent elliptical arc until all adjacent elliptical arcs have been processed;
straight-line fitting and merging: the contour sections produced by the Ramer recursive subdivision that cannot be fitted with elliptical arcs are fitted directly with straight lines, and adjacent straight segments are judged and merged;
solving the intersection coordinates of adjacent segments: the fitted straight-line equation Y = kX + b and the fitted elliptical-arc equation AX² + BXY + CY² + DX + EY + 1 = 0 are solved simultaneously to obtain the intersection coordinates (X, Y) of adjacent segments, these intersections corresponding to the nodes of the target template; here k and b are coefficients obtained by the line fit and determine a unique straight line, and (A, B, C, D, E) are parameters obtained by the ellipse fit, each set of parameters determining a unique ellipse.
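A hedged sketch of this node computation: substituting the fitted line Y = kX + b into the fitted conic AX² + BXY + CY² + DX + EY + 1 = 0 gives a quadratic in X whose real roots are the intersection abscissae. The function name and the use of numpy.roots are illustrative choices.

```python
import numpy as np

def line_ellipse_intersections(k, b, A, B, C, D, E):
    """Intersection of Y = k*X + b with A*X^2 + B*X*Y + C*Y^2 + D*X + E*Y + 1 = 0."""
    a2 = A + B * k + C * k**2          # quadratic coefficient after substitution
    a1 = B * b + 2 * C * k * b + D + E * k
    a0 = C * b**2 + E * b + 1.0
    xs = np.roots([a2, a1, a0])        # up to two intersection abscissae
    return [(float(x.real), float(k * x.real + b))
            for x in xs if abs(x.imag) < 1e-9]   # the nodes shared by the segments
```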
In the step 4, the method for specifying the search contour is: if the target contour contains several closed contours, a non-ambiguous contour is preferentially used as the template contour; if all closed contours are ambiguous contours, the contour with the largest number of line segments is used as the template contour, matching is performed along the corner points under every possible correspondence, the similarity of each is calculated, and the most similar group is selected.
In the step 4, the specific method for confirming the pending contours composed of the same segments as the target template contour is: retrieve all closed contours generated in step 3 and exclude those whose number of constituent segments differs from that of the template contour; in each remaining contour, select any node, write down the connection order of the contour in the clockwise direction, and if this order can be found among the (NLi + NAi) connection orders of the template contour, regard the retrieved contour as a pending contour, otherwise exclude it directly.
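A small sketch of this confirmation test, assuming the connection orders are held as 0/1 lists as in the earlier topology description; a candidate is accepted when its clockwise sequence matches one of the cyclic rotations of the template sequence.

```python
def matches_template(candidate_types, template_types):
    """True if the candidate contour can correspond to the template contour."""
    if len(candidate_types) != len(template_types):
        return False                    # different number of line segments
    n = len(template_types)
    rotations = {tuple(template_types[i:] + template_types[:i]) for i in range(n)}
    return tuple(candidate_types) in rotations

# A candidate read starting from a different node still matches, e.g.
# matches_template([0, 0, 1, 0], [1, 0, 0, 0]) -> True
```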
In step 4, the method for performing the pose computation on a pending contour and completing the confirmation comprises the following steps:
determining the physical-to-pixel correspondence between nodes from the order relation between the pending contour and the template contour, the template contour giving the physical coordinates of the nodes, and first establishing the pose calculation equation (given as an image in the original publication), which relates the pixel coordinates of a node to its physical coordinates through the internal reference (intrinsic) matrix of the camera, obtained by camera calibration, and the projection transformation matrix;
selecting three non-collinear nodes of the template contour, substituting their physical coordinates [Xw Yw 0]i^T and the pixel coordinates [u0 v0]i^T detected in the pending contour into the pose calculation equation, and solving for the projection transformation matrix;
substituting the computed projection transformation matrix back into the pose calculation equation together with the physical coordinates [Xw Yw 0]i^T of all nodes of the template contour, and computing the projected pixel coordinates [u v]i^T of each node;
calculating the projection similarity F from the projected pixel coordinates [u v]i^T of each node and the pixel coordinates [u0 v0]i^T of the corresponding node detected in the pending contour (the formula is given as an image in the original publication), where n is the total number of nodes of the template contour;
for a non-ambiguous contour, if the similarity meets the set threshold, the pending contour is regarded as the correct target and the projection transformation matrix is returned, this matrix describing the spatial translation of the target relative to the camera and its rotation angles about the axes;
for an ambiguous contour, if the similarity under several different node correspondences meets the set threshold, several pose matrices are obtained; the pending contour is then set as a polymorphic contour, and if the target has only the ambiguous contour, the several qualifying pose matrices are returned directly;
if a unique common pose matrix exists among the several polymorphic contours, that pose matrix is the correct pose of the ambiguous contour; if several identical pose matrices remain, the several ambiguous contours of the target are distributed completely symmetrically, in which case the several pose matrices are returned directly and the successfully identified target is marked in the image.
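Since the pose calculation equation and the similarity formula are reproduced only as images in the original publication, the sketch below substitutes OpenCV's standard planar pose routines (cv2.solvePnP and cv2.projectPoints) and interprets F as the mean pixel deviation between projected and detected nodes, in line with the later remark that the threshold expresses an average pixel tolerance of 1 to 5; it is an illustrative stand-in, not the patent's own formulation.

```python
import cv2
import numpy as np

def pose_and_similarity(template_nodes_mm, detected_nodes_px, camera_matrix,
                        dist_coeffs=None):
    """template_nodes_mm: (n, 2) node coordinates in the target plane (mm).
    detected_nodes_px:  (n, 2) matched node pixel coordinates, same order.
    camera_matrix: 3x3 intrinsic matrix obtained by camera calibration."""
    obj = np.hstack([np.asarray(template_nodes_mm, dtype=np.float64),
                     np.zeros((len(template_nodes_mm), 1))])     # plane Z = 0
    img = np.asarray(detected_nodes_px, dtype=np.float64)
    dist = np.zeros(5) if dist_coeffs is None else dist_coeffs

    # Standard planar pose solve; needs at least four well-distributed nodes.
    ok, rvec, tvec = cv2.solvePnP(obj, img, camera_matrix, dist)
    if not ok:
        return None, None, float("inf")

    projected, _ = cv2.projectPoints(obj, rvec, tvec, camera_matrix, dist)
    projected = projected.reshape(-1, 2)
    # Mean pixel deviation between projected and detected nodes (similarity F).
    F = float(np.mean(np.linalg.norm(projected - img, axis=1)))
    return rvec, tvec, F   # accept the contour when F is within the tolerance
```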
The invention has the following beneficial effects:
the invention provides a recognition and space positioning method based on a target two-dimensional contour model; although the length, radius, included angle, proportional relation and the like of the geometric elements (straight line segments and arc segments) may change after the target undergoes projection deformation, the topological relation among the geometric elements remains unchanged; this topological relation consists of the number of straight line segments, the number of arc segments and the number of circles contained in the target contour, together with the connection order of the straight line segments; according to the contour model of the given target, the topological relation of its geometric primitives is established, edge detection is then performed on the image to be searched, the geometric primitives are segmented and fitted so that all contours are decomposed into straight lines, ellipses and elliptical arcs, and searching and matching are performed in the image according to the topological relation of the target model until the topological relation of the target is satisfied.
The Canny operator adopted by the identification and space positioning method based on the target two-dimensional contour model is recognized in the industry as the best pixel-level edge detection algorithm, and the same is true of the Ramer algorithm.
The invention further provides a recognition and space positioning method based on the target two-dimensional contour model, and the recognition target is the contour feature on a certain interested plane of the object. The identification target is connected into a closed contour by a certain number of straight line segments and/or arc segments and/or circles. The image can shoot the target and the background thereof from any angle and height, but all the contours of the target are ensured to be visible and clear, then the target is identified from the image according to the contour information of the two-dimensional model of the target, and the spatial position of the target relative to the camera is calculated.
Drawings
FIG. 1 is a schematic diagram depicting an inner contour of an object image according to the present invention;
FIG. 2 is a schematic diagram of the three-neighborhood of the present invention;
FIG. 3 is a schematic diagram of the five-neighborhood of the present invention;
FIG. 4 is a schematic diagram of the shortest path search and separation strategy for intersecting contours of the present invention;
fig. 5 is a schematic diagram of the logical structure of the present invention.
Detailed Description
The technical solutions for achieving the objects of the present invention are further described below by using several specific examples, and it should be noted that the technical solutions claimed in the present invention include, but are not limited to, the following examples.
Example 1
Referring to fig. 1 to 5, a method for identifying and spatially positioning an object based on a two-dimensional contour model is characterized by comprising the following steps:
step 1, preparing a two-dimensional model of the target: drawing the real two-dimensional model of the target in drawing software with millimeters as the length unit and exporting the drawing as a file containing the image contour information, wherein the contour line of the two-dimensional model is closed and the contour consists of straight line segments, arc segments or circles;
step 2, reading the file containing the image contour information, analyzing out a geometric primitive, searching and describing a closed contour, judging and distinguishing an ambiguous contour and a non-ambiguous contour in the file, describing a connection sequence relation of a straight line and an arc line in the closed contour, and establishing a target two-dimensional contour geometric primitive topological relation;
step 3, collecting a target image, detecting the edge of the image by using a Canny edge detection operator, traversing each contour, eliminating non-closed contours, and carrying out single-pass and multi-pass edge point inspection on the closed contours; selecting the checked single-pass and multi-pass edge points to carry out intersecting contour separation; enabling the polygon to approach the closed contours, and performing recursive subdivision on each closed contour by using a Ramer algorithm until the maximum distance from all the obtained line segments to the corresponding closed contour is smaller than a threshold value Dmax; fitting and combining the straight line segments and/or the arc segments, and calculating the coordinates of intersection points of adjacent line segments according to the fitted straight line equation and/or the fitted elliptic arc equation, wherein the intersection points correspond to the target two-dimensional contour in the step 2;
and 4, appointing a search contour, retrieving all closed contours generated in the step 3, confirming undetermined contours formed by line segments which are the same as the contour of the target template, performing posture operation on the contours to be confirmed, completing confirmation, marking successfully identified targets in the image, and outputting target positions and projection matrixes.
This is one of the most basic embodiments of the present invention. Although the length, radius, included angle, proportional relation and the like of the geometric elements (straight line segments and arc segments) may change after the target undergoes projection deformation, what remains unchanged in the target two-dimensional contour is the topological relation among the geometric elements; this topological relation consists of the number of straight line segments, the number of arc segments and the number of circles contained in the target contour, together with the connection order of the straight line segments; the topological relation of the geometric primitives of the given target is established from its contour model, edge detection is then performed on the image to be searched, the geometric primitives are segmented and fitted so that all contours are decomposed into straight lines, ellipses and elliptical arcs, and searching and matching proceed in the image according to the topological relation of the target model until that topological relation is satisfied; Canny is adopted as an industry-recognized best pixel-level edge detection algorithm, and the same is true of the Ramer algorithm.
Example 2
A target two-dimensional contour model-based identification and spatial positioning method is characterized by comprising the following steps
Step 1, preparing a two-dimensional model of a target, drawing a real two-dimensional model of the target in AutoCAD, wherein the length unit is millimeter, and exporting the model into a DXF format file, wherein the outline of the two-dimensional model must be closed, and the outline consists of a straight line segment, an arc line segment or a circle;
step 2, reading a file containing image contour information, analyzing geometric primitives, establishing a target two-dimensional contour geometric primitive topological relation, searching and describing a closed contour, and judging whether the contour is an ambiguous contour;
step 3, collecting a target image, detecting edges in the image by using a Canny edge detection operator, excluding non-closed contours, carrying out single-pass and multi-pass edge point detection on the closed contours, selecting the detected single-pass and multi-pass edge points to carry out intersection contour separation, and carrying out recursive subdivision on each closed contour by using a Ramer algorithm until the maximum distance from all obtained line segments to the respective corresponding contour segment is less than a certain threshold value Dmax; fitting and combining the straight line segments and/or the arc segments, and calculating the coordinates of intersection points of adjacent line segments according to the fitted straight line equation and/or the fitted elliptic arc equation, wherein the intersection points correspond to the nodes of the target template;
and 4, appointing a search contour, retrieving all closed contours generated in the step 3, confirming undetermined contours formed by line segments which are the same as the contour of the target template, performing posture operation on the contours to be confirmed, completing confirmation, marking successfully identified targets in the image, and outputting target positions and projection matrixes.
In step 2, the ambiguous contours, that is, the sequential descriptions of the contours, have the same sequential description; the specific method for establishing the topological relation of the target two-dimensional contour geometric primitive comprises
Analyzing primitive information from the segments of the files containing the image contour information, wherein the primitive information comprises start and end coordinates of Nl straight lines, the centers, radii and start and end coordinates of Na circular arcs, and the center coordinates and radii of Nc circles, and storing information of all the straight lines, the circular arcs and the circles;
searching for a closed contour, starting from the starting point of any straight line or arc, searching for the starting point of the next connecting line segment which is coincident with the end point of the straight line, and repeating the process until the starting point returns to the beginning, namely searching for a closed contour; since there is no intersection in the closed contour of the target template, there is only one search path that can go back to the starting point. Specifically, in the following, when searching for a closed contour from an actual image, there may be an intersecting closed contour, and in this case, it is necessary to search for a closed contour according to the shortest path, which will be described in detail later.
Removing the line segments contained in the searched closed contour from the original data, repeating the step of searching the closed contour until all the line segments are searched, counting N closed contours and adding the circle Nc in the target contour, namely the closed contour, so that (N + Nc) closed contours are counted in the target two-dimensional model, counting the number NLi of straight lines, the number NAi of circular arcs and the number NCi of circles in each closed contour, wherein the corner mark i represents the number of the closed contours;
describing the connection sequence relation between straight lines and arcs in the closed contour, wherein the description rule is that starting from any line segment, all line segments of the contour are traversed to return to the starting point along a fixed clockwise or anticlockwise direction, if the line segment is the straight line, the line segment is marked as 0, and the arc line is marked as 1; for non-circular contours, the order relationship has (NLi + NAi) in common, i.e. each line segment is used as a starting point to obtain a connection order. For example, in the example of FIG. 1, the connection of the outer contours can be represented as four types, 1-0-0-0, 0-1-0, 0-0-0-1, in sequence.
Check if it is an "ambiguous profile"; the same sequential description exists for an "ambiguous profile", i.e., the (NLi + NAi) sequential description of the profile, as in the example of FIG. 1, the four connections of the inner profile are sequentially 1-0-1-0, 0-1-0-1, 1-0-1-0, 0-1-0-1; during the matching process, the 'ambiguous profile' cannot obtain a unique matching result.
Searching starts from a 'non-ambiguous' closed contour; if all closed contours are 'ambiguous', matching starts from the closed contour with the largest number of corner points and proceeds along the corner points, giving X possible correspondences in total; the similarity of each is calculated and the most similar group is selected;
in the step 3, the method for excluding the non-closed contour is to traverse all edge points, if a certain edge point is in the three neighborhoods thereof and the number of other edge points is only 1, the edge point belongs to an 'unclosed point', after deleting the edge point, rechecking the edge point adjacent to the 'unclosed point', whether there is only one other edge point in the three neighborhoods thereof, and so on, deleting an unclosed contour; and repeating the process, wherein after all the edge points are processed, the remaining contours are all closed contours.
In the step 3, the method for checking the single-pass and multi-pass edge points of the closed contour is to check the number of the non-adjacent edge points in the five-neighborhood outermost circle of a certain edge, check whether each non-adjacent edge point can reach the central edge point along the three neighborhoods, if so, indicate that the non-adjacent edge point is a channel of the central edge point, if not, the non-adjacent edge point is unrelated to the central edge point and is directly excluded, and finally, if the channel of the central edge point is more than 2, the non-adjacent edge point is a multi-channel edge point, otherwise, the non-adjacent edge point is a single-channel edge point
In the step 3, the intersecting contour separating method is that the detected single-pass and multi-pass edge points are started from any edge point, if the edge point is a single-pass, a channel is selected to search for the next edge point, and if the edge point is a multi-channel edge point, the next edge point is searched along each channel respectively; and if the searched edge points are returned in the searching process, deleting the route until the route is returned to the starting point, comparing all paths successfully returned to the expected destination, taking the shortest path as the minimum closed contour, repeating the process, and completing the separation of all the intersecting contours.
In the step 3, the specific method for fitting and merging the straight line segments and/or the arc segments is as follows
And fitting the elliptic arc segment, checking adjacent two sides of the approximate polygon, using edge points where the two sides of the elliptic fit are located, calculating an elliptic fit error, and replacing two straight line sides with the elliptic arc if the error is less than the approximate error between the two sides and the corresponding contour. The above process is repeated along two adjacent edges until a better fit cannot be made using the elliptical arc.
And f, combining the elliptical arc sections, and if the continuous multiple sections of the contour fitting by the elliptical arc appears in the step f, considering whether the continuous multiple sections of the contour can be combined into an elliptical arc for fitting. The specific method comprises the following steps: and calculating the center distance between two adjacent elliptical arcs, if the center distance is smaller than a threshold Dcmax, indicating that the two adjacent elliptical arcs can be combined, combining the two elliptical arcs into one elliptical arc, and repeating the same operation with the next adjacent elliptical arc until the adjacent elliptical arcs are processed.
And (3) linear fitting and merging, namely directly fitting the contour which cannot be fitted by using the elliptical arc by using a linear line, and judging and merging adjacent linear segments.
And solving the coordinates of the intersection points of the adjacent line segments, and calculating the coordinates of the intersection points of the adjacent line segments according to the fitted linear equation and elliptic arc equation, wherein the intersection points correspond to the nodes of the target template.
In the step 4, the method for specifying the search contour is to preferentially select the "unambiguous" contour as the template contour if a plurality of closed contours exist in the target contour, and to select the contour with the largest number of line segments as the template contour if the "unambiguous" contour does not exist.
In the step 4, the specific method for confirming the undetermined contour consisting of the same line segments as the contour of the target template is to retrieve all closed contours generated in the step 3 and eliminate the contours which form line segments with different numbers from the contour segments of the template; and (3) in the rest contours, arbitrarily selecting a node (intersection point of adjacent line segments) in the contour, arranging the connection sequence of the contour in the clockwise direction (which is consistent with the direction in the step 2 e), and if the sequence can be found in the connection sequence in the (NLi + NAi) of the template contour, determining the searched contour as the contour to be determined, otherwise, directly excluding the searched contour.
In the step 4, the method for performing attitude operation and completing confirmation on the contour to be determined comprises the following steps
Determining the 'physical-to-pixel' correspondence between nodes from the order relation between the pending contour and the template contour, the template contour giving the physical coordinates of the nodes and the pending contour giving their pixel coordinates; if the template contour is an 'ambiguous contour', two or more valid correspondences exist and should all be treated equally; the following calculation is then completed:
first, the pose calculation equation is established (given as an image in the original publication); it relates the pixel coordinates of a node to its physical coordinates through the internal reference (intrinsic) matrix of the camera, obtained by camera calibration, and the projection transformation matrix;
three non-collinear nodes of the template contour are selected, their physical coordinates [Xw Yw 0]i^T and the pixel coordinates [u0 v0]i^T detected in the pending contour are substituted into the pose calculation equation, and the projection transformation matrix is solved for;
the computed projection transformation matrix is substituted back into the pose calculation equation together with the physical coordinates [Xw Yw 0]i^T of all nodes of the template contour, and the projected pixel coordinates [u v]i^T of each node are computed;
the projection similarity F is calculated from the projected pixel coordinates [u v]i^T of each node and the pixel coordinates [u0 v0]i^T of the corresponding node detected in the pending contour (the formula is given as an image in the original publication), where n is the total number of nodes of the template contour;
for the 'non-ambiguous contour', if the similarity meets the set threshold, the pending contour is regarded as the correct target and the projection transformation matrix is returned; the matrix describes the spatial translational position of the target relative to the camera and its rotation angles about the axes;
for the ambiguous profile, if the similarity under the corresponding relations of various nodes exceeds a set threshold, a plurality of posture matrixes are obtained and cannot be uniquely determined, the to-be-determined profile is set as a polymorphic profile, and if the target only has the ambiguous profile, the posture matrixes meeting the requirements are directly returned;
if the target has other 'ambiguous contours', setting other contours as new template contours, and repeating the steps near the 'polymorphic contours' to obtain new 'polymorphic contours';
if a unique common attitude matrix exists among the multiple polymorphic profiles, the attitude matrix is the correct attitude of the ambiguous profile; if there are still multiple identical pose matrices, which indicate that the distribution of multiple "ambiguous contours" of the target is completely symmetric, then the multiple pose matrices are returned directly and the target that was successfully identified is marked in the image.
Example 3
A target two-dimensional contour model-based identification and spatial positioning method comprises the following steps
Step 1, preparing a two-dimensional model of a target, drawing a real two-dimensional model of the target in drawing software, wherein the length unit is millimeter, exporting drawing paper into a DXF format file, wherein the outline of the two-dimensional model is closed, and the outline consists of a straight line segment, an arc line segment or a circle;
step 2, reading the file containing the image contour information, analyzing out a geometric primitive, searching and describing a closed contour, judging and distinguishing an ambiguous contour and a non-ambiguous contour in the file, describing a connection sequence relation of a straight line and an arc line in the closed contour, and establishing a target two-dimensional contour geometric primitive topological relation;
step 3, collecting a target image, detecting the edges of the image with the Canny edge detection operator, traversing each contour, removing non-closed contours, and checking the closed contours for single-pass and multi-pass edge points. The image edges, i.e. the pixel points on the edges, form a series of contours; 'edge points' here means the pixel points on the image edges detected by the Canny operator in step 3, that is, the pixel points on the contours. The checked single-pass and multi-pass edge points are then used to separate intersecting contours. The closed contours are approximated by polygons: each closed contour is recursively subdivided with the Ramer algorithm until the maximum distance from all obtained line segments to the corresponding closed contour is smaller than a threshold Dmax. The straight segments and/or arc segments are fitted and merged, and the intersection coordinates of adjacent segments are computed from the fitted straight-line and/or elliptical-arc equations; these intersections correspond to the target two-dimensional contour of step 2, which likewise consists of geometric primitives such as straight segments and arc segments, the nodes being the intersections of the straight segments and/or elliptical arc segments in the target two-dimensional contour. Canny is recognized in the industry as the best pixel-level edge detection algorithm, and the Ramer algorithm plays a similar role. The threshold Dmax is suggested to be 2, the value recommended in the paper by the author of the Ramer algorithm. The single-pass and multi-pass edge points describe the adjacency between any edge pixel point and the surrounding edge points and are used to judge whether contours intersect;
and 4, appointing a search contour, retrieving all closed contours generated in the step 3, confirming undetermined contours formed by line segments which are the same as the contour of the target template, performing posture operation on the contours to be confirmed, completing confirmation, marking successfully identified targets in the image, and outputting target positions and projection matrixes.
In step 2, the ambiguous contours, that is, the sequential descriptions of the contours, have the same sequential description; the specific method for establishing the topological relation of the target two-dimensional contour geometric primitive comprises
Analyzing primitive information from the segments of the files containing the image contour information, wherein the primitive information comprises start and end coordinates of Nl straight lines, the centers, radii and start and end coordinates of Na circular arcs, and the center coordinates and radii of Nc circles, and storing information of all the straight lines, the circular arcs and the circles;
searching for a closed contour, starting from the starting point of any straight line or arc, searching for the starting point of the next connecting line segment which is coincident with the end point of the straight line, and repeating the process until the starting point returns to the beginning, namely searching for a closed contour;
removing the line segments contained in the searched closed contour from the original data, repeating the step of searching the closed contour until all the line segments are searched, counting N closed contours, adding Nc circles which are self-closed contours in the target contour, wherein N + Nc closed contours are totally counted in the target two-dimensional model, and counting the number Nli of straight lines, the number Nai of circular arcs and the number Nci of circles in each closed contour, wherein the corner mark i represents the number of the closed contours;
describing the connection sequence relation of straight lines and arc lines of the non-circular closed contour in the closed contour, wherein the description rule is that starting from any line segment, all line segments of the contour are traversed to return to the starting point along a fixed clockwise or anticlockwise direction, if the line segments are straight lines, the line segments are marked as 0, and the arc lines are marked as 1; for each non-circular closed contour, the sequence relation has (NLi + NAi), that is, each line segment is used as a starting point, a connection sequence is obtained, and the establishment of the topological relation is completed.
In the step 2, a step of checking whether each closed contour is an ambiguous contour or a non-ambiguous contour is further included; an ambiguous contour is one whose sequential descriptions contain identical descriptions, as in the example of FIG. 1, where the four connection orders of the inner contour are 1-0-1-0, 0-1-0-1, 1-0-1-0, 0-1-0-1; an ambiguous contour has no unique correspondence, so no unique solution can be obtained later, further elimination is needed and the computational workload increases; therefore non-ambiguous contours, if present, are preferentially used for the matching calculation, while if all contours are ambiguous the matching is calculated by another method; the two calculation processes are different and must be distinguished;
in step 3, the method for excluding the non-closed contour includes: traversing all the edge points, if a certain edge point is in the three neighborhoods thereof and the number of other edge points is only 1, the edge point belongs to an 'unclosed point', deleting the edge point, then rechecking the edge point adjacent to the 'unclosed point', whether only one other edge point exists in the three neighborhoods thereof, and so on, namely deleting an unclosed outline; and repeating the process, wherein after all the edge points are processed, the remaining contours are all closed contours.
The 'three-neighborhood' refers to the 3×3 pixel window centered on the edge point under consideration, as shown in FIG. 2, and the 'five-neighborhood' refers to the 5×5 pixel window centered on that edge point, as shown in FIG. 3. 'Only one other edge point in its three-neighborhood' means that one and only one of the eight pixels other than the point itself is an edge point belonging to the contour currently being traversed; in FIG. 2, black denotes an edge point and white a pixel that is not an edge point. As shown in FIG. 3, a 'non-adjacent edge point' is a point on the outermost ring of the five-neighborhood; the inner 3×3 pixels of the five-neighborhood are not considered here, several mutually adjacent pixels on the outermost ring are counted together as one edge point, and an isolated pixel not adjacent to any other is counted directly as an edge point on its own. Edge points of these two kinds are collectively called 'non-adjacent edge points': in the left part of the figure, the points marked 1 and 2 are isolated on the outermost ring and are therefore each a 'non-adjacent edge point', while at the place marked 3 two edge points are adjacent and therefore count as one, so the left part contains three non-adjacent edge points in total. A 'channel of the central edge point' means that a non-adjacent edge point can reach the central edge point by searching along adjacent edge points, i.e. it is connected with the central edge point, and is then recorded as one channel.
In the step 3, the method for checking the single-pass and multi-pass edge points of the closed contour is to check the number of the non-adjacent edge points in the five-neighborhood outermost circle of a certain edge, check whether each non-adjacent edge point can reach the central edge point along the three neighborhoods, if so, indicate that the non-adjacent edge point is a channel of the central edge point, if not, the non-adjacent edge point is irrelevant to the central edge point and is directly excluded, and finally, if the number of the channels of the central edge point is more than 2, the non-adjacent edge point is a multi-channel edge point, otherwise, the non-adjacent edge point is a single-channel edge point
In the step 3, the intersecting contour separating method is that starting from any edge point, if the edge point is a single-pass edge point, a channel is selected to search for the next edge point, and if the edge point is a multi-pass edge point, the next edge point is searched along each channel; if the searched edge points are returned in the searching process, deleting the route until the searched edge points are returned to the starting point, comparing all paths successfully returned to the starting point, wherein the paths are closed outlines formed along the edge point searching, taking the shortest one as the minimum closed outline, repeating the above processes, and completing the separation of all the intersecting outlines.
In the step 3, the specific method for fitting and merging the straight line segments and/or the arc segments is as follows
Fitting an elliptical arc segment, checking two adjacent edges of a polygon approaching a closed contour, using the edge points of the two edges to fit the ellipse, calculating an elliptical fitting error, if the error is smaller than the approximation error of the two edges and the corresponding contour, using the elliptical arc to replace two straight line edges, and repeating the process along the two adjacent edges until the elliptical arc cannot be used for better fitting;
combining elliptical arc sections, calculating the center distance between two adjacent elliptical arcs if continuous sections of contours fitting by elliptical arcs appear in recursive subdivision on each closed contour by using a Ramer algorithm, indicating that the contours can be combined if the center distance is smaller than a threshold Dcmax, and repeating the same operation with the next adjacent elliptical arc after the contours are combined into one elliptical arc until the adjacent elliptical arcs are processed completely;
the threshold value Dcmax may be in the range of 0.5 to 2 according to empirical values.
Straight line fitting and merging, namely using a Ramer algorithm to fit the contours which cannot be fitted by using elliptical arcs in the recursive subdivision of each closed contour, directly using straight lines to fit, and judging and merging adjacent straight line segments;
solving the intersection coordinates of adjacent segments: the fitted straight-line equation Y = kX + b and the fitted elliptical-arc equation AX² + BXY + CY² + DX + EY + 1 = 0 are solved simultaneously to obtain the intersection coordinates (X, Y) of adjacent segments, these intersections corresponding to the nodes of the target template; here k and b are coefficients obtained by the line fit and determine a unique straight line, and (A, B, C, D, E) are parameters obtained by the ellipse fit, each set of parameters determining a unique ellipse.
In the step 4, the method for specifying the search contour is that if a plurality of closed contours exist in the target contour, the non-ambiguous contour is preferably used as the template contour, and if all the closed contours are ambiguous contours, the contour with the largest number of line segments is used as the template contour, the contours with the largest number of line segments are matched along the corner points, the similarity is all calculated, and the most similar group is selected.
In the step 4, the specific method for confirming the undetermined contour consisting of the same line segments as the contour of the target template is to retrieve all closed contours generated in the step 3 and eliminate the contours which form line segments with different numbers from the contour segments of the template; and in the rest contours, randomly selecting a node in the contour, arranging the connection sequence of the contour in a clockwise direction, if the sequence can be found in the (NLi + NAi) connection sequence of the template contour, determining the searched contour as the contour to be determined, and if not, directly excluding the searched contour.
In step 4, the method for performing attitude calculation and completing confirmation on the contour to be determined comprises the following steps:
determining the 'physical-to-pixel' correspondence between nodes from the order relation between the pending contour and the template contour, the template contour giving the physical coordinates of the nodes and the pending contour giving their pixel coordinates; if the template contour is an ambiguous contour, two or more valid correspondences exist and should all be treated equally; the following calculation is then completed:
first, the pose calculation equation is established (given as an image in the original publication); it relates the pixel coordinates of a node to its physical coordinates through the internal reference (intrinsic) matrix of the camera, obtained by camera calibration, and the projection transformation matrix;
three non-collinear nodes of the template contour are selected, their physical coordinates [Xw Yw 0]i^T and the pixel coordinates [u0 v0]i^T detected in the pending contour are substituted into the pose calculation equation, and the projection transformation matrix is solved for;
the computed projection transformation matrix is substituted back into the pose calculation equation together with the physical coordinates [Xw Yw 0]i^T of all nodes of the template contour, and the projected pixel coordinates [u v]i^T of each node are computed;
the projection similarity F is calculated from the projected pixel coordinates [u v]i^T of each node and the pixel coordinates [u0 v0]i^T of the corresponding node detected in the pending contour (the formula is given as an image in the original publication), where n is the total number of nodes of the template contour;
for the non-ambiguous contour, if the similarity meets the set threshold, the pending contour is regarded as the correct target and the projection transformation matrix is returned; the matrix describes the spatial translational position of the target relative to the camera and its rotation angles about the axes;
for an ambiguous profile, if the similarity under the corresponding relation of various nodes exceeds a set threshold, obtaining a plurality of posture matrixes, setting the profile to be determined as a polymorphic profile, and if the target only has the ambiguous profile, directly returning the plurality of posture matrixes meeting the requirement;
setting a threshold value expresses the average pixel tolerance deviation of the calculation node and the theoretical node, and the proposal is to take 1 to 5, if the matching requirement is strict, the threshold value should be smaller, otherwise, the threshold value should be larger.
If the target has other ambiguous contours, setting other contours as new template contours, and repeating the steps near the polymorphic contours to obtain new polymorphic contours;
since the first polymorphic-contour search is performed over the entire image, its computation cost is relatively high; subsequent polymorphic-contour searches can therefore start from the vicinity of the position of the previous polymorphic contour to reduce the computation. 'Vicinity of the polymorphic contour' means that the center of that polymorphic contour is used as the center of a circle whose search radius is gradually increased, diffusing outward to search for other possible closed contours, with exactly the same parameters as in the previous search (a sketch of this local search is given below).
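A minimal sketch of this local search, assuming the full-image search is wrapped in a caller-supplied function find_closed_contours (a hypothetical name) and that contours are returned as arrays of (x, y) pixel coordinates.

```python
def search_near(image, center, find_closed_contours, r0=50, step=50, r_max=500):
    """Search for closed contours in a window around `center` (row, col),
    growing the radius outward until something is found."""
    cy, cx = center
    r = r0
    while r <= r_max:
        y0, y1 = max(cy - r, 0), min(cy + r, image.shape[0])
        x0, x1 = max(cx - r, 0), min(cx + r, image.shape[1])
        found = find_closed_contours(image[y0:y1, x0:x1])  # same parameters as before
        if found:
            # shift contour (x, y) coordinates back into full-image pixels
            return [c + (x0, y0) for c in found]
        r += step                                          # diffuse outward
    return []
```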
If the plurality of polymorphic profiles have the unique common attitude matrix, the attitude matrix is the correct attitude of the ambiguous profile; if a plurality of identical attitude matrixes still exist, the distribution of a plurality of ambiguous contours of the target is completely symmetrical, and then a plurality of attitude matrixes are directly returned and the target which is successfully identified is marked in the image.
Example 4
Referring to fig. 1 to 5, a method for identifying and spatially positioning an object based on a two-dimensional contour model is characterized by comprising the following steps:
1. A two-dimensional model of the target is prepared. The real two-dimensional model of the target is drawn in AutoCAD with millimeters as the length unit and exported as a DXF file; the target contour must be closed and consist of straight line segments, arc segments or circles. As shown in FIG. 1, the target has four closed contours consisting of straight line segments, circles and arcs.
2. Reading the DXF file, parsing out the geometric primitives and establishing the topological relation of the target two-dimensional contour geometric primitives.
a) Analyzing primitive information from the entity segments of the DXF file, mainly the start and end coordinates of straight lines (Nl in total), the center, radius and start/end coordinates of circular arcs (Na in total), and the center coordinates and radius of circles (Nc in total), and storing the information of all straight lines, arcs and circles.
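Purely for illustration, a minimal sketch of this parsing step is given below using the third-party Python package ezdxf (an assumption; any DXF parser can be used); the file name "target.dxf" is hypothetical. Note that DXF stores arcs by center, radius and start/end angles, from which the start and end coordinates can be derived.

```python
# Sketch of reading LINE / ARC / CIRCLE primitives from a DXF model file.
import ezdxf

doc = ezdxf.readfile("target.dxf")          # hypothetical file name
lines, arcs, circles = [], [], []
for e in doc.modelspace():
    if e.dxftype() == "LINE":               # start and end coordinates of a line
        lines.append(((e.dxf.start.x, e.dxf.start.y), (e.dxf.end.x, e.dxf.end.y)))
    elif e.dxftype() == "ARC":              # center, radius, start/end angles of an arc
        arcs.append(((e.dxf.center.x, e.dxf.center.y), e.dxf.radius,
                     e.dxf.start_angle, e.dxf.end_angle))
    elif e.dxftype() == "CIRCLE":           # center coordinates and radius of a circle
        circles.append(((e.dxf.center.x, e.dxf.center.y), e.dxf.radius))

Nl, Na, Nc = len(lines), len(arcs), len(circles)
```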
b) Searching for closed contours. Starting from the start point of any straight line (or arc), search for the next connected segment whose start point coincides with the end point of the current segment, and repeat this process until the search returns to the initial start point, at which point one closed contour has been found. (Since the closed contours of the target template have no intersections, only one search path can return to the start point; when closed contours are later searched for in the actual image, intersecting closed contours may occur, and the closed contour must then be found along the shortest path, as described below.)
c) Removing the line segments contained in the found closed contour from the original data and repeating step b until all line segments have been used, counting N closed contours in total. Each of the Nc circles in the target contour is an inherently self-closed contour, so the target two-dimensional model has (N + Nc) closed contours in total.
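A rough sketch of steps b and c is shown below, assuming each segment (straight line or arc) is reduced to a (start, end) point pair; the tolerance value and helper names are illustrative rather than part of the method.

```python
# Chain segments whose endpoints coincide into closed contours (steps b and c).
def find_closed_contours(segments, tol=1e-6):
    def same(p, q):
        return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

    remaining = list(segments)
    contours = []
    while remaining:
        chain = [remaining.pop(0)]
        start, end = chain[0]
        closed = False
        while not closed:
            for i, (s, e) in enumerate(remaining):
                if same(s, end):                 # next segment starts where we ended
                    chain.append(remaining.pop(i)); end = e; break
                if same(e, end):                 # segment is stored reversed; flip it
                    remaining.pop(i); chain.append((e, s)); end = s; break
            else:
                break                            # open chain, no closure found
            closed = same(end, start)            # back at the initial start point
        if closed:
            contours.append(chain)
    return contours                              # circles are counted separately
```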
d) Counting the number NLi of straight lines, the number NAi of circular arcs and the number NCi of circles in each closed contour, where the subscript i denotes the number of the closed contour.
e) Describing the connection sequence relation of the straight lines and arcs in each closed contour. The rule is: starting from any line segment, traverse all line segments of the contour in a fixed clockwise (or counterclockwise) direction until returning to the starting point, marking a straight line as 0 and an arc as 1. For a non-circular contour there are (NLi + NAi) such order relations, i.e. one connection sequence is obtained by taking each line segment as the starting point. For example, in FIG. 1 the connection of the outer contour can be represented by four sequences, 1-0-0-0, 0-0-0-1, 0-0-1-0 and 0-1-0-0.
f) Checking whether the contour is an "ambiguous contour". An "ambiguous contour" is one whose (NLi + NAi) sequential descriptions contain identical sequences; in the example of FIG. 1, the four connection sequences of the inner contour are 1-0-1-0, 0-1-0-1, 1-0-1-0 and 0-1-0-1, so identical descriptions exist. In the subsequent matching process an "ambiguous contour" cannot yield a unique matching result.
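As a small illustration (names assumed), the connection sequences of steps e and f can be generated and tested for ambiguity as follows, using 0 for a straight line and 1 for an arc.

```python
# Build the (NLi + NAi) connection sequences of a contour and test for ambiguity.
def connection_sequences(contour_types):
    n = len(contour_types)
    # one sequence per starting segment, traversed in a fixed direction
    return [tuple(contour_types[(start + k) % n] for k in range(n))
            for start in range(n)]

def is_ambiguous(contour_types):
    seqs = connection_sequences(contour_types)
    return len(set(seqs)) < len(seqs)            # identical sequences exist

print(connection_sequences([1, 0, 0, 0]))        # outer contour of FIG. 1
print(is_ambiguous([1, 0, 1, 0]))                # inner contour of FIG. 1: True
```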
3. Acquiring and processing the image and establishing the geometric topological relation.
a) Acquiring the target image; the features to be identified must be complete and clear, and their real physical size should agree with the design size of the two-dimensional model as closely as possible.
b) Detecting the edges in the image with the Canny edge detection operator. The strong-edge threshold is lowered appropriately according to the actual image so that none of the target edges to be detected are missed and the edges are as continuous as possible; the extra short, weak edges thereby introduced are eliminated in the next step.
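For illustration, a minimal edge-detection sketch using OpenCV is given below; the file name and threshold values are assumptions and should be tuned to the actual image.

```python
# Canny edge detection with a deliberately lowered strong-edge threshold.
import cv2

img = cv2.imread("target_image.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
img = cv2.GaussianBlur(img, (5, 5), 0)                        # suppress noise first
# A low high-threshold keeps all target edges; the extra short, weak edges
# introduced this way are removed by the closed-contour filtering that follows.
edges = cv2.Canny(img, threshold1=30, threshold2=90)
```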
c) Excluding non-closed contours. The method of excluding non-closed contours is:
Traverse all edge points; if an edge point has only one other edge point in its 3×3 neighborhood, it is an "unclosed point" and is deleted; the edge point adjacent to the deleted point is then rechecked for whether only one other edge point remains in its 3×3 neighborhood, and so on, so that an unclosed contour is deleted. Repeating this process until all edge points have been processed, the remaining contours are all closed contours.
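A rough vectorized sketch of this pruning is given below, assuming "three neighborhoods" means the 3×3 neighborhood of a pixel; the repeated convolution plays the role of the iterative re-checking of points adjacent to deleted unclosed points.

```python
# Iteratively delete edge pixels with at most one neighbouring edge pixel,
# so that only closed contours remain. `edges` is a binary edge map.
import numpy as np
import cv2

def remove_open_contours(edges):
    e = (edges > 0).astype(np.float32)
    kernel = np.ones((3, 3), np.float32)
    while True:
        # number of edge pixels in each 3x3 neighbourhood (centre included)
        neighbours = cv2.filter2D(e, -1, kernel, borderType=cv2.BORDER_CONSTANT)
        # an edge pixel with <= 1 other edge neighbour is an "unclosed point"
        open_pts = (e == 1) & (neighbours <= 2)
        if not open_pts.any():
            return (e > 0).astype(np.uint8) * 255
        e[open_pts] = 0
```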
d) Checking single-pass and multi-pass edge points. After the elimination in step c, the remaining contours in the image are all closed; viewed from their adjacency relations they can be divided into intersecting contours and separate contours. A separate closed contour contains only single-pass edge points, whereas intersecting contours contain multi-pass edge points (nodes). Single-pass and multi-pass edge points are judged as follows: on the outermost ring of the 5×5 neighborhood of an edge point, find the groups of mutually non-adjacent edge points; for each group, check whether the central edge point can be reached through 3×3 neighborhoods. If it can, that group is a channel of the central edge point; if not, the group is unrelated to the central edge point and is excluded. Finally, if the number of channels of the central edge point is greater than 2 it is a multi-pass edge point, otherwise it is a single-pass edge point, as shown in FIG. 2.
e) Separating the intersecting contours. According to the single-pass and multi-pass edge points (nodes) judged in step d, start from any edge point: if it is a single-pass point, select its one channel and search for the next edge point; if it is a multi-pass point, search for the next edge point along each of its channels. If a search route reaches an already-searched edge point other than the starting point, that route is deleted; once routes return to the starting point, all routes that successfully returned to the starting point are compared and the shortest one is taken as a minimal closed contour. Repeating the above process completes the separation of all intersecting contours, as shown in FIG. 3.
f) Approximating each closed contour with a polygon. Each closed contour is recursively subdivided with the Ramer algorithm until the maximum distance from every obtained line segment to its corresponding contour segment is less than a threshold Dmax.
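The recursion can be sketched as follows (OpenCV's cv2.approxPolyDP implements the same idea); the point-array layout is an assumption, and for a closed contour the recursion would first be started from two well-separated points.

```python
# Ramer (Douglas-Peucker) recursive subdivision of a contour segment.
import numpy as np

def ramer(points, dmax):
    """points: (N, 2) ordered contour points; returns indices of kept vertices."""
    pts = np.asarray(points, dtype=np.float64)
    a, b = pts[0], pts[-1]
    ab = b - a
    denom = np.linalg.norm(ab)
    if denom == 0:
        d = np.linalg.norm(pts - a, axis=1)
    else:
        # perpendicular distance of every point to the chord a-b
        d = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0])) / denom
    i = int(np.argmax(d))
    if d[i] <= dmax:
        return [0, len(pts) - 1]                 # segment already within Dmax
    left = ramer(pts[:i + 1], dmax)              # split at the farthest point
    right = ramer(pts[i:], dmax)
    return left[:-1] + [k + i for k in right]
```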
g) Fitting elliptical arc segments. Check two adjacent sides of the approximating polygon, fit an ellipse to the edge points of the two sides and compute the ellipse fitting error; if that error is smaller than the approximation error of the two sides with respect to the corresponding contour, replace the two straight sides with an elliptical arc. The process is repeated along further adjacent sides until the elliptical arc no longer gives a better fit.
h) Merging elliptical arc segments. If consecutive segments of a contour are fitted with elliptical arcs, consider whether they can be merged into a single elliptical arc. Specifically: compute the distance between the centers of two adjacent elliptical arcs; if it is smaller than a threshold Dcmax, the two arcs can be combined into one elliptical arc, and the same operation is then repeated with the next adjacent elliptical arc until all adjacent elliptical arcs have been processed.
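A rough sketch of steps g and h is given below, assuming OpenCV's cv2.fitEllipse for the ellipse fit; the error measure (mean distance to a densely sampled fitted ellipse) and all helper names are illustrative assumptions, not the patent's prescribed formulas.

```python
# Try to replace two adjacent polygon sides by one fitted elliptical arc (step g)
# and decide whether two adjacent arcs can be merged (step h).
import numpy as np
import cv2

def try_fit_elliptical_arc(edge_points, poly_error):
    """edge_points: (N, 2) edge pixels lying under two adjacent polygon sides."""
    pts = np.asarray(edge_points, dtype=np.float32)
    if len(pts) < 5:                             # fitEllipse needs at least 5 points
        return None
    (cx, cy), (w, h), angle = cv2.fitEllipse(pts.reshape(-1, 1, 2))
    # approximate fitting error: mean distance from the points to a sampled ellipse
    t = np.linspace(0.0, 2.0 * np.pi, 360)
    ca, sa = np.cos(np.radians(angle)), np.sin(np.radians(angle))
    ex = cx + (w / 2) * np.cos(t) * ca - (h / 2) * np.sin(t) * sa
    ey = cy + (w / 2) * np.cos(t) * sa + (h / 2) * np.sin(t) * ca
    ellipse = np.stack([ex, ey], axis=1)
    dists = np.linalg.norm(pts[:, None, :] - ellipse[None, :, :], axis=2).min(axis=1)
    err = float(dists.mean())
    return ((cx, cy), (w, h), angle) if err < poly_error else None

def can_merge(arc1, arc2, dcmax):
    (c1, _, _), (c2, _, _) = arc1, arc2
    return np.hypot(c1[0] - c2[0], c1[1] - c2[1]) < dcmax   # centre-distance test
```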
i) Fitting and merging straight lines. Contour segments that cannot be fitted with elliptical arcs are fitted directly with straight lines (the specific line-fitting method is outside the scope of this invention), and adjacent straight line segments are judged and merged.
j) Solving the coordinates of the intersection points of adjacent line segments. The intersection coordinates of adjacent line segments are calculated from the fitted straight-line equations and elliptical-arc equations; these intersection points correspond to the nodes of the target template.
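For illustration, assuming the fitted line is written as Y = k·X + b and the fitted elliptical arc as A·X² + B·X·Y + C·Y² + D·X + E·Y + 1 = 0 (equation forms assumed, consistent with the parameters used in the claims), the intersection reduces to a quadratic in X:

```python
# Intersect a fitted straight line with a fitted ellipse (step j).
import numpy as np

def line_ellipse_intersections(k, b, A, B, C, D, E):
    # substitute y = k*x + b into A*x^2 + B*x*y + C*y^2 + D*x + E*y + 1 = 0
    a2 = A + B * k + C * k * k
    a1 = B * b + 2.0 * C * k * b + D + E * k
    a0 = C * b * b + E * b + 1.0
    roots = np.roots([a2, a1, a0])
    # keep only the real solutions; these are the candidate node coordinates
    return [(float(x.real), float(k * x.real + b))
            for x in roots if abs(x.imag) < 1e-9]
```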
4. The search starts from a "non-ambiguous" closed contour. If all the closed contours are "ambiguous", matching starts from the closed contour with the largest number of corner points and proceeds corner point by corner point; there are X possible correspondences in total, the similarity is calculated for each of them, and the most similar group is selected.
a) A search contour is specified. If a plurality of closed contours exist in the target contour, the 'unambiguous' contour is preferably used as the template contour, and if the 'unambiguous' contour does not exist, the contour with the largest number of line segments is used as the template contour.
b) Retrieving all closed contours generated in step 3 and rejecting those whose numbers of constituent line segments differ from those of the template contour. For each remaining contour, arbitrarily select a node (intersection point of adjacent line segments) and write down the connection sequence of the contour in the clockwise direction (consistent with the direction used in step 2e); if this sequence can be found among the (NLi + NAi) connection sequences of the template contour, the contour is taken as a contour to be determined, otherwise it is excluded directly. A contour to be determined only has the same line-segment composition as the template contour at this point and does not necessarily conform to the real shape of the target, so a similarity calculation is still needed.
c) Determining the node-to-node (physical-to-pixel) correspondence from the sequential relation between the contour to be determined and the template contour; the template contour provides the physical coordinates of the nodes, and the nodes of the contour to be determined provide the pixel coordinates. If the template contour is an "ambiguous contour", two or more correspondences are satisfied and each of them should be treated equally in the following calculation. First, the following attitude calculation equation is established:
s·[u v 1]^T = M·[R t]·[Xw Yw 0 1]^T    (formula 1)
s·[u v 1]^T = M·[r1 r2 t]·[Xw Yw 1]^T    (formula 2)
wherein R = [r1 r2 r3] is the rotation matrix, t is the translation vector, s is a scale factor, and M = [fx 0 cx; 0 fy cy; 0 0 1] is the internal reference matrix of the camera, obtained by camera calibration. Three non-collinear nodes are selected in the template contour, and their physical coordinates [Xw Yw 0]i^T together with the pixel coordinates [u0 v0]i^T detected in the contour to be determined are substituted into formula 2 to calculate the projection transformation matrix [r1 r2 t].
d) Substituting the projection transformation matrix obtained in step c into formula 1, and substituting the physical coordinates [Xw Yw 0]i^T of all nodes of the template contour into formula 1 respectively, the projected pixel coordinates [u v]i^T of each node are calculated.
e) Calculating the projection similarity F. The similarity F is calculated from the pixel coordinates [u v]i^T of the nodes obtained in step d and the pixel coordinates [u0 v0]i^T of the corresponding nodes detected in the contour to be determined, using formula 3:
F = (1/n)·Σi √((ui − u0i)² + (vi − v0i)²)    (formula 3)
wherein n represents the total number of nodes in the template contour.
f) For a "non-ambiguous contour", if the similarity F is within the set threshold, the contour to be determined is considered the correct target, and the projection transformation matrix [r1 r2 t] calculated in step c is returned; this matrix describes the spatial translational position of the target relative to the camera and its rotation angles about the axes.
g) For an "ambiguous contour", if the similarity under several of the node correspondences is within the set threshold, several attitude matrices are obtained and a unique determination cannot be made; the contour to be determined is then set as a "polymorphic contour". If the target has only one "ambiguous contour", the several qualifying attitude matrices are returned directly.
h) If other "ambiguous contours" exist in the target, set them as new template contours and repeat steps b-e in the vicinity of the "polymorphic contour"; new "polymorphic contours" are thus obtained.
i) If a unique common attitude matrix exists among the multiple polymorphic profiles, the attitude matrix is the correct attitude of the ambiguous profile; if there are still multiple identical pose matrices, which indicate that the distribution of multiple "ambiguous contours" of the target is completely symmetric, then the multiple pose matrices are returned directly and the target that was successfully identified is marked in the image.

Claims (9)

1. A target two-dimensional contour model-based identification and spatial positioning method is characterized by comprising the following steps:
step 1, preparing a two-dimensional model of the target: drawing the real two-dimensional model of the target in drawing software with millimeters as the length unit, and exporting the drawing as a file containing image contour information, wherein the contour lines of the two-dimensional model are closed and the contour consists of straight line segments, arc segments or circles;
step 2, reading the file containing the image contour information, parsing the geometric primitives, searching for and describing the closed contours, and judging and distinguishing ambiguous contours from non-ambiguous contours, wherein an ambiguous contour is a contour whose sequential descriptions contain identical sequences, and otherwise the contour is a non-ambiguous contour; describing the connection sequence relation of straight lines and arcs in the closed contours, and establishing the topological relation of the target two-dimensional contour geometric primitives;
step 3, collecting a target image, detecting the edges of the image with a Canny edge detection operator, traversing each contour and excluding non-closed contours, and performing single-pass and multi-pass edge point inspection on the closed contours; using the checked single-pass and multi-pass edge points to separate intersecting contours; approximating the closed contours with polygons by recursively subdividing each closed contour with the Ramer algorithm until the maximum distance from all the obtained line segments to the corresponding closed contour is smaller than a threshold Dmax; fitting and merging the straight line segments and/or arc segments, and calculating the coordinates of the intersection points of adjacent line segments from the fitted straight-line equations and/or elliptical-arc equations, wherein the intersection points correspond to the nodes of the target two-dimensional contour of step 2;
step 4, specifying a search contour, retrieving all the closed contours generated in step 3, confirming the contours to be determined that are composed of the same line segments as the target template contour, performing attitude calculation on the contours to be determined and completing the confirmation, marking the successfully identified targets in the image, and outputting the target positions and projection matrices; specifically, the method for performing attitude calculation on a contour to be determined and completing the confirmation comprises:
determining the correspondence between the physical coordinates and the pixel coordinates of the nodes according to the sequential relation between the contour to be determined and the template contour, with the template contour giving the physical coordinates of the nodes; first, an attitude calculation equation is established
s·[u v 1]^T = M·[R t]·[Xw Yw 0 1]^T
s·[u v 1]^T = M·[r1 r2 t]·[Xw Yw 1]^T
wherein R = [r1 r2 r3] is the rotation matrix, t is the translation vector, s is a scale factor, and M = [fx 0 cx; 0 fy cy; 0 0 1] is the internal reference matrix of the camera, which can be obtained by calibrating the camera;
selecting three non-collinear nodes in the template contour, and substituting their physical coordinates [Xw Yw 0]i^T and the pixel coordinates [u0 v0]i^T detected in the contour to be determined into the attitude calculation equation, to obtain the projection transformation matrix [r1 r2 t];
substituting the projection transformation matrix into the attitude calculation equation, and substituting the physical coordinates [Xw Yw 0]i^T of all nodes of the template contour respectively, the projection pixel coordinates [u v]i^T of each node are calculated;
calculating the projection similarity F from the pixel coordinates [u v]i^T of the nodes and the pixel coordinates [u0 v0]i^T of the corresponding nodes detected in the contour to be determined, according to
F = (1/n)·Σi √((ui − u0i)² + (vi − v0i)²)
wherein n represents the total number of nodes in the template contour;
for the non-ambiguous profile, if the similarity exceeds a set threshold, identifyingFor the undetermined contour to be the correct target and back to the projective transformation moment
Figure 25784DEST_PATH_IMAGE006
An array describing the spatial translational position of the target relative to the camera and its rotation angle about various axes;
for an ambiguous contour, if the similarity under several of the node correspondences is within the set threshold, several attitude matrices are obtained and the contour to be determined is set as a polymorphic contour; if the target has only this one ambiguous contour, the several qualifying attitude matrices are returned directly;
if the polymorphic contours share a unique common attitude matrix, that matrix is the correct attitude of the ambiguous contour; if several identical attitude matrices still exist, the ambiguous contours of the target are distributed completely symmetrically, in which case the several attitude matrices are returned directly and the successfully identified target is marked in the image.
2. A method of identifying and spatially localizing a two-dimensional contour model of an object as defined in claim 1, characterized in that: in step 2, an ambiguous contour is a contour whose sequential descriptions contain identical sequences; the specific method for establishing the topological relation of the target two-dimensional contour geometric primitives comprises
analyzing primitive information from the entity segments of the file containing the image contour information, wherein the primitive information comprises the start and end coordinates of Nl straight lines, the center, radius and start/end coordinates of Na circular arcs, and the center coordinates and radius of Nc circles, and storing the information of all straight lines, arcs and circles;
searching for a closed contour: starting from the start point of any straight line or arc, searching for the next connected segment whose start point coincides with the end point of the current segment, and repeating this process until the search returns to the initial start point, at which point one closed contour has been found;
removing the line segments contained in the found closed contour from the original data, repeating the closed-contour search step until all line segments have been used, and counting N closed contours; adding the Nc circles in the target contour, which are inherently self-closed contours, the target two-dimensional model has (N + Nc) closed contours in total; counting the number Nli of straight lines, the number Nai of arcs and the number Nci of circles in each closed contour, wherein the subscript i represents the number of the closed contour;
describing the connection sequence relation of the straight lines and arcs of each non-circular closed contour, the rule being that, starting from any line segment, all line segments of the contour are traversed in a fixed clockwise or counterclockwise direction until returning to the starting point, a straight line being marked as 0 and an arc as 1; for each non-circular closed contour there are (Nli + Nai) such order relations in total, i.e. one connection sequence is obtained by taking each line segment as the starting point, thereby completing the establishment of the topological relation.
3. A method of identifying and spatially localizing a two-dimensional contour model of an object as defined in claim 2, characterized in that: step 2 further comprises a step of checking whether a closed contour is an ambiguous contour or a non-ambiguous contour; an ambiguous contour is a contour whose sequential descriptions contain identical sequences.
4. A method of identifying and spatially localizing a two-dimensional contour model of an object as defined in claim 1, characterized in that: in step 3, the method for excluding non-closed contours comprises: traversing all edge points; if an edge point has only one other edge point in its 3×3 neighborhood, it is determined to be an unclosed point and is deleted; the edge point adjacent to the deleted unclosed point is then rechecked for whether only one other edge point remains in its 3×3 neighborhood, and so on, so that an unclosed contour is deleted; repeating this process until all edge points have been processed, the remaining contours are all closed contours.
5. A method of identifying and spatially localizing a two-dimensional contour model of an object as defined in claim 1, characterized in that: in step 3, the method for checking single-pass and multi-pass edge points of the closed contours is to find, on the outermost ring of the 5×5 neighborhood of an edge point, the groups of mutually non-adjacent edge points and to check, for each group, whether the central edge point can be reached through 3×3 neighborhoods; if it can, the group is a channel of the central edge point; if not, the group is unrelated to the central edge point and is excluded; finally, if the number of channels of the central edge point is greater than 2 it is a multi-pass edge point, otherwise it is a single-pass edge point.
6. A method of identifying and spatially localizing a two-dimensional contour model of an object as defined in claim 1, characterized in that: in step 3, the method for separating intersecting contours is: starting from any edge point, if the edge point is a single-pass edge point, selecting its one channel and searching for the next edge point, and if the edge point is a multi-pass edge point, searching for the next edge point along each of its channels; if a search route reaches an already-searched edge point other than the starting point, that route is deleted; once routes return to the starting point, all routes that successfully returned to the starting point are compared and the shortest one is taken as a minimal closed contour; repeating the above process completes the separation of all intersecting contours.
7. A method of identifying and spatially localizing a two-dimensional contour model of an object as defined in claim 1, characterized in that: in the step 3, the specific method for fitting and merging the straight line segments and/or the arc segments is as follows:
fitting elliptical arc segments: checking two adjacent sides of the polygon approximating a closed contour, fitting an ellipse to the edge points of the two sides and calculating the ellipse fitting error; if the error is smaller than the approximation error of the two sides with respect to the corresponding contour, replacing the two straight sides with an elliptical arc, and repeating the process along further adjacent sides until the elliptical arc can no longer give a better fit;
merging elliptical arc segments: if consecutive contour segments fitted with elliptical arcs appear after the recursive Ramer subdivision of a closed contour, calculating the distance between the centers of two adjacent elliptical arcs; if the center distance is smaller than a threshold Dcmax, the two arcs can be merged into one elliptical arc, and the same operation is repeated with the next adjacent elliptical arc until all adjacent elliptical arcs have been processed;
straight-line fitting and merging: the contour segments, obtained by the recursive Ramer subdivision of each closed contour, that cannot be fitted with elliptical arcs are fitted directly with straight lines, and adjacent straight line segments are judged and merged;
calculating the coordinates of the intersection points of adjacent line segments according to the straight-line equation Y = k·X + b and the elliptical-arc equation A·X² + B·X·Y + C·Y² + D·X + E·Y + 1 = 0, and solving the coordinates (X, Y) of the intersection point of adjacent line segments, wherein the intersection point corresponds to a node of the target template; here k and b are coefficients obtained by fitting and represent a unique straight line, and (A, B, C, D, E) are parameters obtained by fitting the ellipse, each set of parameters representing a unique ellipse.
8. A method of identifying and spatially localizing a two-dimensional contour model of an object as defined in claim 1, characterized in that: in step 4, the method for specifying the search contour is: if a plurality of closed contours exist in the target contour, a non-ambiguous contour is preferably used as the template contour; if all the closed contours are ambiguous contours, the contour with the largest number of line segments is used as the template contour, matching is carried out along the corner points of that contour, the similarity is calculated for all correspondences, and the most similar group is selected.
9. A method of identifying and spatially localizing a two-dimensional contour model of an object as defined in claim 1, characterized in that: in step 4, the specific method for confirming the contours to be determined that are composed of the same line segments as the target template contour is to retrieve all closed contours generated in step 3 and to reject the contours whose numbers of constituent line segments differ from those of the template contour; for each remaining contour, a node in the contour is arbitrarily selected and the connection sequence of the contour is arranged in the clockwise direction; if this sequence can be found among the (Nli + Nai) connection sequences of the template contour, the searched contour is taken as a contour to be determined, otherwise it is directly excluded.
CN201710734207.5A 2017-08-24 2017-08-24 Identification and space positioning method based on target two-dimensional contour model Active CN107622499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710734207.5A CN107622499B (en) 2017-08-24 2017-08-24 Identification and space positioning method based on target two-dimensional contour model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710734207.5A CN107622499B (en) 2017-08-24 2017-08-24 Identification and space positioning method based on target two-dimensional contour model

Publications (2)

Publication Number Publication Date
CN107622499A CN107622499A (en) 2018-01-23
CN107622499B true CN107622499B (en) 2020-11-13

Family

ID=61088264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710734207.5A Active CN107622499B (en) 2017-08-24 2017-08-24 Identification and space positioning method based on target two-dimensional contour model

Country Status (1)

Country Link
CN (1) CN107622499B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087323A (en) * 2018-07-25 2018-12-25 武汉大学 A kind of image three-dimensional vehicle Attitude estimation method based on fine CAD model
CN109272569B (en) * 2018-08-03 2023-07-11 广东工业大学 Method for quickly extracting and generating floor contour lines of autocad building two-dimensional map
CN110020657A (en) * 2019-01-15 2019-07-16 浙江工业大学 A kind of bitmap silhouettes coordinate extraction method of cutting
CN110069915B (en) * 2019-03-12 2021-04-13 杭州电子科技大学 Sudoku graphic verification code identification method based on contour extraction
CN110058263B (en) * 2019-04-16 2021-08-13 广州大学 Object positioning method in vehicle driving process
CN111951290B (en) * 2019-05-16 2023-11-03 杭州睿琪软件有限公司 Edge detection method and device for object in image
CN110223339B (en) * 2019-05-27 2021-07-16 盐城工学院 Thermal protector calibration point center positioning method based on machine vision
CN110299042B (en) * 2019-06-04 2021-09-07 中广核工程有限公司 Immersive nuclear power plant main equipment process simulation deduction method and system
CN110640303B (en) * 2019-09-26 2022-06-07 南京魔迪多维数码科技有限公司 High-precision vision positioning system and positioning calibration method thereof
CN111091597B (en) * 2019-11-18 2020-11-13 贝壳找房(北京)科技有限公司 Method, apparatus and storage medium for determining image pose transformation
CN111024021B (en) * 2019-12-09 2021-09-28 江南造船(集团)有限责任公司 Ship plate part polishing edge judgment method
CN111709426B (en) * 2020-05-08 2023-06-02 广州博进信息技术有限公司 Diatom recognition method based on contour and texture
CN112033408B (en) * 2020-08-27 2022-09-30 河海大学 Paper-pasted object space positioning system and positioning method
CN111950315B (en) * 2020-10-19 2023-11-07 江苏理工学院 Method, device and storage medium for segmenting and identifying multiple bar code images
CN113945159B (en) * 2021-10-26 2023-08-25 中国铁建电气化局集团有限公司 Bolt diameter measurement method based on contour matching
CN114818591B (en) * 2022-07-01 2022-09-09 南京维拓科技股份有限公司 Method for quickly generating clearance of tool device
CN115183673A (en) * 2022-07-07 2022-10-14 湖南联智科技股份有限公司 Box girder end structure size detection method
CN115018845B (en) * 2022-08-09 2022-10-25 聊城市泓润能源科技有限公司 Method for detecting quality of lubricating oil abrasive particles

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8180486B2 (en) * 2006-10-02 2012-05-15 Honda Motor Co., Ltd. Mobile robot and controller for same
CN103425988A (en) * 2013-07-03 2013-12-04 江南大学 Real-time positioning and matching method with arc geometric primitives

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of machine vision technology in workpiece sorting; Liu Zhenyu, Zhao Bin, Zou Fengshan; Computer Applications and Software; 2012-11-15; full text *

Also Published As

Publication number Publication date
CN107622499A (en) 2018-01-23

Similar Documents

Publication Publication Date Title
CN107622499B (en) Identification and space positioning method based on target two-dimensional contour model
CN109300162B (en) Multi-line laser radar and camera combined calibration method based on refined radar scanning edge points
US9436987B2 (en) Geodesic distance based primitive segmentation and fitting for 3D modeling of non-rigid objects from 2D images
US7822264B2 (en) Computer-vision system for classification and spatial localization of bounded 3D-objects
US7894661B2 (en) Calibration apparatus, calibration method, program for calibration, and calibration jig
US8121415B2 (en) Combining feature boundaries
CN110136182A (en) Method for registering, device, equipment and the medium of laser point cloud and 2D image
CN107063261B (en) Multi-feature information landmark detection method for precise landing of unmanned aerial vehicle
CN109658454B (en) Pose information determination method, related device and storage medium
CN107818598B (en) Three-dimensional point cloud map fusion method based on visual correction
Hu et al. Matching point features with ordered geometric, rigidity, and disparity constraints
CN113610917A (en) Circular array target center image point positioning method based on blanking points
CN111123242B (en) Combined calibration method based on laser radar and camera and computer readable storage medium
CN110415304B (en) Vision calibration method and system
CN111368573A (en) Positioning method based on geometric feature constraint
CN115131268A (en) Automatic welding system based on image feature extraction and three-dimensional model matching
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
Holz et al. Fast edge-based detection and localization of transport boxes and pallets in rgb-d images for mobile robot bin picking
CN115131363A (en) Positioning method and device based on semantic information and terminal equipment
JP2001143073A (en) Method for deciding position and attitude of object
CN116494253B (en) Target object grabbing pose acquisition method and robot grabbing system
CN114543819A (en) Vehicle positioning method and device, electronic equipment and storage medium
CN110490887B (en) 3D vision-based method for quickly identifying and positioning edges of rectangular packages
EP2631813A1 (en) Method and device for eliminating cracks within page
Ping et al. Verification of turning insert specifications through three-dimensional vision system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant