CN111832659B - Laser marking system and method based on feature point extraction algorithm detection - Google Patents

Laser marking system and method based on feature point extraction algorithm detection

Info

Publication number
CN111832659B
CN111832659B (application CN202010704587.XA)
Authority
CN
China
Prior art keywords
point
contour
image
corner
calculating
Prior art date
Legal status
Active
Application number
CN202010704587.XA
Other languages
Chinese (zh)
Other versions
CN111832659A (en)
Inventor
张弛 (Zhang Chi)
陈思远 (Chen Siyuan)
宛张灵 (Wan Zhangling)
吴晓光 (Wu Xiaoguang)
朱里 (Zhu Li)
Current Assignee
Wuhan Textile University
Original Assignee
Wuhan Textile University
Priority date
Filing date
Publication date
Application filed by Wuhan Textile University filed Critical Wuhan Textile University
Priority to CN202010704587.XA priority Critical patent/CN111832659B/en
Publication of CN111832659A publication Critical patent/CN111832659A/en
Application granted granted Critical
Publication of CN111832659B publication Critical patent/CN111832659B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image recognition and discloses a laser marking system and method based on feature point extraction algorithm detection. The dimensionality of the feature parameters used by the invention is low, and the invention ensures both high recognition accuracy and efficiency. The invention effectively extracts and expresses the shape features of the workpiece, has excellent properties such as translational and rotational invariance, and can effectively suppress noise interference.

Description

Laser marking system and method based on feature point extraction algorithm detection
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a laser marking system and method based on feature point extraction algorithm detection.
Background
At present, the detection and extraction of image features is an important research topic in the field of computer vision, and has wide application in the field of artificial intelligence, including the fields of machine vision, OCR technology, monitoring technology and the like.
The difficulty arises as pictures become the primary information carrier on the internet. When information is recorded as text, users can easily find the required content through keyword search and edit it; when information is recorded as pictures, the content inside a picture cannot be searched, which reduces the efficiency of finding key content. Pictures give us a quick way to record and share information, but they reduce our information-retrieval efficiency. In this environment, computer image recognition technology becomes particularly important.
Image recognition is a technique in which a computer processes, analyzes, and understands images to identify targets and objects in various patterns. Existing image recognition methods fall mainly into four types:
The statistical-analysis method performs extensive statistical analysis on the images under study, finds regularities, and extracts features reflecting the essential characteristics of the images in order to identify them. However, if the images are complex and the number of categories is large, feature extraction is difficult and classification is hard to realize.
The syntactic-recognition method decomposes a complex image into single-layer or multi-layer, relatively simple sub-images by a hierarchical description, mainly to highlight the spatial structural relations of the recognized object. However, under large disturbances or noise, misjudgments easily occur, and it is difficult to satisfy the accuracy and reliability requirements of classification recognition.
The neural-network method identifies images with neural network algorithms. In practical applications, however, it converges slowly, requires a large amount of training over a long time, is prone to local minima, offers insufficient recognition and classification precision, and is difficult to apply where new patterns frequently appear.
The template-matching method extracts the features of the workpiece to be identified, matches them against the features of preset templates, and takes the template class with the highest matching degree as the class of the workpiece. Typical algorithms include SC, FEMD, etc., but their recognition accuracy is insufficient.
Through the above analysis, the problems and defects existing in the prior art are as follows:
(1) High recognition accuracy and efficiency cannot be ensured at the same time.
(2) The amount of data to be stored is large.
(3) Noise interference strongly affects the results.
(4) It is difficult to adapt to new patterns.
(5) Recognition accuracy is insufficient.
(6) The dimensionality of the feature parameters is high.
The difficulty of solving the problems and the defects is as follows:
the feature point extraction algorithm must simultaneously guarantee recognition accuracy and recognition efficiency, use low-dimensional feature parameters, offer strong anti-interference capability, and adapt well to new patterns.
The meaning of solving the problems and the defects is as follows:
Solving the above problems and defects ensures that workpieces to be identified are recognized with higher accuracy and efficiency, improves the separability of workpiece identification, gives the identification excellent properties such as translational and rotational invariance, effectively suppresses noise, uncovers more meaningful latent variables that help produce a deeper understanding of the data, and increases working efficiency.
The invention reduces data storage and input data bandwidth; reduces redundancy; improves separability in low dimensions; and can uncover more meaningful latent variables, helping to give a deeper understanding of the data.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides a laser marking system and a laser marking method based on feature point extraction algorithm detection.
The invention is realized in such a way that the detection method of the laser marking system based on the feature point extraction algorithm comprises the following steps:
S1, acquiring an image to be identified, and performing binarization processing on the image.
S2, acquiring all contours from the binarized image, and acquiring coordinates of all contour points.
And S3, screening the identified image contours by the areas of the acquired contours to obtain an optimal contour group, and drawing the finally obtained contours.
S4, calculating the moment of each contour.
S5, calculating the mass center of each contour through the moment of the known contour, and storing the mass center coordinates.
S6, from the contour points of the known contour, calculating the point farthest from the centroid as the first corner point, the point farthest from the first corner point as the second corner point, the contour point giving the longest perimeter for the triangle formed with the first and second corner points as the third corner point, and the point farthest from the third corner point as the fourth corner point. The point closest to the centroid is the fifth corner point.
And S7, obtaining feature points 1, 2 and 3 through calculation.
S8, after the three feature points 1, 2 and 3 are obtained, calculating the pairwise distances between them and sorting from long to short to obtain the slope of the longest side.
And S9, judging whether the image shows the front or the back according to the slope of the longest edge, calculating the rotation angle of the longest edge from its initial position, and drawing it in the image.
And S10, calculating the distance from the mass center to the longest edge to obtain the marking position at the moment.
Preferably, the specific steps of acquiring the image to be identified and binarizing the image are as follows:
An RGB image of the tool is acquired with an MVS camera, the acquired RGB image is converted into the required Mat matrix by OpenCV functions, the Mat is converted to grayscale, and the pixel values are thresholded to obtain the binarized Mat matrix.
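As a rough illustration of the thresholding logic (a plain-Python sketch — in practice OpenCV's threshold function operates on the Mat directly; the threshold value 128 is an assumed example, not taken from the patent):

```python
# Fixed-threshold binarization sketch. In the real pipeline this is done by
# OpenCV on a Mat; here a grayscale image is a nested list of pixel values.
def binarize(gray, T=128):
    """Map each grayscale pixel to 0 or 255 by comparing against threshold T."""
    return [[255 if px >= T else 0 for px in row] for row in gray]
```

For example, `binarize([[10, 200], [130, 90]])` yields `[[0, 255], [255, 0]]`.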
Preferably, the specific steps of acquiring the contour from the binarized image and performing contour screening are as follows:
The image is morphologically processed with OpenCV's morphologyEx function: dilation and erosion remove small black holes in the image. The binarized image is then processed by the findContours function, which returns each image contour as a vector of contour points. After the contours are found, all of them are drawn and the area enclosed by each is calculated; contours are screened by setting minimum and maximum area thresholds, and the screened contours are further filtered by rectangularity and circularity to remove small regions, characters inside regions, and the like. Finally the resulting contours are drawn.
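The area-based screening can be sketched without OpenCV by computing each contour's enclosed area with the shoelace formula; in the real pipeline cv2.contourArea plays this role, and the min/max bounds come from configuration (the numbers below are illustrative):

```python
def polygon_area(contour):
    """Shoelace formula: area enclosed by a closed contour of (x, y) points."""
    n = len(contour)
    s = 0.0
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]  # wrap around to close the contour
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def screen_contours(contours, min_area, max_area):
    """Keep only contours whose enclosed area lies within [min_area, max_area]."""
    return [c for c in contours if min_area <= polygon_area(c) <= max_area]
```

A 4×4 square survives screening with bounds (1, 100), while a half-pixel triangle is filtered out.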
Preferably, the specific step of computing the moment of each contour is:
The central moments of the image are calculated with the moments() function provided by OpenCV. The central moment is calculated as:

μ_pq = Σ_x Σ_y (x − x̄)^p · (y − ȳ)^q · I(x, y)

where I(x, y) is the pixel value at (x, y) and (x̄, ȳ) is the centroid.
preferably, the specific step of calculating the centroid of each contour from the moments of the contours is:
Since the image is two-dimensional, its centroid is found independently in the x- and y-directions: for the x-coordinate of the centroid, the pixel sums on the left and right sides of the centroid are equal; for the y-coordinate, the pixel sums above and below the centroid are equal. The centroid coordinates are calculated from the raw moments as:

x̄ = m10 / m00,  ȳ = m01 / m00
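A minimal sketch of the raw-moment and centroid computation (the quantities cv2.moments returns as m00, m10, m01), assuming a binary image stored as nested lists:

```python
def raw_moment(img, p, q):
    """m_pq = sum over pixels of x^p * y^q * I(x, y) for a 0/1 image."""
    return sum(
        (x ** p) * (y ** q) * v
        for y, row in enumerate(img)
        for x, v in enumerate(row)
    )

def centroid(img):
    """Centroid (x_bar, y_bar) = (m10/m00, m01/m00); needs a non-empty shape."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
```

For a solid 2×2 block of ones, m00 is 4 and the centroid is (0.5, 0.5).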
Preferably, the specific steps of calculating the five feature corner points through analysis of the known contour points are as follows:
and (3) sequentially calculating the distance between each contour point and the centroid, and finding the point farthest from the centroid as a first angular point.
And (3) sequentially calculating the distance from each contour point to the second corner point, and finding the point farthest from the second corner point as the second corner point.
And forming a triangle by sequentially calculating each contour point, the first corner point and the second corner point, and forming the contour point with the longest triangle circumference as a third corner point.
And (3) sequentially calculating the distance from each contour point to the fourth corner point, and finding the point farthest from the fourth corner point as the fourth corner point.
And (3) sequentially calculating the distance between each contour point and the centroid, and finding the closest point to the centroid as a fifth angle point.
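The five selection rules above can be sketched directly in pure Python (note the triangle-perimeter criterion omits the constant edge between corners 1 and 2, which does not change which point is selected):

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def five_corners(contour, centroid):
    """Corner points as described: 1) farthest from the centroid; 2) farthest
    from corner 1; 3) maximizing the perimeter of the triangle with corners 1
    and 2; 4) farthest from corner 3; 5) closest to the centroid."""
    c1 = max(contour, key=lambda p: dist(p, centroid))
    c2 = max(contour, key=lambda p: dist(p, c1))
    c3 = max(contour, key=lambda p: dist(p, c1) + dist(p, c2))
    c4 = max(contour, key=lambda p: dist(p, c3))
    c5 = min(contour, key=lambda p: dist(p, centroid))
    return c1, c2, c3, c4, c5
```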
Preferably, the specific steps of calculating the lengths between every two of the five characteristic corner points to judge the positions of the first, second, third and fourth corner points and sequencing are as follows:
Knowing that the 1st and 2nd corner points are a diagonal pair and the 3rd and 4th corner points are a diagonal pair, the distance h1 from the 1st corner point to the 2nd and the distance h2 from the 3rd corner point to the 4th are calculated. Comparing h1 with h2 under certain constraint conditions yields the required first and second feature points.
The required feature point is obtained by comparing the lengths of the segments between points 1 and 2 and between points 3 and 4.
Preferably, the specific step of obtaining the slope of the longest edge is:
The pairwise distances among the centroid and the first and second feature points are calculated and sorted from long to short to obtain the slope of the longest edge.
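A sketch of this step, returning the endpoints and slope of the longest of the three pairwise segments (the slope is reported as None for a vertical edge):

```python
import math

def longest_edge(p1, p2, p3):
    """Among the three pairwise segments, return (endpoint_a, endpoint_b, slope)
    of the longest one. Slope is None when the edge is vertical."""
    pairs = [(p1, p2), (p1, p3), (p2, p3)]
    a, b = max(pairs, key=lambda ab: math.dist(ab[0], ab[1]))
    dx, dy = b[0] - a[0], b[1] - a[1]
    slope = dy / dx if dx != 0 else None
    return a, b, slope
```

For a 3-4-5 right triangle the hypotenuse is selected and its slope is −0.75.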
Preferably, the specific steps for judging the front and back sides of the image are as follows:
The side of the straight line connecting the first and second feature points on which the third feature point lies is judged by a vector cross product: a positive result means the left side, and a negative result means the right side.
If P×Q > 0, P is in the clockwise direction of Q.
If P×Q < 0, P is in the counterclockwise direction of Q.
If P×Q = 0, P and Q are collinear, either co-directional or opposite.
Preferably, the specific step of calculating the distance from the centroid to the longest edge to obtain the marking position at the moment is as follows:
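The patent does not spell this computation out at this point; a standard perpendicular point-to-line distance (an assumed reading, not taken verbatim from the patent) would be:

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b:
    magnitude of the cross product divided by the length of segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)
```

Applied to the centroid and the two endpoints of the longest edge, this yields the offset used to place the marking position.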
preferably, the invention also provides a laser marking system based on feature point extraction algorithm detection, comprising:
and the extraction module is used for acquiring an image of the tool to be identified, extracting an unclosed contour from the image graph edge of the tool to be identified, and acquiring the coordinates of all contour points on the contour.
The calculation module is used for calculating the area parameter of each contour, screening the contours by area, and extracting the optimal contours. Within the optimal contours it calculates the moments and centroid of each contour; calculates the first, second, third, fourth and fifth corner points from the positional relation between the contour points and the centroid; calculates the first, second and third feature points from the positional relation between the corner points; judges the front or back of the tool by the vector cross product of the feature points and the centroid; and finally calculates the marking position from the distance between the centroid and the longest edge.
And the display module is used for marking the identified characteristic points, corner points and the rotating angle of the tool.
It is a further object of the present invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
acquiring an image to be identified, and performing binarization processing on the image;
acquiring all contours from the binarized image, and acquiring coordinates of all contour points;
screening the identified image contours through the areas of the acquired contours to obtain an optimal contour group, and drawing the finally obtained contours;
calculating the moment of each contour;
calculating the mass center of each contour through the moment of the known contour, and storing the mass center coordinates;
calculating the point farthest from the centroid as the first corner point, the point farthest from the first corner point as the second corner point, the contour point giving the longest perimeter for the triangle formed with the first and second corner points as the third corner point, and the point farthest from the third corner point as the fourth corner point; the point closest to the centroid being the fifth corner point;
obtaining feature points 1, 2 and 3 through calculation;
after the three feature points 1, 2 and 3 are obtained, calculating the pairwise distances between them and sorting from long to short to obtain the slope of the longest side;
judging whether the image shows the front or the back according to the slope of the longest edge, calculating the rotation angle of the longest edge from its initial position, and drawing it in the image;
and calculating the distance from the mass center to the longest side to obtain the marking position at the moment.
Another object of the present invention is to provide a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring an image to be identified, and performing binarization processing on the image;
acquiring all contours from the binarized image, and acquiring coordinates of all contour points;
screening the identified image contours through the areas of the acquired contours to obtain an optimal contour group, and drawing the finally obtained contours;
calculating the moment of each contour;
calculating the mass center of each contour through the moment of the known contour, and storing the mass center coordinates;
calculating the point farthest from the centroid as the first corner point, the point farthest from the first corner point as the second corner point, the contour point giving the longest perimeter for the triangle formed with the first and second corner points as the third corner point, and the point farthest from the third corner point as the fourth corner point; the point closest to the centroid being the fifth corner point;
obtaining feature points 1, 2 and 3 through calculation;
after the three feature points 1, 2 and 3 are obtained, calculating the pairwise distances between them and sorting from long to short to obtain the slope of the longest side;
judging whether the image shows the front or the back according to the slope of the longest edge, calculating the rotation angle of the longest edge from its initial position, and drawing it in the image;
and calculating the distance from the mass center to the longest side to obtain the marking position at the moment.
By combining all the technical schemes, the invention has the following advantages and positive effects: based on the contour shape of the workpiece to be identified, the method calculates the area parameter of each contour, screens out the optimal contours by area, calculates the moments and centroid of each contour, and finally extracts the salient feature points through the relation between the contour points and the centroid, thereby effectively extracting and representing the shape of the workpiece to be identified, determining the effective marking area, and improving the final marking precision. The dimensionality of the feature parameters used by the invention is low, and the invention ensures both high recognition accuracy and efficiency.
The invention effectively extracts and expresses the shape characteristics of the workpiece, has excellent performances of translational invariance, rotational invariance and the like, and can effectively inhibit noise interference.
The invention can effectively extract and express the characteristic points of a plurality of workpieces at the same time, thereby increasing the working efficiency.
The technical effects or experimental effects of the comparison include:
the invention reduces data storage and input data bandwidth.
The invention reduces redundancy.
The invention improves the classification of workpiece identification.
The invention can be applied to the recognition of workpieces of different shapes and adapts strongly to new patterns.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly explain the drawings needed in the embodiments of the present application, and it is obvious that the drawings described below are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a detection method of a laser marking system based on a feature point extraction algorithm according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of image binarization according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of image contour screening according to the present invention.
FIG. 4 is a schematic diagram of the centroid distribution in different patterns according to the present invention.
Fig. 5 is a schematic diagram showing initial contour point distribution in an embodiment of the present invention.
Fig. 6 is a schematic diagram of three feature point distribution of different patterns according to the present invention.
Fig. 7 is a flow chart of feature point extraction in an example one of the present invention.
Fig. 8 is a schematic diagram of a four-corner distribution in an embodiment of the present invention.
FIG. 9 is a flow chart of a positive and negative determination of a workpiece in accordance with an embodiment of the present invention.
Fig. 10 is a graph showing various profile profiles of a workpiece in accordance with an embodiment of the invention.
Fig. 11 is a schematic diagram of a four feature point distribution in an embodiment of the present invention.
FIG. 12 is a diagram of a binarized workpiece image according to an embodiment of the invention.
FIG. 13 is a schematic diagram of an interface of a laser marking system according to an embodiment of the invention.
Fig. 14 is a schematic diagram showing a module distribution in an embodiment of the present invention.
Fig. 15 is a schematic view of an image processing arrangement in an example one of the present invention.
Fig. 16 is a schematic diagram of parameter setting in an example one of the present invention.
FIG. 17 is a schematic view of a selection object identification image according to an embodiment of the present invention.
Fig. 18 is a schematic diagram of recognition results in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Aiming at the problems existing in the prior art, the invention provides a laser marking system and a laser marking method based on feature point extraction algorithm detection, and the invention is described in detail below with reference to the accompanying drawings.
The invention provides a laser marking system detection method based on a feature point extraction algorithm, which comprises the following steps:
s1, acquiring an image to be identified, and performing binarization processing on the image.
S2, acquiring all contours from the binarized image, and acquiring coordinates of all contour points.
And S3, screening the identified image contours by the areas of the acquired contours to obtain an optimal contour group, and drawing the finally obtained contours.
S4, calculating the moment of each contour.
S5, calculating the mass center of each contour through the moment of the known contour, and storing the mass center coordinates.
S6, from the contour points of the known contour, calculating the point farthest from the centroid as the first corner point, the point farthest from the first corner point as the second corner point, the contour point giving the longest perimeter for the triangle formed with the first and second corner points as the third corner point, and the point farthest from the third corner point as the fourth corner point. The point closest to the centroid is the fifth corner point.
And S7, obtaining feature points 1, 2 and 3 through calculation.
S8, after the three feature points 1, 2 and 3 are obtained, calculating the pairwise distances between them and sorting from long to short to obtain the slope of the longest side.
And S9, judging whether the image shows the front or the back according to the slope of the longest edge, calculating the rotation angle of the longest edge from its initial position, and drawing it in the image.
And S10, calculating the distance from the mass center to the longest edge to obtain the marking position at the moment.
The invention is further described below in connection with specific embodiments.
Examples:
Referring to fig. 1, the detection method based on the feature point extraction algorithm includes the following steps:
s1, acquiring an image to be identified, and performing binarization processing on the image.
The binarization algorithm used here is a fixed-threshold algorithm with bimodal image segmentation: the minimum threshold T is set in an external configuration file so that the minimum number of corresponding pixels is met, and the image is binarized with threshold T. Fig. 2 is a diagram illustrating image binarization according to the present invention.
S2, acquiring all contours from the binarized image, and acquiring coordinates of all contour points.
Here, obtaining the contours from the binarized image includes preprocessing and contour searching. The number of contour points is the number of all points on the contour; its specific value depends on the actual situation.
Pretreatment: morphological preprocessing is performed on the binarized image to provide a high quality input source image for contour finding.
Searching for contours: contour detection is performed on the preprocessed binarized image, and each contour is stored as a point vector. That is, a single contour consists of a number of points, and the input image in turn contains a number of such contours.
In the searched outline, the outline is represented by a series of outline points with coordinate information, and the set of outline points of the detected image is represented as S:
S = {p(i) | i ∈ [1, n]}
wherein n represents the length of the contour, namely the number of contour points; p (i) represents the ith contour point in the sequence of contour points and has
p(i)={u(i),v(i)}
where u(i) and v(i) are the abscissa and ordinate of p(i), respectively.
Fig. 3 is a schematic diagram of contour acquisition according to the present invention.
And S3, screening the identified image contours by the areas of the acquired contours to obtain an optimal contour group, and drawing the most acquired contours. Fig. 3 is a schematic diagram of image contour screening according to the present invention.
In the invention, the acquired contour needs to be processed, including contour screening and contour drawing.
Screening contours: overly large contours found are clearly unwanted, so contours can be screened by certain criteria. Observation shows that many contours may contain many sub-contours, and these child contours in turn have parent contours, so small or overly large contours can be filtered out using a set contour area range.
Drawing a contour: in order to facilitate the observation of the result of contour screening, the contour is drawn and displayed.
S4, calculating the moment of each contour.
S5, calculating the mass center of each contour through the moment of the known contour, and storing the mass center coordinates.
Here, the central moments of the contour are calculated, which gives translation invariance. The central moment can be expressed as:

μ_pq = Σ_x Σ_y (x − x̄)^p · (y − ȳ)^q · array(x, y)

where the raw moments and centroid are

m_pq = Σ_x Σ_y x^p · y^q · array(x, y),  x̄ = m10 / m00,  ȳ = m01 / m00
Assuming that array(x, y) takes only the values 0 and 1 (a binary image), m00 represents the total number of non-zero pixels, i.e. the area.
One advantage of this method of calculating the centroid of an object is that it is insensitive to noise. The calculated centroid does not shift too much when there is external noise interference.
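This insensitivity is easy to check numerically: adding a single stray noise pixel to a 10×10 solid block (an illustrative shape, not one of the patent's workpieces) moves the moment-based centroid only a few hundredths of a pixel:

```python
def centroid(img):
    """Centroid from raw moments (m10/m00, m01/m00) of a 0/1 image."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    return m10 / m00, m01 / m00

clean = [[1] * 10 for _ in range(10)]  # 10x10 solid block, centroid (4.5, 4.5)
noisy = [row[:] for row in clean]
noisy.append([1] + [0] * 9)            # one stray noise pixel appended below
```

The noisy centroid stays within about 0.06 px of (4.5, 4.5) despite the extra pixel.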
As shown in fig. 4, a centroid acquisition schematic in a different graph of the present invention. The centroid coordinates are also different in different figures.
S6, from the contour points of the known contour, the point farthest from the initial contour point is calculated as the first corner point, the point farthest from the first corner point as the second corner point, the contour point giving the longest perimeter for the triangle formed with the first and second corner points as the third corner point, and the point farthest from the third corner point as the fourth corner point. The point closest to the centroid is the fifth corner point.
The initial contour point coordinates are different for each graph. Fig. 5 is a schematic diagram of an initial contour point distribution according to an embodiment of the present invention.
And S7, obtaining 1, 2 and 3 characteristic points through calculation.
S8, after the three feature points 1, 2 and 3 are obtained, the pairwise distances between them are calculated and sorted from longest to shortest to obtain the slope of the longest edge.
The purpose of this operation is to obtain the coordinates of the two end points of the longest edge, as well as the slope of the longest edge. Fig. 6 shows the distribution of three characteristic points of different patterns according to the present invention.
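Step S8 can be sketched as follows, assuming the three feature points are already known; the function and variable names are illustrative, not from the patent:

```python
import math
from itertools import combinations

def longest_edge(p1, p2, p3):
    """Return the two endpoints and the slope of the longest side of the
    triangle formed by the three feature points."""
    pairs = list(combinations([p1, p2, p3], 2))
    a, b = max(pairs, key=lambda ab: math.dist(ab[0], ab[1]))
    dx, dy = b[0] - a[0], b[1] - a[1]
    slope = dy / dx if dx != 0 else float("inf")  # vertical edge
    return a, b, slope

# 3-4-5 right triangle: the longest side is the hypotenuse (4,0)-(0,3)
a, b, slope = longest_edge((0, 0), (4, 0), (0, 3))
print(a, b, slope)
```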
And S9, judging whether the position of the image is the front or the back according to the slope of the longest edge, calculating the rotation angle of the initial position of the longest edge, and drawing the rotation angle in the image.
FIG. 7 is a flow chart of the positive and negative determination of a workpiece in accordance with an embodiment of the present invention. Whether the image shows the front or the back is judged by the vector cross-product method. The cross-product result Q can be expressed as:
Q = (y2 - y1)*x3 + (x1 - x2)*y3 + (x2*y1 - x1*y2)
where (x1, y1) are the coordinates of the first feature point, (x2, y2) the coordinates of the second feature point, and (x3, y3) the coordinates of the third feature point.
If Q is positive, the third feature point is on the left side of the straight line connecting the first feature point and the second feature point, and the default workpiece identification image is the front side.
And if Q is negative, the third characteristic point is on the right side of a straight line connecting the first characteristic point and the second characteristic point, and the default workpiece identification image is the reverse side.
And if Q is zero, the first, second and third characteristic points are collinear. At this point error tool profile information is recorded.
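The sign test can be sketched directly from the formula above; the function name and sample points are illustrative:

```python
def cross_sign(p1, p2, p3):
    """Vector cross-product test from the text:
    Q = (y2-y1)*x3 + (x1-x2)*y3 + (x2*y1 - x1*y2).
    Per the patent's convention: Q > 0 -> front side, Q < 0 -> reverse
    side, Q == 0 -> the three feature points are collinear (error case)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (y2 - y1) * x3 + (x1 - x2) * y3 + (x2 * y1 - x1 * y2)

print(cross_sign((0, 0), (2, 0), (1, -1)))  # 2  -> front
print(cross_sign((0, 0), (2, 0), (1, 1)))   # -2 -> reverse
print(cross_sign((0, 0), (2, 0), (4, 0)))   # 0  -> collinear
```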
The rotation angle S of the longest side is obtained by calculating the rotation angle P of the first feature point with respect to the second feature point.
If Q > 0 and P > 180, then S = -360 + P.
If Q > 0 and P < 180, then S = P.
If Q < 0 and P > 180, then S = -360 + P.
If Q < 0 and P < 180, then S = P.
If Q = 0, the case is recorded in the external file.
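Note that for nonzero Q the four branches above collapse to a single rule: P > 180 maps to S = P - 360, otherwise S = P. A minimal Python sketch (the function name is illustrative):

```python
def rotation_angle(Q, P):
    """Rotation angle S of the longest edge, given the cross-product
    result Q and the raw angle P (degrees in [0, 360)) of feature
    point 1 relative to feature point 2, per the branch rules above."""
    if Q == 0:
        # collinear feature points: the patent records this case to an
        # external file; here we simply signal the condition
        raise ValueError("collinear feature points")
    return -360 + P if P > 180 else P

print(rotation_angle(1, 270))   # -90
print(rotation_angle(-1, 45))   # 45
```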
And S10, calculating the distance from the mass center to the longest edge to obtain the marking position at the moment. The distance from the centroid to the longest side is calculated according to the actual situation.
In the step S6, the distribution of the four corner points is shown in fig. 8.
The distance from each contour point to the centroid is calculated in turn, and the point farthest from the centroid is taken as the first corner point.
The distance from each contour point to the first corner point is calculated in turn, and the point farthest from the first corner point is taken as the second corner point.
Each contour point is combined in turn with the first and second corner points to form a triangle; the contour point yielding the triangle with the longest perimeter is taken as the third corner point.
The distance from each contour point to the third corner point is calculated in turn, and the point farthest from the third corner point is taken as the fourth corner point.
The first and second corner points are necessarily diagonal to each other, as are the third and fourth corner points.
The four corner points can appear in eight different distributions.
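The four-corner selection above can be sketched in plain Python; the toy contour (a square of side 2 sampled at its vertices and edge midpoints) and the function name are illustrative:

```python
import math

def four_corners(contour, centroid):
    """Corner selection described in the text:
    1st corner: contour point farthest from the centroid;
    2nd corner: point farthest from the 1st corner;
    3rd corner: point maximising the perimeter of the triangle
                formed with the 1st and 2nd corners;
    4th corner: point farthest from the 3rd corner."""
    c1 = max(contour, key=lambda p: math.dist(p, centroid))
    c2 = max(contour, key=lambda p: math.dist(p, c1))
    c3 = max(contour, key=lambda p: math.dist(p, c1) + math.dist(p, c2))
    c4 = max(contour, key=lambda p: math.dist(p, c3))
    return c1, c2, c3, c4

# square of side 2, sampled at its vertices and edge midpoints
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
print(four_corners(square, (1, 1)))
```

On this example the 1st and 2nd corners come out diagonal to each other, and so do the 3rd and 4th, consistent with the remark above.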
In the step S7, the principle of determining and sorting the positions of the four corner points is as follows:
As shown in fig. 8, corner points 1 and 2 are known to be diagonal, and corner points 3 and 4 are known to be diagonal. First calculate the distance h1 from the 1st corner point to the 2nd, and the distance h2 from the 3rd corner point to the 4th.
If h1 > h2: calculate the distance D1 from the 3rd corner point to the straight line connecting the 1st and 2nd corner points, and the distance D2 from the 4th corner point to the same line.
If D1 > D2: calculate the distance h3 between the 1st and 3rd corner points and the distance h4 between the 2nd and 3rd corner points; the 4th corner point is the third feature point. If h3 > h4, the first feature point is the 1st corner point and the second feature point is the 3rd corner point; otherwise the first feature point is the 2nd corner point and the second feature point is the 3rd corner point.
If D1 < D2: calculate the distance h5 between the 1st and 4th corner points and the distance h6 between the 2nd and 4th corner points; the 3rd corner point is the third feature point. If h5 > h6, the first feature point is the 1st corner point and the second feature point is the 4th corner point; otherwise the first feature point is the 2nd corner point and the second feature point is the 4th corner point.
If h1 < h2: calculate the distance D3 from the 1st corner point to the straight line connecting the 3rd and 4th corner points, and the distance D4 from the 2nd corner point to the same line.
If D3 > D4: calculate the distance h7 between the 1st and 3rd corner points and the distance h8 between the 1st and 4th corner points; the 2nd corner point is the third feature point. If h7 > h8, the first feature point is the 1st corner point and the second feature point is the 3rd corner point; otherwise the first feature point is the 1st corner point and the second feature point is the 4th corner point.
If D3 < D4: calculate the distance h9 between the 2nd and 3rd corner points and the distance h10 between the 2nd and 4th corner points; the 1st corner point is the third feature point. If h9 > h10, the first feature point is the 2nd corner point and the second feature point is the 3rd corner point; otherwise the first feature point is the 2nd corner point and the second feature point is the 4th corner point.
Fig. 7 is a flowchart showing feature point extraction according to an embodiment of the present invention. In step S7, the three feature points 1, 2 and 3 are acquired; the centroid is the 4th feature point, and the line connecting feature points 1 and 2 is the longest edge on the contour.
The pentagonal workpiece contour obtaining method comprises the following specific steps:
An MVS camera acquires an RGB image of the workpiece in BMP format; OpenCV converts the image into Mat format; the Mat image is binarized with a fixed-threshold method; an edge-detection algorithm extracts the workpiece contours from the binarized image; the contour areas are then calculated and contours outside the required area range are filtered out, finally yielding the unclosed contour of the workpiece.
The area parameter obtaining step specifically includes:
taking any contour point as a target contour point, taking the coordinates of the target contour point as a center, and taking a preset radius as a circle to obtain a preset circle;
the ratio of the area of the workpiece shape to be identified that is intercepted by the preset circle and directly connected to the target contour point, to the area of the preset circle, is taken as the normalized area, and the normalized area is multiplied by 2 to obtain the area parameter of the target contour point.
In this example, various distributions of workpiece profiles are shown in FIG. 10.
In this example, feature points are selected from the corner points, as shown in fig. 11, wherein three small circles of red, green and blue represent three feature points of 1, 2 and 3, respectively, and the small circle of yellow represents the centroid of the shape of the workpiece and also represents the 4 th feature point. Wherein the connecting line of the characteristic points 1 and 2 is the longest side.
In this example, the shape of the workpiece to be identified is obtained using an OpenCV computer vision library.
In the present invention, the shape of the workpiece to be identified may be acquired by any method preset by the operator that can effectively determine the workpiece shape. In the embodiment of the invention, the OpenCV computer vision library is used to acquire the shape of the workpiece to be identified.
In this example, a fixed threshold algorithm is used to extract an image of the shape of the workpiece to be identified, and an unclosed contour is obtained.
The workpiece shape contour extraction method can adopt any method preset by a worker and capable of effectively realizing workpiece shape contour extraction, and specifically can be a Canny operator, a Laplacian operator, a fixed threshold algorithm and the like.
In this example, a preset threshold is determined: the minimum threshold 181 is set in the external configuration file so that the preset threshold satisfies the minimum number of corresponding pixels, and the image is binarized using the threshold 181. The processed binarized image is shown in fig. 12.
In this example, determining the preset area includes:
calculating the area of the shape of the workpiece to be identified, slightly enlarging and reducing the area of the shape of the workpiece to be identified to obtain the minimum and maximum preset areas of the shape of the workpiece to be identified, which can be specifically expressed as MinObjArea and MaxObjArea.
Of course, the specific setting method of the preset area can be set by the staff according to the actual needs, and the specific setting method is within the protection scope of the invention.
In this example, the obtaining the optimal profile of the workpiece to be identified by using the preset area parameter and the rectangle degree parameter as the important basis of the screening profile includes:
The area parameter of each contour of the workpiece to be identified is compared with the preset area parameter to determine the contours within the preset area range; among these, the contours whose rectangle degree falls within the preset rectangularity range are determined to be the best matching contours.
If the contours of the workpiece to be identified are all within the preset range, they are all considered best matching contours; any individual contour not within the preset range is filtered out and recorded in the external configuration file.
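The screening rule can be sketched as a simple predicate; the dictionary layout and the rectangle-degree values are illustrative (rectangle degree is assumed here to mean the ratio of contour area to bounding-rectangle area, which is not spelled out in the patent):

```python
def is_best_match(area, rect_degree, min_area, max_area,
                  min_rect=0.0, max_rect=1.0):
    """A contour is kept only when its area lies in [MinObjArea,
    MaxObjArea] and its rectangle degree lies in the preset range."""
    return min_area <= area <= max_area and min_rect <= rect_degree <= max_rect

contours = [{"area": 50,   "rect": 0.9},   # too small -> filtered out
            {"area": 400,  "rect": 0.8},   # within range -> kept
            {"area": 5000, "rect": 0.7}]   # too large -> filtered out
kept = [c for c in contours if is_best_match(c["area"], c["rect"], 100, 1000)]
print(len(kept))  # 1
```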
As shown in fig. 13, this example further provides a laser marking system based on feature point extraction algorithm detection, including:
the setting module 1 is used for setting parameters such as camera parameters, exposure time, image threshold values, preset areas and the like.
And the extraction module 2 is used for acquiring an image of the tool to be identified, extracting an unclosed contour from the image graph edge of the tool to be identified, and acquiring the coordinates of all contour points on the contour.
The calculating module 3 is configured to calculate an area parameter of each contour point, screen the contour according to the area parameter, extract an optimal contour point, calculate a moment and a centroid of the contour at the time in the optimal contour, calculate first, second, third, fourth and fifth corner points according to a positional relationship between the contour point and the centroid, calculate first, second and third feature points according to a positional relationship between each corner point, determine a front and a back of the tool at the time according to a vector cross product manner between the feature points and the centroid, and calculate a marking position according to a distance from the centroid to a longest edge.
And the display module 4 is used for marking the identified characteristic points, corner points and the rotation angle of the tool.
As shown in fig. 14, the setting module 1 is connected with the extracting module 2, the extracting module 2 is connected with the calculating module 3, and the calculating module 3 is connected with the display module 4.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the setting module comprises:
image setting, namely setting image exposure time, image update time and image processing threshold.
The method comprises the steps of coordinate setting, calibrating an image coordinate center coordinate, setting a laser center coordinate, setting a mechanical and image coordinate included angle, setting image and mechanical coordinate rotation modification safety, setting mechanical coordinate pixels and the like.
OBJ settings, modifying minimum maximum preset area, setting effective identification area, setting identification area color, etc.
The method comprises the following steps of self-adapting OBJ setting, self-adapting identification style setting, character color setting, marking point color setting, offset correction value setting and the like.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the extraction module comprises:
the acquisition unit is used for acquiring the shape of the workpiece to be identified by using an OpenCV computer vision library;
And the extraction unit is used for extracting an unclosed contour for the workpiece shape by adopting a fixed threshold method and an OpenCV computer vision library.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the calculation module is used for comprising:
(1) Calculating the area parameter of each contour;
(2) Screening out the best matching contour according to the preset area parameter and the preset rectangle degree parameter;
(3) Calculating the moment of each contour;
(4) Calculating the mass center of each contour;
(5) Calculating characteristic angular points of each contour;
(6) Calculating and obtaining characteristic points of each contour;
(7) Judging the positive and negative of the image by a vector cross product method;
(8) And calculating the rotation angle of the image.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the calculation module may include:
a computing subunit for: taking any contour point as the target contour point and, with the coordinates of the target contour point as the center and a preset radius, drawing a preset circle; then subtracting from 0.5 the ratio of the area of the workpiece shape to be identified that is intercepted by the preset circle and directly connected to the target contour point to the area of the preset circle, and multiplying the result by 2, to obtain the area parameter of the target contour point.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the calculation module may include:
a calculating subunit for screening by the contour area parameter and judging whether a contour is the best matching contour: if the current contour's area parameter value is not smaller than the minimum preset area parameter value and not larger than the maximum preset area parameter value, and the current contour's rectangle degree is not smaller than the minimum rectangle degree and not larger than the maximum rectangle degree, the contour is considered the best matching contour. After all contours are judged, the contour sequence of the workpiece to be identified is obtained.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the calculation module may include:
a calculating subunit: the central moment in the image is calculated by the moments() function provided by OpenCV.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the calculation module may include:
a calculating subunit: since the image is two-dimensional, the centroid of the image is found independently in the x-direction and the y-direction. That is, for the centroid in the x-direction, the pixel sums of the image to the left and right of the centroid are equal; for the centroid in the y-direction, the pixel sums of the image above and below the centroid are equal. The x-coordinate and y-coordinate of the centroid are calculated as:
x̄ = m_10 / m_00, ȳ = m_01 / m_00
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the calculation module may include:
a calculating subunit: for the purpose of:
the distance from each contour point to the centroid is calculated in turn, and the point farthest from the centroid is taken as the first corner point.
The distance from each contour point to the first corner point is calculated in turn, and the point farthest from the first corner point is taken as the second corner point.
Each contour point is combined in turn with the first and second corner points to form a triangle; the contour point yielding the triangle with the longest perimeter is taken as the third corner point.
The distance from each contour point to the third corner point is calculated in turn, and the point farthest from the third corner point is taken as the fourth corner point. The first and second corner points are necessarily diagonal to each other, as are the third and fourth corner points.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the calculation module may include:
a calculating subunit: corner points 1 and 2 are known to be diagonal, and corner points 3 and 4 are known to be diagonal. First calculate the distance h1 from the 1st corner point to the 2nd, and the distance h2 from the 3rd corner point to the 4th.
If h1 > h2: calculate the distance D1 from the 3rd corner point to the straight line connecting the 1st and 2nd corner points, and the distance D2 from the 4th corner point to the same line.
If D1 > D2: calculate the distance h3 between the 1st and 3rd corner points and the distance h4 between the 2nd and 3rd corner points; the 4th corner point is the third feature point. If h3 > h4, the first feature point is the 1st corner point and the second feature point is the 3rd corner point; otherwise the first feature point is the 2nd corner point and the second feature point is the 3rd corner point.
If D1 < D2: calculate the distance h5 between the 1st and 4th corner points and the distance h6 between the 2nd and 4th corner points; the 3rd corner point is the third feature point. If h5 > h6, the first feature point is the 1st corner point and the second feature point is the 4th corner point; otherwise the first feature point is the 2nd corner point and the second feature point is the 4th corner point.
If h1 < h2: calculate the distance D3 from the 1st corner point to the straight line connecting the 3rd and 4th corner points, and the distance D4 from the 2nd corner point to the same line.
If D3 > D4: calculate the distance h7 between the 1st and 3rd corner points and the distance h8 between the 1st and 4th corner points; the 2nd corner point is the third feature point. If h7 > h8, the first feature point is the 1st corner point and the second feature point is the 3rd corner point; otherwise the first feature point is the 1st corner point and the second feature point is the 4th corner point.
If D3 < D4: calculate the distance h9 between the 2nd and 3rd corner points and the distance h10 between the 2nd and 4th corner points; the 1st corner point is the third feature point. If h9 > h10, the first feature point is the 2nd corner point and the second feature point is the 3rd corner point; otherwise the first feature point is the 2nd corner point and the second feature point is the 4th corner point.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the calculation module may include:
a calculating subunit: judging whether the image is on the front side or the back side by a vector cross product method, wherein the cross product result can be expressed as:
Q = (y2 - y1)*x3 + (x1 - x2)*y3 + (x2*y1 - x1*y2)
where (x1, y1) are the coordinates of the first feature point, (x2, y2) the coordinates of the second feature point, and (x3, y3) the coordinates of the third feature point.
If Q is positive, the third feature point is on the left side of the straight line connecting the first feature point and the second feature point, and the default workpiece identification image is the front side.
And if Q is negative, the third characteristic point is on the right side of a straight line connecting the first characteristic point and the second characteristic point, and the default workpiece identification image is the reverse side.
And if Q is zero, the first, second and third characteristic points are collinear. At this point error tool profile information is recorded.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the calculation module may include: a calculating subunit: the rotation angle S of the longest edge is obtained by calculating the rotation angle P of the first feature point relative to the second feature point and then applying the vector cross-product result Q of the feature points.
If Q > 0 and P > 180, then S = -360 + P.
If Q > 0 and P < 180, then S = P.
If Q < 0 and P > 180, then S = -360 + P.
If Q < 0 and P < 180, then S = P.
If Q = 0, the case is recorded in the external file.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the display module is used for:
(1) Drawing and displaying the outline;
(2) Displaying the barycenter coordinates;
(3) Displaying the coordinates of the feature points;
(4) Displaying the image rotation corner point.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the display module may include:
the display subunit draws and displays the contour using the cv::drawContours() function.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the display module may include:
and a display subunit drawing the centroid through the putText () function and displaying the centroid coordinates.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the display module may include: and a display subunit drawing the feature points through the circle () function and displaying the feature point coordinates.
In the laser marking system based on feature point extraction algorithm detection provided by the embodiment of the invention, the display module may include:
And the display subunit is used for drawing an arc on the outline graph by calculating the rotation angle of the longest side so as to realize the rotation angle of the marked graph.
The invention is further described below in connection with specific experiments and effects.
As shown in fig. 15, a laser marking system interface is opened, the system is set for image processing, the selection detection method is pentagon a (front), the original image is selected, and the parameters are set.
As shown in fig. 16, the identification parameter is set when the pentagon a is set. And setting the identification parameters through double-click parameter values.
As shown in fig. 17, the target recognition button on the laser marking system interface is clicked, the ZHEN-all.bmp image is selected, and the open button is clicked to recognize the target.
As shown in fig. 18, the recognition result on the ZHEN-all.bmp image is finally displayed: the figure number is marked, the two endpoints and the centroid of the longest edge are marked, and the rotation angle of the workpiece relative to the original workpiece position is drawn; the number of effectively recognized object images, the front/back status and coordinates of each workpiece, the centroid of the workpiece figure and the rotation angle of the workpiece are displayed on the right side.
In the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more; the terms "upper," "lower," "left," "right," "inner," "outer," "front," "rear," "head," "tail," and the like are used as an orientation or positional relationship based on that shown in the drawings, merely to facilitate description of the invention and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore should not be construed as limiting the invention. Furthermore, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It should be noted that the embodiments of the present invention can be realized in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special purpose design hardware. Those of ordinary skill in the art will appreciate that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The device of the present invention and its modules may be implemented by hardware circuitry, such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., as well as software executed by various types of processors, or by a combination of the above hardware circuitry and software, such as firmware.
The foregoing is merely illustrative of specific embodiments of the present invention, and the scope of the invention is not limited thereto, but any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention will be apparent to those skilled in the art within the scope of the present invention.

Claims (8)

1. The laser marking system detection method based on the feature point extraction algorithm is characterized by comprising the following steps of:
acquiring an image to be identified, and performing binarization processing on the image;
acquiring all contours from the binarized image, and acquiring coordinates of all contour points;
screening the identified image contours by the acquired contour areas to obtain an optimal contour group, and drawing the obtained optimal contours;
calculating the moment of each contour;
calculating the mass center of each contour through the moment of the known contour, and storing the mass center coordinates;
calculating the point farthest from the centroid as the first corner point, the point farthest from the first corner point as the second corner point, the contour point that forms, with the first and second corner points, the triangle with the longest perimeter as the third corner point, and the point farthest from the third corner point as the fourth corner point; the point closest to the centroid is the fifth corner point;
obtaining a plurality of feature points through calculation;
after a plurality of characteristic points are obtained, respectively calculating the distance between every two points, and sequencing from long to short to obtain the slope of the longest edge;
Judging whether the position of the image is the front or the back according to the slope of the longest edge, calculating the rotation angle of the starting position of the longest edge, and drawing in the image;
calculating the distance from the mass center to the longest side to obtain the marking position at the moment;
the method for calculating the moment of each contour comprises the following steps:
calculating the central moment in the image through the moments() function provided by OpenCV; the central moment is calculated as:
μ_pq = Σ_x Σ_y (x - x̄)^p (y - ȳ)^q f(x, y), where x̄ = m_10 / m_00 and ȳ = m_01 / m_00;
the method for calculating the mass center of each contour through the moment of the contour comprises the following steps:
finding the centroids in the x-direction and the y-direction independently; for the centroid in the x direction, the pixels of the image on the left and right sides of the centroid are equal; for the centroid in the y direction, the pixels of the image are equal at the upper side and the lower side of the centroid; the x-coordinate and y-coordinate of the centroid are calculated as shown in the following formula;
x̄ = m_10 / m_00, ȳ = m_01 / m_00
the method for calculating the five characteristic corner points through analyzing the known contour points comprises the following steps:
finding the point farthest from the centroid as the first corner point by sequentially calculating the distance from each contour point to the centroid;
finding the point farthest from the first corner point as the second corner point by sequentially calculating the distance from each contour point to the first corner point;
forming a triangle from each contour point in turn together with the first and second corner points; the contour point yielding the triangle with the longest perimeter is the third corner point;
finding the point farthest from the third corner point as the fourth corner point by sequentially calculating the distance from each contour point to the third corner point;
finding the point closest to the centroid as the fifth corner point by sequentially calculating the distance from each contour point to the centroid;
the front or back of the image is judged according to the vector cross-product method, and the cross-product result Q is expressed as: Q = (y2 - y1)*x3 + (x1 - x2)*y3 + (x2*y1 - x1*y2)
wherein (x1, y1) are the coordinates of the first feature point, (x2, y2) the coordinates of the second feature point, and (x3, y3) the coordinates of the third feature point;
if Q is positive, the third feature point is on the left side of the straight line connecting the first feature point and the second feature point, and the default workpiece identification image is the front side;
if Q is negative, the third characteristic point is on the right side of a straight line connecting the first characteristic point and the second characteristic point, and the default workpiece identification image is the reverse side;
if Q is zero, the first, second and third characteristic points are collinear, and error tool contour information is recorded at the moment;
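The cross product test can be written directly from the formula (the function name is illustrative; the left/right and front/back readings follow the claim's convention in image coordinates):

```python
def orientation(p1, p2, p3):
    """Cross product Q = (y2 - y1)*x3 + (x1 - x2)*y3 + (x2*y1 - x1*y2).
    Per the claim: Q > 0 means the third point is left of the line from
    the first to the second point (front side), Q < 0 means right of it
    (back side), and Q == 0 means the three points are collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (y2 - y1) * x3 + (x1 - x2) * y3 + (x2 * y1 - x1 * y2)
```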
the rotation angle S of the longest side is acquired by calculating the rotation angle P of the first feature point relative to the second feature point;
if Q > 0 and P > 180, S = -360 + P;
if Q > 0 and P < 180, S = P;
if Q < 0 and P > 180, S = -360 + P;
if Q < 0 and P < 180, S = P;
if Q = 0, an error file is recorded.
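Since the four sign cases above apply the same mapping whether Q is positive or negative, the rule reduces to wrapping P into the (-180, 180] range, with Q = 0 treated as an error; a sketch (names illustrative):

```python
def longest_edge_rotation(q, p):
    """Map the raw angle P (degrees in [0, 360)) of the first feature
    point relative to the second onto the signed rotation S. The four
    sign cases in the claim apply the same mapping for Q > 0 and Q < 0,
    so the rule reduces to wrapping P into (-180, 180]; Q == 0
    (collinear feature points) is treated as an error."""
    if q == 0:
        raise ValueError("collinear feature points: contour error")
    return -360 + p if p > 180 else p
```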
2. The method for detecting a laser marking system based on a feature point extraction algorithm according to claim 1, wherein the steps of acquiring an image to be identified and binarizing the image comprise:
an RGB image of the tool is acquired with an MVS camera, the acquired RGB image is converted into the required Mat matrix through OpenCV functions, the matrix is converted to grayscale, and each pixel is set to 0 or 255 according to its gray value, so that the binarized Mat matrix is obtained.
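A stdlib stand-in for the grayscale-threshold step (the real pipeline uses the MVS camera and OpenCV's Mat and threshold machinery; here an image is simply a list of rows of 0-255 values):

```python
def binarize(gray, threshold=128):
    """Threshold a grayscale image (rows of 0-255 values) into a 0/255
    binary matrix, standing in for the OpenCV grayscale + threshold step."""
    return [[255 if value >= threshold else 0 for value in row]
            for row in gray]
```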
3. The method for detecting a laser marking system based on a feature point extraction algorithm according to claim 1, wherein the method for acquiring a contour from a binarized image and performing contour screening comprises:
the image is morphologically processed with the OpenCV morphologyEx function: a dilation followed by an erosion (a closing operation) removes small black holes in the image; the binarized image is then processed with the findContours function, which returns the image contours as vectors of contour points; after the contours are found, all of them are drawn and the area enclosed by each contour is calculated; the contours are screened by setting minimum and maximum contour area values, and the remaining contours are further screened by rectangularity and circularity to filter out small regions and characters inside the regions; the resulting contours are drawn.
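The area, rectangularity and circularity screening can be approximated on point-list contours as follows (a sketch under assumptions: the thresholds and the axis-aligned bounding box used for rectangularity are illustrative choices; OpenCV's contourArea, arcLength and boundingRect would play these roles in the real system):

```python
import math

def polygon_area(contour):
    """Shoelace area of a closed contour given as (x, y) points."""
    total = 0.0
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def perimeter(contour):
    """Perimeter of the closed contour."""
    n = len(contour)
    return sum(math.dist(contour[i], contour[(i + 1) % n]) for i in range(n))

def keep_contour(contour, min_area, max_area, min_rect=0.5, min_circ=0.0):
    """Keep a contour only if its area lies in [min_area, max_area] and
    its rectangularity (area / bounding-box area) and circularity
    (4*pi*area / perimeter^2) clear the given thresholds."""
    area = polygon_area(contour)
    if not (min_area <= area <= max_area):
        return False
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    box = (max(xs) - min(xs)) * (max(ys) - min(ys))
    rectangularity = area / box if box else 0.0
    circularity = 4 * math.pi * area / perimeter(contour) ** 2
    return rectangularity >= min_rect and circularity >= min_circ
```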
4. The method for detecting a laser marking system based on a feature point extraction algorithm according to claim 1, wherein the method for calculating lengths between five feature points to determine positions of first, second, third and fourth corner points and sequencing the positions comprises:
knowing that the 1st corner point and the 2nd corner point are diagonal to each other, and the 3rd corner point and the 4th corner point are diagonal to each other, the distance h1 from the 1st corner point to the 2nd corner point and the distance h2 from the 3rd corner point to the 4th corner point are calculated respectively; by comparing the magnitudes of h1 and h2 under a given constraint condition, the required first and second feature points are acquired;
the required feature points are acquired by comparing the lengths of the line segments between points 1 and 2 and between points 3 and 4;
the specific steps for obtaining the slope of the longest edge include:
the pairwise distances between the centroid and the first and second feature points are calculated and sorted from longest to shortest to obtain the slope of the longest edge;
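The longest-edge selection above can be sketched as follows (the points would be, e.g., the centroid and the first and second feature points; a vertical edge is given infinite slope, an assumption the claim does not spell out):

```python
import math

def longest_edge(points):
    """Among all point pairs (e.g. the centroid and the first and second
    feature points), return the farthest pair and the slope of that edge."""
    pairs = [(points[i], points[j])
             for i in range(len(points))
             for j in range(i + 1, len(points))]
    a, b = max(pairs, key=lambda pair: math.dist(pair[0], pair[1]))
    dx = b[0] - a[0]
    dy = b[1] - a[1]
    slope = math.inf if dx == 0 else dy / dx  # vertical edge: infinite slope
    return (a, b), slope
```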
the method for judging the front and back sides of the image comprises the following steps:
the side of the straight line connecting the first and second feature points on which the third feature point lies is judged by a vector cross product: if the result is positive, the third feature point is on the left side; if negative, it is on the right side;
and calculating the distance from the mass center to the longest side to obtain the marking position at the moment.
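The marking-position computation is a standard point-to-line distance; a sketch assuming the longest side is given by its two endpoints (names illustrative):

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through
    a and b; used here to place the mark relative to the longest side."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    denom = math.hypot(bx - ax, by - ay)
    if denom == 0:
        raise ValueError("degenerate edge: endpoints coincide")
    numer = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return numer / denom
```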
5. The laser marking system based on the feature point extraction algorithm detection is characterized by comprising:
the setting module is used for setting camera parameters, exposure time, image thresholds, preset areas and the like;
the extraction module is used for acquiring an image of the tool to be identified, extracting an unclosed contour from the edges of the tool image, and acquiring the coordinates of all contour points on the contour;
the calculation module is used for calculating the area parameter of each contour point, screening the contour according to the area parameter, extracting the optimal contour point, calculating the moment and the mass center of the contour at the moment in the optimal contour, calculating the first, second, third, fourth and fifth corner points according to the position relation between the contour point and the mass center, calculating the first, second and third characteristic points according to the position relation between the corner points, judging the front and back surfaces of the tool at the moment in a vector cross product mode between the characteristic points and the mass center, and finally calculating the marking position according to the distance from the mass center to the longest edge;
and the display module is used for marking the identified characteristic points, corner points and the rotating angle of the tool.
6. The laser marking system based on feature point extraction algorithm detection of claim 5, wherein the setup module comprises:
the image setting module is used for setting image exposure time, image update time and image processing threshold;
the coordinate setting module is used for calibrating the central coordinates of the image coordinates, setting the central coordinates of the laser, setting the included angles between the machine and the image coordinates, setting the rotation modification safety of the image and the machine coordinates and setting the pixels of the machine coordinates;
the OBJ setting module is used for modifying the minimum maximum preset area, setting an effective identification area and setting the color of the identification area;
the self-adaptive OBJ setting module is used for setting a self-adaptive identification pattern, setting a character color, setting a mark point color and setting an offset correction value;
the extraction module comprises:
the acquisition unit is used for acquiring the shape of the workpiece to be identified by using an OpenCV computer vision library;
the extraction unit is used for extracting an unclosed contour for the workpiece shape by adopting a fixed threshold method and an OpenCV computer vision library;
the computing module is further used for computing the area parameter of each contour, screening out the best matching contour according to the preset area parameter and the preset rectangularity parameter, computing the moment of each contour, computing the centroid of each contour, computing the characteristic corner points of each contour, computing the feature points of each contour, judging the front and back of the image through a vector cross product method, and computing the rotation angle of the image;
The computing module comprises a computing subunit, which is used for taking any contour point as a target contour point, taking the coordinates of the target contour point as the center, and drawing a circle of preset radius to obtain a preset circle; the ratio of the area of the workpiece shape to be recognized that is intercepted by the preset circle and directly connected with the target contour point to the area of the preset circle is subtracted from 0.5 and the result is multiplied by 2 to obtain the area parameter of the target contour point;
the computing subunit is also used for screening by the contour area parameter to judge whether a contour is the best matching contour: if the current contour area parameter value is not smaller than the minimum preset area parameter value and not larger than the maximum preset area parameter value, and the rectangularity of the current contour is not smaller than the minimum rectangularity and not larger than the maximum rectangularity, the contour is considered to be the best matching contour; after all contours are judged, a contour sequence of the workpiece to be identified is obtained;
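The per-point area parameter described above, (0.5 minus the covered fraction of the preset circle) multiplied by 2, can be sketched over a binary image (names are illustrative; the restriction to the region directly connected with the target contour point is omitted for simplicity):

```python
def area_parameter(binary_image, cx, cy, radius):
    """Area parameter of a contour point at (cx, cy): the fraction of a
    preset circle around the point covered by foreground, mapped through
    (0.5 - fraction) * 2, so a straight edge gives roughly 0 and convex
    or concave corners push the value toward +1 or -1."""
    inside = 0
    covered = 0
    r2 = radius * radius
    for y, row in enumerate(binary_image):
        for x, pixel in enumerate(row):
            if (x - cx) ** 2 + (y - cy) ** 2 <= r2:
                inside += 1
                if pixel:
                    covered += 1
    if inside == 0:
        return 0.0
    return (0.5 - covered / inside) * 2
```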
the computing subunit is further configured to calculate the central moments of the image through the moments() function provided by OpenCV;
the computing subunit is also used for finding the point farthest from the centroid as the first corner point by sequentially calculating the distance from each contour point to the centroid;
the distance between each contour point and the first corner point is calculated in sequence, and the point farthest from the first corner point is found to be the second corner point;
a triangle is formed by sequentially combining each contour point with the first corner point and the second corner point, and the contour point that forms the triangle with the longest perimeter is the third corner point;
the distance from each contour point to the third corner point is calculated in sequence, and the point farthest from the third corner point is found to be the fourth corner point; the first and second corner points are diagonal to each other, and the third and fourth corner points are diagonal to each other;
the display module is further used for drawing the outline, displaying the centroid coordinates and the feature point coordinates, and displaying the image rotation angle;
the display module specifically comprises:
a display subunit, used for drawing and displaying contours with the cv::drawContours() function, drawing the centroid and displaying the centroid coordinates with the putText() function, drawing the feature points and displaying their coordinates with the circle() function, and drawing an arc on the contour map according to the calculated rotation angle of the longest side so as to mark the rotation angle of the graphic.
7. A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
Acquiring an image to be identified, and performing binarization processing on the image;
acquiring all contours from the binarized image, and acquiring coordinates of all contour points;
screening the identified image contours through the areas of the acquired contours to obtain an optimal contour group, and drawing the obtained optimal contours;
calculating the moment of each contour;
calculating the mass center of each contour through the moment of the known contour, and storing the mass center coordinates;
calculating the point farthest from the centroid as the first corner point, the point farthest from the first corner point as the second corner point, the contour point that forms the triangle with the longest perimeter together with the first and second corner points as the third corner point, and the point farthest from the third corner point as the fourth corner point; the point closest to the centroid is the fifth corner point;
obtaining the first, second and third feature points through calculation;
after the first, second and third feature points are obtained, the distance between each pair of points is calculated and the distances are sorted from longest to shortest to obtain the slope of the longest side;
judging whether the position of the image is the front or the back according to the slope of the longest edge, calculating the rotation angle of the starting position of the longest edge, and drawing in the image;
Calculating the distance from the mass center to the longest side to obtain the marking position at the moment;
the method for calculating the moment of each contour comprises the following steps:
calculating the central moments of the image through the moments() function provided by OpenCV; the moment of order (p + q) is calculated as:
m_pq = Σx Σy ( x^p * y^q * I(x, y) ), where I(x, y) is the pixel value at (x, y);
the method for calculating the mass center of each contour through the moment of the contour comprises the following steps:
finding the centroids in the x-direction and the y-direction independently; for the centroid in the x direction, the pixels of the image on the left and right sides of the centroid are equal; for the centroid in the y direction, the pixels of the image are equal at the upper side and the lower side of the centroid; the x-coordinate and y-coordinate of the centroid are calculated as shown in the following formula;
x_c = m10 / m00, y_c = m01 / m00, where m00 is the zeroth-order moment of the contour and m10, m01 are its first-order moments;
the method for calculating the five characteristic corner points through analyzing the known contour points comprises the following steps:
by sequentially calculating the distance from each contour point to the centroid, finding the point farthest from the centroid as the first corner point;
by sequentially calculating the distance from each contour point to the first corner point, finding the point farthest from the first corner point as the second corner point;
by sequentially forming a triangle from each contour point together with the first corner point and the second corner point, taking the contour point that gives the triangle with the longest perimeter as the third corner point;
by sequentially calculating the distance from each contour point to the third corner point, finding the point farthest from the third corner point as the fourth corner point;
by sequentially calculating the distance from each contour point to the centroid, finding the point closest to the centroid as the fifth corner point;
the front and back of the image are judged by a vector cross product method; the cross product result Q is expressed as: Q = (y2 - y1)*x3 + (x1 - x2)*y3 + (x2*y1 - x1*y2)
where (x1, y1) are the coordinates of the first feature point, (x2, y2) the coordinates of the second feature point, and (x3, y3) the coordinates of the third feature point;
if Q is positive, the third feature point lies on the left side of the straight line connecting the first and second feature points, and the workpiece identification image is taken to be the front side;
if Q is negative, the third feature point lies on the right side of that line, and the workpiece identification image is taken to be the back side;
if Q is zero, the first, second and third feature points are collinear, and an erroneous tool contour is recorded;
the rotation angle S of the longest side is acquired by calculating the rotation angle P of the first feature point relative to the second feature point;
if Q > 0 and P > 180, S = -360 + P;
if Q > 0 and P < 180, S = P;
if Q < 0 and P > 180, S = -360 + P;
if Q < 0 and P < 180, S = P;
if Q = 0, an error file is recorded.
8. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
Acquiring an image to be identified, and performing binarization processing on the image;
acquiring all contours from the binarized image, and acquiring coordinates of all contour points;
screening the identified image contours through the areas of the acquired contours to obtain an optimal contour group, and drawing the obtained optimal contours;
calculating the moment of each contour;
calculating the mass center of each contour through the moment of the known contour, and storing the mass center coordinates;
calculating the point farthest from the centroid as the first corner point, the point farthest from the first corner point as the second corner point, the contour point that forms the triangle with the longest perimeter together with the first and second corner points as the third corner point, and the point farthest from the third corner point as the fourth corner point; the point closest to the centroid is the fifth corner point;
obtaining the first, second and third feature points through calculation;
after the first, second and third feature points are obtained, the distance between each pair of points is calculated and the distances are sorted from longest to shortest to obtain the slope of the longest side;
judging whether the position of the image is the front or the back according to the slope of the longest edge, calculating the rotation angle of the starting position of the longest edge, and drawing in the image;
Calculating the distance from the mass center to the longest side to obtain the marking position at the moment;
the method for calculating the moment of each contour comprises the following steps:
calculating the central moments of the image through the moments() function provided by OpenCV; the moment of order (p + q) is calculated as:
m_pq = Σx Σy ( x^p * y^q * I(x, y) ), where I(x, y) is the pixel value at (x, y);
the method for calculating the mass center of each contour through the moment of the contour comprises the following steps:
finding the centroids in the x-direction and the y-direction independently; for the centroid in the x direction, the pixels of the image on the left and right sides of the centroid are equal; for the centroid in the y direction, the pixels of the image are equal at the upper side and the lower side of the centroid; the x-coordinate and y-coordinate of the centroid are calculated as shown in the following formula;
x_c = m10 / m00, y_c = m01 / m00, where m00 is the zeroth-order moment of the contour and m10, m01 are its first-order moments;
the method for calculating the five characteristic corner points through analyzing the known contour points comprises the following steps:
by sequentially calculating the distance from each contour point to the centroid, finding the point farthest from the centroid as the first corner point;
by sequentially calculating the distance from each contour point to the first corner point, finding the point farthest from the first corner point as the second corner point;
by sequentially forming a triangle from each contour point together with the first corner point and the second corner point, taking the contour point that gives the triangle with the longest perimeter as the third corner point;
by sequentially calculating the distance from each contour point to the third corner point, finding the point farthest from the third corner point as the fourth corner point;
by sequentially calculating the distance from each contour point to the centroid, finding the point closest to the centroid as the fifth corner point;
the front and back of the image are judged by a vector cross product method; the cross product result Q is expressed as: Q = (y2 - y1)*x3 + (x1 - x2)*y3 + (x2*y1 - x1*y2)
where (x1, y1) are the coordinates of the first feature point, (x2, y2) the coordinates of the second feature point, and (x3, y3) the coordinates of the third feature point;
if Q is positive, the third feature point lies on the left side of the straight line connecting the first and second feature points, and the workpiece identification image is taken to be the front side;
if Q is negative, the third feature point lies on the right side of that line, and the workpiece identification image is taken to be the back side;
if Q is zero, the first, second and third feature points are collinear, and an erroneous tool contour is recorded;
the rotation angle S of the longest side is acquired by calculating the rotation angle P of the first feature point relative to the second feature point;
if Q > 0 and P > 180, S = -360 + P;
if Q > 0 and P < 180, S = P;
if Q < 0 and P > 180, S = -360 + P;
if Q < 0 and P < 180, S = P;
if Q = 0, an error file is recorded.
CN202010704587.XA 2020-07-21 2020-07-21 Laser marking system and method based on feature point extraction algorithm detection Active CN111832659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010704587.XA CN111832659B (en) 2020-07-21 2020-07-21 Laser marking system and method based on feature point extraction algorithm detection


Publications (2)

Publication Number Publication Date
CN111832659A CN111832659A (en) 2020-10-27
CN111832659B true CN111832659B (en) 2023-07-04

Family

ID=72924468


Country Status (1)

Country Link
CN (1) CN111832659B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297872B (en) * 2021-03-24 2024-01-12 福州符号信息科技有限公司 Dotcode identification method and device
CN113409334B (en) * 2021-06-20 2022-10-04 桂林电子科技大学 Centroid-based structured light angle point detection method
CN113689378B (en) * 2021-07-07 2024-04-05 杭州未名信科科技有限公司 Determination method and device for accurate positioning of test strip, storage medium and terminal
CN113837204B (en) * 2021-09-28 2022-06-14 常州市宏发纵横新材料科技股份有限公司 Hole shape recognition method, computer equipment and storage medium
CN118015029A (en) * 2022-07-18 2024-05-10 宁德时代新能源科技股份有限公司 Method and device for detecting corner points of tabs and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2010129404A (en) * 2010-07-15 2012-01-20 Открытое акционерное общество "Научно-производственный комплекс "ЭЛАРА" имени Г.А. Ильенко" (ОАО "ЭЛАРА") (RU) METHOD OF LASER ENGRAVING
CN111275761A (en) * 2020-01-17 2020-06-12 湖北三江航天红峰控制有限公司 Visual positioning laser marking method with self-adaptive height


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Benchmarking and Functional Decomposition of Automotive Lidar Sensor Models; Philipp Rosenberger; IEEE; 632-639 *
Design of a fast redundancy-filtering system for laser marking based on an integrated network (基于集成网络的激光标刻冗余快速滤除系统设计); Jia Lingshan; Laser Journal (激光杂志); 165-168 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant