CN115578314A - Spectacle frame identification and grabbing feeding method based on continuous edge extraction - Google Patents

Info

Publication number
CN115578314A
Authority
CN
China
Prior art keywords
curve
edge
smooth edge
grabbing
pixels
Prior art date
Legal status
Pending
Application number
CN202211088357.0A
Other languages
Chinese (zh)
Inventor
蒋超
Current Assignee
Suzhou Zhongke Xingzhi Intelligent Technology Co ltd
Original Assignee
Suzhou Zhongke Xingzhi Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Zhongke Xingzhi Intelligent Technology Co ltd
Priority to CN202211088357.0A
Publication of CN115578314A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a spectacle frame identification, grabbing and feeding method based on continuous edge extraction, which comprises the following steps: 3D structured light scanning is performed to obtain 3D point cloud data and a 2D texture map; the background is removed to obtain background-free 3D point cloud data and a binary image; the binary image is down-sampled and non-broken long edges longer than a length threshold are searched for; smooth edge curves are fitted and segmented, and the portions of smooth edge curves that form closed areas between curves are removed; the curve length of each smooth edge curve and the distance between its curve midpoint and the centroid are calculated; a weighted value is calculated; the curve pixel information of the smooth edge curve is back-projected into the 3D point cloud space to obtain the corresponding 3D point cloud data, and a spatial 3D curve is interpolated; the pose to be grabbed at the center point of the spatial 3D curve is calculated and issued to the mechanical arm for grabbing. The invention adapts to the production requirement of rapid model changing, can estimate the spatial pose of the spectacle frame, and ensures a high grabbing success rate.

Description

Continuous edge extraction-based spectacle frame identification and grabbing and feeding method
Technical Field
The invention relates to the technical field of industrial automation for unordered (bin-picking) grabbing, and in particular to a spectacle frame identification and grabbing feeding method based on continuous edge extraction.
Background
To solve the automatic feeding problem in automatic processing of spectacle frames, 3D structured light is used to scan spectacle frames in a stacked state and to reconstruct a 3D point cloud of each individual frame. In practice, however, prior-art 3D structured light has the following shortcomings: (1) some spectacle frames are made of stainless steel, which is highly reflective, and the frames are small in size; because of this high reflectivity and small size, 3D structured light can rarely complete a non-broken 3D point cloud reconstruction of a single frame, so a 3D point cloud pose estimation method based on surface matching (Surface Match) has difficulty estimating the frame's pose; (2) the size and shape tolerances of processed spectacle frames deviate considerably, and for different frame models the traditional template matching method requires frequent template replacement and a complex parameter tuning process.
Therefore, a robust grabbing scheme is needed to guide the mechanical arm to complete automatic feeding of spectacle frames.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a spectacle frame identification and grabbing and feeding method based on continuous edge extraction, which is applied to automatic feeding of spectacle frames, can meet the production requirement of rapid model change, can estimate the spatial position and pose of the spectacle frame under the condition that a single spectacle frame is difficult to complete disconnection-free 3D point cloud reconstruction, and ensures higher grabbing success rate of the spectacle frame.
In order to achieve this purpose, the technical scheme adopted by the invention is as follows: a spectacle frame identification and grabbing feeding method based on continuous edge extraction comprises the following steps: (1) scanning the spectacle frames in a stacked state with 3D structured light to obtain 3D point cloud data and a 2D texture map; (2) removing the background from the 2D texture map to obtain background-free 3D point cloud data and a binary image; (3) setting a length threshold, down-sampling the binary image, and searching for non-broken long edges longer than the length threshold; (4) fitting smooth edge curves, segmenting crossed smooth edge curves, and removing the portions of smooth edge curves that form closed areas between curves; (5) computing the curve length of each smooth edge curve; (6) computing the distance between the curve midpoint of each smooth edge curve and the centroid; (7) computing a weighted value; (8) back-projecting the curve pixel information of the smooth edge curve into the 3D point cloud space to obtain the corresponding 3D point cloud data, and interpolating a spatial 3D curve; (9) computing the pose to be grabbed at the center point of the spatial 3D curve and issuing it to the mechanical arm; (10) the mechanical arm performs grabbing, the 3D structured light scans again to obtain 3D point cloud data and a 2D texture map, and the above steps are repeated until all spectacle frames have been grabbed.
Preferably, the 3D structured light is 3D monocular structured light.
As a preferred scheme, in step (2), continuous long edges are selected from the background-removed 2D texture map as the real spectacle frames, thereby obtaining the background-free 3D point cloud data and binary image.
As a preferred scheme, in step (3) the binary image is first down-sampled; for any two adjacent edge pixels with a gray value of 255 in the down-sampled image, the growth direction determined by the two pixels is computed, and the 2D texture map is checked to see whether edge pixels can be connected along this growth direction; edge pixels that can be connected along the growth direction are joined to form connecting edges. After this operation is performed on every pixel of the down-sampled image, pixels that can be joined in sequence through connecting edges form non-broken long edges, so that several non-broken long edges longer than the length threshold appear in the down-sampled image. Each such long edge is a graspable spectacle frame target, and the candidate grabbing points are the 3D coordinates corresponding to the original-image pixels indexed by the down-sampled pixels of the screened non-broken long edges.
As a preferred scheme, down-sampling the binary image means using larger pixels: if a large pixel contains any edge pixel it is set to white, otherwise black; for each white large pixel, the original-image pixel it indexes is the spectacle-frame edge pixel closest to the center of that large pixel.
Preferably, the included angle between the connecting direction to an edge pixel and the growth direction does not exceed 90 degrees.
As a preferred scheme, in step (4), interpolation smoothing is applied to the non-broken long edges longer than the length threshold screened in step (3) to fit smooth edge curves, and the smooth edge curves are segmented with a segmentation algorithm, yielding several continuous smooth edge curves in the binary image.
As a preferred scheme, through steps (5) to (9), the continuous smooth edge curves in the binary image are indexed to their corresponding 3D spatial positions, giving the pose in the world coordinate system of the spectacle frame corresponding to each smooth edge curve; the smooth edge curve is fitted in the world coordinate system and its pose estimated, from which the grabbing pose of the spectacle frame is estimated, and the outermost spectacle frame is selected to guide the robot to grab and feed.
As a preferred scheme, grabbing is performed on the spectacle frame with 6 degrees of freedom: a smooth edge curve is screened, back-projected into 3D space, and the pose of the spectacle frame in 3D space is fitted, quickly giving a graspable pose estimate.
As a preferred scheme, the invention also provides a spectacle frame identification and grabbing and feeding method based on continuous edge extraction, which comprises the following steps:
(1) Scanning the glasses frame in a stacking state by using 3D structured light to obtain 3D point cloud data and a 2D texture map;
(2) Background elimination is performed on the 3D point cloud data obtained in step (1): the plane P0 on which the spectacle frames are placed is fitted; taking P0 as reference, P0 is translated by Δd along the positive direction of its normal vector to obtain plane P1, and by Δd along the negative direction to obtain plane P2; the 3D point cloud data falling between P1 and P2 are eliminated, yielding the background-free 3D point cloud data and a binary image; Δd is 0.1–1 mm;
(3) A length threshold d is set, and non-broken long edges longer than d are extracted using the binary image and the 3D point cloud data;
(4) According to the information of the obtained non-broken long edges, corresponding interpolation processing is performed to fit smooth edge curves in the binary image, and the pixel information of every point forming each smooth edge curve is recorded; crossed smooth edge curves are segmented, and smooth edge curves that would form a closed area are removed;
(5) Calculating the curve length L of the smooth edge curve screened in the step (4);
(6) The curve midpoint of each smooth edge curve from step (5) is computed in the image coordinate system of the binary image, the centroid of the point group formed by these curve midpoints is computed, and the distance D between the centroid and the curve midpoint of each smooth edge curve is solved;
(7) According to the distance D from step (6) and the curve length L from step (5), each smooth edge curve is assigned a weight W = 0.5 × D + 0.5 × L, where D is the distance between the centroid and the curve midpoint of the smooth edge curve and L is the curve length of the smooth edge curve;
(8) The smooth edge curve with the largest weight from step (7) is indexed, the 3D point cloud data are indexed by the pixel information of this smooth edge curve, and a spatial 3D curve is interpolated;
(9) The tangent vector $\vec{t}$ of the spatial 3D curve at the curve midpoint center of the smooth edge curve of step (8) is computed, and its unit vector $\hat{t}$ is found. The points of the smooth edge curve of step (8) located near the curve center are indexed, their range being $[\mathrm{center} - 0.5d,\ \mathrm{center} + 0.5d]$, where $d$ is the length threshold and center is the curve center of the smooth edge curve. A plane P3 is constructed from these points and its unit normal vector $\hat{n}$ is computed, with the constraint that the projection of $\hat{n}$ on the Z axis of the world coordinate system is negative, matching the Z-axis direction established by the manipulator tool coordinate system. Then $\hat{o} = \hat{n} \times \hat{t}$ is computed, the cross product of $\hat{n}$ and $\hat{t}$; the constructed $\hat{o}$ is perpendicular to both of the orthogonal vectors $\hat{n}$ and $\hat{t}$. The pose to be grabbed at the center point of the spatial 3D curve is constructed and issued to the mechanical arm to realize coarse grabbing of the spectacle frame, the pose being the homogeneous matrix

$$T = \begin{bmatrix} \hat{o} & \hat{t} & \hat{n} & \mathrm{center} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where $\hat{o} = \hat{n} \times \hat{t}$ is perpendicular to both $\hat{n}$ and $\hat{t}$, $\hat{n}$ is the unit normal vector of the plane P3, $\hat{t}$ is the unit tangent vector, and center is the curve center of the smooth edge curve;
(10) The mechanical arm performs grabbing, the 3D structured light scans the stacked spectacle frames again to obtain 3D point cloud data and a 2D texture map, and the above steps are repeated until all spectacle frames have been grabbed.
Compared with the prior art, the invention has the following beneficial effects:
(1) By positioning the local contour edge of the spectacle frame rather than the whole frame, pose estimation of slender, highly reflective spectacle frames is realized quickly and effectively;
(2) For different types of spectacle frames, as long as a curve of sufficient length can be found in the image, a grabbing position can be planned, avoiding the frequent template changes and complex parameter tuning of the traditional template matching method.
Drawings
FIG. 1 is a flow chart of the operation of the present invention.
Detailed Description
The invention is further described with reference to specific examples. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1:
As shown in FIG. 1, a spectacle frame identification, grabbing and feeding method based on continuous edge extraction comprises the following steps: (1) scanning the spectacle frames in a stacked state with 3D structured light to obtain 3D point cloud data and a 2D texture map; (2) removing the background from the 2D texture map to obtain background-free 3D point cloud data and a binary image; (3) setting a length threshold, down-sampling the binary image, and searching for non-broken long edges longer than the length threshold; (4) fitting smooth edge curves, segmenting crossed smooth edge curves, and removing the portions of smooth edge curves that form closed areas between curves; (5) computing the curve length of each smooth edge curve; (6) computing the distance between the curve midpoint of each smooth edge curve and the centroid; (7) computing a weighted value; (8) back-projecting the curve pixel information of the smooth edge curve into the 3D point cloud space to obtain the corresponding 3D point cloud data, and interpolating a spatial 3D curve; (9) computing the pose to be grabbed at the center point of the spatial 3D curve and issuing it to the mechanical arm; (10) the mechanical arm performs grabbing, the 3D structured light scans again to obtain 3D point cloud data and a 2D texture map, and the above steps are repeated until all spectacle frames have been grabbed.
Preferably, the 3D structured light is 3D monocular structured light.
Because spectacle frames are highly reflective, it is difficult to complete a non-broken 3D point cloud reconstruction of a single frame, so a 3D point cloud pose estimation method based on surface matching (Surface Match) has difficulty estimating the object's pose. Considering that only coarse grabbing of the frame is required, it suffices to provide a suitable grabbing point. The invention therefore replaces whole-frame positioning with positioning of the local contour edge of the spectacle frame, which quickly and effectively achieves pose estimation for slender, highly reflective objects. For different frame models, as long as a non-broken long edge longer than the length threshold can be found in the binary image and a smooth edge curve fitted, a grabbing position can be planned, avoiding the frequent template changes and complex parameter tuning of the traditional template matching method.
More preferably, in step (2), continuous long edges are selected from the background-removed 2D texture map as the real spectacle frames, thereby obtaining the background-free 3D point cloud data and binary image.
Specifically, the background-removed 2D texture map contains other noise points besides the spectacle-frame edges, so continuous long edges must be selected from it as the real spectacle frames.
More preferably, in step (3), the binary image is first down-sampled; for any two adjacent edge pixels with a gray value of 255 in the down-sampled image, the growth direction determined by the two pixels is computed, and the 2D texture map is checked to see whether edge pixels can be connected along this growth direction; edge pixels that can be connected along the growth direction are joined to form connecting edges. After this operation is performed on every pixel of the down-sampled image, pixels that can be joined in sequence through connecting edges form non-broken long edges, so that several non-broken long edges longer than the length threshold appear in the down-sampled image. Each such long edge is a graspable spectacle frame target, and the candidate grabbing points are the 3D coordinates corresponding to the original-image pixels indexed by the down-sampled pixels of the screened non-broken long edges.
Specifically, down-sampling the binary image means using larger pixels: if a large pixel contains any edge pixel it is set to white, otherwise black; for each white large pixel, the original-image pixel it indexes is the spectacle-frame edge pixel closest to the center of that large pixel.
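The large-pixel down-sampling described above can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the block size `k`, the function name, and the use of a dictionary to index original pixels are all assumptions.

```python
import numpy as np

def downsample_binary(img, k):
    """Down-sample a binary edge image with k x k 'large pixels'.

    A large pixel is white (255) if its block contains at least one
    edge pixel; each white large pixel also indexes the original edge
    pixel closest to the center of its block.
    """
    h, w = img.shape
    H, W = h // k, w // k
    small = np.zeros((H, W), dtype=np.uint8)
    index = {}  # (row, col) of large pixel -> (y, x) in original image
    for i in range(H):
        for j in range(W):
            block = img[i * k:(i + 1) * k, j * k:(j + 1) * k]
            ys, xs = np.nonzero(block == 255)
            if len(ys) > 0:
                small[i, j] = 255
                # pick the edge pixel closest to the block center
                cy = cx = (k - 1) / 2.0
                m = int(np.argmin((ys - cy) ** 2 + (xs - cx) ** 2))
                index[(i, j)] = (i * k + int(ys[m]), j * k + int(xs[m]))
    return small, index

# tiny example: a single edge pixel in the top-left 4 x 4 block
img = np.zeros((8, 8), dtype=np.uint8)
img[1, 2] = 255
small, index = downsample_binary(img, 4)
print(small[0, 0], index[(0, 0)])  # 255 (1, 2)
```

Each white large pixel thus carries both a coarse position for fast edge linking and an exact original-image pixel whose 3D coordinate can later serve as a candidate grabbing point.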
More specifically, the included angle between the connecting direction to an edge pixel and the growth direction does not exceed 90 degrees.
Preferably, in step (4), interpolation smoothing is applied to the non-broken long edges longer than the length threshold screened in step (3) to fit smooth edge curves, and the smooth edge curves are segmented with a segmentation algorithm, yielding several continuous smooth edge curves in the binary image.
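A minimal version of the interpolation smoothing might look like the following, using a moving-average filter over the ordered edge pixels. The patent does not specify the smoothing algorithm, so the window size and the filter choice here are assumptions for illustration only.

```python
import numpy as np

def smooth_edge_curve(points, window=5):
    """Smooth an ordered chain of (x, y) edge pixels with a moving average.

    `points` is an (N, 2) array of pixel coordinates along a non-broken
    long edge; the result is a smooth edge curve with the same number
    of points.
    """
    pts = np.asarray(points, dtype=float)
    half = window // 2
    # pad by repeating the endpoints so the curve keeps its length
    padded = np.concatenate([pts[:1].repeat(half, axis=0),
                             pts,
                             pts[-1:].repeat(half, axis=0)])
    kernel = np.ones(window) / window
    sx = np.convolve(padded[:, 0], kernel, mode="valid")
    sy = np.convolve(padded[:, 1], kernel, mode="valid")
    return np.stack([sx, sy], axis=1)

# a jagged horizontal edge: y alternates around 10
edge = np.array([[x, 10 + (-1) ** x * 0.5] for x in range(20)])
curve = smooth_edge_curve(edge, window=5)
print(curve.shape)  # (20, 2)
```

In the patented pipeline the smoothed curve would then be segmented at crossings, with the pixel information of every curve point recorded for later back-projection.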
Preferably, through steps (5) to (9), each continuous smooth edge curve in the binary image is indexed to its corresponding 3D spatial position, giving the pose in the world coordinate system of the spectacle frame corresponding to the smooth edge curve; the smooth edge curve is fitted in the world coordinate system and its pose estimated, from which the grabbing pose of the spectacle frame is estimated, and the outermost spectacle frame is selected to guide the robot to grab and feed.
Preferably, grabbing is performed on the spectacle frame with 6 degrees of freedom: a smooth edge curve is screened, back-projected into 3D space, and the pose of the spectacle frame in 3D space is fitted, quickly giving a graspable pose estimate.
Specifically, because stacking must be considered, grabbing is performed on the spectacle frame with 6 degrees of freedom: a smooth edge curve is screened, back-projected into 3D space, and the pose of the spectacle frame in 3D space is fitted. This method effectively solves the difficulty of estimating the spectacle frame's pose under 3D structured light and can quickly give a graspable pose estimate.
Example 2:
On the basis of Embodiment 1, the invention also provides a spectacle frame identification and grabbing feeding method based on continuous edge extraction, which comprises the following steps:
(1) Scanning the glasses frame in a stacking state by using 3D structured light to obtain 3D point cloud data and a 2D texture map;
(2) Background elimination is performed on the 3D point cloud data obtained in step (1): the plane P0 on which the spectacle frames are placed is fitted; taking P0 as reference, P0 is translated by Δd along the positive direction of its normal vector to obtain plane P1, and by Δd along the negative direction to obtain plane P2; the 3D point cloud data falling between P1 and P2 are eliminated, yielding the background-free 3D point cloud data and a binary image; Δd is 0.1–1 mm;
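Step (2) can be sketched as a least-squares plane fit followed by a signed-distance filter that discards the slab between P1 and P2. This is only an illustrative reconstruction: the patent does not name the plane-fitting algorithm, and the SVD-based total least squares used here (with Δd as `delta_d`) is an assumption.

```python
import numpy as np

def remove_background(points, delta_d=0.5):
    """Fit the placement plane P0 to a point cloud and drop points whose
    distance to P0 is at most delta_d (i.e. points between P1 and P2)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # total least-squares plane: normal = singular vector belonging to
    # the smallest singular value of the centered cloud
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    dist = (pts - centroid) @ normal    # signed distance to plane P0
    return pts[np.abs(dist) > delta_d]  # keep points outside the slab

# a 10 x 10 background grid at z = 0 plus two frame points above it
grid = np.array([[x, y, 0.0] for x in range(10) for y in range(10)])
frame = np.array([[4.5, 4.5, 5.0], [4.5, 4.5, 6.0]])
fg = remove_background(np.vstack([grid, frame]), delta_d=0.5)
print(len(fg))  # 2
```

In production a robust fit such as RANSAC would be preferable, since outliers pull a plain least-squares plane; in this toy example the background points dominate, so the fit stays close to z = 0.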
(3) A length threshold d is set, and non-broken long edges longer than d are extracted using the binary image and the 3D point cloud data;
(4) According to the information of the obtained non-broken long edges, corresponding interpolation processing is performed to fit smooth edge curves in the binary image, and the pixel information of every point forming each smooth edge curve is recorded; crossed smooth edge curves are segmented, and smooth edge curves that would form a closed area are removed;
(5) Calculating the curve length L of the smooth edge curve screened in the step (4);
(6) The curve midpoint of each smooth edge curve from step (5) is computed in the image coordinate system of the binary image, the centroid of the point group formed by these curve midpoints is computed, and the distance D between the centroid and the curve midpoint of each smooth edge curve is solved;
(7) According to the distance D from step (6) and the curve length L from step (5), each smooth edge curve is assigned a weight W = 0.5 × D + 0.5 × L, where D is the distance between the centroid and the curve midpoint of the smooth edge curve and L is the curve length of the smooth edge curve;
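The weighting and selection of steps (5) to (8) can be sketched as below. The function name and the use of the index-middle point as the curve midpoint are assumptions; the formula W = 0.5 × D + 0.5 × L follows the text above.

```python
import numpy as np

def select_curve(curves):
    """Score each smooth edge curve with W = 0.5 * D + 0.5 * L and
    return the index of the curve with the largest weight.

    L is the curve length (sum of segment lengths); D is the distance
    from the curve's midpoint to the centroid of all curve midpoints.
    """
    lengths, midpoints = [], []
    for c in curves:
        c = np.asarray(c, dtype=float)
        seg = np.linalg.norm(np.diff(c, axis=0), axis=1)
        lengths.append(seg.sum())
        midpoints.append(c[len(c) // 2])
    midpoints = np.array(midpoints)
    centroid = midpoints.mean(axis=0)
    D = np.linalg.norm(midpoints - centroid, axis=1)
    W = 0.5 * D + 0.5 * np.array(lengths)
    return int(np.argmax(W))

# three toy curves: the long peripheral one should win
curves = [
    [[0, 0], [10, 0], [20, 0]],
    [[0, 5], [5, 5]],
    [[0, 40], [30, 40], [60, 40]],  # longest and farthest from the centroid
]
print(select_curve(curves))  # 2
```

Favoring both length and distance from the centroid biases the choice toward long edges on the outer periphery of the pile, which are the easiest frames to grab first.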
(8) The smooth edge curve with the largest weight from step (7) is indexed, the 3D point cloud data are indexed by the pixel information of this smooth edge curve, and a spatial 3D curve is interpolated;
(9) The tangent vector $\vec{t}$ of the spatial 3D curve at the curve midpoint center of the smooth edge curve of step (8) is computed, and its unit vector $\hat{t}$ is found. The points of the smooth edge curve of step (8) located near the curve center are indexed, their range being $[\mathrm{center} - 0.5d,\ \mathrm{center} + 0.5d]$, where $d$ is the length threshold and center is the curve center of the smooth edge curve. A plane P3 is constructed from these points and its unit normal vector $\hat{n}$ is computed, with the constraint that the projection of $\hat{n}$ on the Z axis of the world coordinate system is negative, matching the Z-axis direction established by the manipulator tool coordinate system. Then $\hat{o} = \hat{n} \times \hat{t}$ is computed, the cross product of $\hat{n}$ and $\hat{t}$; the constructed $\hat{o}$ is perpendicular to both of the orthogonal vectors $\hat{n}$ and $\hat{t}$. The pose to be grabbed at the center point of the spatial 3D curve is constructed and issued to the mechanical arm to realize coarse grabbing of the spectacle frame, the pose being the homogeneous matrix

$$T = \begin{bmatrix} \hat{o} & \hat{t} & \hat{n} & \mathrm{center} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where $\hat{o} = \hat{n} \times \hat{t}$ is perpendicular to both $\hat{n}$ and $\hat{t}$, $\hat{n}$ is the unit normal vector of the plane P3, $\hat{t}$ is the unit tangent vector, and center is the curve center of the smooth edge curve;
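The pose construction of step (9) can be sketched as follows. The column order [ô t̂ n̂] of the rotation part, the index-based center window, and the function name are assumptions made for illustration; only the ingredients (unit tangent, local plane normal with negative world-Z projection, and their cross product) are taken from the text above.

```python
import numpy as np

def grasp_pose(curve3d, half_window=5):
    """Homogeneous grasp pose at the center of a spatial 3D curve.

    t_hat: unit tangent at the curve midpoint (central difference).
    n_hat: unit normal of the local plane P3 fitted near the center,
           flipped so that its projection on the world Z axis is negative.
    o_hat: n_hat x t_hat, perpendicular to both.
    """
    pts = np.asarray(curve3d, dtype=float)
    mid = len(pts) // 2
    center = pts[mid]
    t = pts[min(mid + 1, len(pts) - 1)] - pts[max(mid - 1, 0)]
    t_hat = t / np.linalg.norm(t)
    # plane P3 from points near the curve center
    local = pts[max(mid - half_window, 0):mid + half_window + 1]
    _, _, vt = np.linalg.svd(local - local.mean(axis=0))
    n_hat = vt[-1]
    if n_hat[2] > 0:  # constrain the world-Z projection to be negative
        n_hat = -n_hat
    o_hat = np.cross(n_hat, t_hat)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = o_hat, t_hat, n_hat, center
    return T

# a quarter-circle edge lying in the plane z = 1
curve = np.array([[np.cos(a), np.sin(a), 1.0]
                  for a in np.linspace(0.0, np.pi / 2, 21)])
T = grasp_pose(curve)
print(np.round(T[:3, 2], 3))  # approach axis points along -Z
```

The resulting matrix has the same shape as the homogeneous pose issued to the mechanical arm; a real integration would express it in the robot base frame before sending it.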
(10) The mechanical arm performs grabbing, the 3D structured light scans the stacked spectacle frames again, and steps (1) to (9) are repeated until all spectacle frames have been grabbed.
Applied to automatic feeding of spectacle frames, the method has the following advantages: it is highly flexible, since only one non-broken long edge longer than the length threshold needs to be found in the binary image and a smooth edge curve fitted; the pose of the whole spectacle frame is estimated from the pose of this local smooth edge curve; template matching is not needed for each frame model, so grabbing can be switched quickly between different frames, facilitating model-changing production.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A spectacle frame identification and grabbing feeding method based on continuous edge extraction, characterized by comprising the following steps: (1) scanning the spectacle frames in a stacked state with 3D structured light to obtain 3D point cloud data and a 2D texture map; (2) removing the background from the 2D texture map to obtain background-free 3D point cloud data and a binary image; (3) setting a length threshold, down-sampling the binary image, and searching for non-broken long edges longer than the length threshold; (4) fitting smooth edge curves, segmenting crossed smooth edge curves, and removing the portions of smooth edge curves that form closed areas between curves; (5) computing the curve length of each smooth edge curve; (6) computing the distance between the curve midpoint of each smooth edge curve and the centroid; (7) computing a weighted value; (8) back-projecting the curve pixel information of the smooth edge curve into the 3D point cloud space to obtain the corresponding 3D point cloud data, and interpolating a spatial 3D curve; (9) computing the pose to be grabbed at the center point of the spatial 3D curve and issuing it to the mechanical arm; (10) the mechanical arm performs grabbing, the 3D structured light scans again to obtain 3D point cloud data and a 2D texture map, and the above steps are repeated until all spectacle frames have been grabbed.
2. The continuous edge extraction-based spectacle frame identification and grabbing feeding method according to claim 1, characterized in that: the 3D structured light is 3D monocular structured light.
3. The continuous edge extraction-based spectacle frame identification and grabbing feeding method according to claim 1, characterized in that: in step (2), continuous long edges are selected from the background-removed 2D texture map as the real spectacle frames, thereby obtaining the background-free 3D point cloud data and binary image.
4. The continuous edge extraction-based spectacle frame identification and grabbing feeding method according to claim 1, characterized in that: in step (3), the binary image is first down-sampled; for any two adjacent edge pixels with a gray value of 255 in the down-sampled image, a growth direction is determined by the two pixels, and it is checked whether edge pixels in the 2D texture image can be connected along that growth direction to establish a connecting edge; after this operation is performed for every pixel of the down-sampled image, the pixels that can be connected in sequence through connecting edges form non-broken long edges, so that several non-broken long edges longer than the length threshold appear in the down-sampled image; these non-broken long edges longer than the length threshold are all graspable glasses-frame targets, and the candidate grabbing points are the 3D coordinates corresponding to the original-image pixels indexed by the down-sampled pixels of the screened non-broken long edges.
5. The continuous edge extraction-based spectacle frame identification and grabbing feeding method according to claim 4, characterized in that: when the binary image is down-sampled, larger pixels are adopted; if a large pixel contains any edge pixel, it is a white large pixel, otherwise it is a black large pixel; within each white large pixel, the original-image pixel it indexes is the glasses-frame edge pixel closest to the center of that white large pixel.
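The down-sampling of claims 4 and 5 can be illustrated by a minimal sketch: each k × k block of the binary edge image becomes one "large pixel" that is white if it contains any edge pixel, and that remembers the coordinates of the original edge pixel nearest its center. The function name and the per-block index array are illustrative choices, not taken from the patent.

```python
import numpy as np

def downsample_edges(binary, k):
    """Down-sample a binary edge image (values 0/255) into k x k large pixels.

    A large pixel is white (255) if its block contains at least one edge
    pixel; each white large pixel also records the original edge pixel
    closest to the block center, serving as the index back into the
    full-resolution image (and hence into the 3D point cloud)."""
    h, w = binary.shape
    H, W = h // k, w // k
    down = np.zeros((H, W), dtype=np.uint8)
    index = np.full((H, W, 2), -1, dtype=int)  # indexed original pixel per block
    for i in range(H):
        for j in range(W):
            block = binary[i * k:(i + 1) * k, j * k:(j + 1) * k]
            ys, xs = np.nonzero(block == 255)
            if ys.size:
                down[i, j] = 255
                c = (k - 1) / 2.0                  # block center coordinate
                m = int(np.argmin((ys - c) ** 2 + (xs - c) ** 2))
                index[i, j] = (i * k + ys[m], j * k + xs[m])
    return down, index
```

The recorded index realizes the claim's mapping from each white large pixel back to the nearest glasses-frame edge pixel of the original image.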
6. The continuous edge extraction-based spectacle frame identification and grabbing feeding method according to claim 5, characterized in that: the included angle between the edge pixel and the growth direction is not more than 90 degrees.
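Once the down-sampled white pixels exist, linking them into non-broken long edges (claims 4 and 6) can be sketched as a connected-component walk over the down-sampled grid. For brevity this sketch reduces the claim's growth-direction test (included angle not more than 90 degrees) to plain 8-connectivity; the function name and threshold parameter are illustrative assumptions.

```python
import numpy as np

def link_long_edges(down, min_len):
    """Group white pixels (255) of the down-sampled edge image into
    non-broken long edges by connecting 8-adjacent white pixels, and
    keep only chains with at least min_len pixels.  The patent's
    growth-direction constraint is simplified to 8-connectivity here."""
    H, W = down.shape
    seen = np.zeros((H, W), dtype=bool)
    edges = []
    for i in range(H):
        for j in range(W):
            if down[i, j] == 255 and not seen[i, j]:
                stack, chain = [(i, j)], []
                seen[i, j] = True
                while stack:                       # depth-first flood fill
                    y, x = stack.pop()
                    chain.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < H and 0 <= nx < W
                                    and down[ny, nx] == 255 and not seen[ny, nx]):
                                seen[ny, nx] = True
                                stack.append((ny, nx))
                if len(chain) >= min_len:          # discard short fragments
                    edges.append(chain)
    return edges
```

Chains shorter than the length threshold are discarded, matching the claim's screening of long edges larger than the threshold.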
7. The continuous edge extraction-based spectacle frame identification and grabbing feeding method according to claim 1, characterized in that: in step (4), the non-broken long edges screened in step (3) that are larger than the length threshold are subjected to interpolation smoothing to fit smooth edge curves, and the smooth edge curves are segmented by a segmentation algorithm, so as to obtain a plurality of continuous smooth edge curves in the binary image.
8. The continuous edge extraction-based spectacle frame identification and grabbing feeding method according to claim 1, characterized in that: through steps (5) to (9), the continuous smooth edge curves in the binary image are indexed to obtain the corresponding 3D spatial positions, the position of the spectacle frame corresponding to each smooth edge curve in the world coordinate system is solved, each smooth edge curve is fitted in the world coordinate system to estimate its position and thereby the grabbing pose of the spectacle frame, and the outermost spectacle frame is selected to guide the robot to grab and feed.
9. The continuous edge extraction-based spectacle frame identification and grabbing feeding method according to claim 8, characterized in that: the glasses frame is grabbed with 6 degrees of freedom; the screened smooth edge curve is back-projected into 3D space, the position of the glasses frame in 3D space is fitted, and a graspable pose estimate is quickly given.
10. The continuous edge extraction-based spectacle frame identification and grabbing and feeding method according to any one of claims 1 to 9, characterized by comprising the following steps:
(1) Scanning the glasses frame in a stacking state by using 3D structured light to obtain 3D point cloud data and a 2D texture map;
(2) Background elimination is carried out on the 3D point cloud data obtained in step (1): a plane P0 on which the glasses frames are placed is fitted; taking the plane P0 as a reference, P0 is translated by Δd along the positive direction of its normal vector to obtain a plane P1, and by Δd along the negative direction of its normal vector to obtain a plane P2; the 3D point cloud data falling between the planes P1 and P2 are eliminated, obtaining the background-removed 3D point cloud data and a binary image; Δd is 0.1-1 mm;
(3) Setting a length threshold d, and extracting the non-broken long edges larger than the length threshold d by using the binary image and the 3D point cloud data;
(4) According to the information of the obtained non-broken long edges, carrying out corresponding interpolation processing to fit smooth edge curves in the binary image, recording the pixel information of each point forming each smooth edge curve, segmenting crossed smooth edge curves, and removing smooth edge curves that would form a closed area;
(5) Calculating the curve length L of each smooth edge curve screened in step (4);
(6) In the image coordinate system of the binary image, calculating the curve midpoint of each smooth edge curve from step (5), calculating the centroid of the point group consisting of these curve midpoints, and solving the distance D between the centroid and the curve midpoint of each smooth edge curve;
(7) According to the distance D from step (6) and the curve length L from step (5), assigning a weight W = 0.5 × D + 0.5 × L, where D is the distance between the centroid and the curve midpoint of each smooth edge curve and L is the curve length of each smooth edge curve;
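Steps (5)-(7) together pick one curve by scoring each candidate with 0.5 × D + 0.5 × L, favoring curves that are both long and far from the cluster center (i.e. outermost frames). A minimal sketch, assuming each curve is an (N, 2) pixel array and using the middle sample as the curve midpoint:

```python
import numpy as np

def select_curve(curves):
    """Score each curve with W = 0.5*D + 0.5*L, where L is its polyline
    length and D the distance from its midpoint to the centroid of all
    midpoints; return the index of the highest-scoring curve."""
    mids = np.array([c[len(c) // 2] for c in curves], dtype=float)
    lengths = np.array([np.linalg.norm(np.diff(c, axis=0), axis=1).sum()
                        for c in curves])
    centroid = mids.mean(axis=0)                 # centroid of curve midpoints
    dists = np.linalg.norm(mids - centroid, axis=1)
    weights = 0.5 * dists + 0.5 * lengths
    return int(np.argmax(weights))
```

The equal 0.5/0.5 weighting follows the claim; in practice D and L live on different scales, so a deployment might normalize them first.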
(8) Indexing the smooth edge curve with the largest weight W from step (7), indexing the 3D point cloud data by the pixel information of that smooth edge curve, and interpolating a space 3D curve;
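Once the winning curve's pixels are back-projected into 3D, step (8) interpolates them into a continuous space 3D curve. A minimal sketch that resamples the back-projected polyline uniformly in arc length (a linear stand-in for whatever spline the implementation might use; the function name is illustrative):

```python
import numpy as np

def interpolate_3d_curve(points, n=100):
    """Resample a polyline of back-projected 3D points ((M, 3) array)
    into n points evenly spaced in arc length."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
    t = np.linspace(0.0, s[-1], n)
    # Interpolate each coordinate independently against arc length.
    return np.column_stack([np.interp(t, s, points[:, k]) for k in range(3)])
```

Uniform arc-length spacing makes the later curve-center and tangent computations of step (9) well defined regardless of how unevenly the edge pixels project into 3D.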
(9) Calculating, on the space 3D curve, the tangent vector $\vec{t}$ at the curve midpoint center of the smooth edge curve in step (8), and finding its unit vector $\hat{t}$; indexing the points of the smooth edge curve of step (8) located near the curve center, the range of these points being $[\mathrm{center}-0.5d,\ \mathrm{center}+0.5d]$, wherein d is the length threshold and center is the curve center of the smooth edge curve; constructing a plane P3 from the points located near the curve center and calculating the normal vector $\vec{n}$ of the plane P3; constraining the projection of $\vec{n}$ on the Z axis of the world coordinate system to be negative, so as to adapt to the Z-axis direction established by the manipulator tool coordinate system; calculating $\vec{v} = \vec{n} \times \hat{t}$, wherein $\vec{v}$, as the cross product of $\vec{n}$ and $\hat{t}$, is by construction perpendicular to both the orthogonal vectors $\vec{n}$ and $\hat{t}$; constructing the pose to be grabbed at the central point of the space 3D curve and issuing it to the mechanical arm to realize coarse grabbing of the glasses frame, the pose being the homogeneous matrix

$$T = \begin{bmatrix} \hat{t} & \vec{v} & \vec{n} & \mathrm{center} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

wherein $\vec{v}$ is the cross product of $\vec{n}$ and $\hat{t}$ and is perpendicular to both, $\vec{n}$ is the normal vector of the plane P3, $\hat{t}$ is the unit tangent vector, and center is the curve center of the smooth edge curve;
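The pose construction of step (9) can be sketched as follows: take the tangent at the curve center, fit the local plane P3 to a window of samples around the center, flip the plane normal so its world-Z projection is negative, and assemble an orthonormal frame plus the center point into a 4 × 4 homogeneous matrix. The index-based window (in place of the arc-length range [center − 0.5d, center + 0.5d]) and the column assignment [tangent, cross product, normal | center] are illustrative assumptions, not taken verbatim from the claim.

```python
import numpy as np

def grasp_pose(curve, half_window):
    """Build the homogeneous grasp pose at the center of a space 3D
    curve ((N, 3) array): x = unit tangent, z = normal of the local
    plane P3 (world-Z projection forced negative), y = z cross x."""
    c = len(curve) // 2
    t = curve[c + 1] - curve[c - 1]
    t = t / np.linalg.norm(t)                    # unit tangent at curve center
    local = curve[max(0, c - half_window): c + half_window + 1]
    centroid = local.mean(axis=0)
    _, _, vt = np.linalg.svd(local - centroid)   # fit plane P3 to local points
    n = vt[-1]
    if n[2] > 0:                                 # projection on world Z must be negative
        n = -n
    y = np.cross(n, t)
    y /= np.linalg.norm(y)
    x = np.cross(y, n)                           # re-orthogonalized tangent direction
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, n, curve[c]
    return T
```

Recomputing x as y × n guards against a tangent that is not exactly perpendicular to the fitted normal, so the rotation block stays orthonormal and right-handed.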
(10) The mechanical arm performs grabbing, the 3D structured light scans the glasses frames in the stacking state again, and steps (1) to (9) are repeated until all the glasses frames are grabbed.
CN202211088357.0A 2022-09-07 2022-09-07 Spectacle frame identification and grabbing feeding method based on continuous edge extraction Pending CN115578314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211088357.0A CN115578314A (en) 2022-09-07 2022-09-07 Spectacle frame identification and grabbing feeding method based on continuous edge extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211088357.0A CN115578314A (en) 2022-09-07 2022-09-07 Spectacle frame identification and grabbing feeding method based on continuous edge extraction

Publications (1)

Publication Number Publication Date
CN115578314A true CN115578314A (en) 2023-01-06

Family

ID=84581596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211088357.0A Pending CN115578314A (en) 2022-09-07 2022-09-07 Spectacle frame identification and grabbing feeding method based on continuous edge extraction

Country Status (1)

Country Link
CN (1) CN115578314A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116586971A (en) * 2023-05-26 2023-08-15 广州帅普运动用品有限公司 Combined swimming goggles manufacturing system and manufacturing method thereof
CN116586971B (en) * 2023-05-26 2024-05-14 广州帅普运动用品有限公司 Combined swimming goggles manufacturing system and manufacturing method thereof

Similar Documents

Publication Publication Date Title
CN112070818B (en) Robot disordered grabbing method and system based on machine vision and storage medium
CN110349207B (en) Visual positioning method in complex environment
CN113034600B (en) Template matching-based texture-free planar structure industrial part identification and 6D pose estimation method
CN114972377B (en) 3D point cloud segmentation method and device based on mobile least square method and super-voxel
CN111553949B (en) Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning
CN111507390A (en) Storage box body identification and positioning method based on contour features
CN107138432B (en) Method and apparatus for sorting non-rigid objects
CN112529858A (en) Welding seam image processing method based on machine vision
CN110648359B (en) Fruit target positioning and identifying method and system
CN115018846B (en) AI intelligent camera-based multi-target crack defect detection method and device
CN110288571B (en) High-speed rail contact net insulator abnormity detection method based on image processing
CN112883881B (en) Unordered sorting method and unordered sorting device for strip-shaped agricultural products
CN115578314A (en) Spectacle frame identification and grabbing feeding method based on continuous edge extraction
CN112257721A (en) Image target region matching method based on Fast ICP
CN112338898B (en) Image processing method and device of object sorting system and object sorting system
CN117011377A (en) Data processing method and pose estimation method of point cloud data
CN113781315B (en) Multi-view-based homologous sensor data fusion filtering method
CN109934817A (en) The external contouring deformity detection method of one seed pod
CN115922695A (en) Grabbing method based on plane vision guide mechanical arm
CN113223189B (en) Method for repairing holes of three-dimensional point cloud model of object grabbed by mechanical arm and fitting ruled body
CN115100416A (en) Irregular steel plate pose identification method and related equipment
CN110264481B (en) Box-like point cloud segmentation method and device
CN114653629A (en) Sorting method based on visual identification, intelligent sorting system and readable storage medium
CN116523909B (en) Visual detection method and system for appearance of automobile body
CN118396994B (en) Die-casting die adaptation degree detection method and system based on three-dimensional model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination