CN114104453A - Non-ferrous metal automatic labeling method and device based on image processing - Google Patents

Non-ferrous metal automatic labeling method and device based on image processing

Info

Publication number
CN114104453A
CN114104453A (application CN202111461039.XA)
Authority
CN
China
Prior art keywords
point
labeling
frame
mechanical arm
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111461039.XA
Other languages
Chinese (zh)
Inventor
李建华
宋刘毅
董兵强
吴昊鹏
刘广鹏
于浩
安心怡
陈锦涛
杨慧
刘相何
徐杰
王睿
郝晨曦
任术长
李阳阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lanzhou University of Technology
Original Assignee
Lanzhou University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lanzhou University of Technology filed Critical Lanzhou University of Technology
Priority to CN202111461039.XA priority Critical patent/CN114104453A/en
Publication of CN114104453A publication Critical patent/CN114104453A/en
Withdrawn legal-status Critical Current


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65CLABELLING OR TAGGING MACHINES, APPARATUS, OR PROCESSES
    • B65C9/00Details of labelling machines or apparatus
    • B65C9/40Controls; Safety devices
    • B65C9/42Label feed control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30136Metal

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a non-ferrous metal automatic labeling method and device based on image processing. A labeling-point positioning algorithm detects the labeling pixel point in image T2 and converts it into point D1 in the robot coordinate system. Around point D1, n > 2 points are taken; combined with the distances returned by a laser displacement sensor, n three-dimensional coordinate points not on the same line are obtained, and plane fitting yields plane M1. The normal vector n of plane M1 is solved, the deflection angle δ is then solved, and the pose of the suction cup at the end of the mechanical arm is adjusted according to δ to complete the labeling operation. The method is thus better suited to labeling non-ferrous metals in a complex factory environment and greatly improves the efficiency and precision of labeling on a non-ferrous metal casting production line.

Description

Non-ferrous metal automatic labeling method and device based on image processing
Technical Field
The invention relates to the technical field of image processing and non-ferrous metal labeling, in particular to a non-ferrous metal automatic labeling method based on image processing and a non-ferrous metal automatic labeling device based on image processing.
Background
At present, in the field of non-ferrous metal labeling, labels are mostly applied manually, which is inefficient and costly. During manual labeling, workers are prone to injury or death because of the complex working environment. In automatic labeling, the non-ferrous metal surface and the end plane of the manipulator form different deflection angles depending on how the metal is placed. If the deflection angle is not corrected, the label attached to the non-ferrous metal surface wrinkles and falls off, and the labeling operation may not be completed at all.
Traditional Hough line detection has good robustness and detection precision. However, the background of actual non-ferrous metal labeling conditions is cluttered and the labeling environment changes unpredictably, so target edge detection and image conversion become inaccurate; this strongly affects the accuracy of traditional Hough line detection, yields low image-correction quality, and requires a huge amount of computation. Therefore, the key difficulties of non-ferrous metal labeling technology are: suppressing the interference of a complex background with labeling-point detection and of discontinuous lines with straight-line detection during positioning, correcting the deflection angle during labeling, and thereby realizing automatic labeling of non-ferrous metals.
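For reference, the classical Hough line detection that the paragraph above builds on can be sketched in a few lines of NumPy: every edge pixel votes for all (ρ, θ) line parameters passing through it, and peaks in the accumulator correspond to detected lines. This is a generic textbook illustration, not the patent's improved algorithm; array sizes and resolutions are arbitrary.

```python
import numpy as np

def hough_lines(edge_img, rho_res=1.0, theta_res=np.pi / 180):
    """Vote in (rho, theta) space for every edge pixel; return accumulator and axes."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    rhos = np.arange(-diag, diag + 1, rho_res)
    thetas = np.arange(0, np.pi, theta_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        rho_vals = x * cos_t + y * sin_t            # rho of this pixel for every theta
        idx = np.round((rho_vals + diag) / rho_res).astype(int)
        acc[idx, np.arange(len(thetas))] += 1       # one vote per (rho, theta) cell
    return acc, rhos, thetas

# A synthetic horizontal edge row: the strongest (rho, theta) cell is theta = 90 deg.
img = np.zeros((50, 50), dtype=np.uint8)
img[20, :] = 1
acc, rhos, thetas = hough_lines(img)
r_i, t_i = np.unravel_index(np.argmax(acc), acc.shape)
print(int(round(float(np.degrees(thetas[t_i])))), int(round(float(rhos[r_i]))))
```

Because every pixel of a clean horizontal row votes into the same (ρ, θ) cell, the peak is unambiguous here; the patent's point is precisely that real gap lines are broken and cluttered, which this naive accumulator cannot distinguish.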
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a non-ferrous metal automatic labeling method based on image processing that is better suited to non-ferrous metal labeling in a complex factory environment and greatly improves the efficiency and precision of labeling in actual work.
The technical scheme of the invention is as follows. The non-ferrous metal automatic labeling method based on image processing comprises the following steps:
(1) the mechanical arm drives the laser displacement sensor to find the positions of the upper and left edges of the first layer of non-ferrous metal; according to these positions the arm is moved to the upper-left corner point of the non-ferrous metal stack, and the current position is recorded as P1;
(2) from position P1 the arm moves down by the thickness H of m non-ferrous metal ingots to reach the m-th layer; the arm then moves 3×H to the right, and a camera captures the picture T0 of this layer of non-ferrous metal;
(3) using the pre-trained YOLOv5 model, the region of interest (ROI) is extracted from the captured picture T0, and the extracted picture is saved as T1;
(4) T1 is preprocessed to obtain image T2; the pipeline is: graying, median filtering, gray-level transformation and binarization;
(5) the improved Hough line detection algorithm for horizontal-gap detection extracts the gap lines between non-ferrous metal ingots from image T2;
(6) the labeling pixel point is computed and converted to point D1 in the robot coordinate system;
(7) around point D1, n > 2 points are taken; combined with the distances returned by the laser displacement sensor, n three-dimensional coordinate points not on the same line are obtained, and plane fitting yields plane M1;
(8) the normal vector n of plane M1 is solved, and the deflection angle δ is then solved;
(9) the pose of the suction cup at the end of the mechanical arm is adjusted according to δ, completing the labeling operation;
wherein step (5) comprises:
(5.1) morphological processing is applied to T2 to suppress vertical lines and highlight horizontal-line features; labeling-point detection is based on the horizontal gaps between non-ferrous metal ingots;
(5.2) within a horizontal distance A/2 of the left edge of image T2, check whether edge points remain; if so, randomly pick an edge point as the starting point and set a rectangular frame a centered on it, with length A and width W; if not, go to step (6);
(5.3) in the (ρ, θ) parameter space, vote for the straight lines passing through the points in frame a; each (ρ, θ) pair corresponds to one straight line;
(5.4) check whether any line in frame a has a vote count greater than threshold thr1; if so, record that line as L(j) and go to step (5.5); if not, return to step (5.2); clear all points in frame a;
(5.5) translate the center of frame a by (A+B)/2 along the direction of line L(j) to obtain a rectangular frame b(i), with length B and width W, where A >> B; set an accumulator acc(j) to record the number of line interruptions; the accumulator is reused in each frame b;
(5.6) within frame b(i), vote for the (ρ, θ) point corresponding to line L(j) and record the number of added votes n(i) for L(j);
(5.7) check whether n(i) is less than threshold thr2; if so, the line is interrupted in this frame and acc(j) is incremented by 1; otherwise acc(j) is unchanged; clear all points in frame b(i);
(5.8) check whether the value of acc(j) is greater than threshold thr3; if so, the line is interrupted too often; abandon detection of this line and return to step (5.2);
(5.9) translate the center of frame b(i) by B along the direction of line L(j) to obtain a rectangular frame b(i+1);
(5.10) check whether the current frame b has left the horizontal extent of the image; if not, return to step (5.6); otherwise go to step (5.11); reset the accumulator acc(j);
(5.11) save line L(j) in the set Line and return to step (5.2);
wherein step (6) comprises:
(6.1) establish a coordinate system with the upper-left corner of image T2 as the origin, x positive to the right and y positive downward; compute the intersections of all lines in the set Line with the y axis and store them in the set P1;
(6.2) take the line Ls corresponding to the minimum value in the set P1; this line is the gap line between this layer of non-ferrous metal ingots and the layer above;
(6.3) compute the midpoint D0 of the segment of Ls between the y axis and x = T2.Width;
(6.4) if necessary, offset D0 along the y axis by a distance d to obtain D1; this point is the labeling pixel point, and it is converted into a point in the robot coordinate system through hand-eye calibration.
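The labeling-point computation of steps (6.1)-(6.4) can be sketched as follows, assuming each gap line is delivered as a (ρ, θ) pair from the Hough stage; the function name and the numeric values are illustrative, not from the patent.

```python
import numpy as np

def labeling_point(lines, width, d=0.0):
    """Sketch of steps (6.1)-(6.4): `lines` are (rho, theta) pairs for the gap
    lines found in image T2; returns the labeling pixel point D1 = (x, y)."""
    # (6.1) y-axis intercepts: a line is rho = x*cos(t) + y*sin(t), so at x = 0
    # the intercept is y = rho / sin(t) (horizontal-ish lines have sin(t) != 0)
    intercepts = [rho / np.sin(t) for rho, t in lines]
    # (6.2) the line with the smallest intercept is the gap above this layer
    rho, t = lines[int(np.argmin(intercepts))]
    # (6.3) midpoint D0 of the segment of that line between x = 0 and x = width
    y0 = rho / np.sin(t)
    y1 = (rho - width * np.cos(t)) / np.sin(t)
    d0 = (width / 2.0, (y0 + y1) / 2.0)
    # (6.4) an optional offset d along the y axis gives D1
    return (d0[0], d0[1] + d)

# Two horizontal gap lines (theta = 90 deg) at y = 40 and y = 120 in a 640-wide image:
x, y = labeling_point([(40.0, np.pi / 2), (120.0, np.pi / 2)], width=640, d=10.0)
print(x, y)
```

The conversion of D1 into the robot coordinate system via hand-eye calibration is a separate transform (camera extrinsics) and is not shown here.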
The invention uses a labeling positioning algorithm to detect the labeling pixel point in image T2 and convert it into point D1 in the robot coordinate system. The algorithm comprises two steps: first, the pre-trained YOLOv5 model extracts the region of interest (ROI); second, an improved Hough line detection algorithm for horizontal-gap detection extracts the gap line, the labeling pixel point D0 is computed, and the point is converted into point D1 in the robot coordinate system. Around D1, n > 2 points are taken; combined with the distances returned by the laser displacement sensor, n three-dimensional coordinate points not on the same line are obtained, and plane fitting yields plane M1. The normal vector n of plane M1 is solved, the deflection angle δ is then solved, and the pose of the suction cup at the end of the mechanical arm is adjusted according to δ to complete the labeling operation. The method is thus better suited to labeling non-ferrous metals in a complex factory environment and greatly improves the efficiency and precision of labeling in actual work.
Also provided is a non-ferrous metal automatic labeling device based on image processing, comprising: a positioning device, a fixture device, an execution device, a pneumatic device and a printing device.
The positioning device detects the labeling position and is fixed to the execution device through a fixture. Once the labeling position is detected, the pneumatic device picks up the label through the execution device and moves to the labeling position to apply it. The fixture as a whole is a cubic shell whose bottom edges are flanged inward.
One side face of the fixture body has 4 holes for attaching the laser displacement sensor; the holes lie on the corner points of a rectangle of given width and height, and all 4 are tapped. The other side face has 4 holes each along its upper and lower sides; the two rows lie along two straight lines at given distances from the upper and lower edges, the first hole of each row is at a given distance from the side edge, and all 8 holes are tapped. A circular hole at the center of the head contour of the fixture body connects the suction cup, and 4 through holes on a circle concentric with it fix the suction cup.
Drawings
Fig. 1 is a schematic view of an automatic nonferrous metal labeling apparatus based on image processing according to the present invention.
Fig. 2 is a flow chart of the automatic labeling method of nonferrous metal based on image processing according to the present invention.
Fig. 3 is a schematic diagram of the improved Hough line detection for horizontal gap detection.
FIG. 4 is a schematic illustration of a plane fit to a non-ferrous metal plane.
Detailed Description
As shown in fig. 2, the non-ferrous metal automatic labeling method based on image processing includes the following steps:
(1) The mechanical arm drives the laser displacement sensor to find the positions of the upper and left edges of the first layer of non-ferrous metal; according to these positions the arm is moved to the upper-left corner point of the non-ferrous metal stack, and the current position is recorded as P1.
(2) From position P1 the arm moves down by the thickness H of m non-ferrous metal ingots to reach the m-th layer (this step only determines the approximate position and cannot directly serve as the basis for determining the labeling point; image processing is needed for accurate positioning). The arm then moves 3×H to the right, and a camera captures the picture T0 of this layer of non-ferrous metal.
(3) Using the pre-trained YOLOv5 model, the region of interest (ROI) is extracted from the captured picture T0, and the extracted picture is saved as T1.
(4) T1 is preprocessed to obtain image T2; the pipeline is: graying, median filtering, gray-level transformation and binarization.
(5) The improved Hough line detection algorithm for horizontal-gap detection extracts the gap lines between non-ferrous metal ingots from image T2.
(6) The labeling pixel point is computed and converted to point D1 in the robot coordinate system.
(7) Around point D1, n > 2 points are taken; combined with the distances returned by the laser displacement sensor, n three-dimensional coordinate points not on the same line are obtained, and plane fitting yields plane M1.
(8) The normal vector n of plane M1 is solved, and the deflection angle δ is then solved.
(9) The pose of the suction cup at the end of the mechanical arm is adjusted according to δ, completing the labeling operation;
wherein step (5) comprises:
(5.1) morphological processing is applied to T2 to suppress vertical lines and highlight horizontal-line features (labeling-point detection is based on the horizontal gaps between non-ferrous metal ingots);
(5.2) within a horizontal distance A/2 of the left edge of image T2, check whether edge points remain; if so, randomly pick an edge point as the starting point and set a rectangular frame a centered on it, with length A and width W; if none remain, the algorithm ends;
(5.3) in the (ρ, θ) parameter space, vote for the straight lines passing through the points in frame a; each (ρ, θ) pair corresponds to one straight line;
(5.4) check whether any line in frame a has a vote count greater than threshold thr1; if so, record that line as L(j) and go to step 5.5; if not, return to step 5.2; clear all points in frame a;
(5.5) translate the center of frame a by (A+B)/2 along the direction of line L(j) to obtain a rectangular frame b(i), with length B and width W (A >> B); set an accumulator acc(j) to record the number of line interruptions (the accumulator can be reused in each frame b);
(5.6) within frame b(i), vote for the (ρ, θ) point corresponding to line L(j) and record the number of added votes n(i) for L(j);
(5.7) check whether n(i) is less than threshold thr2; if so, the line is interrupted in this frame and acc(j) is incremented by 1; otherwise acc(j) is unchanged; clear all points in frame b(i);
(5.8) check whether the value of acc(j) is greater than threshold thr3; if so, the line is interrupted too often; abandon detection of this line and return to step 5.2;
(5.9) translate the center of frame b(i) by B along the direction of line L(j) to obtain a rectangular frame b(i+1);
(5.10) check whether the current frame b has left the horizontal extent of the image; if not, return to step 5.6; otherwise go to step 5.11; reset the accumulator acc(j);
(5.11) save line L(j) in the set Line and return to step 5.2;
wherein step (6) comprises:
(6.1) establish a coordinate system with the upper-left corner of image T2 as the origin, x positive to the right and y positive downward; compute the intersections of all lines in the set Line with the y axis and store them in the set P1;
(6.2) take the line Ls corresponding to the minimum value in the set P1; this line is the gap line between this layer of non-ferrous metal ingots and the layer above;
(6.3) compute the midpoint D0 of the segment of Ls between the y axis and x = T2.Width;
(6.4) if necessary, offset D0 along the y axis by a distance d to obtain D1; this point is the labeling pixel point, and it is converted into a point in the robot coordinate system through hand-eye calibration;
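Steps (5.5)-(5.10) can be illustrated with a deliberately simplified sketch that assumes the candidate line L(j) is horizontal, so each frame b(i) becomes a window sliding along one image row; the real algorithm votes in (ρ, θ) space rather than counting raw pixels, and all sizes and thresholds here are illustrative.

```python
import numpy as np

def scan_line(edges, row, start_x, box_len=8, half_w=2, thr2=2, thr3=3):
    """Simplified sketch of steps (5.5)-(5.10): walk frames of length box_len
    along a candidate horizontal line at `row`, counting frames whose vote
    count falls below thr2; return True if interruptions stay <= thr3."""
    h, w = edges.shape
    acc = 0                                    # acc(j): interruption counter
    x = start_x
    while x < w:                               # (5.10) stop when frame b leaves image
        box = edges[max(0, row - half_w):row + half_w + 1, x:x + box_len]
        votes = int(box.sum())                 # (5.6) votes for L(j) inside frame b(i)
        if votes < thr2:                       # (5.7) line interrupted in this frame
            acc += 1
        if acc > thr3:                         # (5.8) too many interruptions: abandon
            return False
        x += box_len                           # (5.9) translate to frame b(i+1)
    return True

# A line with one short break survives; a sparse dotted "line" is rejected.
img = np.zeros((20, 64), dtype=np.uint8)
img[10, :] = 1
img[10, 24:40] = 0                             # break spanning two frames
print(scan_line(img, row=10, start_x=0))
```

The frame-by-frame accumulator is what distinguishes a genuinely broken gap line (a few interrupted frames, still accepted) from scattered noise that a global Hough vote could mistake for a line.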
the invention uses a labeling positioning algorithm to align the graph T2Detecting the labeled pixel point and converting the pixel point into a point D under a robot coordinate system1The labeling positioning algorithm comprises two steps: first, graph T is mapped by using a previously trained Yolov5 model2Extracting ROI (region of interest) (the Yolov5 model is inaccurate in description of linear information, if a linear is detected by using other deep learning algorithms, pixel-level precision labeling needs to be carried out on a large amount of data, comprehensive consideration is given, and the method does not directly use deep learning to carry out linear detection); secondly, extracting a gap line by using an improved Hough line detection algorithm aiming at horizontal gap detection, and calculating a labeling pixel point D0Converting the point into a point D in a robot coordinate system1. At D1Taking n around the point>2 points are combined with the returning distance of the laser displacement sensor to respectively obtain n different positionsCarrying out plane fitting on three-dimensional coordinate points on the same line to obtain a plane M1To plane M1Solving normal vectors
Figure BDA0003384432610000081
And then solving the deflection angle delta, adjusting the position and posture of the sucker at the tail end of the mechanical arm according to the deflection angle delta, and completing labeling operation, so that the labeling machine is more suitable for labeling nonferrous metals in a complex factory environment, and the efficiency and the precision of labeling the nonferrous metals in actual work are greatly improved.
YOLOv5 is a model released by Ultralytics LLC in May 2020 and published on GitHub, at present the latest version of the YOLO algorithm family. Building on the overall layout of YOLOv3 and YOLOv4, YOLOv5 integrates many optimization strategies from recent work on convolutional neural networks; its performance is greatly improved over previous generations of YOLO, it occupies less space, and it is easier to deploy on embedded devices. The algorithm regresses the region to be detected in a single stage, so detection is fast. In addition, before YOLOv5 extracts the region of interest from picture T0, a certain number of non-ferrous metal stack images must be collected and suitable characteristic regions must be annotated. The annotated images are divided into a training set and a validation set, and the training-set composition, number of training epochs and other settings are tuned repeatedly to obtain the optimal parameters and model.
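Whatever detector produces the ROI box, the step from picture T0 to picture T1 is a crop in pixel coordinates. The sketch below shows only that post-processing step; the box values are hypothetical stand-ins for a YOLOv5 detection, and `crop_roi` is not a function from the patent or from Ultralytics.

```python
import numpy as np

def crop_roi(img, box, pad=0):
    """Crop picture T0 down to T1 given one detection box (x1, y1, x2, y2) in
    pixels. `box` would come from a YOLOv5 model trained on stack images; the
    value used below is purely illustrative."""
    x1, y1, x2, y2 = box
    h, w = img.shape[:2]
    # clamp the (optionally padded) box to the image bounds
    x1, y1 = max(0, int(x1) - pad), max(0, int(y1) - pad)
    x2, y2 = min(w, int(x2) + pad), min(h, int(y2) + pad)
    return img[y1:y2, x1:x2]

t0 = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for the captured frame
t1 = crop_roi(t0, (100, 50, 300, 200), pad=5)
print(t1.shape)
```

A small padding margin is a common practical choice so that the gap lines at the edge of the detected region are not clipped before the Hough stage.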
Preferably, in step (1), the mechanical arm moves from the outside of the non-ferrous metal stack inward. Using the detected data as the signal, the laser displacement sensor fixed at the end of the arm finds the upper and left edges of the first layer of non-ferrous metal; the arm is then moved to the upper-left corner point of the stack, and the current position is recorded as P1.
Preferably, in step (2), the mechanical arm moves down from position P1 by the thickness H of m non-ferrous metal ingots to reach the m-th layer, then moves 3×H to the right, and a camera captures the picture T0 of this layer of non-ferrous metal. The camera is fastened to the front panel of the camera frame with screws; the rear panel of the frame has a through hole and a slide groove for connecting the frame to the fixture, so that the position of the frame relative to the lamp holder can be adjusted conveniently to find the best supplementary lighting. On the same side of the fixture, in front of the camera frame, a dome-source supplementary lamp is attached to the fixture through a lamp clamp and provides a stable light source for the camera; at the designated position the arm captures the picture T0 of this layer of non-ferrous metal through the CCD camera.
Preferably, in step (3), according to the actual image-processing area, the ROI is extracted from the captured picture T0 with the pre-trained YOLOv5 model, and the extracted picture is saved as T1.
Preferably, in step (4), the extracted color image of the non-ferrous metal is converted to a grayscale image; obvious noise in the grayscale image is removed by median filtering while the details of the image contours are preserved and highlighted; to counter the effect of strong midday light in the factory on image contrast, the image quality is improved by a gray-level transformation; and the non-ferrous metal is separated from the background by adaptive binarization.
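The four-stage preprocessing just described can be sketched end to end. This is a pure-NumPy stand-in for the usual OpenCV calls (`cv2.cvtColor`, `cv2.medianBlur`, `cv2.adaptiveThreshold`); in particular, the global-mean threshold at the end is a simplification of the adaptive binarization the patent specifies.

```python
import numpy as np

def preprocess(t1_rgb, k=3):
    """Sketch of step (4): graying, median filtering, gray-level stretch,
    binarization. Returns a 0/255 binary image."""
    # graying: ITU-R BT.601 luma weights
    gray = t1_rgb @ np.array([0.299, 0.587, 0.114])
    # median filtering over a k x k neighborhood (edges handled by padding)
    p = k // 2
    padded = np.pad(gray, p, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    med = np.median(windows, axis=(-2, -1))
    # gray-level transformation: linear stretch to the full [0, 255] range
    lo, hi = med.min(), med.max()
    stretched = (med - lo) / max(hi - lo, 1e-9) * 255.0
    # binarization: simplified global-mean threshold (patent uses adaptive)
    return (stretched > stretched.mean()).astype(np.uint8) * 255

t1 = np.zeros((16, 16, 3), dtype=np.uint8)
t1[8:, :, :] = 200                       # bright lower half, dark upper half
t2 = preprocess(t1)
print(t2[4, 4], t2[12, 12])
```

The ordering matters: median filtering before the stretch keeps isolated bright noise pixels from dominating the [min, max] range used by the gray-level transformation.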
Preferably, in step (5), to detect the horizontal gap lines between non-ferrous metal ingots more efficiently, two improvements are made to Hough line detection: first, vertical-line suppression is added; second, an alternating two-frame (a and b) detection mechanism is introduced to compensate for the inability of the Hough line detection algorithm to avoid broken-line interference.
Preferably, in step (7), the mechanical arm drives the laser displacement sensor to n positions around D1 and records their three-dimensional coordinates. The general form of the plane equation is Ax + By + Cz + D = 0; it is converted to ax + by + c = z, where a = -A/C, b = -B/C and c = -D/C. The plane is fitted by least squares; the corresponding least-squares system is AX = b, where A is the n×3 matrix whose i-th row is (x_i, y_i, 1), X = (a, b, c)^T and b = (z_1, …, z_n)^T. Solving the normal equation X = (A^T A)^(-1) A^T b gives the values of a, b and c, and hence the equation of plane M1.
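The least-squares fit above is a few lines in NumPy; `np.linalg.lstsq` solves the same normal-equation problem in a numerically safer way than forming (AᵀA)⁻¹ explicitly. The sample points are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to n >= 3 non-collinear 3-D
    points, as in step (7); returns (a, b, c)."""
    pts = np.asarray(points, dtype=float)
    # design matrix A with rows (x_i, y_i, 1), right-hand side b = z values
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    rhs = pts[:, 2]
    X, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # solves A X = b
    return X

# Points sampled exactly from the plane z = 0.5*x - 0.25*y + 3:
pts = [(0, 0, 3.0), (1, 0, 3.5), (0, 1, 2.75), (2, 2, 3.5)]
a, b, c = fit_plane(pts)
print(np.round([a, b, c], 6))
```

With the sensor's n > 2 non-collinear points the system is overdetermined for n > 3, and the least-squares solution averages out measurement noise in the returned distances.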
Preferably, in step (8), from the general equation Ax + By + Cz + D = 0 of plane M1, the normal vector of M1 is known to be n = (A, B, C), and the deflection angle δ between the normal vector and the horizontal plane is obtained.
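The angle between the normal vector n = (A, B, C) and the horizontal plane is the complement of its angle to the vertical axis, which gives the short formula below; the exact convention (angle to the plane vs. tilt of the plane) is an assumption, since the patent does not spell it out.

```python
import numpy as np

def deflection_angle(n):
    """Step (8) sketch: angle delta (degrees) between normal vector
    n = (A, B, C) and the horizontal plane; 90 deg means the fitted plane M1
    is exactly horizontal (normal points straight up)."""
    A, B, C = n
    # sin(delta) is the normal's vertical component over its length
    return float(np.degrees(np.arcsin(abs(C) / np.linalg.norm([A, B, C]))))

print(deflection_angle((0.0, 0.0, 1.0)), deflection_angle((1.0, 0.0, 1.0)))
```

For the fitted form z = ax + by + c, the corresponding normal vector is (a, b, -1), so the same function applies directly to the coefficients from step (7).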
Preferably, in step (9), the current pose of the suction cup at the end of the mechanical arm is read and its angle δ0 to the plane is recorded; this is compared with the deflection angle δ between the normal vector of M1 and the horizontal plane, the pose of the suction cup is adjusted accordingly, and the current RPY coordinate is recorded as rpy. The two-position five-way electromagnetic pneumatic valve is opened; supplied by the air pump, the pneumatic element delivers air to the vacuum suction cup through a one-way valve, and the mechanical arm drives the suction cup to pick up the label and return to the position with coordinate rpy. The valve is closed once the labeling action is finished.
Also provided is a non-ferrous metal automatic labeling device based on image processing, comprising: a positioning device, a fixture device, an execution device, a pneumatic device and a printing device.
The positioning device detects the labeling position and consists mainly of a camera, a laser displacement sensor and a supplementary lamp. The non-ferrous metal stack is pre-positioned by the laser displacement sensor; the camera then acquires plane information of the ingot, and the captured image is processed with the labeling positioning algorithm to obtain the labeling-point coordinates of the ingot. The positioning device is fixed to the execution device through the fixture; once the labeling position is detected, the pneumatic device picks up the label through the execution device and moves to the labeling position to apply it. The pneumatic device consists mainly of an air pump, an electromagnetic directional valve, pneumatic connectors and a vacuum suction cup; the directional valve, connected to the air pump and the suction cup by piping, controls the on-off of the whole air path and supplies positive and negative air pressure to the labeling system to pick up and apply labels. The fixture device connects and fixes the labeling equipment; the fixture as a whole is a cubic shell whose bottom edges are flanged inward.
One side face of the fixture body has 4 holes for attaching the laser displacement sensor; the holes lie on the corner points of a rectangle of given width and height, and all 4 are tapped. The other side face has 4 holes each along its upper and lower sides; the two rows lie along two straight lines at given distances from the upper and lower edges, the first hole of each row is at a given distance from the side edge, and all 8 holes are tapped. A circular hole at the center of the head contour of the fixture body connects the suction cup, and 4 through holes on a circle concentric with it fix the suction cup. The printing device comprises an industrial printer, a label-supply mechanism and an automatic peeling mechanism; the host computer sends printing information and a print instruction to the industrial printer to print a label. The label paper is mounted on the supply mechanism and clamped by a locking device, and a motor driven through a synchronous belt pulls the label tape in a straight line. As the tape passes through the peeling mechanism, the label overcomes its adhesion to the backing paper and is peeled off. The execution device is the core of the system and comprises a mechanical arm, a control cabinet and a teach pendant; the host computer sends instructions to the control cabinet to make the arm pick up the label, locate the labeling point, apply the label and so on, finally sticking the label to the surface of the non-ferrous metal ingot.
Preferably, in the positioning device, the camera frame and the lamp holder are arranged in parallel on the same side of the clamp; the camera and the light supplement lamp are fixed on the camera frame and the lamp holder respectively, with their contour centers on a straight line parallel to the side edge of the clamp; the laser displacement sensor is fixed on the other side of the positioning device at a certain distance from the edge of the clamp;
the 30 air suction holes on the surface of the suction cup at the end of the pneumatic device are distributed in an array; a vacuum conversion device is arranged inside the suction cup and is fixed on the head of the fixture.
The present invention is described in more detail below. The automatic labeling method of the nonferrous metal based on image processing comprises the following steps:
step 1: the mechanical arm drives the laser displacement sensor to find the positions of the upper edge and the left edge of the first layer of the non-ferrous metal stack; according to these positions, the mechanical arm is controlled to move to the upper-left corner point of the stack, and the current position is recorded as P1. The non-ferrous metal automatic labeling robot is shown overall in figure 1 and comprises two parts in the end-effector: a labeling position identification part and an execution part. The labeling position identification part comprises a camera 2, a lamp bracket 3, a dome light source light supplement lamp 4 and a laser displacement sensor 7. The execution part comprises a suction cup 5. Both parts are fixedly connected to the fixture 6. Reference numeral 1 denotes a camera fixture mount.
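The two-pass edge search of step 1 can be sketched as a simple simulation. The sensor model, step size and function names below are illustrative assumptions, not the patented implementation:

```python
def find_edge(sensor_hit, start, step, max_steps=1000):
    """Advance from `start` in increments of `step` until the (mocked)
    laser displacement sensor reports a hit; return that position."""
    pos = start
    for _ in range(max_steps):
        if sensor_hit(pos):
            return pos
        pos = (pos[0] + step[0], pos[1] + step[1])
    return None  # edge not found within max_steps

def locate_corner(sensor_hit, top_start, left_start, step=1.0):
    """Two passes as in step 1: top-to-bottom for the upper edge,
    left-to-right for the left edge; combine into corner point P1."""
    top = find_edge(sensor_hit, top_start, (0.0, -step))
    left = find_edge(sensor_hit, left_start, (step, 0.0))
    if top is None or left is None:
        return None
    return (left[0], top[1])  # upper-left corner P1
```

For a stack modeled as occupying x >= 5, y <= 10, the two passes meet at the corner (5.0, 10.0).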
Step 2: from position P1, the mechanical arm moves downward by m times the ingot thickness H to reach the m-th layer (this step only determines an approximate position and cannot directly serve as the basis for determining the labeling point; image processing is needed for accurate positioning). The mechanical arm then moves to the right by a distance of 3×H, and the camera collects a picture T0 of this layer of non-ferrous metal. The camera is fastened to the front panel of the camera frame by screws; the rear panel of the camera frame is provided with a through hole and a sliding groove for connecting the camera frame to the clamp, so that the position of the camera frame relative to the lamp holder can be adjusted and the best light-supplementing effect found. On the same side of the fixture, in front of the camera frame, the dome light source light supplement lamp is connected to the fixture through a lighting fixture and provides a stable light source for the camera; the mechanical arm collects the non-ferrous metal picture T0 at the designated position through a CCD camera.
Step 3: using the pre-trained YOLOv5 model, the ROI is extracted from the collected picture T0, and the extracted picture is saved as T1. (Before the YOLOv5 model for ROI extraction is trained, a certain number of pictures shot in the field environment need to be labeled according to the characteristic parts of the regions of interest of different non-ferrous metal stacks, and a suitable number of pictures are selected as the validation data set.)
Step 4: image preprocessing is performed on T1 to obtain a picture T2. The processing flow comprises image graying, image median filtering, gray-level transformation and image binarization. The extracted color non-ferrous metal image is converted to a gray-scale image; obvious noise points are removed through median filtering while the details of the image contour are preserved and highlighted. To avoid the influence of strong midday light in the factory on the contrast of the non-ferrous metal image, the image quality is improved through gray-level transformation.
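The step-4 pipeline can be sketched with plain NumPy; the hand-rolled 3×3 median filter and fixed threshold below stand in for whatever kernel sizes and threshold selection the authors actually used, which are not specified:

```python
import numpy as np

def preprocess(img_rgb, thresh=128):
    """Step-4 pipeline sketch: graying, 3x3 median filter,
    gray-level stretch, fixed-threshold binarization."""
    # graying with standard luminance weights
    gray = img_rgb @ np.array([0.299, 0.587, 0.114])
    # 3x3 median filter (edge pixels use replicated borders)
    padded = np.pad(gray, 1, mode='edge')
    h, w = gray.shape
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    med = np.median(stack, axis=0)
    # gray-level transformation: linear stretch to [0, 255]
    lo, hi = med.min(), med.max()
    stretched = (med - lo) / max(hi - lo, 1e-9) * 255.0
    # binarization
    return (stretched >= thresh).astype(np.uint8) * 255
```

A real deployment would tune the filter kernel and use an adaptive threshold (e.g. Otsu) rather than a fixed one.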
Step 5: the improved Hough line detection algorithm for horizontal gap detection is used to detect the labeling pixel point in T2, which is later converted to the point D1 in the robot coordinate system. T2 is processed morphologically to suppress vertical lines and highlight horizontal line features (labeling point detection is based on the horizontal gaps between non-ferrous metal ingots). As shown in fig. 3, it is judged whether edge points remain within a horizontal distance of A/2 from the left edge of image T2; if so, an edge point is extracted at random as a starting point, and a rectangular frame a is set with that point as the center, A as the length and W as the width; if not, the algorithm ends. The straight lines passing through the points in frame a are voted for in the (ρ, θ) parameter space, where each (ρ, θ) pair corresponds to one straight line. It is then judged whether any vote count in frame a exceeds the threshold thr1; if not, the algorithm returns to the previous step, checks for edge points within A/2 of the left edge of the image, and resets frame a. If so, the winning line is recorded as L(j), the process continues to the next step, and all points in frame a are cleared. The center of frame a is translated by (A+B)/2 along the direction of line L(j) to obtain a rectangular frame b(i), where B is the length of frame b(i) and W the width (A >> B), and an accumulator acc(j) is set up to record the number of line interruptions (this accumulator is reused in each b frame).
Votes are cast for the (ρ, θ) point corresponding to line L(j) within frame b(i), and the number of added votes n(i) for L(j) is recorded. If n(i) is less than the threshold thr2, the line is broken within this frame and acc(j) is incremented by 1; otherwise acc(j) is unchanged. All points in frame b(i) are then cleared. Next, it is judged whether acc(j) exceeds the threshold thr3; if so, the line is interrupted too often, its detection is abandoned, and the algorithm returns to checking for edge points within A/2 of the left edge of T2 and resetting frame a. If not, the process continues: the center of frame b(i) is translated by B along the direction of line L(j) to obtain a rectangular frame b(i+1), and it is judged whether frame b(i+1) has moved out of the horizontal range of the image; if not, voting for line L(j) resumes in the new frame. If it has left the horizontal range, the accumulator acc(j) is cleared and the line L(j) is saved in the set Line. The process repeats until the edge points within A/2 of the left edge of the image are exhausted.
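A much-simplified sketch of the interruption counting in steps (5.6)-(5.8): the full algorithm votes in (ρ, θ) space and follows L(j) at an arbitrary angle, whereas this version assumes a purely horizontal candidate row and counts windows of length B whose edge-pixel votes fall below thr2. All names and threshold values are illustrative assumptions:

```python
import numpy as np

def count_breaks(edges, y, B=10, thr2=3):
    """Walk windows of length B along the horizontal line at row `y`
    of a binary edge image; a window with fewer than thr2 edge
    pixels counts as one interruption (accumulator acc(j))."""
    h, w = edges.shape
    acc = 0
    for x0 in range(0, w, B):
        votes = int(edges[y, x0:x0 + B].sum())
        if votes < thr2:
            acc += 1
    return acc

def accept_line(edges, y, B=10, thr2=3, thr3=2):
    """Keep the line only if its interruption count does not
    exceed thr3, mirroring the test in step (5.8)."""
    return count_breaks(edges, y, B, thr2) <= thr3
```

Tolerating a few interruptions (thr3 > 0) is what lets the detector keep gap lines that are partially occluded by rough ingot edges, while still rejecting spurious lines with many gaps.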
Step 6: the labeling pixel point is calculated and converted to the point D1 in the robot coordinate system. A coordinate system is established with the upper-left corner of T2 as the origin, rightward as the positive x axis and downward as the positive y axis. The intersection points of all lines in the set Line with the y axis are calculated and stored in the set P1. The line Ls corresponding to the minimum value in P1 is taken; this line is the gap line between this layer of non-ferrous metal ingots and the layer above. The midpoint D0 of the segment of Ls between the y axis and x = T2.Width is calculated. As needed, D0 is offset by a distance d along the y axis to obtain D1, the labeling pixel point, which is converted to a point in the robot coordinate system through hand-eye calibration.
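Step 6 reduces to a few lines once the gap lines are available; representing each line as a (slope, intercept) pair and the offset d are assumptions made here for illustration:

```python
def labeling_point(lines, width, d=20.0):
    """Step-6 sketch. `lines` are (slope, intercept) pairs in the
    image frame (origin top-left, x right, y down). Pick the line
    with the smallest y-axis intercept (the gap above this layer),
    take the midpoint of its segment between x=0 and x=width, then
    offset by d along +y to land on the ingot face below the gap."""
    slope, intercept = min(lines, key=lambda l: l[1])
    x_mid = width / 2.0
    y_mid = intercept + slope * x_mid
    return (x_mid, y_mid + d)
```

The returned pixel coordinates would then be mapped to the robot frame through the hand-eye calibration, which is not modeled here.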
Step 7: n > 2 points are taken around D1 and, combined with the distances returned by the laser displacement sensor, n three-dimensional coordinate points not on the same line are obtained; plane fitting yields the plane M1. Specifically, the mechanical arm drives the laser displacement sensor to n positions around D1 and records their three-dimensional coordinates. The general form of the plane equation is A·x + B·y + C·z + D = 0, which is converted to a·x + b·y + c = z, where:
a = -A/C, b = -B/C, c = -D/C (assuming C ≠ 0).
For the n points (x_i, y_i, z_i) obtained, the least square method minimizes

S = Σ_{i=1..n} (a·x_i + b·y_i + c - z_i)²,

that is, the partial derivatives of S with respect to a, b and c are set to zero and the resulting equations are solved simultaneously. In this method the normal equation is used; the corresponding least-squares matrix form is A·X = b, where A is the n×3 matrix with rows (x_i, y_i, 1), X = (a, b, c)^T, and b = (z_1, ..., z_n)^T.

Using the normal equation X = (AᵀA)⁻¹Aᵀb, the values of a, b and c are solved to obtain the equation of plane M1.
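The normal-equation solution X = (AᵀA)⁻¹Aᵀb can be written directly in NumPy; in production code `numpy.linalg.lstsq` would be the more numerically robust choice:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to n >= 3 non-collinear
    3-D points, via the normal equation X = (A^T A)^-1 A^T b."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = pts[:, 2]
    X = np.linalg.inv(A.T @ A) @ A.T @ b
    return X  # coefficients (a, b, c)
```

For points sampled exactly on z = 2x + 3y + 1 the fit recovers (2, 3, 1); with noisy sensor readings it returns the least-squares best fit.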
Step 8: the normal vector n of the plane M1 is solved, and then the deflection angle δ. From the general equation A·x + B·y + C·z + D = 0 of M1 obtained above, the normal vector is n = (A, B, C), and the deflection angle δ between the normal vector and the horizontal plane is obtained as shown in fig. 4.
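A sketch of the angle computation, under an assumed convention: for the fitted form z = a·x + b·y + c the normal can be taken as (a, b, -1), and the tilt of the plane from horizontal equals the angle between this normal and the vertical axis (0, 0, 1). The patent's δ in fig. 4, measured between the normal vector and the horizontal plane, is the complement of this value:

```python
import numpy as np

def plane_tilt_deg(normal):
    """Angle (degrees) between the fitted plane and the horizontal
    plane, computed as the angle between the plane normal and the
    vertical axis (0, 0, 1)."""
    n = np.asarray(normal, dtype=float)
    cos_t = abs(n[2]) / np.linalg.norm(n)
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
```

A horizontal plane (normal along z) gives 0°, and the plane z = x (normal (1, 0, -1)) gives 45°, which is the correction the suction-cup pose would need.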
Step 9: the pose of the suction cup at the end of the mechanical arm is adjusted according to the deflection angle δ to complete the labeling operation. The pose of the current end suction cup is read, and its included angle δ0 with the plane is recorded; δ0 is compared with the deflection angle δ between the M1 normal vector and the horizontal plane, the pose of the end suction cup is adjusted accordingly, and the current RPY coordinate is recorded as rpy. The two-position five-way electromagnetic pneumatic valve is opened; the air pump supplies air, which is delivered through a one-way valve and the pneumatic elements into the vacuum suction cup, and the mechanical arm drives the suction cup to pick up the label and return to the position with coordinate rpy for labeling. The valve is closed once the labeling action is finished.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.

Claims (10)

1. A non-ferrous metal automatic labeling method based on image processing, characterized in that it comprises the following steps:
(1) the mechanical arm drives the laser displacement sensor to find the positions of the upper edge and the left edge of the first layer of the non-ferrous metal stack; the mechanical arm is controlled to move to the upper-left corner point of the stack according to these positions, and the current position is recorded as P1;
(2) from position P1, the mechanical arm moves downward by m times the ingot thickness H to the m-th layer, then moves 3×H to the right, and the camera collects the non-ferrous metal picture T0 of this layer;
(3) according to the pre-trained YOLOv5 model, the ROI is extracted from the collected picture T0, and the extracted picture is saved as T1;
(4) image preprocessing is performed on T1 to obtain a picture T2; the processing flow comprises: image graying, image median filtering, gray-level transformation and image binarization;
(5) the improved Hough line detection algorithm for horizontal gap detection is used on T2 to extract the gap lines between the non-ferrous metal ingots;
(6) the labeling pixel point is calculated and converted to the point D1 in the robot coordinate system;
(7) n > 2 points are taken around D1 and, combined with the distances returned by the laser displacement sensor, n three-dimensional coordinate points not on the same line are obtained; plane fitting yields the plane M1;
(8) the normal vector of the plane M1 is solved, and the deflection angle δ is further solved;
(9) the pose of the suction cup at the end of the mechanical arm is adjusted according to the deflection angle δ to complete the labeling operation;
wherein the step (5) comprises:
(5.1) T2 is processed morphologically to suppress vertical lines and highlight horizontal line features; labeling point detection is based on the horizontal gaps between non-ferrous metal ingots;
(5.2) it is judged whether edge points remain within a horizontal distance of A/2 from the left edge of image T2; if so, an edge point is extracted at random as a starting point, and a rectangular frame a is set with that point as the center, A as the length and W as the width; if not, step (6) is executed;
(5.3) the straight lines passing through the points in frame a are voted for in the (ρ, θ) parameter space, where each (ρ, θ) pair corresponds to one straight line;
(5.4) it is judged whether any vote count in frame a exceeds the threshold thr1; if so, the line is recorded as L(j) and step (5.5) is performed; if not, the method returns to step (5.2); all points in frame a are cleared;
(5.5) the center of frame a is translated by (A+B)/2 along the direction of line L(j) to obtain a rectangular frame b(i), where B is the length of frame b(i), W the width, and A >> B; an accumulator acc(j) is set up to record the number of line interruptions, reused in each b frame;
(5.6) votes are cast for the (ρ, θ) point corresponding to line L(j) within frame b(i), and the number of added votes n(i) for L(j) is recorded;
(5.7) it is judged whether n(i) is less than the threshold thr2; if so, the line is broken within this frame and acc(j) is incremented by 1; otherwise acc(j) is unchanged; all points in frame b(i) are cleared;
(5.8) it is judged whether acc(j) exceeds the threshold thr3; if so, the line is interrupted too often, its detection is abandoned, and the method returns to step (5.2);
(5.9) the center of frame b(i) is translated by B along the direction of line L(j) to obtain a rectangular frame b(i+1);
(5.10) it is judged whether the current b frame has left the horizontal range of the image; if not, the method returns to step (5.6); otherwise step (5.11) is performed and the accumulator acc(j) is cleared;
(5.11) the line L(j) is saved in the set Line, and the method returns to step (5.2);
wherein the step (6) comprises:
(6.1) a coordinate system is established with the upper-left corner of T2 as the origin, rightward as the positive x axis and downward as the positive y axis; the intersection points of all lines in the set Line with the y axis are calculated and stored in the set P1;
(6.2) the line Ls corresponding to the minimum value in the set P1 is taken; this line is the gap line between this layer of non-ferrous metal ingots and the layer above;
(6.3) the midpoint D0 of the segment of Ls between the y axis and x = T2.Width is calculated;
(6.4) as needed, D0 is offset by a distance d along the y axis to obtain D1, the labeling pixel point, which is converted to a point in the robot coordinate system through hand-eye calibration.
2. The automatic labeling method for non-ferrous metals based on image processing according to claim 1, characterized in that: in step (1), the mechanical arm moves from the outside of the non-ferrous metal stack inward, using the data detected by the laser displacement sensor fixed at its end as a signal; when data is detected, the mechanical arm stops moving and records the current point. Following this principle, the mechanical arm moves from top to bottom and then from left to right; after these two passes the positions of the upper edge and the left edge of the first layer are found, the mechanical arm is controlled to move to the upper-left corner point of the stack accordingly, and the current position is recorded as P1.
3. The automatic labeling method for non-ferrous metals based on image processing according to claim 2, characterized in that: in step (2), from position P1, the mechanical arm moves downward by m times the ingot thickness H to the m-th layer, then moves 3×H to the right, and the camera collects the non-ferrous metal picture T0 of this layer; the camera is fastened to the front panel of the camera frame by screws, and the rear panel of the camera frame is provided with a through hole and a sliding groove for connecting the camera frame to the clamp, so that the position of the camera frame relative to the lamp bracket can be adjusted and the best light-supplementing effect found; on the same side of the fixture, in front of the camera frame, the dome light supplement lamp is connected to the fixture through a lighting fixture and provides a stable light source for the camera; the mechanical arm collects the non-ferrous metal picture T0 of this layer at the designated position through a CCD camera.
4. The automatic labeling method for non-ferrous metals based on image processing according to claim 3, characterized in that: in step (3), according to the actual image processing area, the ROI is extracted from the collected picture T0 using the pre-trained YOLOv5 model, and the extracted picture is saved as T1.
5. The automatic labeling method for non-ferrous metals based on image processing according to claim 4, characterized in that: in step (4), the extracted color non-ferrous metal image is processed into a gray-scale image, obvious noise points in the gray-scale image are removed through median filtering, and the details of the image contour are preserved and highlighted; to avoid the influence of strong midday light in the factory on the contrast of the non-ferrous metal image, the image quality is improved through gray-level transformation;
in step (5), by adjusting the threshold thr3, straight lines with different numbers of interruptions are eliminated.
6. The automatic labeling method for non-ferrous metals based on image processing according to claim 5, characterized in that: in the step (6), the mechanical arm drives the laser displacement sensor to n positions around D1 and records their three-dimensional coordinates; the general form of the plane equation is A·x + B·y + C·z + D = 0, which is converted to a·x + b·y + c = z;
the plane is fitted by the least square method; the corresponding least-squares matrix form is A·X = b, where A is the n×3 matrix with rows (x_i, y_i, 1), X = (a, b, c)^T, and b = (z_1, ..., z_n)^T;
using the normal equation X = (AᵀA)⁻¹Aᵀb, the values of a, b and c are solved to obtain the equation of plane M1.
7. The automatic labeling method for non-ferrous metals based on image processing according to claim 6, characterized in that: in the step (7), according to the general equation A·x + B·y + C·z + D = 0 of the plane M1, the normal vector of M1 is (A, B, C), and the deflection angle δ between the normal vector and the horizontal plane is obtained.
8. The automatic labeling method for non-ferrous metals based on image processing according to claim 7, characterized in that: in the step (8), the pose of the current suction cup at the end of the mechanical arm is read, and its included angle δ0 with the plane is recorded; δ0 is compared with the deflection angle δ between the M1 normal vector and the horizontal plane, the pose of the end suction cup is adjusted accordingly, and the current RPY coordinate is recorded as rpy; the two-position five-way electromagnetic pneumatic valve is opened, the air pump supplies air, which is delivered through a one-way valve and the pneumatic elements into the vacuum suction cup, and the mechanical arm drives the suction cup to pick up the label and return to the position with coordinate rpy; the valve is closed once the labeling action is finished.
9. A non-ferrous metal automatic labeling device based on image processing, characterized in that it comprises: a positioning device, a fixture device, an execution device, a pneumatic device and a printing device;
the positioning device detects a labeling position, the positioning device is fixedly connected to the executing device through a clamp, and after the labeling position is detected, the pneumatic device absorbs the label through the executing device and moves to the labeling position for labeling; the whole fixture is a cubic shell, and the periphery of the bottom of the fixture is flanged inwards;
four holes are formed in one side face of the clamp body for connecting the laser displacement sensor; they are distributed on the corner points of a rectangle of certain width and height, and all four holes are tapped; four holes each are formed in the upper and lower parts of the other side face, distributed along two straight lines at a certain distance from the upper and lower edges respectively, with the first hole of each row a certain distance from the side edge, and all eight holes tapped; a circular hole is formed in the center of the head contour of the clamp body for connecting the suction cup, with four through holes on a concentric circle for fixing it.
10. The automatic non-ferrous metal labeling device based on image processing according to claim 9, characterized in that: in the positioning device, the camera frame and the lamp holder are arranged in parallel on the same side of the clamp; the camera and the light supplement lamp are fixed on the camera frame and the lamp holder respectively, with their contour centers on a straight line parallel to the side edge of the clamp; the laser displacement sensor is fixed on the other side of the positioning device at a certain distance from the side edge of the clamp;
the 30 air suction holes on the surface of the suction cup at the end of the pneumatic device are distributed in an array; a vacuum conversion device is arranged inside the suction cup and is fixed on the head of the fixture.
CN202111461039.XA 2021-11-30 2021-11-30 Non-ferrous metal automatic labeling method and device based on image processing Withdrawn CN114104453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111461039.XA CN114104453A (en) 2021-11-30 2021-11-30 Non-ferrous metal automatic labeling method and device based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111461039.XA CN114104453A (en) 2021-11-30 2021-11-30 Non-ferrous metal automatic labeling method and device based on image processing

Publications (1)

Publication Number Publication Date
CN114104453A true CN114104453A (en) 2022-03-01

Family

ID=80365713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111461039.XA Withdrawn CN114104453A (en) 2021-11-30 2021-11-30 Non-ferrous metal automatic labeling method and device based on image processing

Country Status (1)

Country Link
CN (1) CN114104453A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114851160A (en) * 2022-05-24 2022-08-05 国网上海市电力公司 Mechanical arm control method for mobile robot

Similar Documents

Publication Publication Date Title
CN108399639B (en) Rapid automatic grabbing and placing method based on deep learning
CN104626169B (en) Robot part grabbing method based on vision and mechanical comprehensive positioning
CN110580725A (en) Box sorting method and system based on RGB-D camera
CN106044570B (en) It is a kind of that automatic identification equipment and method are hung using the coil of strip of machine vision
CN107009358B (en) Single-camera-based robot disordered grabbing device and method
CN110293559B (en) Installation method for automatically identifying, positioning and aligning
CN112529858A (en) Welding seam image processing method based on machine vision
CN111761575B (en) Workpiece, grabbing method thereof and production line
CN110211183B (en) Multi-target positioning system based on single-imaging large-view-field LED lens mounting
CN111562791A (en) System and method for identifying visual auxiliary landing of unmanned aerial vehicle cooperative target
CN114758236A (en) Non-specific shape object identification, positioning and manipulator grabbing system and method
CN113418933B (en) Flying shooting visual imaging detection system and method for detecting large-size object
CN114104453A (en) Non-ferrous metal automatic labeling method and device based on image processing
CN117689717B (en) Ground badminton pose detection method for robot pickup
CN111768369B (en) Steel plate corner point and edge point positioning method, workpiece grabbing method and production line
US20240051146A1 (en) Autonomous solar installation using artificial intelligence
CN113021391A (en) Integrated vision robot clamping jaw and using method thereof
CN210072415U (en) System for unmanned aerial vehicle cooperation target recognition vision assists landing
CN113495073A (en) Auto-focus function for vision inspection system
CN213154395U (en) Device for measuring rubber edge rubber line of shoe brush
CN115520479A (en) Automatic labeling process
TW202331655A (en) Label integrity self-adaptive detection method and system
CN113112541A (en) Silkworm pupa body pose measuring and calculating method and system based on image processing
CN113715935A (en) Automatic assembling system and automatic assembling method for automobile windshield
CN111973134A (en) Method for aligning test body to channel to be tested based on vision, navigation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220301