CN114570674A - Automatic sorting method and device based on height sensor and readable medium

Info

Publication number
CN114570674A
Authority
CN
China
Prior art keywords
grabbing
height
image
pose
pixel value
Prior art date
Legal status
Pending
Application number
CN202210057346.XA
Other languages
Chinese (zh)
Inventor
曹礼禧
杨建红
张宝裕
王英俊
毕雪涛
庄汉强
黄文景
黄骁明
陈海生
Current Assignee
Fujian South Highway Machinery Co Ltd
Original Assignee
Fujian South Highway Machinery Co Ltd
Priority date
Filing date
Publication date
Application filed by Fujian South Highway Machinery Co Ltd
Priority to CN202210057346.XA (published as CN114570674A)
Priority to PCT/CN2022/084339 (published as WO2023137871A1)
Publication of CN114570674A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B07: SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C: POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00: Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36: Sorting apparatus characterised by the means used for distribution
    • B07C5/361: Processing or control devices therefor, e.g. escort memory
    • B07C5/362: Separating or distributor mechanisms
    • B07C2501/00: Sorting according to a characteristic or feature of the articles or material to be sorted
    • B07C2501/0063: Using robots

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an automatic sorting method, device and readable medium based on a height sensor. A height image and a color image of the same sorting area are acquired; the mask and type of each object on the color image are identified through an instance segmentation model, the minimum rectangular frame surrounding the object is obtained from the mask, and the center point, length, width and deflection angle of the minimum rectangular frame are derived; the height image is processed according to different height thresholds to obtain a processed height image; the area of the mask from the color image is compared with the area of the object at the corresponding position on the processed height image to obtain an area ratio, and the area ratio determines whether a grabbing pose for the mechanical claw can be searched for; a grabbing pose search is then performed on the processed height image to determine the grabbing pose of the mechanical claw, comprising a grabbing center point, a grabbing width and a grabbing angle. The invention increases the probability of a successful grab under working conditions such as high object distribution density and high stacking rate, and improves sorting efficiency.

Description

Automatic sorting method and device based on height sensor and readable medium
Technical Field
The invention relates to the field of automatic sorting, in particular to an automatic sorting method and device based on a height sensor and a readable medium.
Background
The traditional way for an automatic sorting robot to grab objects is to acquire images with a color camera, identify and locate the object to be grabbed, and then send the object's class and planar position information to a lower computer.
Because only the two-dimensional position of the object is obtained, the mechanical claw must descend close to the belt on every grab. This grabbing mode depends on the distribution density and stacking condition of the objects on the conveyor belt: when the distribution density and stacking rate are high, the probability that the mechanical claw grabs an object successfully is low. Under high-density working conditions the robot often cannot grab at all because there is insufficient grabbing space, so its sorting efficiency is low.
A high stacking rate of objects is a common scene under actual working conditions, so enough space is needed around an object for the mechanical claws to grab it, which increases the grabbing difficulty.
Disclosure of Invention
Under actual working conditions, a high stacking rate and a high distribution density of the objects make grabbing difficult. An object of the embodiments of the present application is therefore to provide an automatic sorting method, apparatus and readable medium based on a height sensor that solve the technical problems mentioned in the background section above.
In a first aspect, an embodiment of the present application provides an automatic sorting method based on a height sensor, including the following steps:
s1, acquiring a height image and a color image of the same sorting area;
s2, identifying the mask and the type of the object on the color image through an instance segmentation model, acquiring a minimum rectangular frame surrounding the object based on the mask, and obtaining the central point, the length, the width and the deflection angle of the minimum rectangular frame;
s3, processing the height image according to different height thresholds to obtain a processed height image;
s4, comparing the area of the mask of the color image with the area of the object at the corresponding position on the processed height image to obtain an area ratio, and determining whether the grabbing pose of the mechanical claw can be found based on the area ratio;
and S5, performing grabbing pose search based on the processed height image, and determining the grabbing pose of the mechanical claw, wherein the grabbing pose comprises a grabbing center point, a grabbing width and a grabbing angle.
In some embodiments, the grabbing pose search in step S5 includes a search based on the minimum rectangular frame and a search based on the shape of the object; the search adjusts the grabbing pose of the gripper according to the processed height images at different height thresholds until an interference-free grabbing pose is obtained.
In some embodiments, step S5 specifically includes:
s51, calculating a first pixel-value sum Sum1 of the processed height image;
and S52, drawing the grab lines of the gripper on the processed height image for different grabbing center points, grabbing widths and grabbing angles, derived from the center point, length, width and deflection angle of the minimum rectangular frame; setting the pixel values on the grab lines to 0; calculating a second pixel-value sum Sum2 of the processed height image with the grab lines drawn; performing the grabbing pose search first based on the minimum rectangular frame and then based on the shape of the object; and determining the grabbing pose in response to the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 being less than a preset threshold.
In some embodiments, the searching for the grab pose based on the minimum rectangular frame in step S5 specifically includes:
judging, when the gripper's grab line is drawn at a first grabbing pose, whether the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold; if so, grabbing along the length direction of the minimum rectangular frame with the center point of the minimum rectangular frame as the grabbing center point; otherwise, rotating the grabbing angle of the first grabbing pose by 90 degrees to obtain a second grabbing pose and judging whether the difference between Sum1 and Sum2 is smaller than the preset threshold when the grab line is drawn at the second grabbing pose; if so, grabbing along the width direction of the minimum rectangular frame with the center point of the minimum rectangular frame as the grabbing center point; otherwise, performing the grabbing pose search based on the shape of the object.
In some embodiments, the grabbing pose search based on the shape of the object in step S5 specifically includes: judging the shape of the object. If the object is strip-shaped, the grabbing angle is fixed while the grabbing center point and grabbing width are varied to draw the gripper's grab line, and it is judged whether the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold; if so, the grabbing center point, grabbing width and grabbing angle at which the difference is minimal are output; otherwise the height threshold is changed and steps S3-S5 are repeated. If the object is not strip-shaped, the grabbing center point is fixed while the grabbing angle is varied to draw the grab line, with the same judgment: if the difference between Sum1 and Sum2 is smaller than the preset threshold, the grabbing center point, grabbing width and grabbing angle at which the difference is minimal are output; otherwise the height threshold is changed and steps S3-S5 are repeated.
In some embodiments, step S3 specifically includes:
acquiring height information based on the height image, and filtering the height information lower than the height threshold value according to the height threshold value to obtain a filtered height image;
and carrying out binarization processing on the filtered height image to obtain a processed height image.
In some embodiments, step S1 specifically includes:
acquiring a plurality of height single-line images shot by a single-color camera arranged at a first fixed position on a conveyor belt, and splicing the images in sequence to obtain a height spliced image;
acquiring a plurality of color single-line images shot by a color camera arranged at a second fixed position on the conveyor belt, and splicing the images in sequence to obtain a color spliced image;
and cutting the height spliced image and the color spliced image according to the offset d of the first fixed position and the second fixed position to obtain the height image and the color image.
In a second aspect, an embodiment of the present application provides an automatic sorting apparatus based on a height sensor, including:
the image acquisition module is configured to acquire a height image and a color image of the same sorting area;
the object identification module is configured to identify a mask and a type of an object on the color image through an instance segmentation model, acquire a minimum rectangular frame surrounding the object based on the mask, and obtain a central point, a length, a width and a deflection angle of the object according to the minimum rectangular frame;
the height image processing module is configured to process the height image according to different height thresholds to obtain a processed height image;
the area comparison module is configured to compare the area of the mask of the color image with the area of the object at the corresponding position on the processed height image to obtain an area ratio, and determine whether the grabbing pose of the mechanical claw can be found based on the area ratio;
and the grabbing pose searching module is configured to search grabbing poses based on the processed height image and determine the grabbing poses of the mechanical claws, wherein the grabbing poses comprise a grabbing center point, a grabbing width and a grabbing angle.
In a third aspect, embodiments of the present application provide an electronic device comprising one or more processors; storage means for storing one or more programs which, when executed by one or more processors, cause the one or more processors to carry out a method as described in any one of the implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the method as described in any of the implementations of the first aspect.
Compared with the prior art, the invention has the following beneficial effects:
(1) compared with the traditional automatic sorting robot, the invention uses sensors to locate the spatial position of objects and to judge whether an object can be grabbed successfully, which increases the grabbing probability under working conditions with high object distribution density and high stacking rate and effectively improves sorting efficiency.
(2) The instance segmentation model adopted by the invention is mature, and the height image is processed with adjustable height thresholds that filter out height information below each threshold, so an accurate grabbing pose is obtained by combining the height information of objects at different heights.
(3) According to the automatic sorting method based on the height sensor, the automatic sorting robot additionally locates the three-dimensional position of the object in space and can simulate the grab lines of the mechanical claws at different heights, making the grab accurate and thereby improving the grabbing efficiency of the automatic sorting robot.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is an exemplary device architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a schematic flow chart of an automatic sorting method based on height sensors according to an embodiment of the present invention;
FIG. 3 is a schematic view of a light source system dimensional modeling for a height sensor based automated sorting method according to an embodiment of the present invention;
FIG. 4 is a timing diagram of data acquisition triggers for a height sensor based automated sorting method according to an embodiment of the present invention;
FIG. 5 is a schematic image stitching diagram of an automatic height sensor-based sorting method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a building solid waste image acquisition platform of an automatic sorting method based on a height sensor according to an embodiment of the present invention;
FIG. 7 is a color image of a height sensor based automated sorting method according to an embodiment of the present invention;
FIG. 8 shows the result of instance segmentation model recognition on a color image for the height sensor based automated sorting method according to an embodiment of the present invention;
fig. 9 is a schematic diagram of the minimum rectangular frame and its center point, length, width and deflection angle obtained from the instance segmentation model recognition result in the height sensor-based automatic sorting method according to the embodiment of the present invention;
FIG. 10 is a height image of a height sensor based automated sorting method according to an embodiment of the present invention;
FIG. 11 is a processed height image of a height sensor based automated sorting method according to an embodiment of the present invention;
fig. 12 is a schematic view of an area ratio acquisition process of an automatic sorting method based on height sensors according to an embodiment of the present invention;
fig. 13 is a color image used in the height sensor based automated sorting method of an embodiment of the present invention, in which a red brick lying on a wood block needs to be grabbed;
fig. 14 is a processed height image of the height sensor-based automatic sorting method of the present invention after the grab lines of the gripper have been drawn;
FIG. 15 is a schematic diagram of the center point and length or width of the minimum rectangular box obtained by the height sensor based automatic sorting method according to the embodiment of the present invention;
FIG. 16 is a schematic view of a height sensor based automated sorting method of an embodiment of the present invention depicting straight lines according to a minimum rectangular box, length or width, and deflection angle;
fig. 17 is a schematic view illustrating a gripping line of a robot jaw according to a straight line in the automatic sorting method based on a height sensor according to an embodiment of the present invention;
fig. 18 is a processed height image with the grab lines of the gripper drawn at different grabbing poses at a height threshold of 10, according to the height sensor-based automatic sorting method of the embodiment of the present invention;
fig. 19 is a processed height image with the grab lines of the gripper drawn at different grabbing poses at a height threshold of 20, according to the height sensor-based automatic sorting method of the embodiment of the present invention;
fig. 20 is a processed height image with the grab lines of the gripper drawn at different grabbing poses at a height threshold of 30, according to the height sensor-based automatic sorting method of the embodiment of the present invention;
FIG. 21 is a schematic view of an automatic height sensor-based sorting apparatus according to an embodiment of the present invention;
fig. 22 is a schematic structural diagram of a computer device suitable for implementing an electronic apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 illustrates an exemplary apparatus architecture 100 to which the height sensor-based automatic sorting method or the height sensor-based automatic sorting apparatus of the embodiments of the present application may be applied.
As shown in fig. 1, the apparatus architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various applications, such as data processing type applications, file processing type applications, etc., may be installed on the terminal apparatuses 101, 102, 103.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, laptop computers and desktop computers. When they are software, they may be installed in the electronic devices listed above and implemented either as multiple pieces of software or software modules (e.g., for providing distributed services) or as a single piece of software or software module, which is not specifically limited herein.
The server 105 may be a server that provides various services, such as a background data processing server that processes files or data uploaded by the terminal devices 101, 102, 103. The background data processing server can process the acquired file or data to generate a processing result.
It should be noted that the automatic sorting method based on height sensors provided in the embodiment of the present application may be executed by the server 105, or may be executed by the terminal devices 101, 102, and 103, and accordingly, the automatic sorting apparatus based on height sensors may be disposed in the server 105, or may be disposed in the terminal devices 101, 102, and 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the processed data does not need to be acquired from a remote location, the above device architecture may not include a network, but only a server or a terminal device.
Fig. 2 illustrates an automatic sorting method based on height sensors according to an embodiment of the present application, including the following steps:
and S1, acquiring a height image and a color image of the same sorting area.
In a specific embodiment, step S1 specifically includes:
acquiring a plurality of height single-line images shot by a monochromatic camera arranged at a first fixed position on a conveyor belt, and splicing the images in sequence to obtain a height spliced image;
acquiring a plurality of color single-line images shot by a color camera arranged at a second fixed position on the conveyor belt, and splicing the images in sequence to obtain a color spliced image;
and cutting the height spliced image and the color spliced image according to the offset d of the first fixed position and the second fixed position to obtain the height image and the color image.
In particular, referring to fig. 3, the brightness, uniformity and illumination angle of the light source all affect the camera's final imaging quality. In order to meet the requirements of high light intensity, large coverage and long service life, this embodiment uses a high-brightness linear white LED light source, model OPTLSG1254-W from Orptt Corporation. Because the light source's illumination direction forms an angle with the camera's shooting direction, the same target appears with different brightness at different heights. To alleviate this as far as possible, the angle between the illumination direction and the shooting direction should be as small as possible; modeling the light source system gives the schematic shown in fig. 3.
To simplify the model, the side surface of the light source is simplified into a rectangle of length L and width W installed at height H; the angle should satisfy the following expression:
[Expression given as an image in the original publication: Figure BDA0003476910120000071]
Solving this expression numerically yields an angle of approximately 4.87°.
Acquiring the height single-line images and color single-line images involves two steps: data-acquisition triggering and data matching. During data-acquisition triggering, an encoder converts the displacement of the conveyor belt into pulse signals at a fixed ratio and feeds them simultaneously to the height camera and the color camera; both cameras are set to trigger acquisition on the rising edge, so each pulse from the encoder yields the height single-line image H_i and the color single-line image C_i corresponding to that pulse. The trigger timing diagram is shown in fig. 4.
As can be seen from fig. 4, because the conversion coefficient between pulse count and belt displacement is fixed, the acquired image data is not deformed when the belt changes speed or its speed fluctuates, and the acquisition rhythms of the two images stay consistent. All the height single-line images H_i and color single-line images C_i are then stitched: splicing the height single-line images H_(i-1), H_i, H_(i+1), ... yields the height image, and splicing the color single-line images C_(i-1), C_i, C_(i+1), ... yields the color image, giving a complete color image and height image as shown in fig. 5. Since the physical mounting positions of the height camera and the color camera have a fixed offset d along the belt's direction of travel, the two cameras see different actual positions at the same instant. Therefore, when matching the data, corresponding segments are cut out according to the actual offset d of the two mounting positions: the portions inside the dotted-line frames of the two images are removed, the remaining data is the matched data, and each object in the matched height image corresponds to the same object in the color image. The final acquisition system configuration is shown in fig. 6. A note on the height camera:
in this embodiment the monochrome camera and the line laser of the height camera are integrated, so the camera directly measures the object height at the line-laser position, i.e. one frame per stitching step; each encoder pulse triggers one acquisition, and the 960 frames acquired from 960 pulses are stitched into one height image.
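As an illustration of the acquisition scheme above, the following sketch (not part of the patent; the function name, the use of numpy, and an offset expressed in whole scan lines are assumptions) stitches the per-pulse single-line images and crops both stitched images by the mounting offset d:

```python
import numpy as np

def stitch_and_match(height_lines, color_lines, offset_lines):
    """Stitch per-pulse single-line images and crop by the mounting offset.

    height_lines: list of (1, W) grayscale rows H_i, one per encoder pulse.
    color_lines:  list of (1, W, 3) color rows C_i, one per encoder pulse.
    offset_lines: the cameras' mounting offset d expressed in whole scan
                  lines, with the height camera assumed to lead the color
                  camera along the belt's direction of travel.
    """
    height_img = np.vstack(height_lines)  # e.g. 960 pulses -> 960 rows
    color_img = np.vstack(color_lines)
    if offset_lines > 0:
        # Remove the unmatched ends so both images show the same belt region.
        height_img = height_img[offset_lines:]
        color_img = color_img[:-offset_lines]
    return height_img, color_img
```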
S2, identifying the mask and the type of the object on the color image through the example segmentation model, acquiring a minimum rectangular frame surrounding the object based on the mask, and obtaining the center point, the length, the width and the deflection angle of the minimum rectangular frame.
In a specific embodiment, the color image shown in fig. 7 is obtained through step S1, and the outlines and types of the objects on the color image are obtained through instance segmentation model recognition, as shown in fig. 8. Specifically, the instance segmentation model comprises a Mask R-CNN neural network, which identifies not only the class and position of each object but also its mask. The mask is then used to obtain the minimum rectangular frame surrounding the object, and the center point, length, width and deflection angle of the object are derived from the minimum rectangular frame, as shown in fig. 9. In other alternative embodiments, the instance segmentation model may be another neural-network model, as long as the outline and type of the object can be obtained, from which the minimum rectangular frame and the object's center point, length, width and deflection angle follow.
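The patent gives no code for this step; the following sketch assumes an OpenCV pipeline and a binary mask already produced by a Mask R-CNN style model, and derives the minimum rectangular frame parameters with cv2.minAreaRect (the function name is hypothetical):

```python
import cv2
import numpy as np

def min_rect_from_mask(mask):
    """Minimum rectangular frame from an instance-segmentation mask.

    mask: binary (H, W) array, 1 where the model detected the object.
    Returns the center point (cx, cy), length, width and deflection
    angle (degrees), or None when the mask is empty.
    """
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)  # largest fragment
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
    length, width = max(w, h), min(w, h)
    return (cx, cy), length, width, angle
```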
And S3, processing the height image according to different height thresholds to obtain a processed height image.
In a specific embodiment, step S3 specifically includes:
acquiring height information based on the height image, and filtering the height information lower than the height threshold value according to the height threshold value to obtain a filtered height image;
and carrying out binarization on the filtered height image to obtain the processed height image, in which pixels belonging to an object are 1 and pixels without an object are 0.
Specifically, height information of the objects on the conveyor belt can be read from the height image, as shown in fig. 10. The pixel value of each point in the height image is the height of the object at that point. Filtering means setting to 0 the pixel values below the height threshold while leaving values above the threshold unchanged, so that anything below a given height plane is removed and only what lies above the threshold remains visible. Different height thresholds can be set in this way, producing a filtered height image for each threshold. The filtered height image is then binarized, so that object pixels become 1 and object-free pixels become 0; the processed height image is shown in fig. 11.
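A minimal sketch of this filtering and binarization, assuming the height image is a numpy array whose pixel values encode height (the function name is illustrative):

```python
import numpy as np

def process_height_image(height_img, height_threshold):
    """Filter out heights below the threshold, then binarize.

    height_img: (H, W) numpy array whose pixel values encode object height.
    Returns a 0/1 uint8 image: 1 where an object rises above the threshold.
    """
    filtered = np.where(height_img < height_threshold, 0, height_img)
    return (filtered > 0).astype(np.uint8)
```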
And S4, comparing the area of the mask of the color image with the area of the object at the corresponding position on the processed height image to obtain an area ratio, and determining whether the grabbing pose of the mechanical claw can be found or not based on the area ratio.
In a specific embodiment, the ratio of the area of the object's mask in the color image to the area of the same object in the processed height image after the grabbing plane has been raised is computed. When the ratio is smaller than 0.8, the visible shape of the object has changed so much that no suitable grabbing pose can be found even by raising the grabbing plane further, i.e. increasing the height threshold, so grabbing of that object is abandoned. This is illustrated in fig. 12, where panel a is the recognition result of the color image; panel b is the object mask extracted from the recognition result; panel c is the processed height image at a height threshold of 5, whose object area differs little from that of the mask extracted from the color image, so the grabbing pose search can proceed in this state; and panel d is the processed height image at a height threshold of 25, whose object area differs greatly from that of the mask extracted from the color image, so the grabbing pose cannot be searched for in this state.
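A sketch of the area-ratio check, assuming aligned binary arrays of the same size and the 0.8 cutoff described above; the helper name is hypothetical:

```python
import numpy as np

def can_search_pose(color_mask, processed_height, ratio_threshold=0.8):
    """Area-ratio test between the color-image mask and the height image.

    color_mask:       binary mask of the object from the color image.
    processed_height: binarized height image at the current height threshold.
    Both arrays are assumed to be aligned and the same size; the 0.8
    cutoff follows the embodiment above.
    """
    area_color = np.count_nonzero(color_mask)
    # Object area at the corresponding position on the height image:
    area_height = np.count_nonzero(processed_height[color_mask > 0])
    return area_height / max(area_color, 1) >= ratio_threshold
```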
And S5, performing grabbing pose search based on the processed height image, and determining the grabbing pose of the mechanical claw, wherein the grabbing pose comprises a grabbing center point, a grabbing width and a grabbing angle.
In a specific embodiment, the grabbing pose search in step S5 includes a search based on the minimum rectangular frame and a search based on the shape of the object; the search adjusts the grabbing pose of the gripper according to the processed height images at different height thresholds until an interference-free grabbing pose is obtained.
In a specific embodiment, step S5 specifically includes:
s51, calculating a first pixel-value sum Sum1 of the processed height image;
and S52, drawing the grab lines of the gripper on the processed height image for different grabbing center points, grabbing widths and grabbing angles, derived from the center point, length, width and deflection angle of the minimum rectangular frame; setting the pixel values on the grab lines to 0; calculating a second pixel-value sum Sum2 of the processed height image with the grab lines drawn; performing the grabbing pose search first based on the minimum rectangular frame and then based on the shape of the object; and determining the grabbing pose in response to the difference between Sum1 and Sum2 being less than a preset threshold.
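A sketch of this draw-and-compare check, assuming an OpenCV/numpy representation; the jaw geometry, the use of pixel units, and a grab-line thickness of 3 pixels are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

def interference_check(processed, center, opening, jaw_len, angle_deg, preset=0):
    """Draw the two gripper grab lines and compare pixel-value sums.

    processed: binarized height image (object pixels = 1), numpy uint8.
    center:    candidate grabbing center point (x, y) in pixels.
    opening:   gripper opening (distance between the two jaws) in pixels.
    jaw_len:   length of each grab line in pixels.
    angle_deg: grabbing angle in degrees.
    Returns (no_interference, diff) where diff = Sum1 - Sum2.
    """
    sum1 = int(processed.sum())               # first pixel-value sum Sum1
    canvas = processed.copy()
    theta = np.deg2rad(angle_deg)
    ux, uy = np.cos(theta), np.sin(theta)     # direction across the opening
    vx, vy = -np.sin(theta), np.cos(theta)    # direction along each jaw
    cx, cy = center
    for side in (-1.0, 1.0):                  # the two jaws of the gripper
        jx = cx + side * (opening / 2.0) * ux
        jy = cy + side * (opening / 2.0) * uy
        p1 = (int(round(jx - (jaw_len / 2.0) * vx)),
              int(round(jy - (jaw_len / 2.0) * vy)))
        p2 = (int(round(jx + (jaw_len / 2.0) * vx)),
              int(round(jy + (jaw_len / 2.0) * vy)))
        cv2.line(canvas, p1, p2, color=0, thickness=3)  # grab-line pixels -> 0
    sum2 = int(canvas.sum())                  # second pixel-value sum Sum2
    diff = sum1 - sum2
    return diff <= preset, diff
```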
In a specific embodiment, the searching for the grab pose based on the minimum rectangular frame in step S5 specifically includes:
judging, when the mechanical claw's grab line is drawn at a first grabbing pose, whether the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold; if so, grabbing along the length direction of the minimum rectangular frame with the center point of the minimum rectangular frame as the grabbing center point; otherwise, rotating the grabbing angle of the first grabbing pose by 90 degrees to obtain a second grabbing pose and judging whether the difference between Sum1 and Sum2 is smaller than the preset threshold when the grab line is drawn at the second grabbing pose; if so, grabbing along the width direction of the minimum rectangular frame with the center point of the minimum rectangular frame as the grabbing center point; otherwise, performing the grabbing pose search based on the shape of the object.
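Building on the interference_check sketch above, a hedged sketch of the two-pose search; whether the jaws span the width or the length in the first grabbing pose is an interpretation of the text, and the 20-pixel margin stands in for the 20 mm opening allowance of the detailed embodiment below:

```python
def min_rect_pose_search(processed, center, length, width, angle_deg,
                         jaw_len, margin=20, preset=0):
    """Try the two poses derived from the minimum rectangular frame.

    First pose: jaws open across the width, i.e. grabbing along the
    length direction at the rectangle's deflection angle.  If that
    interferes, rotate the grabbing angle by 90 degrees and open the
    jaws across the length instead.  Returns (center, opening, angle)
    or None when both poses interfere.
    """
    ok, _ = interference_check(processed, center, width + margin,
                               jaw_len, angle_deg, preset)
    if ok:
        return center, width + margin, angle_deg
    ok, _ = interference_check(processed, center, length + margin,
                               jaw_len, angle_deg + 90, preset)
    if ok:
        return center, length + margin, angle_deg + 90
    return None  # fall through to the shape-based search
```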
In a specific embodiment, the grabbing pose search based on the shape of the object in step S5 specifically includes: judging the shape of the object. If the object is strip-shaped, the grabbing angle is fixed while the grabbing center point and grabbing width are varied to draw the mechanical claw's grab line, and it is judged whether the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold; if so, the grabbing center point, grabbing width and grabbing angle at which the difference is minimal are output; otherwise the height threshold is changed and steps S3-S5 are repeated. If the object is not strip-shaped, the grabbing center point is fixed while the grabbing angle is varied to draw the grab line, with the same judgment: if the difference between Sum1 and Sum2 is smaller than the preset threshold, the grabbing center point, grabbing width and grabbing angle at which the difference is minimal are output; otherwise the height threshold is changed and steps S3-S5 are repeated.
Specifically, as shown in fig. 13, the red brick lying on the wood block must be grabbed. Processed images are obtained at different height thresholds and the first pixel-value sum Sum1 of each processed height image is calculated; because the processed height image has been height-filtered and binarized, the retained object pixels are 1 and all other pixels are 0, so Sum1 can be computed at every height threshold. Using the center point, length, width and deflection angle of the object's minimum rectangular frame, two thick lines representing the gripper's grab lines (the gray thick lines in the figure) are drawn on the processed height image with pixel value 0, as shown in fig. 14; the grab lines are actually black (pixel value 0) and are drawn gray here only for clarity. As shown in fig. 15, the center point, length, width and deflection angle are obtained from the minimum rectangular frame; as shown in fig. 16, from these a straight line through the center point parallel to the width or length direction can be computed. To absorb deviations in the detection result, the gripper opening is made 10 mm larger than the object's width or length on each side, so this straight line is 20 mm longer than the width or length of the minimum rectangular frame. Finally, as shown in fig. 17, two straight lines of the same length as the gripper jaws are drawn at the two end points of that line as the grab lines, representing the gripper's position in an actual grab: the gripper moves over the object's center point, rotates until parallel with the object, opens to an opening 20 mm wider than the object, moves straight down, and closes to grab the object.
Figs. 18, 19 and 20 show the processed height images with the gripper's grab lines drawn at different grabbing poses for height thresholds of 10, 20 and 30, respectively. The pixel-value sum Sum2 of the height image with the grab lines drawn is calculated. When the difference between Sum1 and Sum2 is 0 or below a preset value, no object interferes with a grab at this position; if an object does interfere, drawing the grab lines changes the pixel values at the object's position from 1 to 0, so the total Sum2 of the image with the grab lines drawn is smaller than the total Sum1 of the image without them. If there is no interference, the center point of the minimum rectangular frame is used as the grabbing center point, the grab is made along the length direction of the rectangle, and the grabbing pose is output. If there is interference, the grabbing angle is rotated by 90 degrees to try the width direction, since no pose was found along the length direction; if the width direction is free of interference, the center point of the minimum rectangular frame is used as the grabbing center point, the grab is made along the width direction, and the grabbing pose is output. Otherwise the shape of the object is judged and the grabbing pose search based on the object's shape is carried out for strip-shaped and non-strip-shaped objects: for a strip-shaped object, the rotation angle is fixed while the grabbing center point and grabbing width are varied, the grab lines are drawn, the difference between Sum1 and Sum2 is calculated, and the grabbing center point and grabbing width with the smallest difference are output; for a non-strip-shaped object, the grabbing center point is fixed while the grabbing angle is varied, the grab lines are drawn, the difference between Sum1 and Sum2 is calculated, and the grabbing angle with the smallest difference is output. Finally the resulting pose is checked for interference: if there is none, the grabbing pose is output; if there is, the height threshold is adjusted and steps S3-S5 are repeated for the next search. The height threshold is 0 on entering the loop for the first time and increases by 5 on each subsequent pass (0, 5, 10, 15, ...); objects below the height threshold are filtered out during binarization, so grabbing poses are searched on planes of different heights.
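Tying the pieces together, a sketch of the overall search loop under the assumptions of the previous sketches; the sweep ranges for the shape-based search and the max_threshold cap are illustrative, since the patent only fixes the 0, 5, 10, 15, ... threshold schedule:

```python
def search_grab_pose(height_img, color_mask, rect, jaw_len, is_strip,
                     max_threshold=50, preset=0):
    """Overall loop: raise the height threshold by 5 per pass (0, 5, 10, ...).

    rect = (center, length, width, angle) from the minimum rectangular frame.
    """
    center, length, width, angle = rect
    for h_thresh in range(0, max_threshold + 1, 5):
        processed = process_height_image(height_img, h_thresh)
        if not can_search_pose(color_mask, processed):
            return None  # shape changed too much: abandon this object
        pose = min_rect_pose_search(processed, center, length, width,
                                    angle, jaw_len, preset=preset)
        if pose is not None:
            return pose
        # Shape-based search.
        if is_strip:   # strip object: fix the angle, vary center and width
            trials = [((center[0] + dx, center[1] + dy), width + 20 + dw, angle)
                      for dx in (-20, -10, 0, 10, 20)
                      for dy in (-20, -10, 0, 10, 20)
                      for dw in (0, 10, 20)]
        else:          # non-strip object: fix the center, vary the angle
            trials = [(center, width + 20, angle + da)
                      for da in range(15, 180, 15)]
        best = None
        for c, opening, ang in trials:
            ok, diff = interference_check(processed, c, opening,
                                          jaw_len, ang, preset)
            if ok and (best is None or diff < best[0]):
                best = (diff, (c, opening, ang))  # keep the minimum difference
        if best is not None:
            return best[1]
    return None  # no interference-free pose found
```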
With further reference to fig. 21, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an automatic sorting apparatus based on a height sensor, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
The embodiment of the application provides an automatic sorting device based on height sensor, includes:
an image acquisition module 1 configured to acquire a height image and a color image of the same sorting area;
the object identification module 2 is configured to identify a mask and a type of an object on the color image through an instance segmentation model, acquire a minimum rectangular frame surrounding the object based on the mask, and obtain a central point, a length, a width and a deflection angle of the object according to the minimum rectangular frame;
the height image processing module 3 is configured to process the height image according to different height thresholds to obtain a processed height image;
the area comparison module 4 is configured to compare the area of the mask of the color image with the area of the object at the corresponding position on the processed height image to obtain an area ratio, and determine whether to find the grabbing pose of the mechanical gripper based on the area ratio;
and the grabbing pose searching module 5 is configured to perform grabbing pose searching based on the processed height image and determine the grabbing pose of the mechanical claw, wherein the grabbing pose comprises a grabbing center point, a grabbing width and a grabbing angle.
Referring now to fig. 22, a schematic diagram of a computer device 600 suitable for implementing an electronic device (e.g., the server or terminal device shown in fig. 1) according to an embodiment of the present application is shown. The electronic device shown in fig. 22 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in fig. 22, the computer apparatus 600 includes a central processing unit (CPU) 601 and a graphics processing unit (GPU) 602, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 603 or a program loaded from a storage section 609 into a random access memory (RAM) 604. The RAM 604 also stores various programs and data necessary for the operation of the apparatus 600. The CPU 601, GPU 602, ROM 603 and RAM 604 are connected to each other via a bus 605. An input/output (I/O) interface 606 is also connected to bus 605.
The following components are connected to the I/O interface 606: an input portion 607 including a keyboard, a mouse, and the like; an output section 608 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage section 609 including a hard disk and the like; and a communication section 610 including a network interface card such as a LAN card, a modem, or the like. The communication section 610 performs communication processing via a network such as the internet. The driver 611 may also be connected to the I/O interface 606 as needed. A removable medium 612 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 611 as necessary, so that the computer program read out therefrom is mounted into the storage section 609 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 610 and/or installed from the removable media 612. The computer programs, when executed by a Central Processing Unit (CPU)601 and a Graphics Processor (GPU)602, perform the above-described functions defined in the methods of the present application.
It should be noted that the computer readable medium described herein may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including but not limited to electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based devices that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware. The modules described may also be provided in a processor.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a height image and a color image of the same sorting area; identify the mask and type of each object on the color image through an instance segmentation model, acquire the minimum rectangular frame surrounding the object based on the mask, and obtain the center point, length, width and deflection angle of the minimum rectangular frame; process the height image according to different height thresholds to obtain a processed height image; compare the area of the mask from the color image with the area of the object at the corresponding position on the processed height image to obtain an area ratio, and determine based on the area ratio whether a grabbing pose for the mechanical claw can be searched for; and perform a grabbing pose search based on the processed height image to determine the grabbing pose of the mechanical claw, the grabbing pose comprising a grabbing center point, a grabbing width and a grabbing angle.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. An automatic sorting method based on a height sensor is characterized by comprising the following steps:
s1, acquiring a height image and a color image of the same sorting area;
s2, identifying the mask and the type of the object on the color image through an instance segmentation model, acquiring a minimum rectangular frame surrounding the object based on the mask, and obtaining the central point, the length, the width and the deflection angle of the minimum rectangular frame;
s3, processing the height image according to different height thresholds to obtain a processed height image;
s4, comparing the area of the mask of the color image with the area of the object at the corresponding position on the processed height image to obtain an area ratio, and determining whether the grabbing pose of the mechanical claw can be found based on the area ratio;
and S5, performing grabbing pose search based on the processed height image, and determining the grabbing pose of the mechanical claw, wherein the grabbing pose comprises a grabbing center point, a grabbing width and a grabbing angle.
2. The height sensor-based automatic sorting method according to claim 1, wherein the grabbing pose search in step S5 includes a search based on the minimum rectangular frame and a search based on the shape of the object, the search adjusting the grabbing pose of the gripper according to the processed height images at different height thresholds until an interference-free grabbing pose is obtained.
3. The automatic sorting method based on height sensors according to claim 2, wherein the step S5 specifically comprises:
s51, calculating a first pixel-value sum Sum1 of the processed height image;
s52, drawing the grab lines of the gripper on the processed height image for different grabbing center points, grabbing widths and grabbing angles according to the center point, length, width and deflection angle of the minimum rectangular frame, setting the pixel values on the grab lines to 0, calculating a second pixel-value sum Sum2 of the processed height image with the grab lines drawn, performing the grabbing pose search first based on the minimum rectangular frame and then based on the shape of the object, and determining the grabbing pose in response to the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 being less than a preset threshold.
4. The automatic sorting method based on height sensors according to claim 3, wherein the step S5 of searching for the grabbing pose based on the minimum rectangular frame specifically comprises:
judging, when the gripper draws the grab line at a first grabbing pose, whether the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold, and if so, grabbing along the length direction of the minimum rectangular frame with the center point of the minimum rectangular frame as the grabbing center point; otherwise, rotating the grabbing angle of the first grabbing pose by 90 degrees to obtain a second grabbing pose, judging whether the difference between Sum1 and Sum2 is smaller than the preset threshold when the gripper draws the grab line at the second grabbing pose, and if so, grabbing along the width direction of the minimum rectangular frame with the center point of the minimum rectangular frame as the grabbing center point; otherwise, performing the grabbing pose search based on the shape of the object.
5. The automatic sorting method based on height sensors according to claim 3, wherein the grabbing pose search based on the shape of the object in step S5 specifically includes: judging the shape of the object; if the object is strip-shaped, fixing the grabbing angle, varying the grabbing center point and the grabbing width to draw the grab line of the mechanical claw, and judging whether the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold; if so, outputting the grabbing center point, grabbing width and grabbing angle at which the difference is minimal, and otherwise changing the height threshold and repeating steps S3-S5; if the object is not strip-shaped, fixing the grabbing center point, varying the grabbing angle to draw the grab line of the mechanical claw, and judging whether the difference between Sum1 and Sum2 is smaller than the preset threshold; if so, outputting the grabbing center point, grabbing width and grabbing angle at which the difference is minimal, and otherwise changing the height threshold and repeating steps S3-S5.
6. The automatic sorting method based on height sensors according to claim 2, wherein the step S3 specifically comprises:
acquiring height information based on the height image, and filtering out the height information below the height threshold to obtain a filtered height image;
and carrying out binarization processing on the filtered height image to obtain the processed height image.
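A minimal sketch of this processing step, assuming the height image is a 2-D NumPy array of heights (e.g. in millimetres); the names are illustrative:

import numpy as np

def process_height_image(height_img, height_threshold):
    # Filter: zero out every pixel at or below the height threshold.
    filtered = np.where(height_img > height_threshold, height_img, 0.0)
    # Binarize: above-threshold pixels become 1, everything else 0.
    return (filtered > 0).astype(np.uint8)

Running the same height image through several height thresholds yields the family of processed height images over which the grabbing pose search iterates.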
7. The automatic sorting method based on height sensors according to claim 1, wherein the step S1 specifically comprises:
acquiring a plurality of single-line height images captured by a monochrome camera arranged at a first fixed position on a conveyor belt, and stitching them in sequence to obtain a stitched height image;
acquiring a plurality of single-line color images captured by a color camera arranged at a second fixed position on the conveyor belt, and stitching them in sequence to obtain a stitched color image;
and cropping the stitched height image and the stitched color image according to the offset d between the first fixed position and the second fixed position to obtain the height image and the color image.
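A sketch of this acquisition step, assuming each camera emits single-line images of one scan line each in conveyor order and that the offset d is expressed in scan lines; which camera is upstream, and all names, are assumptions:

import numpy as np

def stitch_and_align(height_lines, color_lines, d):
    height_stitched = np.vstack(height_lines)  # stitched height image, shape (N, W)
    color_stitched = np.vstack(color_lines)    # stitched color image, shape (N, W, 3)
    # The two cameras sit d scan lines apart along the belt: drop the first d
    # lines of the downstream image, the last d of the other, then trim both
    # to a common length so they cover the same sorting area.
    height_img = height_stitched[d:]
    color_img = color_stitched[: color_stitched.shape[0] - d]
    n = min(height_img.shape[0], color_img.shape[0])
    return height_img[:n], color_img[:n]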
8. An automatic sorting device based on a height sensor, comprising:
the image acquisition module is configured to acquire a height image and a color image of the same sorting area;
the object identification module is configured to identify the mask and type of an object on the color image through an instance segmentation model, acquire a minimum rectangular frame surrounding the object based on the mask, and obtain the center point, length, width and deflection angle of the minimum rectangular frame;
the height image processing module is configured to process the height image according to different height thresholds to obtain a processed height image;
an area comparison module configured to compare the area of the mask of the color image with the area of the object at the corresponding position on the processed height image to obtain an area ratio, and determine, based on the area ratio, whether to search for the grabbing pose of the mechanical claw;
and the grabbing pose searching module is configured to perform grabbing pose searching based on the processed height image and determine the grabbing pose of the mechanical claw, wherein the grabbing pose comprises a grabbing center point, a grabbing width and a grabbing angle.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202210057346.XA 2022-01-19 2022-01-19 Automatic sorting method and device based on height sensor and readable medium Pending CN114570674A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210057346.XA CN114570674A (en) 2022-01-19 2022-01-19 Automatic sorting method and device based on height sensor and readable medium
PCT/CN2022/084339 WO2023137871A1 (en) 2022-01-19 2022-03-31 Automatic sorting method and device based on height sensor and readable medium

Publications (1)

Publication Number Publication Date
CN114570674A (en) 2022-06-03

Family

ID=81770965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210057346.XA Pending CN114570674A (en) 2022-01-19 2022-01-19 Automatic sorting method and device based on height sensor and readable medium

Country Status (2)

Country Link
CN (1) CN114570674A (en)
WO (1) WO2023137871A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117612061A (en) * 2023-11-09 2024-02-27 中科微至科技股份有限公司 Visual detection method for package stacking state for stacking separation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580725A (en) * 2019-09-12 2019-12-17 浙江大学滨海产业技术研究院 Box sorting method and system based on RGB-D camera
CN110648364A (en) * 2019-09-17 2020-01-03 华侨大学 Multi-dimensional space solid waste visual detection positioning and identification method and system
WO2020034872A1 (en) * 2018-08-17 2020-02-20 深圳蓝胖子机器人有限公司 Target acquisition method and device, and computer readable storage medium
CN111079548A (en) * 2019-11-22 2020-04-28 华侨大学 Solid waste online identification method based on target height information and color information
CN111515945A (en) * 2020-04-10 2020-08-11 广州大学 Control method, system and device for mechanical arm visual positioning sorting and grabbing

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11794343B2 (en) * 2019-12-18 2023-10-24 Intrinsic Innovation Llc System and method for height-map-based grasp execution
CN111144426B (en) * 2019-12-28 2023-05-30 广东拓斯达科技股份有限公司 Sorting method, sorting device, sorting equipment and storage medium
CN113379849B (en) * 2021-06-10 2023-04-18 南开大学 Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN113420746B (en) * 2021-08-25 2021-12-07 中国科学院自动化研究所 Robot visual sorting method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2023137871A1 (en) 2023-07-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination