CN117340884A - Robot hand grabbing method based on visual effect recognition - Google Patents

Robot hand grabbing method based on visual effect recognition

Info

Publication number
CN117340884A
Authority
CN
China
Prior art keywords
grabber
grabbing
robot
grabbers
robot hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311463968.3A
Other languages
Chinese (zh)
Inventor
刘加活
毛利波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zeda Automatic Equipment Co ltd
Original Assignee
Shenzhen Zeda Automatic Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zeda Automatic Equipment Co ltd filed Critical Shenzhen Zeda Automatic Equipment Co ltd
Priority to CN202311463968.3A
Publication of CN117340884A
Current legal status: Pending


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J15/00 Gripping heads and other end effectors
    • B25J15/08 Gripping heads and other end effectors having finger members
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the technical field of industrial robots and particularly relates to a robot hand grabbing method based on visual effect identification. The method shoots and images the object to be grabbed and identifies and analyses it from the basic object images; generates a grabbing signal based on the recognized object and controls the robot hand to move and adjust so as to grab it; adjusts the robot hand based on the characteristics of the object so that it adapts to objects of different shapes; and places the object at a specified position based on an operation instruction. According to the robot hand grabbing method based on visual effect identification, the object is identified and analysed from its images, the grabbing state of the robot hand is kept unchanged when the simulation shows no sliding or dropping, and the grabbing state is readjusted when the simulation shows sliding or dropping, until the robot hand can stably grab and transport the object. This prevents workpieces from slipping, improves the working efficiency of the robot hand, and avoids damage to the workpieces.

Description

Robot hand grabbing method based on visual effect recognition
Technical Field
The invention relates to the technical field of industrial robots, in particular to a robot hand grabbing method based on visual effect identification.
Background
An industrial robot is a multi-joint manipulator or multi-degree-of-freedom machine oriented to the industrial field. It can execute work automatically, realizes various functions through its own power and control capability, and can either accept human commands or operate according to a pre-programmed program.
At present, industrial robots are very common in existing working environments. However, because workpieces differ in shape, the force at each grabbing point of the robot hand may be uneven when it grabs a workpiece, so the workpiece easily slips and may be damaged, which reduces the working efficiency of the robot hand and causes property loss. In view of this, a robot hand grabbing method based on visual effect identification is provided.
Disclosure of Invention
The invention mainly aims to provide a robot hand grabbing method based on visual effect recognition, which can solve the problems described in the background art.
In order to achieve the above purpose, the robot hand grabbing method based on visual effect recognition provided by the invention comprises the following steps:
s1, shooting and imaging a grabber, performing identification analysis on the grabber by using a basic grabber image, performing multi-angle shooting on the image to form a plurality of groups of images in the process of shooting the grabber image, and performing three-dimensional modeling on the plurality of groups of images to realize characteristic and size identification of the grabber image;
s2, based on the recognized grabbing object production grabbing signals, controlling the robot to move and adjust to grab the grabbing object, realizing the feature and size recognition of the grabbing object through three-dimensional modeling of the grabbing object, simultaneously, modeling and determining the position of the grabbing object, adjusting the robot through three-dimensional simulation modeling, and grabbing the grabbing object;
s3, adjusting the robot based on the characteristics of the grabbers, adapting to the grabbers with different shapes, and adjusting the positions of grabbing nodes of the robot by identifying the grabbers with different shapes, so that the grabbers can be closely attached in the grabbing process of the robot, and the grabbing stability of the robot is improved;
s4, placing the grabber to the designated position based on the operation instruction.
Preferably, the identifying and analysing of the object from the basic object images in S1 includes:
shooting the object from multiple angles based on a shooting instruction, and integrating the shot pictures;
identifying the characteristics of the object based on the integrated pictures;
and identifying the position and placement angle of the object based on the integrated pictures.
Preferably, the identification and analysis of the object in S1 includes identifying the shape and size of the object, identifying its placement angle, and identifying its placement position.
Preferably, the shape and size of the object are identified from acquired object images, the images being obtained by shooting from several non-repeating angles; the images are integrated to obtain an integrated object image, and a three-dimensional model built from the integrated images yields the shape, size, placement angle and placement position of the object.
Preferably, controlling the robot hand to move and adjust so as to grab the object includes:
receiving position identification information of the object, acquiring its position, establishing a three-dimensional motion model of the robot hand and the object, and obtaining the grabbing motion path of the robot hand;
adjusting the transverse and longitudinal positions of the robot hand according to the grabbing motion path so that the hand is positioned above the object;
and adjusting the rotation of the robot hand according to the acquired characteristics of the object, and determining the grabbing degree and grabbing posture of the robot hand.
Preferably, adjusting the robot hand in S3 based on the characteristics of the object includes:
adjusting the robot hand based on the shape characteristics of the object so that the hand clamps or wraps the object;
judging whether the robot hand grabs the object stably based on the shape characteristics of the object;
if the grab is stable, executing the instruction to move the object; if the grab is unstable, continuing to adjust the robot hand until the grab is stable.
Preferably, adjusting the robot hand based on the shape characteristics of the object includes:
receiving shape characteristic information of the object, and adjusting the grabbing angle and grabbing mode of the robot hand according to those characteristics, the grabbing mode being divided into clamping grabbing and wrapping grabbing;
and establishing a three-dimensional motion simulation model of the robot hand grabbing the object according to the shape characteristics of the object and the grabbing mode of the robot hand.
Preferably, judging whether the robot hand grabs the object stably based on the shape characteristics of the object includes:
acquiring the three-dimensional motion simulation model of the object, and judging from the simulated motion state whether the object slides or drops while being grabbed;
and, according to the simulated motion state of the object, keeping the grabbing state of the robot hand unchanged when the simulation shows no sliding or dropping, and readjusting the grabbing state when the simulation shows sliding or dropping, until the robot hand can stably grab and transport the object.
Preferably, placing the object at the designated position based on the operation instruction in S4 includes:
receiving the identification and analysis information of the objects, and placing different objects in different areas according to their types or sizes;
acquiring the shape characteristics of the objects, judging from their shapes whether they can be stacked, stacking the objects that can be stacked, and placing separately the objects that cannot be stacked.
Preferably, a stability analysis is performed while the objects are being stacked based on their shape characteristics, so that once a stack reaches a certain height a new stack is started.
The invention provides a robot hand grabbing method based on visual effect identification, with the following beneficial effects:
(1) According to the robot hand grabbing method based on visual effect identification, the object is identified and analysed from its images to obtain its shape characteristics and size, and the object is grabbed by adjusting the position of the robot hand. In this process a three-dimensional motion simulation model of the object is obtained, and whether the object will slide or drop during the grabbing motion is judged from the simulated motion state: the grabbing state of the robot hand is kept unchanged when the simulation shows no sliding or dropping, and is readjusted when the simulation shows sliding or dropping, until the robot hand can stably grab and transport the object. This prevents workpieces from slipping, improves the working efficiency of the robot hand, and avoids damage to the workpieces.
(2) According to the robot hand grabbing method based on visual effect identification, the grabbing angle and grabbing mode of the robot hand are adjusted according to the shape characteristics of the object, the grabbing mode being clamping grabbing or wrapping grabbing, which makes it convenient to grab different workpieces and widens the application range of the device.
(3) According to the robot hand grabbing method based on visual effect identification, by receiving the identification and analysis information of the objects, different objects are placed in different areas according to their types or sizes, whether the objects can be stacked is judged from their shape characteristics, the objects that can be stacked are stacked, and the objects that cannot be stacked are placed separately, which saves space.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for the embodiments or for the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of the robot hand grabbing method based on visual effect recognition according to the present invention;
FIG. 2 is a schematic diagram of step S1 of the robot hand grabbing method based on visual effect recognition according to the present invention;
FIG. 3 is a schematic diagram of step S3 of the robot hand grabbing method based on visual effect recognition according to the present invention;
FIG. 4 is a block diagram of the robot hand grabbing method based on visual effect recognition according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIGS. 1 to 4, the invention provides a robot hand grabbing method based on visual effect recognition, which comprises the following steps:
S1, shooting and imaging the object to be grabbed, and identifying and analysing it from the basic object images;
S2, generating a grabbing signal based on the recognized object, and controlling the robot hand to move and adjust so as to grab it;
S3, adjusting the robot hand based on the characteristics of the object, adapting to objects of different shapes;
S4, placing the object at the designated position based on the operation instruction.
In the process of shooting and imaging the object to be grabbed, images are taken from multiple angles to form several groups of images, and three-dimensional modeling of these groups of images realizes the identification of the object's characteristics and size. The three-dimensional modeling also determines the position of the object, and the robot hand is adjusted through three-dimensional simulation modeling so as to grab it. By identifying objects of different shapes, the positions of the grabbing nodes of the robot hand are adjusted so that the hand fits closely against the object during grabbing, which improves the grabbing stability of the robot hand.
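By way of illustration only, the overall S1 to S4 flow described above can be sketched in Python as follows; every name (ObjectModel, recognize, grab, place) and every numeric value is an assumption of this sketch and is not specified by the patent.

```python
# Minimal, self-contained sketch of the S1-S4 flow; all names and values are illustrative.
from dataclasses import dataclass

@dataclass
class ObjectModel:
    shape: str                 # e.g. "box" or "cylinder"
    size: tuple                # (length, width, height) in mm
    placement_angle: float     # rotation about the vertical axis, degrees
    position: tuple            # (x, y, z) in the robot base frame, mm

def recognize(images):
    # S1: stand-in for multi-angle shooting and 3D modeling; returns a fixed model here.
    return ObjectModel("box", (120.0, 80.0, 40.0), 30.0, (500.0, 200.0, 0.0))

def grab(model):
    # S2/S3: position the hand above the object, match its placement angle, pick a mode.
    mode = "clamping" if model.shape == "box" else "wrapping"
    pose = {"x": model.position[0], "y": model.position[1],
            "z": model.position[2] + 150.0, "yaw": model.placement_angle}
    return mode, pose

def place(model):
    # S4: route the object to a placement area keyed by its type.
    return {"box": "area_A", "cylinder": "area_B"}.get(model.shape, "area_C")

if __name__ == "__main__":
    model = recognize(images=["view1.png", "view2.png", "view3.png"])
    mode, pose = grab(model)
    print(mode, pose, place(model))
```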
In the embodiment of the invention, in order to further determine the position of the object and identify its placement angle, the identification and analysis of the object from the basic object images in S1 specifically includes shooting the object from multiple angles based on a shooting instruction and integrating the shot pictures;
identifying the characteristics of the object based on the integrated pictures; and identifying its position and placement angle based on the integrated pictures. The shape and size of the object are identified from the acquired object images, which are obtained by shooting from several non-repeating angles; the images are integrated to obtain an integrated object image, and a three-dimensional model built from the integrated images yields the shape, size, placement angle and placement position of the object.
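As one possible illustration of how the integrated multi-angle data could yield the object's size and placement angle, the following sketch merges point samples from several already-registered views and fits an oriented bounding box with PCA; the registration assumption, the synthetic sample data and the choice of PCA are assumptions of this sketch, since the patent does not fix a reconstruction method.

```python
# Illustrative multi-view merge and oriented-bounding-box fit; not the patent's algorithm.
import numpy as np

def merge_views(view_point_sets):
    # Assume each view has already been registered to the robot base frame,
    # so merging reduces to concatenation.
    return np.vstack(view_point_sets)

def fit_box(points):
    center = points.mean(axis=0)
    xy = points[:, :2] - center[:2]
    # PCA in the horizontal plane gives the placement angle of the long axis.
    eigvals, eigvecs = np.linalg.eigh(np.cov(xy.T))
    major = eigvecs[:, np.argmax(eigvals)]
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    # Extents along the principal axes approximate length and width.
    proj = xy @ eigvecs
    length, width = np.ptp(proj, axis=0)[np.argsort(eigvals)[::-1]]
    height = np.ptp(points[:, 2])
    return center, (length, width, height), angle

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    def sample(n):
        # Fake "view": noisy points on a 120 x 80 x 40 box rotated by 30 degrees.
        p = rng.uniform([-60, -40, 0], [60, 40, 40], size=(n, 3))
        a = np.radians(30.0)
        rot = np.array([[np.cos(a), -np.sin(a), 0],
                        [np.sin(a),  np.cos(a), 0],
                        [0, 0, 1]])
        return p @ rot.T + np.array([500.0, 200.0, 0.0])
    cloud = merge_views([sample(400), sample(400), sample(400)])
    center, size, angle = fit_box(cloud)
    print(center.round(1), np.round(size, 1), round(angle, 1))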
Further, in order to adjust the position of the robot hand so as to facilitate grabbing, controlling the robot hand to move and adjust in S2 includes receiving position identification information of the object, acquiring its position, establishing a three-dimensional motion model of the robot hand and the object, and obtaining the grabbing motion path of the robot hand; adjusting the transverse and longitudinal positions of the robot hand according to the grabbing motion path so that the hand is positioned above the object; and adjusting the rotation of the robot hand according to the acquired characteristics of the object, and determining the grabbing degree and grabbing posture of the robot hand.
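The sketch below shows one way the transverse and longitudinal adjustments and the rotation to the object's placement angle could be computed so that the hand ends up above the object; the frame conventions, the approach height and the simple two-waypoint path are assumptions, not details taken from the patent.

```python
# Illustrative computation of the move-above-and-rotate adjustment; assumptions only.
from dataclasses import dataclass

@dataclass
class HandPose:
    x: float
    y: float
    z: float
    yaw: float  # degrees

def plan_grab_motion(hand: HandPose, obj_xyz, obj_yaw, approach_height=150.0):
    dx = obj_xyz[0] - hand.x          # transverse adjustment
    dy = obj_xyz[1] - hand.y          # longitudinal adjustment
    above = HandPose(obj_xyz[0], obj_xyz[1], obj_xyz[2] + approach_height,
                     obj_yaw)          # rotated to the object's placement angle
    descend = HandPose(above.x, above.y, obj_xyz[2], obj_yaw)
    return (dx, dy), [above, descend]  # adjustments and a two-waypoint path

if __name__ == "__main__":
    hand = HandPose(0.0, 0.0, 300.0, 0.0)
    moves, path = plan_grab_motion(hand, (500.0, 200.0, 0.0), 30.0)
    print(moves)
    for waypoint in path:
        print(waypoint)
```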
In order to enable the robot hand to stably grab objects with different shape characteristics, adjusting the robot hand in S3 based on the characteristics of the object specifically includes adjusting the hand based on the shape characteristics of the object so that it clamps or wraps the object; judging whether the robot hand grabs the object stably based on those shape characteristics; and, if the grab is stable, executing the instruction to move the object, or, if the grab is unstable, continuing to adjust the robot hand until the grab is stable. The object is identified and analysed from its images to obtain its shape characteristics and size, and is then grabbed by adjusting the position of the robot hand. In this process a three-dimensional motion simulation model of the object is obtained, and whether the object will slide or drop during the grabbing motion is judged from the simulated motion state: the grabbing state of the robot hand is kept unchanged when the simulation shows no sliding or dropping, and is readjusted when the simulation shows sliding or dropping, until the robot hand can stably grab and transport the object. This prevents workpieces from slipping, improves the working efficiency of the robot hand, and avoids damage to the workpieces.
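The adjust-until-stable loop can be illustrated as follows; the slip criterion (a simple friction threshold standing in for the three-dimensional motion simulation) and all numeric values are assumptions of this sketch rather than the simulation model described by the patent.

```python
# Illustrative adjust-until-stable loop; the slip test is a stand-in for the 3D simulation.
def simulate_grab(grip_force, object_weight, friction_coeff=0.5):
    # Slip is assumed whenever friction cannot support the object's weight.
    return friction_coeff * grip_force >= object_weight

def adjust_until_stable(object_weight, grip_force=5.0,
                        force_step=2.0, max_iterations=20):
    for i in range(max_iterations):
        if simulate_grab(grip_force, object_weight):
            return grip_force, i        # stable: keep the current grabbing state
        grip_force += force_step        # unstable: readjust and simulate again
    raise RuntimeError("no stable grab found within the iteration limit")

if __name__ == "__main__":
    force, iterations = adjust_until_stable(object_weight=12.0)
    print(f"stable at grip force {force} N after {iterations} readjustments")
```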
Further, in order to widen the grabbing application range of the robot hand, adjusting the robot hand based on the shape characteristics of the object specifically includes receiving the shape characteristic information of the object and adjusting the grabbing angle and grabbing mode of the robot hand according to those characteristics, the grabbing mode being divided into clamping grabbing and wrapping grabbing; and establishing a three-dimensional motion simulation model of the robot hand grabbing the object according to the shape characteristics of the object and the grabbing mode. Adjusting the grabbing angle and selecting clamping or wrapping according to the shape of the object makes it convenient to grab different workpieces and widens the application range of the device; at the same time, the three-dimensional motion simulation model can simulate the motion of the object after it is grabbed, ensuring that it is stably transported to the designated position.
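A possible rule for choosing between the two grabbing modes named above, clamping and wrapping, from simple shape features is sketched below; the thresholds, feature names and angle convention are illustrative assumptions, not values from the patent.

```python
# Illustrative selection of grabbing mode and grabbing angle from shape features.
def choose_grab_mode(shape, size_mm):
    length, width, height = size_mm
    # Objects with flat, roughly parallel side faces suit clamping;
    # rounded or irregular objects suit a wrapping (enveloping) grab.
    if shape in ("box", "plate") and min(length, width) >= 20.0:
        return "clamping"
    return "wrapping"

def grab_angle(placement_angle_deg, mode):
    # Clamping approaches across the object's short axis; wrapping is
    # treated here as orientation-independent.
    return (placement_angle_deg + 90.0) % 180.0 if mode == "clamping" else 0.0

if __name__ == "__main__":
    for shape, size in [("box", (120.0, 80.0, 40.0)),
                        ("cylinder", (60.0, 60.0, 90.0))]:
        mode = choose_grab_mode(shape, size)
        print(shape, mode, grab_angle(30.0, mode))
```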
In addition, in order to place different objects in different areas according to their types or sizes while saving placement area, placing the object at the designated position based on the operation instruction in S4 specifically includes receiving the identification and analysis information of the objects and placing different objects in different areas according to their types or sizes; and acquiring the shape characteristics of the objects, judging from their shapes whether they can be stacked, stacking the objects that can be stacked, and placing separately the objects that cannot be stacked. A stability analysis is performed while the objects are being stacked based on their shape characteristics, so that once a stack reaches a certain height a new stack is started. By placing the objects in different areas according to their types or sizes, stacking those that can be stacked and placing separately those that cannot, the effect of saving space is achieved.
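The placement and stacking logic can be illustrated as follows; the area naming, the height limit and the stackability rule are illustrative assumptions of this sketch, not parameters given by the patent.

```python
# Illustrative routing of objects to areas, with stacking up to a height limit.
from collections import defaultdict

STACKABLE_SHAPES = {"box", "plate"}
MAX_STACK_HEIGHT_MM = 400.0

def area_for(shape, size_mm):
    # Areas are keyed by type and a coarse size class.
    return f"{shape}_{'large' if max(size_mm) > 100.0 else 'small'}"

def place_objects(objects):
    stacks = defaultdict(lambda: [0.0])          # per area: list of stack heights
    placements = []
    for shape, size in objects:
        area = area_for(shape, size)
        height = size[2]
        if shape in STACKABLE_SHAPES:
            if stacks[area][-1] + height > MAX_STACK_HEIGHT_MM:
                stacks[area].append(0.0)         # height limit reached: start a new stack
            stacks[area][-1] += height
            placements.append((area, len(stacks[area]) - 1))
        else:
            placements.append((area, None))      # non-stackable: placed separately
    return placements

if __name__ == "__main__":
    items = [("box", (120.0, 80.0, 150.0))] * 4 + [("cylinder", (60.0, 60.0, 90.0))]
    for item, where in zip(items, place_objects(items)):
        print(item[0], "->", where)
```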
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structural changes made by the description of the present invention and the accompanying drawings or direct/indirect application in other related technical fields are included in the scope of the invention.

Claims (10)

1. A robot hand grabbing method based on visual effect recognition, characterized by comprising the following steps:
S1, shooting and imaging an object to be grabbed, and identifying and analysing it from the basic object images;
S2, generating a grabbing signal based on the recognized object, and controlling the robot hand to move and adjust so as to grab it;
S3, adjusting the robot hand based on the characteristics of the object, adapting to objects of different shapes;
S4, placing the object at the designated position based on the operation instruction.
2. The robot hand grabbing method based on visual effect recognition according to claim 1, wherein the identifying and analysing of the object from the basic object images in S1 includes:
shooting the object from multiple angles based on a shooting instruction, and integrating the shot pictures;
identifying the characteristics of the object based on the integrated pictures;
and identifying the position and placement angle of the object based on the integrated pictures.
3. The robot hand grabbing method based on visual effect recognition according to claim 1, wherein the identification and analysis of the object in S1 includes identifying the shape and size of the object, identifying its placement angle, and identifying its placement position.
4. The robot hand grabbing method based on visual effect recognition according to claim 3, wherein the shape and size of the object are identified from acquired object images, the images being obtained by shooting from several non-repeating angles; the images are integrated to obtain an integrated object image, and a three-dimensional model built from the integrated images yields the shape, size, placement angle and placement position of the object.
5. The robot hand grabbing method based on visual effect recognition according to claim 2, wherein controlling the robot hand to move and adjust so as to grab the object includes:
receiving position identification information of the object, acquiring its position, establishing a three-dimensional motion model of the robot hand and the object, and obtaining the grabbing motion path of the robot hand;
adjusting the transverse and longitudinal positions of the robot hand according to the grabbing motion path so that the hand is positioned above the object;
and adjusting the rotation of the robot hand according to the acquired characteristics of the object, and determining the grabbing degree and grabbing posture of the robot hand.
6. The robot hand grabbing method based on visual effect recognition according to claim 1, wherein adjusting the robot hand in S3 based on the characteristics of the object includes:
adjusting the robot hand based on the shape characteristics of the object so that the hand clamps or wraps the object;
judging whether the robot hand grabs the object stably based on the shape characteristics of the object;
and, if the grab is stable, executing the instruction to move the object, or, if the grab is unstable, continuing to adjust the robot hand until the grab is stable.
7. The robot hand grabbing method based on visual effect recognition according to claim 6, wherein adjusting the robot hand based on the shape characteristics of the object includes:
receiving shape characteristic information of the object, and adjusting the grabbing angle and grabbing mode of the robot hand according to those characteristics, the grabbing mode being divided into clamping grabbing and wrapping grabbing;
and establishing a three-dimensional motion simulation model of the robot hand grabbing the object according to the shape characteristics of the object and the grabbing mode of the robot hand.
8. The robot hand grabbing method based on visual effect recognition according to claim 6, wherein judging whether the robot hand grabs the object stably based on the shape characteristics of the object includes:
acquiring the three-dimensional motion simulation model of the object, and judging from the simulated motion state whether the object slides or drops while being grabbed;
and, according to the simulated motion state of the object, keeping the grabbing state of the robot hand unchanged when the simulation shows no sliding or dropping, and readjusting the grabbing state when the simulation shows sliding or dropping, until the robot hand can stably grab and transport the object.
9. The robot hand grabbing method based on visual effect recognition according to claim 1, wherein placing the object at the designated position based on the operation instruction in S4 includes:
receiving the identification and analysis information of the objects, and placing different objects in different areas according to their types or sizes;
and acquiring the shape characteristics of the objects, judging from their shapes whether they can be stacked, stacking the objects that can be stacked, and placing separately the objects that cannot be stacked.
10. The robot hand grabbing method based on visual effect recognition according to claim 9, wherein a stability analysis is performed while the objects are being stacked based on their shape characteristics, so that once a stack reaches a certain height a new stack is started.
CN202311463968.3A 2023-11-06 2023-11-06 Robot hand grabbing method based on visual effect recognition Pending CN117340884A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311463968.3A CN117340884A (en) 2023-11-06 2023-11-06 Robot hand grabbing method based on visual effect recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311463968.3A CN117340884A (en) 2023-11-06 2023-11-06 Robot hand grabbing method based on visual effect recognition

Publications (1)

Publication Number Publication Date
CN117340884A (en) 2024-01-05

Family

ID=89355791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311463968.3A Pending CN117340884A (en) 2023-11-06 2023-11-06 Robot hand grabbing method based on visual effect recognition

Country Status (1)

Country Link
CN (1) CN117340884A (en)

Similar Documents

Publication Title
CN109483554B (en) Robot dynamic grabbing method and system based on global and local visual semantics
CN105598965B (en) The autonomous grasping means of robot drive lacking hand based on stereoscopic vision
JP5778311B1 (en) Picking apparatus and picking method
WO2017015898A1 (en) Control system for robotic unstacking equipment and method for controlling robotic unstacking
US20190351549A1 (en) Robot system and control method of robot system for taking out workpieces loaded in bulk
CN111923053A (en) Industrial robot object grabbing teaching system and method based on depth vision
CN105345585B (en) The robot handling gripper and handling system of the engine cylinder body of view-based access control model
CN109415175B (en) Intelligent loading and unloading system and working method thereof
CN110605711B (en) Method, device and system for controlling cooperative robot to grab object
CN110125036B (en) Self-recognition sorting method based on template matching
WO2021053750A1 (en) Work robot and work system
CN112079078A (en) Full-automatic unordered feeding system of robot based on binocular vision
CN113538459A (en) Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection
CN114055501A (en) Robot grabbing system and control method thereof
CN114074331A (en) Disordered grabbing method based on vision and robot
CN117340884A (en) Robot hand grabbing method based on visual effect recognition
KR101197125B1 (en) Object picking system and object picking method
CN210589323U (en) Steel hoop processing feeding control system based on three-dimensional visual guidance
CN210589283U (en) Truss type metal sample sorting manipulator
CN115556102B (en) Robot sorting and planning method and planning equipment based on visual recognition
An et al. An Autonomous Grasping Control System Based on Visual Object Recognition and Tactile Perception
CN212312046U (en) Robot three-dimensional vision system positioning and grabbing platform
CN217914230U (en) Gear shaft automatic sorting equipment
TW202027934A (en) System for eliminating interference of randomly stacked workpieces
Ata et al. Design and development of 5-DOF color sorting manipulator for industrial applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination