CN116766201A - Industrial robot control system based on machine vision - Google Patents
- Publication number
- CN116766201A (application number CN202310896922.4A)
- Authority
- CN
- China
- Prior art keywords
- target workpiece
- robot
- coordinates
- grabbing
- placement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B25J15/00—Gripping heads and other end effectors
- B25J15/08—Gripping heads and other end effectors having finger members
Abstract
The invention discloses an industrial robot control system based on machine vision, applied in the technical field of industrial robots. The system comprises: a grabbing robot, a robot driving platform, an upper computer control center, an AOI detection device, a first CCD camera and a second CCD camera. The first CCD camera is arranged at the workpiece grabbing position and the second CCD camera at the workpiece placement position; the AOI detection device is arranged at the workpiece placement position and detects defects of the placed workpiece; the upper computer control center identifies the coordinates of the workpiece and calculates grabbing and placement driving instructions for the industrial robot from the coordinates of the target workpiece and of the placement position; the robot driving platform controls the grabbing robot to grab and place workpieces. By identifying both the grabbing and placement positions of the workpiece with dual CCD cameras, the invention avoids the influence of workpiece position deviation on the grabbing operation of the industrial robot during production and improves production efficiency.
Description
Technical Field
The invention relates to the technical field of industrial robots, in particular to an industrial robot control system based on machine vision.
Background
Industrial robots are digitally controlled equipment with a wide range of applications and high technical added value, and they play an important role in manufacturing industries such as automobile and electrical appliance production. The transfer robot is an industrial robot that performs automated transfer work, replacing manual handling. As robots gradually take over manual labor, transfer robots have greatly relieved the workforce, and robot technology will powerfully promote future production and social development. However, most existing industrial robots operate under preset working procedures: the working scheme is fixed in advance. This control mode imposes strict requirements on the placement position and angle of the object to be grabbed; any deviation affects production and reduces efficiency. Meanwhile, existing image-recognition-based robot control methods require a large amount of computation to obtain the position and grabbing scheme of the target workpiece. How to provide an industrial robot control system with a low computational load that can flexibly adjust its grabbing scheme is therefore a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides an industrial robot control system based on machine vision, which identifies the grabbing position and the placement position of the target workpiece through two CCD cameras and converts them into robot coordinates, thereby avoiding the influence of workpiece position deviation on the grabbing operation of the industrial robot during production.
In order to achieve the above object, the present invention provides the following technical solutions:
an industrial robot control system based on machine vision, comprising: the robot comprises a grabbing robot, a robot driving platform, an upper computer control center, an AOI detection device, a first CCD camera and a second CCD camera; the first CCD camera is arranged at the grabbing position of the target workpiece and is used for acquiring image data of the grabbing position and transmitting the image data to the upper computer control center; the second CCD camera is arranged at the target workpiece placing position and is used for collecting image data of the placing position and transmitting the image data to the upper computer control center; the AOI detection device is arranged at the placement position of the target workpiece and is used for detecting defects of the placed target workpiece; the upper computer control center identifies the coordinates of the target workpiece before grabbing based on the image data of the grabbing positions, identifies the coordinates of the target workpiece after placing based on the image data of the placing positions, calculates grabbing driving instructions and placing driving instructions of the industrial robot according to the coordinates of the target workpiece and the coordinates of the placing positions, transmits the grabbing driving instructions and the placing driving instructions to the robot driving platform, and judges whether the placing positions are correct or not based on the coordinates of the target workpiece after placing; and the robot driving platform is used for controlling the grabbing robot to grab and place the target workpiece based on the received grabbing driving instruction and the placement driving instruction.
Optionally, the AOI detection device includes: a preprocessing unit and a defect identification unit, wherein the preprocessing unit performs standardized processing on the image data acquired by the second CCD camera and then inputs it into the defect identification unit, and the defect identification unit identifies defects in the target workpiece.
Optionally, the upper computer control center includes: the device comprises a coordinate recognition unit, a coordinate conversion unit, an instruction generation unit and a placement detection unit; a coordinate recognition unit that recognizes an image coordinate of the target workpiece before gripping and an image coordinate of the target workpiece after placement based on the image data of the gripping position and the image data of the placement position; a coordinate conversion unit that converts the image coordinates of the target workpiece into robot coordinates; an instruction generation unit that calculates a gripping drive instruction and a placement drive instruction of the industrial robot based on coordinates of the target workpiece and coordinates of the placement position; and the placement detection unit is used for judging whether the placement position is correct after the placement of the target workpiece is completed.
Optionally, after the image of the target workpiece is acquired, the coordinate recognition unit calculates a position of each pixel in the image in an image coordinate system, extracts a target workpiece contour in the image, and obtains an image coordinate of the target workpiece, and the coordinate conversion unit converts the image coordinate of the target workpiece to a robot coordinate.
Optionally, the coordinate recognition unit recognizes the contour of the target workpiece using an edge detection algorithm, specifically: filtering the image; calculating the gradient magnitude and direction of the filtered image; performing non-maximum suppression on the gradient magnitude to obtain candidate edge points; detecting edge points against a high threshold and a low threshold, and connecting the detected edge points to obtain the contour of the target workpiece.
Optionally, the grabbing robot is provided with 4 joint points and 1 gripper, and the instruction generation unit calculates the differences between the coordinates of the 4 joint points and the gripper in the robot coordinate system and the coordinates of the target workpiece, thereby obtaining an adjustment scheme for the 4 joint points and the gripper.
Optionally, after the joint points and the gripper of the grabbing robot move according to the adjustment scheme, it is judged whether the target workpiece is within the grabbing range; if so, the industrial robot is controlled to execute the grabbing instruction. When the robot control module judges that the target workpiece has been conveyed to the target coordinate point, the industrial robot is controlled to execute the placement instruction, and the placement detection unit judges whether the placement position is correct based on the image data of the placement position acquired by the second CCD camera.
Compared with the prior art, the invention discloses an industrial robot control system based on machine vision, which has the following beneficial effects: the invention recognizes the position of the workpiece based on the CCD camera, does not need to preset the control scheme of the industrial robot in advance, does not need to strictly limit the placement position and angle of the target workpiece, and can flexibly adjust the grabbing action of the industrial robot in the production process; the two cameras are arranged to identify the position and the placement position of the grabbing target workpiece, the adjustment scheme of the industrial robot is calculated through the first CCD camera, whether the industrial robot is transported to the target position is judged through the second CCD camera, the problem that the single-position camera is complicated in identification process algorithm due to the fact that the single-position camera follows the shooting of the industrial robot is avoided, and meanwhile the identification accuracy is improved; the second CCD camera is used for collecting and carrying the image after the carrying is finished to carry out AOI detection on the workpiece, so that the defect of the workpiece can be effectively identified, the processing flow is reduced, and the production efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an industrial robot control system according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention discloses an industrial robot control system based on machine vision, which is shown in fig. 1 and comprises the following components: the robot comprises a grabbing robot, a robot driving platform, an upper computer control center, an AOI detection device, a first CCD camera and a second CCD camera; the first CCD camera is arranged at the grabbing position of the target workpiece and is used for acquiring image data of the grabbing position and transmitting the image data to the upper computer control center; the second CCD camera is arranged at the target workpiece placing position and is used for collecting image data of the placing position and transmitting the image data to the upper computer control center; the AOI detection device is arranged at the placement position of the target workpiece and is used for detecting defects of the placed target workpiece; the upper computer control center identifies the coordinates of the target workpiece before grabbing based on the image data of the grabbing positions, identifies the coordinates of the target workpiece after placing based on the image data of the placing positions, calculates grabbing driving instructions and placing driving instructions of the industrial robot according to the coordinates of the target workpiece and the coordinates of the placing positions, transmits the grabbing driving instructions and the placing driving instructions to the robot driving platform, and judges whether the placing positions are correct or not based on the coordinates of the target workpiece after placing; and the robot driving platform is used for controlling the grabbing robot to grab and place the target workpiece based on the received grabbing driving instruction and the placement driving instruction.
Further, the AOI detection device includes: a preprocessing unit and a defect identification unit, wherein the preprocessing unit performs standardized processing on the image data acquired by the second CCD camera and then inputs it into the defect identification unit, and the defect identification unit identifies defects in the target workpiece.
Further, the preprocessing unit expands the image data acquired by the second CCD camera to a standard size, so that the height and width of the image are consistent with those of the training images of the defect recognition unit; during expansion, pixels with a pixel value of 0 are used to fill the added region.
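The zero-padding preprocessing described above can be sketched as follows (illustrative Python; the standard height/width values and placing the original image in the top-left corner are assumptions, not specified in the patent):

```python
import numpy as np

def pad_to_standard(img: np.ndarray, std_h: int, std_w: int) -> np.ndarray:
    """Expand an image to the standard training size by filling with
    pixel value 0, as described for the preprocessing unit."""
    h, w = img.shape[:2]
    if h > std_h or w > std_w:
        raise ValueError("image larger than the standard size")
    out = np.zeros((std_h, std_w) + img.shape[2:], dtype=img.dtype)
    out[:h, :w] = img  # original kept in the top-left corner (an assumption)
    return out
```

A camera frame smaller than the training size is thus brought to the exact input dimensions the defect recognition network was trained on, without rescaling (and therefore without distorting defect geometry).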
Further, the defect recognition unit adopts a BP (back-propagation) network model, trained as follows: input the sample information, including the sample images and sample labels; initialize the structure and parameters of the neural network and select an activation function (in this embodiment, sigmoid); feed an input sample image into the network, compute the output of each node, calculate the error value e, adjust the weights and biases by propagating the error backward, and repeat this process until the stopping condition is reached.
Further, the error value e is calculated by:

e = ŷ_k − y_k

where y_k is the output value of the kth output sample and ŷ_k is the desired output value of y_k; the input sample X = {x_1, x_2, …, x_n} is the input sample set corresponding to the output value y_k.

The weights are updated as:

w_i(t+1) = w_i(t) + η·H_j·(1 − H_j)·x_i·w_j(t)·e
w_j(t+1) = w_j(t) + η·H_j·e

where w_i(t+1) and w_j(t+1) are the values of w_i and w_j after the (t+1)th update, w_i(t) and w_j(t) are their values after the tth update, H_j is the output of hidden node j, and η is the learning rate.

The biases are updated as:

θ_a(t+1) = θ_a(t) + η·H_j·(1 − H_j)·w_j(t)·e
θ_b(t+1) = θ_b(t) + η·e

where θ_a(t+1) and θ_b(t+1) are the bias values after the (t+1)th update, and θ_a(t) and θ_b(t) are their values after the tth update.
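The update rules above can be sketched for a minimal network with a single hidden neuron and a linear output node (both simplifying assumptions for illustration; the patent does not fix the network structure):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_update(x, y_true, w_i, w_j, theta_a, theta_b, eta=0.1):
    """One BP update step following the weight/bias rules above.
    w_i: input-to-hidden weight vector; w_j: hidden-to-output weight;
    theta_a / theta_b: hidden / output biases; eta: learning rate."""
    H_j = sigmoid(np.dot(w_i, x) + theta_a)  # hidden node output H_j
    y = w_j * H_j + theta_b                  # network output (linear output node)
    e = y_true - y                           # error value e
    # all updates use the pre-update values, matching the formulas above
    new_w_i = w_i + eta * H_j * (1 - H_j) * x * w_j * e
    new_w_j = w_j + eta * H_j * e
    new_theta_a = theta_a + eta * H_j * (1 - H_j) * w_j * e
    new_theta_b = theta_b + eta * e
    return new_w_i, new_w_j, new_theta_a, new_theta_b, e
```

Iterating this step on a training sample drives |e| toward zero, which is the stopping condition described above.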
Further, the upper computer control center includes: the device comprises a coordinate recognition unit, a coordinate conversion unit, an instruction generation unit and a placement detection unit; a coordinate recognition unit that recognizes an image coordinate of the target workpiece before gripping and an image coordinate of the target workpiece after placement based on the image data of the gripping position and the image data of the placement position; a coordinate conversion unit that converts the image coordinates of the target workpiece into robot coordinates; an instruction generation unit that calculates a gripping drive instruction and a placement drive instruction of the industrial robot based on coordinates of the target workpiece and coordinates of the placement position; and the placement detection unit is used for judging whether the placement position is correct after the placement of the target workpiece is completed.
Further, after the image of the target workpiece is acquired, the coordinate recognition unit calculates the position of each pixel in the image coordinate system, extracts the contour of the target workpiece in the image, and obtains the image coordinates of the target workpiece; the coordinate conversion unit then converts the image coordinates of the target workpiece into robot coordinates.
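The image-to-robot coordinate conversion can be sketched with a 3×3 homogeneous transform obtained from hand-eye calibration (the transform matrix T is an assumed calibration result; the patent does not specify the conversion model):

```python
import numpy as np

def image_to_robot(pt_img, T):
    """Map an image coordinate (u, v) to a robot-frame coordinate using a
    3x3 homogeneous transform T from hand-eye calibration (assumed given)."""
    p = np.array([pt_img[0], pt_img[1], 1.0])  # homogeneous image point
    q = T @ p
    return q[:2] / q[2]  # de-homogenize to robot-plane coordinates
```

For a fixed camera looking at a planar work surface, such a planar homography (or, as a special case, an affine transform) is typically sufficient.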
Further, the coordinate recognition unit recognizes the contour of the target workpiece using an edge detection algorithm, specifically: filtering the image; calculating the gradient magnitude and direction of the filtered image; performing non-maximum suppression on the gradient magnitude to obtain candidate edge points; detecting edge points against a high threshold and a low threshold, and connecting the detected edge points to obtain the contour of the target workpiece.
Further, the gradient magnitude M[i, j] and direction θ[i, j] of the image are respectively:

M[i, j] = sqrt(P[i, j]² + Q[i, j]²)
θ[i, j] = arctan(Q[i, j] / P[i, j])

where P[i, j] is the pixel difference matrix in the x-axis direction of the image, and Q[i, j] is the pixel difference matrix in the y-axis direction. Non-maximum suppression means: for each pixel, judge whether its gradient magnitude is the extremum within its 3×3 neighborhood; if so, the pixel is taken as an edge point.
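The gradient computation above can be sketched with simple finite differences (the first-difference kernels are an assumption for illustration; classical Canny uses 2×2 averaged differences):

```python
import numpy as np

def gradient_magnitude_direction(img):
    """Compute gradient magnitude M[i,j] and direction theta[i,j] from the
    x-direction differences P and y-direction differences Q, as in the
    edge detection step described above."""
    img = img.astype(np.float64)
    P = np.zeros_like(img)
    Q = np.zeros_like(img)
    P[:, :-1] = img[:, 1:] - img[:, :-1]  # x-axis pixel differences
    Q[:-1, :] = img[1:, :] - img[:-1, :]  # y-axis pixel differences
    M = np.hypot(P, Q)                    # sqrt(P^2 + Q^2)
    theta = np.arctan2(Q, P)              # direction (arctan2 handles P = 0)
    return M, theta
```

Non-maximum suppression and double-threshold hysteresis would then be applied to M along theta to thin and link the edge points.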
Further, the grabbing robot is provided with 4 joint points and 1 gripper, and the instruction generation unit calculates the differences between the coordinates of the 4 joint points and the gripper in the robot coordinate system and the coordinates of the target workpiece, thereby obtaining an adjustment scheme for the 4 joint points and the gripper.
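The coordinate-difference calculation can be sketched as follows (how each difference maps to an actual joint motion is robot-specific and left abstract here):

```python
import numpy as np

def adjustment_scheme(joint_coords, gripper_coord, target_coord):
    """Return the robot-frame coordinate difference from each joint point
    and from the gripper to the target workpiece; these differences form
    the basis of the adjustment scheme."""
    target = np.asarray(target_coord, dtype=float)
    joint_deltas = [target - np.asarray(j, dtype=float) for j in joint_coords]
    gripper_delta = target - np.asarray(gripper_coord, dtype=float)
    return joint_deltas, gripper_delta
```

Converting these Cartesian differences into joint angles would require the robot's inverse kinematics, which the patent leaves to the robot driving platform.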
Further, after the joint points and the gripper of the grabbing robot move according to the adjustment scheme, it is judged whether the target workpiece is within the grabbing range; if so, the industrial robot is controlled to execute the grabbing instruction. When the robot control module judges that the target workpiece has been conveyed to the target coordinate point, the industrial robot is controlled to execute the placement instruction, and the placement detection unit judges whether the placement position is correct based on the image data of the placement position acquired by the second CCD camera.
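The placement-correctness judgment can be sketched as a distance check against a tolerance (the tolerance value is an assumed parameter, not given in the patent):

```python
import numpy as np

def placement_correct(placed_coord, target_coord, tol=1.0):
    """Judge whether the placed workpiece lies within tolerance tol of the
    target coordinate in the robot frame (tol is an assumed threshold)."""
    diff = np.asarray(placed_coord, float) - np.asarray(target_coord, float)
    return float(np.linalg.norm(diff)) <= tol
```

The placement detection unit would apply this check to the post-placement coordinates recognized from the second CCD camera's image.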
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and for identical or similar parts between the embodiments, reference may be made to one another.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (7)
1. An industrial robot control system based on machine vision, comprising: the robot comprises a grabbing robot, a robot driving platform, an upper computer control center, an AOI detection device, a first CCD camera and a second CCD camera; the first CCD camera is arranged at the grabbing position of the target workpiece and is used for acquiring image data of the grabbing position and transmitting the image data to the upper computer control center; the second CCD camera is arranged at the target workpiece placing position and is used for collecting image data of the placing position and transmitting the image data to the upper computer control center; the AOI detection device is arranged at the placement position of the target workpiece and is used for detecting defects of the placed target workpiece; the upper computer control center identifies the coordinates of the target workpiece before grabbing based on the image data of the grabbing positions, identifies the coordinates of the target workpiece after placing based on the image data of the placing positions, calculates grabbing driving instructions and placing driving instructions of the industrial robot according to the coordinates of the target workpiece and the coordinates of the placing positions, transmits the grabbing driving instructions and the placing driving instructions to the robot driving platform, and judges whether the placing positions are correct or not based on the coordinates of the target workpiece after placing; and the robot driving platform is used for controlling the grabbing robot to grab and place the target workpiece based on the received grabbing driving instruction and the placement driving instruction.
2. The machine vision-based industrial robot control system of claim 1, wherein the AOI detection device comprises: a preprocessing unit and a defect identification unit, wherein the preprocessing unit performs standardized processing on the image data acquired by the second CCD camera and then inputs it into the defect identification unit, and the defect identification unit identifies defects in the target workpiece.
3. The machine vision-based industrial robot control system of claim 1, wherein the upper computer control center comprises: the device comprises a coordinate recognition unit, a coordinate conversion unit, an instruction generation unit and a placement detection unit; a coordinate recognition unit that recognizes an image coordinate of the target workpiece before gripping and an image coordinate of the target workpiece after placement based on the image data of the gripping position and the image data of the placement position; a coordinate conversion unit that converts the image coordinates of the target workpiece into robot coordinates; an instruction generation unit that calculates a gripping drive instruction and a placement drive instruction of the industrial robot based on coordinates of the target workpiece and coordinates of the placement position; and the placement detection unit is used for judging whether the placement position is correct after the placement of the target workpiece is completed.
4. A machine vision based industrial robot control system according to claim 3, wherein after the image of the target workpiece is acquired, the coordinate recognition unit calculates the position of each pixel in the image coordinate system, extracts the contour of the target workpiece in the image, and obtains the image coordinates of the target workpiece, and the coordinate conversion unit converts the image coordinates of the target workpiece to the robot coordinates.
5. The machine vision-based industrial robot control system according to claim 4, wherein the coordinate recognition unit recognizes the contour of the target workpiece using an edge detection algorithm, specifically: filtering the image; calculating the gradient magnitude and direction of the filtered image; performing non-maximum suppression on the gradient magnitude to obtain candidate edge points; detecting edge points against a high threshold and a low threshold, and connecting the detected edge points to obtain the contour of the target workpiece.
6. The machine vision-based industrial robot control system according to claim 4, wherein the grabbing robot is provided with 4 joint points and 1 gripper, and the instruction generating unit calculates the differences between the coordinates of the 4 joint points and the gripper in the robot coordinate system and the coordinates of the target workpiece, thereby obtaining an adjustment scheme for the 4 joint points and the gripper.
7. The machine vision-based industrial robot control system according to claim 6, wherein after the joint points and the gripper of the grabbing robot move according to the adjustment scheme, it is judged whether the target workpiece is within the grabbing range; if so, the industrial robot is controlled to execute the grabbing instruction; when the robot control module judges that the target workpiece has been conveyed to the target coordinate point, the industrial robot is controlled to execute the placement instruction, and the placement detection unit judges whether the placement position is correct based on the image data of the placement position acquired by the second CCD camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310896922.4A CN116766201A (en) | 2023-07-21 | 2023-07-21 | Industrial robot control system based on machine vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116766201A true CN116766201A (en) | 2023-09-19 |
Family
ID=87991408
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310896922.4A Pending CN116766201A (en) | 2023-07-21 | 2023-07-21 | Industrial robot control system based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116766201A (en) |
- 2023-07-21: CN application CN202310896922.4A filed; publication CN116766201A, status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113146172B (en) | Multi-vision-based detection and assembly system and method | |
CN106853639A (en) | A kind of battery of mobile phone automatic assembly system and its control method | |
CN113379849B (en) | Robot autonomous recognition intelligent grabbing method and system based on depth camera | |
CN111923053A (en) | Industrial robot object grabbing teaching system and method based on depth vision | |
CN109926817B (en) | Machine vision-based automatic transformer assembling method | |
CN111645074A (en) | Robot grabbing and positioning method | |
CN114571153A (en) | Weld joint identification and robot weld joint tracking method based on 3D point cloud | |
JP2019057250A (en) | Work-piece information processing system and work-piece recognition method | |
CN114851209B (en) | Industrial robot working path planning optimization method and system based on vision | |
CN115629066A (en) | Method and device for automatic wiring based on visual guidance | |
CN113715012B (en) | Automatic assembling method and system for remote controller parts | |
CN114132745A (en) | Automatic workpiece loading and unloading system and method based on AGV and machine vision | |
CN113878576A (en) | Robot vision sorting process programming method | |
CN114463244A (en) | Vision robot grabbing system and control method thereof | |
CN116542914A (en) | Weld joint extraction and fitting method based on 3D point cloud | |
CN116766201A (en) | Industrial robot control system based on machine vision | |
CN114193440B (en) | Robot automatic grabbing system and method based on 3D vision | |
CN115471547A (en) | One-key calibration algorithm for visual detection of six-axis manipulator | |
CN111179255B (en) | Feature recognition method in automatic preparation process of membrane water-cooled wall | |
CN110060330B (en) | Three-dimensional modeling method and device based on point cloud image and robot | |
CN109202802B (en) | Visual guide system and method for clamping assembly | |
CN113715935A (en) | Automatic assembling system and automatic assembling method for automobile windshield | |
CN113763462A (en) | Method and system for automatically controlling feeding | |
CN117086519B (en) | Networking equipment data analysis and evaluation system and method based on industrial Internet | |
CN114056704B (en) | Feeding deviation rectifying method and equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||