CN113822810A - Method for positioning workpiece in three-dimensional space based on machine vision - Google Patents

Method for positioning workpiece in three-dimensional space based on machine vision

Info

Publication number
CN113822810A
Authority
CN
China
Prior art keywords
image
workpiece
camera
positioning
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110999633.8A
Other languages
Chinese (zh)
Inventor
刘志峰 (Liu Zhifeng)
刘康 (Liu Kang)
许静静 (Xu Jingjing)
赵永胜 (Zhao Yongsheng)
李龙飞 (Li Longfei)
雷旦 (Lei Dan)
陈建州 (Chen Jianzhou)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202110999633.8A
Publication of CN113822810A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20028Bilateral filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for positioning a workpiece in three-dimensional space based on machine vision. The original image is first preprocessed; binarization during preprocessing is performed with a k-entropy method from generalized statistical mechanics theory. ROI extraction and image edge detection then yield the position of the target workpiece in the pixel coordinate system. The object distance is obtained by analysis of the pinhole camera model, and coordinate transformation converts the result into the real-time position of the target workpiece in the robot base coordinate system, achieving accurate positioning of the target workpiece.

Description

Method for positioning workpiece in three-dimensional space based on machine vision
Technical Field
The invention relates to a detection method combining machine vision and image recognition, applied to workpiece positioning in three-dimensional space. It is suitable for the positioning detection of various workpieces in automated production, and in particular relates to a vision-based real-time positioning method.
Background
With the increasing automation and intelligence of the manufacturing industry, more and more factories are introducing automated production lines that include industrial robots. When robots participate in automated production, they must be able to position workpieces accurately. At present, however, most robot applications still rely on manual teaching, which does not release the full potential of industrial robots: once the working scene changes, current industrial robots cannot reliably complete their preset tasks. The same limitation appears in the general problem of workpiece positioning in such robot applications.
To improve the adaptability of industrial robots to changes in scene and workpiece during application, an efficient, accurate, practical and low-cost method for positioning and detecting target workpieces is urgently needed, so as to raise production efficiency and robot utilization and to realize the application potential of industrial robots as far as possible.
Disclosure of Invention
The invention aims to improve workpiece positioning accuracy when a robot locates workpiece products in real time in existing production. An image binarization method based on k-entropy is provided, improving the image processing result, and a machine-vision method for real-time workpiece positioning is provided, realizing real-time positioning of various workpieces and improving the efficiency and precision of subsequent work.
In order to solve the technical problems, the invention provides a workpiece positioning method based on vision, which comprises the following implementation steps:
After the focal length and aperture of the industrial camera are properly adjusted, the camera still cannot be used directly for tasks involving positioning and measurement: to obtain the best results it must first be calibrated, and calibration yields the camera's intrinsic and extrinsic parameter information. The distortion coefficients obtained from calibration are then used to correct each acquired image, eliminating the influence of the camera lens and other components on image quality during acquisition.
The robot adjusts the pose of the robot, enables the target workpiece to appear in the visual field range of the camera, starts the camera to shoot the target workpiece, and obtains an original image of the target workpiece in the current working scene.
The original image is then denoised. During shooting, photoelectric effects in the camera's internal sensor or in the external scene introduce considerable noise into the image, which interferes with the image processing, so the image must be filtered; different filtering methods can be chosen for different noise types. Common filtering methods are median filtering, mean filtering, Gaussian filtering and bilateral filtering.
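As a hedged illustration, the four filters named above map directly onto standard OpenCV calls; in the following Python sketch the kernel sizes, sigma values and file name are illustrative assumptions, not values taken from the patent:

```python
import cv2

# Load the captured workpiece image in grayscale ("workpiece.png" is illustrative).
img = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)

# The four common filters named above, as provided by OpenCV.
median    = cv2.medianBlur(img, 5)                # strong against salt-and-pepper noise
mean      = cv2.blur(img, (5, 5))                 # simple box average
gaussian  = cv2.GaussianBlur(img, (5, 5), 1.5)    # weighted smoothing
bilateral = cv2.bilateralFilter(img, 9, 75, 75)   # smooths while preserving edges
```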
A suitable threshold is then selected and the image is binarized with the k-entropy method based on generalized statistical mechanics theory. The image is first converted to gray scale, which reduces the amount of computation and increases speed. Let t be the optimal segmentation threshold; t divides the image containing the target workpiece into two parts, one being the image region A1 corresponding to the target, the other being the region outside the target region, denoted A2. Let h_i = n_i/S denote the fraction of pixels at each gray level in the image, where n_i is the number of pixels with gray level i and S is the total number of image pixels.
$$P(O) = \sum_{i=0}^{t} h_i, \qquad P(B) = \sum_{i=t+1}^{L-1} h_i$$

$$S(O) = -\sum_{i=0}^{t} \frac{\left(h_i/P(O)\right)^{1+\kappa} - \left(h_i/P(O)\right)^{1-\kappa}}{2\kappa}, \qquad S(B) = -\sum_{i=t+1}^{L-1} \frac{\left(h_i/P(B)\right)^{1+\kappa} - \left(h_i/P(B)\right)^{1-\kappa}}{2\kappa}$$

$$K(O) = \sum_{i=0}^{t} \frac{\left(h_i/P(O)\right)^{1+\kappa} + \left(h_i/P(O)\right)^{1-\kappa}}{2}, \qquad K(B) = \sum_{i=t+1}^{L-1} \frac{\left(h_i/P(B)\right)^{1+\kappa} + \left(h_i/P(B)\right)^{1-\kappa}}{2}$$

$$S(t) = S(O)\,K(B) + S(B)\,K(O)$$

where L is the number of gray levels and κ is the entropic index of the k-entropy.
The optimal segmentation threshold t is obtained by maximizing S(t).
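A minimal NumPy sketch of this threshold search follows, assuming the κ-entropy and co-entropy forms written above; the entropic index kappa and the 8-bit gray range are illustrative assumptions:

```python
import numpy as np

def k_entropy_threshold(gray, kappa=0.5):
    """Exhaustively search for the t that maximizes S(t) = S(O)K(B) + S(B)K(O)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    h = hist / hist.sum()                      # h_i = n_i / S

    def S_and_K(p):
        p = p[p > 0]
        if p.size == 0:
            return 0.0, 0.0
        p = p / p.sum()                        # h_i / P within the region
        S = -np.sum((p**(1 + kappa) - p**(1 - kappa)) / (2 * kappa))
        K = np.sum((p**(1 + kappa) + p**(1 - kappa)) / 2)
        return S, K

    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        S_O, K_O = S_and_K(h[:t + 1])          # target region A1
        S_B, K_B = S_and_K(h[t + 1:])          # background region A2
        score = S_O * K_B + S_B * K_O
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

The binary image is then obtained by comparing each pixel against the returned threshold, e.g. `binary = (gray > t).astype(np.uint8) * 255`.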
The edges of the image are detected with the Canny algorithm: the image is first smoothed with a Gaussian filter, the gradient magnitude and direction are computed, non-maximum suppression is applied to the gradient magnitude along the gradient direction, and finally the detected edges are linked by the double-threshold method so that the contour is complete and continuous. Analysis of the workpiece contour then yields the position of the target workpiece in the pixel coordinate system; let the pixel coordinates of the workpiece feature point be (u, v).
After the pixel coordinates (u, v) of the workpiece are obtained, a coordinate (unit) transformation is required to convert the pixel coordinates into image coordinates (x, y):
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & \gamma & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
where dx and dy are the physical pixel dimensions of the camera, (u0, v0) is the position of the image coordinate system's origin in the pixel coordinate system, and γ is the skew factor, typically taken as 0.
After the image coordinates (x, y) of the workpiece center point are obtained, the point is further converted into the camera coordinate system as (Xc, Yc, Zc):
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$
Here Zc is the object distance when the camera shoots and f is the focal length of the camera.
The transformation matrix H between the camera and the robot end is obtained by hand-eye calibration, the real-time pose matrix K of the robot is obtained from the robot controller, and a further coordinate transformation yields the real-time position (Xb, Yb, Zb) of the target workpiece center point in the robot base coordinate system:
$$\begin{bmatrix} X_b \\ Y_b \\ Z_b \\ 1 \end{bmatrix} = K\,H \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$
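Putting the three transformations together, a hedged Python sketch of the pixel-to-base chain follows; H and K are assumed to be 4×4 homogeneous transforms supplied by hand-eye calibration and the robot controller, and the parameter names mirror the symbols above:

```python
import numpy as np

def pixel_to_base(u, v, dx, dy, u0, v0, f, Zc, H, K):
    """Chain: pixel (u, v) -> image (x, y) -> camera (Xc, Yc, Zc) -> base (Xb, Yb, Zb)."""
    # Pixel -> image coordinates (skew factor gamma taken as 0).
    x = (u - u0) * dx
    y = (v - v0) * dy
    # Image -> camera coordinates via the pinhole model, using the object distance Zc.
    Xc = x * Zc / f
    Yc = y * Zc / f
    Pc = np.array([Xc, Yc, Zc, 1.0])           # homogeneous camera-frame point
    # Camera -> robot base coordinates through hand-eye (H) and robot pose (K).
    Pb = K @ H @ Pc
    return Pb[:3]
```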
The machine-vision method for accurately positioning a workpiece in three-dimensional space has the following advantages:
1. the k-entropy image segmentation method effectively segments the target from images in real application scenes, making it easy to extract the key information points of the image;
2. the proposed image processing pipeline is simple and efficient, with fast computation and low complexity; applied to video images it realizes real-time workpiece positioning;
3. the method is widely applicable: it suits a variety of part products in industrial production, requires only simple equipment, and meets the requirements of many usage scenarios.
additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a block flow diagram of a method for vision-based spatial localization of a workpiece according to the present invention.
Fig. 2 is a gray level histogram of an image.
FIG. 3 is the S–K diagram obtained by the k-entropy method based on generalized statistical mechanics theory.
Detailed Description
Specific embodiments of the present invention are described below with reference to the drawings, but the present invention is not limited to these embodiments.
It should also be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
The target images processed by the invention come from an industrial camera mounted on an industrial robot, used to detect workpieces on a production line.
As shown in FIG. 1, the steps of the vision-based workpiece detection method provided by the present invention are as follows:
step 1, calibrating. Because the developed rates of imaging of the edge area and the central area of the camera lens are different, distortion effect can be caused in the imaging process, and in order to ensure the optimal imaging effect, the camera must be calibrated before being used. The camera and the computer are connected to acquire real-time image information, and clear imaging effect can be achieved by adjusting the focal length and the aperture of the industrial camera. And acquiring the internal and external parameter information of the camera through a calibration program in Halcon image processing software. And carrying out distortion correction on the image by using the calibrated distortion coefficient, and eliminating the influence of a camera lens and other components on the image quality in the image acquisition process.
Step 2, the robot moves to adjust the camera pose (eye-in-hand) so that the target workpiece appears within the camera's field of view; the camera is started to photograph the target workpiece and acquire an original image of it in the current working scene.
Step 3, the original image is filtered to remove noise. During shooting, photoelectric effects in the camera's internal sensor or the external scene introduce considerable noise, which interferes with the image processing; analysis of the acquired image noise led to choosing median filtering to process the image.
and 4, carrying out binarization on the image by using a k entropy method based on the generalized statistical mechanics theory. The image is subjected to gray scale conversion, so that the calculation amount can be reduced, and the calculation speed can be improved. Setting t as the threshold for obtaining the optimal segmentation, the threshold may be such that the image containing the target workpiece is divided into two parts, one part corresponding to the target area a1 of the image, and the part excluding the target area is denoted by a 2. By using hi=niS represents the ratio of the number of pixels belonging to each gray level to the total number of pixels in the image.
$$P(O) = \sum_{i=0}^{t} h_i, \qquad P(B) = \sum_{i=t+1}^{L-1} h_i$$

$$S(O) = -\sum_{i=0}^{t} \frac{\left(h_i/P(O)\right)^{1+\kappa} - \left(h_i/P(O)\right)^{1-\kappa}}{2\kappa}, \qquad S(B) = -\sum_{i=t+1}^{L-1} \frac{\left(h_i/P(B)\right)^{1+\kappa} - \left(h_i/P(B)\right)^{1-\kappa}}{2\kappa}$$

$$K(O) = \sum_{i=0}^{t} \frac{\left(h_i/P(O)\right)^{1+\kappa} + \left(h_i/P(O)\right)^{1-\kappa}}{2}, \qquad K(B) = \sum_{i=t+1}^{L-1} \frac{\left(h_i/P(B)\right)^{1+\kappa} + \left(h_i/P(B)\right)^{1-\kappa}}{2}$$

$$S(t) = S(O)\,K(B) + S(B)\,K(O)$$
The optimal segmentation threshold t is obtained by maximizing S(t), and the image is segmented with this threshold to find the ROI (region of interest) containing the target workpiece.
Step 5, the edges of the image are detected with the Canny algorithm: the image is first smoothed with a Gaussian filter, the gradient magnitude and direction are computed, non-maximum suppression is applied to the gradient magnitude along the gradient direction, and the detected edges are finally linked with double thresholds to select the appropriate edge information. Analysis of the workpiece contour yields the position of the target workpiece in the pixel coordinate system; let the pixel coordinates of the workpiece feature point be (u, v).
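A hedged OpenCV sketch of this step is shown below; the Canny thresholds and file name are illustrative, and taking the centroid of the largest contour as the feature point is one possible choice rather than the patent's prescribed one:

```python
import cv2

img = cv2.imread("roi.png", cv2.IMREAD_GRAYSCALE)      # segmented ROI (illustrative path)
edges = cv2.Canny(img, 50, 150)                        # Gaussian smoothing, gradients,
                                                       # non-maximum suppression and
                                                       # double-threshold linking internally

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = max(contours, key=cv2.contourArea)                 # largest contour = workpiece outline
m = cv2.moments(c)
u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]        # centroid as feature point (u, v)
```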
Step 6, after the pixel coordinates (u, v) of the workpiece target point are obtained, an affine coordinate transformation converts the feature point's pixel coordinates into image coordinates (x, y):
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & \gamma & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
where dx and dy are the physical pixel dimensions of the camera, (u0, v0) is the position of the image coordinate system's origin in the pixel coordinate system, and γ is the skew factor of the image coordinate system, typically taken as 0.
Step 7, after the image coordinates (x, y) of the workpiece center point are obtained, the point is further converted into the camera coordinate system as (Xc, Yc, Zc):
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$
Here Zc is the object distance when the camera shoots and f is the focal length of the camera.
Step 8, the transformation matrix H between the camera and the robot end is obtained by hand-eye calibration, the real-time pose matrix K of the robot end relative to the base coordinate system is obtained from the robot controller, and a further coordinate transformation yields the real-time position (Xb, Yb, Zb) of the target workpiece target point in the robot base coordinate system:
$$\begin{bmatrix} X_b \\ Y_b \\ Z_b \\ 1 \end{bmatrix} = K\,H \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$
Through this series of steps, the real-time three-dimensional position of the feature points on the target workpiece in the robot base coordinate system is obtained, achieving three-dimensional positioning of the workpiece and allowing the robot to be guided to the specified position to operate.

Claims (6)

1. A method for positioning a workpiece in three-dimensional space based on machine vision, characterized in that: before the detection system is used, the camera is calibrated; first, the camera's intrinsic and extrinsic parameters are obtained through calibration, eliminating the radial distortion introduced into the acquired image by the camera lens and other components and ensuring the accuracy of the images acquired by the camera; second, the acquired image is given the necessary processing, and the ROI (region of interest) of the image is extracted with the k-entropy method based on generalized statistical mechanics theory; finally, the camera coordinate system is converted to the world coordinate system according to the camera calibration result, and the key pixel points for positioning are found in the image, thereby realizing the positioning of the object in the actual scene.
2. The method of claim 1 for positioning a workpiece in three-dimensional space based on machine vision, wherein the method comprises the following steps: before the detection system is used, the camera is calibrated with the Zhang Zhengyou calibration method to obtain the key parameters of the camera model; the workpiece is placed in the detection area, the camera photographs the workpiece in the detection area, and an original image of the workpiece in the detection area is obtained.
3. The method of claim 1 for positioning a workpiece in three-dimensional space based on machine vision, wherein the method comprises the following steps: the image undergoes the necessary processing, including noise filtering, image edge enhancement and gray-level normalization, to improve the quality of image features; this addresses the degradation caused by image noise, and by local bright spots arising from reflections and highlights, which otherwise make key image features difficult to extract and prevent accurate subsequent positioning.
4. The method of claim 1 for positioning a workpiece in three-dimensional space based on machine vision, wherein the method comprises the following steps: the ROI of the image is extracted with the k-entropy method based on generalized statistical mechanics theory to segment the target from the background; t is set as the optimal segmentation threshold, and t divides the image containing the target workpiece into two parts, one being the image region A1 corresponding to the target, the other being the region outside the target (the background), denoted A2; h_i = n_i/S denotes the fraction of pixels at each gray level in the image, where n_i is the number of pixels with gray level i and S is the total number of image pixels; the optimal segmentation threshold t is obtained by maximizing S(t), and the image is segmented with this threshold to find the ROI (region of interest) containing the target workpiece.
5. The method of claim 1 for positioning a workpiece in three-dimensional space based on machine vision, wherein the method comprises the following steps: the edges of the image are detected with the Canny algorithm: the image is first smoothed with a Gaussian filter, the gradient magnitude and direction are computed, non-maximum suppression is applied to the gradient magnitude along the gradient direction, and the detected edges are finally linked with the double-threshold method so that the contour is complete and continuous; analysis of the workpiece contour yields the position of the target workpiece in the pixel coordinate system, the pixel coordinates of the workpiece's key feature point being (u, v).
6. The method of claim 1 for positioning a workpiece in three-dimensional space based on machine vision, wherein the method comprises the following steps: after the pixel coordinates (u, v) of the workpiece are obtained, a coordinate transformation converts the pixel coordinates into image coordinates (x, y); after the image coordinates (x, y) of the workpiece center point are obtained, the point is further converted into the camera coordinate system as (Xc, Yc, Zc); the transformation matrix H between the camera and the robot end is obtained by hand-eye calibration, the real-time pose matrix K of the robot is obtained from the robot controller, and a further coordinate transformation yields the real-time position (Xb, Yb, Zb) of the target workpiece center point in the robot base coordinate system.
CN202110999633.8A 2021-08-29 2021-08-29 Method for positioning workpiece in three-dimensional space based on machine vision Pending CN113822810A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110999633.8A CN113822810A (en) 2021-08-29 2021-08-29 Method for positioning workpiece in three-dimensional space based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110999633.8A CN113822810A (en) 2021-08-29 2021-08-29 Method for positioning workpiece in three-dimensional space based on machine vision

Publications (1)

Publication Number Publication Date
CN113822810A (en) 2021-12-21

Family

ID=78923233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110999633.8A Pending CN113822810A (en) 2021-08-29 2021-08-29 Method for positioning workpiece in three-dimensional space based on machine vision

Country Status (1)

Country Link
CN (1) CN113822810A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663393A (en) * 2012-03-02 2012-09-12 哈尔滨工程大学 Method for extracting region of interest of finger vein image based on correction of rotation
CN104700421A (en) * 2015-03-27 2015-06-10 中国科学院光电技术研究所 Adaptive threshold edge detection algorithm based on canny
CN106272424A (en) * 2016-09-07 2017-01-04 华中科技大学 A kind of industrial robot grasping means based on monocular camera and three-dimensional force sensor
CN109934839A (en) * 2019-03-08 2019-06-25 北京工业大学 A kind of workpiece inspection method of view-based access control model
CN111267094A (en) * 2019-12-31 2020-06-12 芜湖哈特机器人产业技术研究院有限公司 Workpiece positioning and grabbing method based on binocular vision

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114292021A (en) * 2021-12-30 2022-04-08 南京春辉科技实业有限公司 System and method for adjusting preform rod in real time in quartz optical fiber drawing process
CN115830053A (en) * 2023-01-17 2023-03-21 江苏金恒信息科技股份有限公司 Cord steel mosaic sample edge positioning method and system based on machine vision
CN115830053B (en) * 2023-01-17 2023-09-05 江苏金恒信息科技股份有限公司 Machine vision-based cord steel mosaic edge positioning method and system
CN116071361A (en) * 2023-03-20 2023-05-05 深圳思谋信息科技有限公司 Visual positioning method and device for workpiece, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111951237B (en) Visual appearance detection method
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
CN107263468B (en) SCARA robot assembly method using digital image processing technology
CN111721259B (en) Underwater robot recovery positioning method based on binocular vision
CN111251336B (en) Double-arm cooperative intelligent assembly system based on visual positioning
CN111126174A (en) Visual detection method for robot to grab parts
CN114279357B (en) Die casting burr size measurement method and system based on machine vision
CN109815822B (en) Patrol diagram part target identification method based on generalized Hough transformation
CN115830018B (en) Carbon block detection method and system based on deep learning and binocular vision
CN110334727B (en) Intelligent matching detection method for tunnel cracks
CN109035214A (en) A kind of industrial robot material shapes recognition methods
CN114425776A (en) Automatic labeling positioning and deviation rectifying method based on computer vision
CN112560704B (en) Visual identification method and system for multi-feature fusion
CN108582075A (en) A kind of intelligent robot vision automation grasping system
CN108109154A (en) A kind of new positioning of workpiece and data capture method
CN109671084B (en) Method for measuring shape of workpiece
CN113012228B (en) Workpiece positioning system and workpiece positioning method based on deep learning
CN104966283A (en) Imaging layered registering method
CN116594351A (en) Numerical control machining unit system based on machine vision
CN116862881A (en) Multi-target real-time offset detection method based on image processing
CN116823708A (en) PC component side mold identification and positioning research based on machine vision
CN109410272B (en) Transformer nut recognition and positioning device and method
CN114932292B (en) Narrow-gap passive vision weld joint tracking method and system
CN115753791A (en) Defect detection method, device and system based on machine vision
CN113752258A (en) Machine vision-based shaft hole centering guide method for workpiece assembly process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination