CN118135011A - Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology - Google Patents

Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology

Info

Publication number
CN118135011A
CN118135011A
Authority
CN
China
Prior art keywords: scale, obstacle, picture, elevation angle, information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311833347.XA
Other languages
Chinese (zh)
Inventor
朱宁宁
乔春欣
刘雪梅
彭喜花
庞建峰
朱秀芳
谢兴勇
刘飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202311833347.XA
Publication of CN118135011A
Pending legal-status Critical Current

Classifications

    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/764 — Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/82 — Image or video recognition using pattern recognition or machine learning, using neural networks
    • G06V20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06T2207/10016 — Image acquisition modality: video; image sequence
    • G06T2207/20081 — Special algorithmic details: training; learning
    • G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06V2201/07 — Target detection
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention relates to the technical field of image processing and machine vision, and discloses a method for measuring the elevation angle of obstacles surrounding a weather station based on visual recognition technology. The method comprises: constructing a target detection model for obstacles and a scale (staff gauge), trained with a YOLACT deep learning model to detect the scale and obstacles; acquiring scale information in a camera reference-position image, extracting the scale contour line, and obtaining the position information of the scale; rotating the camera in the horizontal plane from the reference position to obtain 360-degree panoramic information, recording the angle corresponding to each picture; and inputting the acquired pictures one by one into the trained target detection model for target detection, extracting the category and contour information of the target obstacle, and calculating the obstacle elevation angle by combining the scale information with the known scale parameters. Compared with the prior art, the invention obtains the obstacle elevation angle information by proportional measurement according to the perspective imaging principle and the triangle similarity principle, and can reflect changes of the obstacles around the observation field in a timely manner.

Description

Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology
Technical Field
The invention relates to the technical field of image processing and machine vision, in particular to a method for measuring the elevation angle of a surrounding obstacle of a weather station based on a visual identification technology.
Background
When a site is selected for a ground meteorological observation station, the shielding effect of surrounding obstacles on measurement must be considered; the corresponding evaluation index is called the "horizon shielding elevation angle". In the traditional measurement method, a theodolite is levelled at the central point of the observation field, with the lens 1.5 m above the ground and the 0° mark of the azimuth scale aligned to north. The maximum shielding elevation angle of the terrain within the visible range is measured, starting from north and proceeding clockwise at 2° azimuth intervals, and the cumulative value of the shielding view-angle factors is then calculated from the measurement results.
However, the traditional measurement method is time-consuming and labor-intensive, cannot support routine detection, and cannot reflect changes of the obstacles around the observation field in a timely manner.
Disclosure of Invention
The invention aims to: address the problems in the prior art by providing a method for measuring the elevation angle of obstacles surrounding a weather station based on visual recognition technology, which collects video image data through a camera, extracts the contour of the target obstacle, and obtains the elevation angle information of surrounding obstacles by proportional measurement according to the perspective imaging principle and the triangle similarity principle.
The technical scheme is as follows: the invention provides a method for measuring the elevation angle of a surrounding obstacle of a weather station based on a visual identification technology, which comprises the following steps:
Step 1: constructing a target detection model for conventional obstacles and the scale (staff gauge), wherein the target detection model adopts a YOLO deep learning model trained to detect the scale and obstacles;
step 2: acquiring scale information in a camera reference position image, extracting a scale contour line, and obtaining position information and height information of a scale;
step 3: the camera rotates in a horizontal plane from a reference position to obtain 360-degree panoramic information, and angle information corresponding to each picture is recorded;
Step 4: and (3) sequentially inputting the acquired pictures into the target detection model trained in the step (1) to perform target detection, extracting the category and contour information of the target obstacle, and calculating the elevation angle of the obstacle by combining the scale information and the known scale parameters in the step (2).
Further, step 1 comprises the following steps:
step 1.1: collecting pictures, and establishing a scale and obstacle data set; the obstacle comprises a mountain, a structure and a tree; setting a scale, wherein the height of the scale is consistent with the height of the center point of the camera;
step 1.2: training a target detection model, performing contour labeling on a target obstacle and a scale by using labelme software, generating a json file by labeling, and then converting the json file into a yolo-format file by using a script;
step 1.3: a Python script compiles the collected pictures into a training data set file "train.txt" and a test data set file "test.txt";
Step 1.4: inputting the obtained data set files into the YOLO deep learning model and training the obstacle target detection model.
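Step 1.3 can be sketched in a few lines of Python. This is a minimal illustration only: the directory layout, the ".jpg" extension, and the 80/20 split ratio are assumptions, not values fixed by the patent.

```python
import random
from pathlib import Path

def make_split_files(image_dir, train_ratio=0.8, seed=42):
    """Write train.txt / test.txt (one image path per line) for YOLO training."""
    image_dir = Path(image_dir)
    paths = sorted(str(p) for p in image_dir.glob("*.jpg"))
    random.Random(seed).shuffle(paths)  # deterministic shuffle before splitting
    n_train = int(len(paths) * train_ratio)
    (image_dir / "train.txt").write_text("\n".join(paths[:n_train]) + "\n")
    (image_dir / "test.txt").write_text("\n".join(paths[n_train:]) + "\n")
    return n_train, len(paths) - n_train
```

The resulting text files follow the common YOLO convention of one image path per line.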
Further, the specific steps of the step 2 are as follows:
Step 2.1: positioning the scale: selecting a characteristic scale whose height is consistent with the camera center position and placing it in front of the camera, at a distance such that the complete scale appears within the camera's field of view;
Step 2.2: preprocessing the picture: after the picture containing the scale is obtained through the camera, the center of the picture is taken as the calibration point, and one fifth of the picture width is kept on each side of it, so that the scale remains centered;
Step 2.3: inputting the image into the target detection model trained in the step 1, carrying out scale recognition, extracting the scale outline, and calculating the height c' of the scale in the figure according to the longitudinal outline.
Further, in step 2.2, after the center picture is obtained, the image is resized without distortion to 640 × 640.
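The crop-then-resize preprocessing of step 2.2 can be sketched with NumPy as follows. This is a sketch under assumptions: nearest-neighbour resampling stands in for whatever interpolation the authors used, and "one fifth left and right" is read as keeping the central two fifths of the width.

```python
import numpy as np

def center_crop_and_resize(img, out_size=640):
    """Keep the central two fifths of the width (one fifth on each side of the
    vertical center line), then resize to out_size x out_size (nearest neighbour)."""
    h, w = img.shape[:2]
    cx, half = w // 2, w // 5
    crop = img[:, cx - half: cx + half]
    # Nearest-neighbour index maps for rows and columns of the output image
    rows = np.arange(out_size) * crop.shape[0] // out_size
    cols = np.arange(out_size) * crop.shape[1] // out_size
    return crop[np.ix_(rows, cols)]
```

A production pipeline would more likely use an image library's resize (e.g. OpenCV or Pillow); the NumPy version only makes the geometry explicit.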
Further, the angular step of picture acquisition in step 3 is not more than 2 degrees; that is, image information is acquired at least once every 2 degrees.
Further, the specific step of calculating the elevation angle of the obstacle in the step 4 is as follows:
1) Acquiring a center picture according to the image preprocessing method in step 2.2, and resizing the picture to 640 × 640;
2) Inputting the resized image into the target detection model trained in step 1, identifying the category and contour of the target obstacle, and extracting the relevant information;
3) Extracting characteristic dimensions: extracting the coordinate of the highest point of the obstacle contour line at the vertical center line of each picture, and calculating the difference between it and the ordinate of the image center point; the absolute value of this difference is the in-picture dimension b' of the obstacle above the scale;
4) Elevation angle calculation: the elevation angle is calculated by the similar-triangle principle, combining the actual height c of the scale and the actual distance a between the scale and the camera, using the following formulas:
b = (b'/c')·c (1)
tan α = b/a (2)
α = arctan(b/a) (3)
wherein b' is the dimension of the obstacle above the scale in the picture; c' is the height of the scale in the picture; c is the actual height of the scale; b is the actual height of the obstacle above the camera horizontal plane at the scale position; a is the actual distance between the scale and the camera; and α is the obstacle elevation angle.
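As a worked sketch of formulas (1)-(3), the calculation reduces to a few lines; the numeric values below are illustrative only, not taken from the patent:

```python
import math

def elevation_angle_deg(b_img, c_img, c_actual, a_actual):
    """Formulas (1)-(3): scale the in-picture excess height b' by the known
    scale height via c / c', then take the arctangent against the distance a."""
    b = (b_img / c_img) * c_actual                # (1) similar-triangles scaling
    return math.degrees(math.atan(b / a_actual))  # (2)-(3)

# Illustrative values: b' = 100 px, c' = 200 px, scale height c = 1.5 m,
# scale-to-camera distance a = 10 m  ->  b = 0.75 m, angle = atan(0.075)
angle = elevation_angle_deg(100, 200, 1.5, 10)  # about 4.29 degrees
```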
The beneficial effects are that:
The method collects video image data through a camera, extracts the circumscribed bounding box of the target obstacle, and obtains the obstacle elevation angle information by proportional measurement according to the perspective imaging principle and the triangle similarity principle. Compared with the traditional manual measurement method, the method provides higher detection precision, is efficient and quick, supports routine detection, and reflects changes of the obstacles around the observation field in a timely manner.
Drawings
FIG. 1 is a diagram of an obstacle dataset acquired by an embodiment of the invention;
FIG. 2 is a graph showing the relationship between a scale and a camera according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the principle of perspective imaging of a camera, a scale and an obstacle and triangle similarity according to the embodiment of the invention;
FIG. 4 is a class outline identification chart of a target obstacle and a scale after a target detection model in the embodiment of the invention;
FIG. 5 is a diagram of the YOLO deep learning model framework used in the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
The invention discloses a method for measuring the elevation angle of obstacles surrounding a weather station based on visual recognition technology, which collects video image data through a camera, extracts the contour of the target obstacle, and obtains the obstacle elevation angle information by proportional measurement according to the perspective imaging principle and the triangle similarity principle. The method specifically comprises the following steps:
1. YOLO-based obstacle target detection.
And (3) acquiring conventional obstacle (mountain, structure, tree and the like) images, establishing an obstacle data set, marking the category of the obstacle images, and constructing an obstacle target detection model. Referring to fig. 1, fig. 1 is a selected obstacle dataset.
Calibrating the horizontal view angle of the camera: two scales with the same height as the camera center point are selected, and the camera angle is adjusted so that the two points at camera height coincide in the camera's view; the camera is then at the horizontal viewing angle (see fig. 2).
When training the target detection model, the method specifically comprises the following steps:
1) Training the target detection model: the model adopts a YOLO deep learning model to train on the scale and obstacles (see fig. 5). Labelme software is used to annotate the contours of the target obstacles and the scale; the annotation generates json files, which a script then converts into YOLO-format files.
2) A Python script compiles the collected pictures into a training data set file "train.txt" and a test data set file "test.txt".
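The json-to-YOLO conversion script in step 1) might look like the following. This sketch assumes labelme's standard JSON layout ("imageWidth", "imageHeight", and polygon "shapes") and the YOLO segmentation label convention of normalised coordinates in [0, 1]; the class-id mapping is hypothetical.

```python
import json

def labelme_to_yolo_seg(json_path, class_ids):
    """Convert one labelme annotation file into YOLO segmentation label lines:
    '<class_id> x1 y1 x2 y2 ...' with pixel coordinates normalised by image size."""
    with open(json_path, encoding="utf-8") as f:
        data = json.load(f)
    w, h = data["imageWidth"], data["imageHeight"]
    lines = []
    for shape in data["shapes"]:
        cid = class_ids[shape["label"]]  # e.g. {"scale": 0, "mountain": 1, ...}
        coords = " ".join(f"{x / w:.6f} {y / h:.6f}" for x, y in shape["points"])
        lines.append(f"{cid} {coords}")
    return lines
```

Each returned line would be written to the ".txt" label file sitting next to its image.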
Scale information is collected in the camera reference-position image, the scale contour line is extracted, and the position and height information of the scale are obtained, as follows:
Step 2.1: positioning the scale: selecting a characteristic scale whose height is consistent with the camera center position and placing it in front of the camera, at a distance such that the complete scale appears within the camera's field of view.
Step 2.2: preprocessing the picture: after the picture containing the scale is obtained through the camera, the center of the picture is taken as the calibration point, and one fifth of the picture width is kept on each side of it, so that the scale remains centered. After the center picture is obtained, the image is resized without distortion to 640 × 640.
Step 2.3: inputting the image into the target detection model trained in the step 1, carrying out scale recognition, extracting the scale outline, and calculating the height c' of the scale in the figure according to the longitudinal outline.
The camera rotates in the horizontal plane from the reference position to obtain 360-degree panoramic information, and the angle corresponding to each picture is recorded. The angular step of picture acquisition is not more than 2 degrees; that is, image information is acquired at least once every 2 degrees.
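The angle schedule of this sweep can be sketched as follows; the patent does not specify the capture hardware API, so only the azimuth bookkeeping is shown here.

```python
def sweep_angles(step_deg=2.0):
    """Azimuth angles (degrees from the reference position) at which a frame is
    grabbed during one full horizontal rotation; step must not exceed 2 degrees."""
    if not 0 < step_deg <= 2.0:
        raise ValueError("angular step must be in (0, 2] degrees")
    n = int(round(360 / step_deg))
    return [i * step_deg for i in range(n)]
```

At the maximum permitted step of 2 degrees this yields 180 pictures per full rotation.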
The acquired pictures are sequentially input into the target detection model trained in step 1 for target detection, the category and contour information of the target obstacle are extracted, and the obstacle elevation angle is calculated by combining the scale information from step 2 with the known scale parameters.
2. Obstacle elevation calculation based on target detection
The basic idea is as follows: according to the similar-triangle principle, the obstacle elevation angle can be calculated from b and a (the distance between the camera and the scale) in fig. 3, where b is the actual height of the obstacle above the camera horizontal plane at the scale position. Through target detection and image feature extraction, the proportional relation between the in-image dimension b' of the obstacle above the scale and the in-image height c' of the scale is obtained, giving b by formula (1); the elevation angle then follows from formulas (2) and (3):
b = (b'/c')·c (1)
tan α = b/a (2)
α = arctan(b/a) (3)
wherein b' is the dimension of the obstacle above the scale in the picture; c' is the height of the scale in the picture; c is the actual height of the scale; a is the actual distance between the scale and the camera; and α is the obstacle elevation angle.
1) And (3) target detection:
As shown in fig. 4, the bounding boxes of the scale and the obstacle are detected in the picture by the target detection model, and the coordinate position and length-width data of each bounding box are obtained by feature extraction.
2) Feature size extraction
The ordinate of the center point of the picture marks the camera's horizontal plane; the difference between the ordinate of the highest point of the obstacle bounding box and the ordinate of the center point is the value b' in formula (1), and the in-image height of the scale bounding box is c' in formula (1).
3) Elevation calculation
The actual height b of the obstacle above the camera horizontal plane at the scale position is obtained by formula (1); the tangent of the elevation angle is then calculated by formula (2), and the elevation angle itself by formula (3).
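Steps 1)-3) can be combined into one hedged sketch. The (x, y_top, width, height) box layout and the image-axis convention are assumptions made for the illustration, not something the patent fixes.

```python
import math

def elevation_from_boxes(obstacle_box, scale_box, img_h, c_actual, a_actual):
    """b' = pixel gap between the obstacle box's top edge and the image center
    row (the camera horizontal plane); c' = scale box height in pixels.
    Boxes are (x, y_top, w, h) with the y axis pointing down."""
    b_img = abs(img_h / 2 - obstacle_box[1])      # b' in formula (1)
    c_img = scale_box[3]                          # c' in formula (1)
    b = (b_img / c_img) * c_actual                # (1)
    return math.degrees(math.atan(b / a_actual))  # (2)-(3)
```

For example, an obstacle box whose top edge sits 100 px above the center row of a 640-px-tall picture, with a 200-px-tall scale box, a 1.5 m scale, and a 10 m scale-to-camera distance, gives an elevation angle of roughly 4.3 degrees.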
The foregoing embodiments are merely illustrative of the technical concept and features of the present invention, and are intended to enable those skilled in the art to understand the present invention and to implement the same, not to limit the scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention should be included in the scope of the present invention.

Claims (6)

1. A method for measuring the elevation angle of a surrounding obstacle of a weather station based on a visual recognition technology is characterized by comprising the following steps:
Step 1: constructing a target detection model for conventional obstacles and the scale (staff gauge), wherein the target detection model adopts a YOLO deep learning model trained to detect the scale and obstacles;
step 2: acquiring scale information in a camera reference position image, extracting a scale contour line, and obtaining position information and height information of a scale;
step 3: the camera rotates in a horizontal plane from a reference position to obtain 360-degree panoramic information, and angle information corresponding to each picture is recorded;
Step 4: and (3) sequentially inputting the acquired pictures into the target detection model trained in the step (1) to perform target detection, extracting the category and contour information of the target obstacle, and calculating the elevation angle of the obstacle by combining the scale information and the known scale parameters in the step (2).
2. The method for measuring the elevation angle of the obstacle around the weather station based on the visual recognition technology according to claim 1, wherein the step 1 comprises the steps of:
step 1.1: collecting pictures, and establishing a scale and obstacle data set; the obstacle comprises a mountain, a structure and a tree; setting a scale, wherein the height of the scale is consistent with the height of the center point of the camera;
step 1.2: training a target detection model, performing contour labeling on a target obstacle and a scale by using labelme software, generating a json file by labeling, and then converting the json file into a yolo-format file by using a script;
step 1.3: a Python script compiles the collected pictures into a training data set file "train.txt" and a test data set file "test.txt";
Step 1.4: inputting the obtained data set files into the YOLO deep learning model and training the obstacle target detection model.
3. The method for measuring the elevation angle of the obstacle around the weather station based on the visual recognition technology according to claim 1, wherein the specific steps of the step 2 are as follows:
Step 2.1: positioning the scale: selecting a characteristic scale whose height is consistent with the camera center position and placing it in front of the camera, at a distance such that the complete scale appears within the camera's field of view;
Step 2.2: preprocessing the picture: after the picture containing the scale is obtained through the camera, the center of the picture is taken as the calibration point, and one fifth of the picture width is kept on each side of it, so that the scale remains centered;
Step 2.3: inputting the image into the target detection model trained in the step 1, carrying out scale recognition, extracting the scale outline, and calculating the height c' of the scale in the figure according to the longitudinal outline.
4. The method for measuring the elevation angle of the obstacles around the weather station based on visual recognition technology according to claim 3, wherein in step 2.2, after the center picture is obtained, the image is resized without distortion to 640 × 640.
5. The method for measuring the elevation angle of the obstacles around the weather station based on visual recognition technology according to claim 1, wherein the angular step of picture acquisition in step 3 is not more than 2 degrees; that is, image information is acquired at least once every 2 degrees.
6. The method for measuring the elevation angle of the obstacle around the weather station based on the visual recognition technology according to claim 3, wherein the specific step of calculating the elevation angle of the obstacle in the step 4 is as follows:
1) Acquiring a center picture according to the image preprocessing method in step 2.2, and resizing the picture to 640 × 640;
2) Inputting the resized image into the target detection model trained in step 1, identifying the category and contour of the target obstacle, and extracting the relevant information;
3) Extracting characteristic dimensions: extracting the coordinate of the highest point of the obstacle contour line at the vertical center line of each picture, and calculating the difference between it and the ordinate of the image center point; the absolute value of this difference is the in-picture dimension b' of the obstacle above the scale;
4) Elevation angle calculation: the elevation angle is calculated by the similar-triangle principle, combining the actual height c of the scale and the actual distance a between the scale and the camera, using the following formulas:
b = (b'/c')·c (1)
tan α = b/a (2)
α = arctan(b/a) (3)
wherein b' is the dimension of the obstacle above the scale in the picture; c' is the height of the scale in the picture; c is the actual height of the scale; b is the actual height of the obstacle above the camera horizontal plane at the scale position; a is the actual distance between the scale and the camera; and α is the obstacle elevation angle.
CN202311833347.XA 2023-12-27 2023-12-27 Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology Pending CN118135011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311833347.XA CN118135011A (en) 2023-12-27 2023-12-27 Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311833347.XA CN118135011A (en) 2023-12-27 2023-12-27 Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology

Publications (1)

Publication Number Publication Date
CN118135011A true CN118135011A (en) 2024-06-04

Family

ID=91230829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311833347.XA Pending CN118135011A (en) 2023-12-27 2023-12-27 Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology

Country Status (1)

Country Link
CN (1) CN118135011A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination