CN116233360A - Automatic tracking shooting cradle head system based on networking - Google Patents
Automatic tracking shooting cradle head system based on networking
- Publication number
- CN116233360A (application number CN202211707531.5A)
- Authority
- CN
- China
- Prior art keywords
- point
- motion
- module
- image information
- grid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H: ELECTRICITY
- H04: ELECTRIC COMMUNICATION TECHNIQUE
- H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N 7/00: Television systems
- H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L 63/00: Network architectures or network communication protocols for network security
- H04L 63/08: … for authentication of entities
- H04L 63/083: … for authentication of entities using passwords
Abstract
The invention discloses an automatic tracking shooting cradle head system based on networking, in the technical field of automatic tracking shooting systems, which aims to improve anti-interference capability. The system comprises: an acquisition module for acquiring image information in real time; an identification module for identifying motion points in the acquired image information; an analysis module for setting a grid area based on the acquired image information and analyzing the motion state of the motion points against that grid area; a processing module that generates control instructions from the motion-state analysis; and a control module for controlling the angle of the cradle head assembly based on those instructions. The system can track an object moving within the shooting lens in real time, meeting everyday intelligent-monitoring needs; by providing the identification module, valid and invalid objects can be distinguished by object volume, so interference is easier to reject.
Description
Technical Field
The invention relates to the technical field of automatic tracking shooting systems, in particular to an automatic tracking shooting tripod head system based on networking.
Background
A camera is a video input device and a type of closed-circuit television, widely used in video conferencing, telemedicine, real-time monitoring and similar applications. It generally provides basic functions such as video capture, transmission and still-image capture. After the lens acquires an image, the photosensitive circuit and control components inside the camera process it and convert it into a digital signal the computer can recognize; the signal is then transferred to the computer over a parallel port or USB connection and restored into a picture by software. As technology advances, cameras have become increasingly intelligent, and cameras with tracking-shooting functions are increasingly popular.
Through retrieval, Chinese patent application CN201711365727.X discloses a camera-based method and system for automatically tracking preset positions. The method automatically associates a group of cameras using positioning data of a moving target in a pipe gallery and switches camera presets as the target moves, thereby achieving automatic tracking. The system comprises a camera basic-information unit, connected to the central processing unit, for acquiring camera point-position information and camera preset-position information; the central processing unit, connected to the preset-switching control unit, which automatically finds the group of cameras associated with it from personnel positioning information and computes the optimal preset positions to which the various cameras should switch; and the preset-switching control unit, which receives and executes instruction signals from the central processing unit to switch camera presets. That system has the following disadvantage: in actual use, external interference may cause mis-control of the camera and so disturb shooting of the main target.
Disclosure of Invention
The invention aims to overcome the defects in the prior art by providing an automatic tracking shooting cradle head system based on networking.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
an automatic tracking shooting tripod head system based on networking, comprising:
the acquisition module is used for acquiring image information in real time;
the identification module is used for identifying the moving points in the acquired image information;
the analysis module is used for setting a grid area based on the acquired image information and analyzing the motion state of the motion point by taking the grid area as a reference;
the processing module generates a control instruction according to the analysis of the motion state;
and the control module is used for controlling the angle change of the cradle head assembly based on the control instruction of the processing module.
Preferably: the identification module comprises:
the identification parameter setting unit is used for setting upper and lower limits on the size of objects to be identified and upper and lower limits on the number of motion points; an object whose volume is above the set upper limit or below the set lower limit is an invalid object, and otherwise a valid object;
and the object identification unit is used for identifying, according to the values set by the identification parameter setting unit, valid objects whose volumes fall within the set limits, and for generating motion points whose number lies between the upper and lower limits.
Further: the identification module further comprises:
and the human image recognition unit is used for recognizing human images in the collected image information.
Further preferred is: the analysis module comprises:
the analysis parameter setting unit is used for setting the upper limit and the lower limit of the movement speed of the movement point; when the motion speed of the motion point is greater than the set upper limit or less than the set lower limit, the motion point is an invalid motion point, and otherwise, the motion point is an effective motion point;
and the calculating unit is used for calculating the movement speed of the movement point.
As a preferred embodiment of the present invention: the grid area set by the analysis module is divided into a center grid Wa, an edge grid Wb and a sub-edge grid Wc; the edge grids Wb lie at the outermost edge of the grid area and together form a closed ring, the sub-edge grids Wc lie inside the edge grids Wb and form a smaller closed ring, and the center grid Wa lies inside the sub-edge grids Wc.
Further preferred as the present invention is: the shooting method of the system comprises the following steps:
s1: the acquisition module acquires image information in real time;
s2: the identification module screens the effective part in the image information according to the acquired image information and identifies the motion point of the effective part;
s3: the analysis module sets a grid area based on the acquired image information;
s4: the analysis module takes the grid area as a reference, analyzes the motion state of the motion points, removes invalid motion points and reserves valid motion points;
s5: the processing module generates a control instruction according to the analysis of the motion state;
s6: the control module controls the angle change of the cradle head assembly based on the control instruction of the processing module.
As still further aspects of the invention: in the step S2, the specific workflow of the identification module is as follows:
s21: the identification module establishes a two-dimensional coordinate system according to the image information;
s22: identifying a moving point of a moving object;
s23: taking the two-dimensional coordinate system as a reference, and respectively taking the point positions of the motion points closest to the edge of the two-dimensional coordinate system;
s24: calculating the volume value of the object according to the point position of the edge motion point;
s25: comparing the volume value with the set upper and lower limits of the object volume; if the value falls within the range, proceed to step S3; if it falls outside the range, return to step S1.
Based on the scheme: in the step S23, the number of the motion points closest to the edge of the two-dimensional coordinate system is four, and the points are respectively:
the right-side point (X_max, Y_n), the left-side point (X_min, Y_m), the upper-side point (X_n, Y_max) and the lower-side point (X_m, Y_min);
wherein X_max is the X-axis coordinate of the rightmost motion point in the two-dimensional coordinate system, and Y_n is the Y-axis coordinate of the rightmost motion point in the two-dimensional coordinate system;
X_min is the X-axis coordinate of the leftmost motion point in the two-dimensional coordinate system, and Y_m is the Y-axis coordinate of the leftmost motion point in the two-dimensional coordinate system;
Y_max is the Y-axis coordinate of the uppermost motion point in the two-dimensional coordinate system, and X_n is the X-axis coordinate of the uppermost motion point in the two-dimensional coordinate system;
Y_min is the Y-axis coordinate of the lowest motion point in the two-dimensional coordinate system, and X_m is the X-axis coordinate of the lowest motion point in the two-dimensional coordinate system.
Preferred on the basis of the foregoing scheme: the identification module takes the right-side point (X_max, Y_n), the left-side point (X_min, Y_m), the upper-side point (X_n, Y_max) and the lower-side point (X_m, Y_min) as references and generates the center point (X_z, Y_z);
wherein X_z = (X_max + X_min)/2 and Y_z = (Y_max + Y_min)/2;
When the central point position reaches the sub-edge grid Wc, the processing module generates a control instruction, and the control module controls the angle change of the cradle head assembly to enable the central point position to return to the central grid Wa.
Further preferred on the basis of the foregoing scheme is: the system further comprises:
the storage module is used for storing the image information acquired by the acquisition module;
the identity recognition module is used for carrying out identity recognition verification in a password mode;
and the authority management module opens the authority for reading the storage information in the storage module based on the identification verification result.
The beneficial effects of the invention are as follows:
1. the automatic tracking shooting tripod head system can track the object moving in the shooting lens in real time, and meets the daily intelligent monitoring requirement.
2. According to the automatic tracking shooting holder system, the identification module is arranged, so that effective objects and ineffective objects can be distinguished based on the object volume, interference is easier to eliminate, and the running reliability is improved.
3. According to the automatic tracking shooting tripod head system, the effective moving points and the ineffective moving points can be screened out based on the moving speed of the moving points by arranging the analysis module, so that the anti-interference effect is further improved.
4. According to the automatic tracking shooting tripod head system, by arranging the center grid Wa, the edge grid Wb and the sub-edge grid Wc, the target object can be better kept in the middle of the lens, improving the shooting effect.
Drawings
FIG. 1 is a workflow diagram of an automatic tracking shooting cradle head system based on networking;
fig. 2 is a grid area schematic diagram of an automatic tracking shooting pan-tilt system based on networking according to the present invention.
Detailed Description
The technical scheme of the patent is further described in detail below with reference to the specific embodiments.
Embodiments of the present patent are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present patent and are not to be construed as limiting the present patent.
Example 1:
an automatic tracking shooting cradle head system based on networking, as shown in fig. 1, comprises:
the acquisition module is used for acquiring image information in real time;
the identification module is used for identifying the moving points in the acquired image information;
the analysis module is used for setting a grid area based on the acquired image information and analyzing the motion state of the motion point by taking the grid area as a reference;
the processing module generates a control instruction according to the analysis of the motion state;
and the control module is used for controlling the angle change of the cradle head assembly based on the control instruction of the processing module.
Wherein, the identification module includes:
the identification parameter setting unit is used for setting upper and lower limits on the size of objects to be identified and upper and lower limits on the number of motion points; an object whose volume is above the set upper limit or below the set lower limit is an invalid object, and otherwise a valid object;
and the object identification unit is used for identifying, according to the values set by the identification parameter setting unit, valid objects whose volumes fall within the set limits, and for generating motion points whose number lies between the upper and lower limits.
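The size-gating rule above can be sketched as a small predicate. This is a minimal sketch: the function name and the pixel-area thresholds are illustrative assumptions, since the patent leaves the concrete limits to configuration.

```python
def is_valid_object(area, lower=50, upper=5000):
    """Size gate of the identification parameter setting unit:
    an object whose area (in pixels) is above the upper limit or
    below the lower limit is treated as interference."""
    return lower <= area <= upper

# A tiny speck of noise and an oversized shadow are both rejected;
# a moderately sized moving object is kept.
print(is_valid_object(3), is_valid_object(20000), is_valid_object(1200))
```

In practice the limits would be tuned to the camera's resolution and the expected target size.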
Wherein the identification module further comprises:
and the human image recognition unit is used for recognizing human images in the collected image information.
Wherein the analysis module comprises:
the analysis parameter setting unit is used for setting the upper limit and the lower limit of the movement speed of the movement point; when the motion speed of the motion point is greater than the set upper limit or less than the set lower limit, the motion point is an invalid motion point, and otherwise, the motion point is an effective motion point;
and the calculating unit is used for calculating the movement speed of the movement point.
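The calculating unit and the speed bounds above can be sketched as follows; the frame interval and the speed limits are illustrative assumptions, not values given in the patent.

```python
import math

def motion_speed(p0, p1, dt):
    """Pixel-space speed of a motion point between two frames,
    given its positions p0 and p1 and the frame interval dt (seconds)."""
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / dt

def is_valid_motion_point(speed, v_min=1.0, v_max=200.0):
    """Points slower than v_min (sensor jitter) or faster than
    v_max (e.g. an insect darting past the lens) are invalid."""
    return v_min <= speed <= v_max

v = motion_speed((0, 0), (3, 4), 0.1)   # 5 px over 0.1 s, i.e. 50 px/s
print(v, is_valid_motion_point(v))
```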
As shown in fig. 2, the grid area set by the analysis module is divided into a center grid Wa, an edge grid Wb and a sub-edge grid Wc; the edge grids Wb lie at the outermost edge of the grid area and together form a closed ring, the sub-edge grids Wc lie inside the edge grids Wb and form a smaller closed ring, and the center grid Wa lies inside the sub-edge grids Wc.
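The three-ring layout of fig. 2 can be expressed as classifying a point by its normalised distance to the image border. The ring widths here (10% for Wb, 25% for Wc) are illustrative assumptions, since the patent does not fix the grid dimensions.

```python
def grid_zone(x, y, width, height, edge=0.10, sub=0.25):
    """Classify a point into the grids of fig. 2: 'Wb' is the
    outermost closed ring, 'Wc' the smaller ring inside it, and
    'Wa' the center region inside Wc."""
    # Normalised distance from the point to the nearest image border.
    d = min(x / width, (width - x) / width,
            y / height, (height - y) / height)
    if d < edge:
        return "Wb"
    if d < sub:
        return "Wc"
    return "Wa"

print(grid_zone(320, 240, 640, 480),   # image center
      grid_zone(100, 240, 640, 480),   # drifting toward the left edge
      grid_zone(10, 240, 640, 480))    # at the very edge
```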
The shooting method of the system comprises the following steps:
s1: the acquisition module acquires image information in real time;
s2: the identification module screens the effective part in the image information according to the acquired image information and identifies the motion point of the effective part;
s3: the analysis module sets a grid area based on the acquired image information;
s4: the analysis module takes the grid area as a reference, analyzes the motion state of the motion points, removes invalid motion points and reserves valid motion points;
s5: the processing module generates a control instruction according to the analysis of the motion state;
s6: the control module controls the angle change of the cradle head assembly based on the control instruction of the processing module.
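Steps S1 to S6 above can be sketched as one pass of a control loop with each stage injected as a callable. All names here are illustrative, since the patent describes modules rather than a concrete API.

```python
def tracking_step(frame, identify, is_valid, plan, apply_command):
    """One pass of S1-S6: identify motion points in the frame (S2),
    keep only the valid ones (S3-S4), then plan and apply a pan-tilt
    command (S5-S6). Returns the surviving motion points."""
    points = identify(frame)
    valid = [p for p in points if is_valid(p)]
    if valid:
        apply_command(plan(valid))
    return valid

# Toy stand-ins for the modules: one valid point, one invalid one.
commands = []
kept = tracking_step(
    frame=None,
    identify=lambda f: [(1, 1), (9, 9)],
    is_valid=lambda p: p[0] < 5,
    plan=lambda pts: ("recenter", pts[0]),
    apply_command=commands.append,
)
print(kept, commands)
```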
In order to better exclude interference, in step S2 the specific workflow of the identification module is as follows:
s21: the identification module establishes a two-dimensional coordinate system according to the image information;
s22: identifying a moving point of a moving object;
s23: taking the two-dimensional coordinate system as a reference, and respectively taking the point positions of the motion points closest to the edge of the two-dimensional coordinate system;
s24: calculating the volume value of the object according to the point position of the edge motion point;
s25: comparing the volume value with the set upper and lower limits of the object volume; if the value falls within the range, proceed to step S3; if it falls outside the range, return to step S1.
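Steps S23 to S25 amount to taking a bounding box over the motion points and gating on its size. This sketch uses the box's area as the "volume value", which is an assumption about how the two-dimensional size is computed.

```python
def bounding_box_area(points):
    """Steps S23-S24: take the extreme motion points and compute
    the area of the box they span as the object's volume value."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def passes_size_gate(points, lower, upper):
    """Step S25: proceed (True) only if the volume value falls
    within the configured limits; otherwise restart acquisition."""
    return lower <= bounding_box_area(points) <= upper

pts = [(0, 0), (4, 1), (2, 3), (1, 2)]      # bounding box is 4 x 3
print(bounding_box_area(pts), passes_size_gate(pts, 10, 20))
```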
In the step S23, the number of the motion points closest to the edge of the two-dimensional coordinate system is four, and the points are respectively:
the right-side point (X_max, Y_n), the left-side point (X_min, Y_m), the upper-side point (X_n, Y_max) and the lower-side point (X_m, Y_min);
wherein X_max is the X-axis coordinate of the rightmost motion point in the two-dimensional coordinate system, and Y_n is the Y-axis coordinate of the rightmost motion point in the two-dimensional coordinate system;
X_min is the X-axis coordinate of the leftmost motion point in the two-dimensional coordinate system, and Y_m is the Y-axis coordinate of the leftmost motion point in the two-dimensional coordinate system;
Y_max is the Y-axis coordinate of the uppermost motion point in the two-dimensional coordinate system, and X_n is the X-axis coordinate of the uppermost motion point in the two-dimensional coordinate system;
Y_min is the Y-axis coordinate of the lowest motion point in the two-dimensional coordinate system, and X_m is the X-axis coordinate of the lowest motion point in the two-dimensional coordinate system.
For better capture of moving objects, the identification module takes the right-side point (X_max, Y_n), the left-side point (X_min, Y_m), the upper-side point (X_n, Y_max) and the lower-side point (X_m, Y_min) as references and generates the center point (X_z, Y_z);
wherein X_z = (X_max + X_min)/2 and Y_z = (Y_max + Y_min)/2;
When the central point position reaches the sub-edge grid Wc, the processing module generates a control instruction, and the control module controls the angle change of the cradle head assembly to enable the central point position to return to the central grid Wa.
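The center-point formula and the return-to-center rule can be sketched directly; the sub-edge ring width (25%) is an illustrative assumption here, matching no specific value in the patent.

```python
def center_point(x_max, x_min, y_max, y_min):
    """X_z = (X_max + X_min)/2, Y_z = (Y_max + Y_min)/2."""
    return (x_max + x_min) / 2, (y_max + y_min) / 2

def needs_recenter(cx, cy, width, height, sub=0.25):
    """True once the center point has drifted into the sub-edge
    grid Wc, i.e. close enough to the border that the cradle head
    should turn to bring it back into the center grid Wa."""
    d = min(cx / width, (width - cx) / width,
            cy / height, (height - cy) / height)
    return d < sub

cx, cy = center_point(120, 80, 260, 220)    # object near the left edge
print((cx, cy), needs_recenter(cx, cy, 640, 480))
```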
To facilitate recording information; the system further comprises:
the storage module is used for storing the image information acquired by the acquisition module;
the identity recognition module is used for carrying out identity recognition verification in a password mode;
and the authority management module opens the authority for reading the storage information in the storage module based on the identification verification result.
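The password-based identity check and the permission gate can be sketched with a salted hash. The PBKDF2 parameters and all function names are illustrative assumptions; the patent only specifies identity verification "in a password mode".

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only a salted hash of the password, never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_identity(password, salt, digest):
    """Identity recognition module: constant-time password check."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

def read_storage(store, password, salt, digest):
    """Authority management module: the stored image information is
    readable only after identity verification succeeds."""
    if not verify_identity(password, salt, digest):
        raise PermissionError("identity verification failed")
    return store

salt, digest = hash_password("s3cret")
print(verify_identity("s3cret", salt, digest),
      verify_identity("wrong", salt, digest))
```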
The foregoing is only a preferred embodiment of the invention, but the scope of protection is not limited thereto; any equivalent substitution or modification that a person skilled in the art could readily conceive within the technical scope disclosed herein, according to the technical scheme and inventive concept of the invention, shall fall within the scope of protection of the invention.
Claims (10)
1. An automatic tracking shooting tripod head system based on networking is characterized by comprising:
the acquisition module is used for acquiring image information in real time;
the identification module is used for identifying the moving points in the acquired image information;
the analysis module is used for setting a grid area based on the acquired image information and analyzing the motion state of the motion point by taking the grid area as a reference;
the processing module generates a control instruction according to the analysis of the motion state;
and the control module is used for controlling the angle change of the cradle head assembly based on the control instruction of the processing module.
2. The networked-based automatic tracking camera head system according to claim 1, wherein the identification module comprises:
the identification parameter setting unit is used for setting upper and lower limits on the size of objects to be identified and upper and lower limits on the number of motion points; an object whose volume is above the set upper limit or below the set lower limit is an invalid object, and otherwise a valid object;
and the object identification unit is used for identifying, according to the values set by the identification parameter setting unit, valid objects whose volumes fall within the set limits, and for generating motion points whose number lies between the upper and lower limits.
3. The networked-based automatic tracking camera head system according to claim 2, wherein the identification module further comprises:
and the human image recognition unit is used for recognizing human images in the collected image information.
4. A networked automatic tracking camera head system according to claim 3, wherein the analysis module comprises:
the analysis parameter setting unit is used for setting the upper limit and the lower limit of the movement speed of the movement point; when the motion speed of the motion point is greater than the set upper limit or less than the set lower limit, the motion point is an invalid motion point, and otherwise, the motion point is an effective motion point;
and the calculating unit is used for calculating the movement speed of the movement point.
5. The networked automatic tracking shooting cradle head system according to claim 4, wherein the grid area set by the analysis module is divided into a center grid Wa, an edge grid Wb and a sub-edge grid Wc; the edge grids Wb lie at the outermost edge of the grid area and together form a closed ring, the sub-edge grids Wc lie inside the edge grids Wb and form a smaller closed ring, and the center grid Wa lies inside the sub-edge grids Wc.
6. The networked-based automatic tracking shooting pan-tilt system according to claim 5, wherein the shooting method of the system comprises the following steps:
s1: the acquisition module acquires image information in real time;
s2: the identification module screens the effective part in the image information according to the acquired image information and identifies the motion point of the effective part;
s3: the analysis module sets a grid area based on the acquired image information;
s4: the analysis module takes the grid area as a reference, analyzes the motion state of the motion points, removes invalid motion points and reserves valid motion points;
s5: the processing module generates a control instruction according to the analysis of the motion state;
s6: the control module controls the angle change of the cradle head assembly based on the control instruction of the processing module.
7. The networked automatic tracking shooting pan-tilt system according to claim 6, wherein in step S2, the specific workflow of the identification module is:
s21: the identification module establishes a two-dimensional coordinate system according to the image information;
s22: identifying a moving point of a moving object;
s23: taking the two-dimensional coordinate system as a reference, and respectively taking the point positions of the motion points closest to the edge of the two-dimensional coordinate system;
s24: calculating the volume value of the object according to the point position of the edge motion point;
s25: comparing the volume value with the set upper and lower limits of the object volume; if the value falls within the range, proceed to step S3; if it falls outside the range, return to step S1.
8. The networked automatic tracking shooting pan-tilt system according to claim 7, wherein in the step S23, the number of the motion points closest to the edge of the two-dimensional coordinate system is four, and the points are respectively:
the right-side point (X_max, Y_n), the left-side point (X_min, Y_m), the upper-side point (X_n, Y_max) and the lower-side point (X_m, Y_min);
wherein X_max is the X-axis coordinate of the rightmost motion point in the two-dimensional coordinate system, and Y_n is the Y-axis coordinate of the rightmost motion point in the two-dimensional coordinate system;
X_min is the X-axis coordinate of the leftmost motion point in the two-dimensional coordinate system, and Y_m is the Y-axis coordinate of the leftmost motion point in the two-dimensional coordinate system;
Y_max is the Y-axis coordinate of the uppermost motion point in the two-dimensional coordinate system, and X_n is the X-axis coordinate of the uppermost motion point in the two-dimensional coordinate system;
Y_min is the Y-axis coordinate of the lowest motion point in the two-dimensional coordinate system, and X_m is the X-axis coordinate of the lowest motion point in the two-dimensional coordinate system.
9. The networked automatic tracking shooting cradle head system of claim 8, wherein the identification module takes the right-side point (X_max, Y_n), the left-side point (X_min, Y_m), the upper-side point (X_n, Y_max) and the lower-side point (X_m, Y_min) as references and generates the center point (X_z, Y_z);
wherein X_z = (X_max + X_min)/2 and Y_z = (Y_max + Y_min)/2;
When the central point position reaches the sub-edge grid Wc, the processing module generates a control instruction, and the control module controls the angle change of the cradle head assembly to enable the central point position to return to the central grid Wa.
10. The networked-based automatic tracking shooting pan-tilt system of claim 1, further comprising:
the storage module is used for storing the image information acquired by the acquisition module;
the identity recognition module is used for carrying out identity recognition verification in a password mode;
and the authority management module opens the authority for reading the storage information in the storage module based on the identification verification result.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202211707531.5A | 2022-12-29 | 2022-12-29 | Automatic tracking shooting cradle head system based on networking |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN116233360A | 2023-06-06 |
Family
ID=86586377

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202211707531.5A (CN116233360A, pending) | Automatic tracking shooting cradle head system based on networking | 2022-12-29 | 2022-12-29 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN116233360A (en) |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination