CN116152342A - Guideboard registration positioning method based on gradient - Google Patents


Info

Publication number
CN116152342A
Authority
CN
China
Prior art keywords: guideboard, image, gradient, positioning, area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310229650.2A
Other languages
Chinese (zh)
Inventor
陈辉
刘莹
吕传栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Application filed by Shandong University
Priority: CN202310229650.2A
Publication: CN116152342A
Legal status: Pending

Classifications

    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06N3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N3/08: Neural networks; learning methods
    • G06T7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
    • G06T7/90: Determination of colour characteristics
    • G06V10/25: Determination of region of interest [ROI] or volume of interest [VOI]
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Image or video recognition using neural networks
    • G06V20/582: Recognition of traffic signs
    • G06T2207/20081: Training; learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30244: Camera pose
    • G06T2207/30252: Vehicle exterior; vicinity of vehicle
    • Y02T10/40: Engine management systems


Abstract

The invention relates to a gradient-based guideboard registration positioning method, comprising the following steps: A. constructing a database; B. coarsely positioning and extracting guideboards: a. acquiring a road image; b. performing target detection on the road image with a trained YOLOv8 network model to achieve coarse positioning of the target guideboard, identify the category of the current guideboard, acquire its database information, and solve the coarse positioning area image; c. converting the coarse positioning area image from RGB space to HSV space; d. thresholding the processed image in HSV space to obtain the guideboard region ROI image; C. gradient-based registration optimization; D. calculating the pose of the vehicle. The invention uses the latest-version YOLO network model for coarse positioning of the guideboard, solves the problem of detecting the guideboard target area in complex scenes, improves the speed and reliability of the detection result, ensures the integrity of the guideboard area, and effectively eliminates interference from other similar areas.

Description

Guideboard registration positioning method based on gradient
Technical Field
The invention relates to a guideboard registration positioning method based on gradients, and belongs to the fields of digital image processing and computer vision.
Background
With the rapid improvement of the national economy, the number of automobiles has grown rapidly and traffic congestion has become increasingly serious, so intelligent transportation systems and autonomous driving technology have become hot research topics for scholars at home and abroad. Among these, vehicle self-positioning is a fundamental and key technology. In underground parking lots, tunnels, or urban centers dense with buildings, satellite signals are blocked, so the positioning accuracy of satellite-based vehicle positioning systems is greatly reduced, or such systems cannot work at all. Against this background, a guideboard-based vehicle positioning system is studied in depth, with the aim of improving the positioning accuracy and reliability of vehicle positioning in such areas.
The earliest-developed and most widely used positioning technology is the Global Positioning System (GPS), but its positioning performance degrades significantly in dense urban areas. A single sensor cannot achieve interference-free, high-precision positioning in dense traffic; multi-sensor fusion positioning systems are costly and insufficiently flexible, which hinders their large-scale commercialization; moreover, most positioning systems that rely on cascaded sensors accumulate errors and produce larger positioning errors in complex urban environments and congested road conditions.
Given the limitations of the above methods, and with the development of computer vision, vision sensors are increasingly used for vehicle positioning; systems combining binocular cameras, monocular cameras and various other sensors keep emerging. Positioning with a monocular camera generally requires building a complex and accurate comparison database, yet still achieves lower accuracy.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a guideboard registration positioning method based on gradient.
The invention adopts a binocular-vision vehicle self-positioning algorithm, combined with GPS or offline-downloaded route planning and a road-sign database, to realize autonomous navigation; in particular, it uses guideboards to achieve accurate vehicle self-positioning when vehicles at intersections are dense and lane lines are occluded.
Description of the terminology:
1. RGB color space: composed of the three basic color channels red (R), green (G) and blue (B); the pixel values in each channel range over [0,255], and different colors are obtained by varying and superimposing the values of the three channels.
2. HSV color space: the color is described in terms of three characteristics, hue (H), saturation (S), and brightness (V), much like the way a human eye perceives a color.
3. getPerspectiveTransform function: calculates the perspective transformation matrix from the original image to the target image from four pairs of point coordinates on the source image and the target image.
4. Internal reference matrix of the camera: transforms 3D camera coordinates to 2D homogeneous image coordinates, expressed as

$$K = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 \\ 0 & \frac{f}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where f is the focal length, f/dx is the focal length in pixels along the x-axis, f/dy is the focal length in pixels along the y-axis, and (u_0, v_0) is the actual position of the principal point of the image. These parameters are determined only by the camera's own properties and are not changed by the external environment.
5. External parameter matrix: the transformation from world coordinate system to camera coordinate system is realized and can be expressed as
$$\begin{bmatrix} R & T \\ \mathbf{0}^T & 1 \end{bmatrix}$$
Wherein R is a rotation matrix, each column vector of which represents the orientation of each coordinate axis of the world coordinate system under the camera coordinate system; t is a translation matrix, which is a representation of the world coordinate system origin under the camera coordinate system.
6. Perspective transformation matrix: when two cameras image the same scene, a unique geometric correspondence exists between the two images, which can be expressed in matrix form. Perspective transformation holds in two cases: when the optical centers of the two shots coincide, or when the photographed scene lies in one plane, such as the guideboard used in the present invention.
7. Homogeneous coordinates: an n-dimensional vector expressed by an (n+1)-dimensional vector, used for coordinate systems in projective geometry; the homogeneous form of the two-dimensional coordinates (x, y) is the three-dimensional coordinates (hx, hy, h), where h is a scale factor that can be set to 1 to keep the two coordinate forms consistent.
8. Image registration: refers to a process of matching and superimposing two or more images acquired at different times, with different sensors (imaging devices) or under different conditions (weather, illuminance, imaging position, angle, etc.).
9. Optical flow: the instantaneous velocity of the pixel motion of a spatially moving object on the imaging plane; when the time interval is small (e.g. between two consecutive video frames), it is equivalent to the displacement of the target point.
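As an illustration of term 3 above, the four-point perspective matrix computation can be sketched in NumPy. This is not the patent's own code; the function names are ours, and the solver fixes the matrix's bottom-right entry to 1, as cv2.getPerspectiveTransform does.

```python
import numpy as np

def perspective_from_4_points(src, dst):
    """Solve the 8 unknowns of a 3x3 perspective matrix (last entry fixed to 1)
    from four (x, y) -> (x', y') correspondences."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    m = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(m, 1.0).reshape(3, 3)

def apply_perspective(M, pts):
    """Map (x, y) points through M using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ M.T
    return out[:, :2] / out[:, 2:3]
```

For example, mapping the unit square to the unit square shifted by (2, 3) recovers a pure-translation matrix.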
The technical scheme of the invention is as follows:
a guideboard registration positioning method based on gradient comprises the following steps:
A. building a database
The database includes the following information for each guideboard: geographic coordinates, distance information between the center of the guideboard and the lanes, and ground color. The geographic coordinates refer to the longitude and latitude of the guideboard; the distance information refers to the transverse distance between the center of the guideboard and each lane line; the ground color refers to the color of the guideboard;
B. rough positioning and extracting method for guideboard
a. Installing a binocular camera in front of a vehicle to acquire road images in real time;
b. performing target detection on the road image obtained in step a using the trained YOLOv8 network model, realizing coarse positioning of the target guideboard, identifying the category of the current guideboard, obtaining its database information, and solving the coarse positioning area image;
c. converting the coarse positioning area image obtained in step b from RGB space to HSV space;
d. performing threshold processing in HSV space on the image obtained in step c, and obtaining a mask image through morphological operations and a connected-domain aspect-ratio constraint, thereby obtaining the guideboard region ROI image;
C. gradient-based registration optimization
According to the obtained mask image, four pairs of initial corresponding points are preliminarily obtained, so that an initial perspective transformation matrix is obtained, sub-pixel level registration is carried out on the left and right guideboard region ROI images after Gaussian smoothing by utilizing a gradient-based optimization algorithm, and an accurate perspective transformation matrix is obtained through iterative updating;
D. vehicle pose calculation
Obtaining the overall displacement of the guideboard, namely the parallax, and further obtaining the position information of the vehicle, including the transverse and normal distances of the camera relative to the guideboard, i.e. the distance of the vehicle relative to the guideboard and the lane in which the vehicle is located.
According to the invention, the specific implementation process of the step b comprises the following steps:
(1) Training a YOLOv8 network model:
acquiring a training set: selecting a picture containing the guideboard, labeling a label, wherein the label comprises coordinate information of the guideboard in the picture and the category of the guideboard;
the YOLOv8 network model comprises a Backbone unit, a Neck unit and a Head unit;
the Backbone unit comprises a convolution module, a C2f module and an SPPF module, wherein the convolution module (Conv module) comprises a convolution layer, a batch normalization layer and a SiLU activation function layer; the C2f module comprises a convolution module, a Bottleneck module and a residual error structure module; the SPPF module comprises a convolution layer and a pooling layer;
the Neck unit comprises a convolution module, a C2f module and an up-sampling layer;
the Head unit comprises a detection module (detection module) which comprises a convolution module and a convolution layer;
scaling and convolving each picture in the training set based on a Backbone unit, so as to obtain an initial feature map; performing secondary extraction on the obtained initial feature map based on a Neck unit to obtain intermediate feature maps with different scales; inputting the obtained intermediate feature graphs with different scales into a Head unit to obtain guideboard coordinates predicted by a YOLOv8 network model;
calculating the loss from the guideboard coordinates predicted by the YOLOv8 network model and the real guideboard coordinates, obtaining the optimization gradient of the YOLOv8 network model from the loss, and updating the weights of the YOLOv8 network model, with the accuracy of network prediction continuously increasing and the loss continuously decreasing, so as to obtain a trained YOLOv8 network model;
(2) In the actual reasoning test stage, inputting the road image obtained in the step a into a trained YOLOv8 network model to obtain predicted rough positioning coordinates of the guideboard and the type of the guideboard;
(3) And setting 1 in the rough positioning area and 0 in the other areas obtained by rough positioning coordinates of the guideboard to obtain a rough positioning area image.
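A minimal NumPy sketch of step (3); the function name and the (x1, y1, x2, y2) box convention are assumptions of this illustration, not specified by the patent.

```python
import numpy as np

def coarse_region_image(image, box):
    """Keep pixels inside the detector's coarse bounding box; zero elsewhere.
    `box` = (x1, y1, x2, y2) in pixel coordinates, as a YOLO-style detector
    typically returns. The coarse area is set to 1 in the mask, the rest to 0."""
    x1, y1, x2, y2 = box
    mask = np.zeros(image.shape[:2], dtype=image.dtype)
    mask[y1:y2, x1:x2] = 1
    # broadcast the mask over color channels if the image has any
    return image * mask[..., None] if image.ndim == 3 else image * mask
```

Multiplying by the 0/1 mask blanks everything outside the coarse positioning area while keeping the original pixel values inside it.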
According to a preferred embodiment of the invention, in step c,
in HSV space, pixels are isolated that meet the following threshold ranges:
the threshold range of saturation S is 0.35 < S < 1, and the threshold range of brightness V is 0.35 < V < 1; the threshold range of hue H is determined by the guideboard: 200 < H < 280 is set for the rectangular blue guideboard, and H > 330 or H < 30 is set for the self-made standard guideboard to extract the red quadrilateral areas.
According to a preferred embodiment of the invention, in step d,
thresholding refers to: setting pixels that meet the threshold range to 255 and the remaining pixels to 0, obtaining a preliminary mask image;
morphological operations refer to: calling morphology library functions to remove external noise points and internal holes, using the closing operation to handle possible edge discontinuities and eliminate most interference areas;
the aspect-ratio constraint of the connected domain refers to: constraining candidate areas according to the aspect ratio and area size of the target area, obtaining a final, interference-free target area;
obtaining the minimum circumscribed rectangle of the target area and setting it to 255 yields the mask image of the guideboard area; performing an AND operation between the mask image and the original guideboard image finally yields the guideboard region ROI image.
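The thresholding and ROI extraction of step d can be sketched as follows. This NumPy illustration uses an axis-aligned bounding rectangle as a stand-in for the minimum circumscribed rectangle and omits the morphology-library calls, so it is a simplified sketch rather than the patent's implementation; the default H range is the blue-guideboard one.

```python
import numpy as np

def guideboard_mask(hsv, h_lo=200, h_hi=280, s_lo=0.35, v_lo=0.35):
    """Binary mask for a blue rectangular guideboard: H in (200, 280) degrees,
    S and V in (0.35, 1), matching the thresholds of the method."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    keep = (h > h_lo) & (h < h_hi) & (s > s_lo) & (s < 1) & (v > v_lo) & (v < 1)
    return np.where(keep, 255, 0).astype(np.uint8)

def roi_from_mask(image, mask):
    """Fill the mask's bounding rectangle (a stand-in for the minimum
    circumscribed rectangle) and AND it with the original image."""
    ys, xs = np.nonzero(mask)
    rect = np.zeros_like(mask)
    rect[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 255
    if image.ndim == 3:
        return np.where(rect[..., None] == 255, image, 0)
    return np.where(rect == 255, image, 0)
```

In practice the morphological cleanup (closing, hole filling) and the connected-domain aspect-ratio check would run between these two functions.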
According to the invention, the specific implementation process of the step C preferably comprises the following steps:
extracting the vertex coordinates of the minimum circumscribed rectangles in the left and right target mask images as four pairs of initial corresponding points, obtaining an initial perspective transformation matrix M; the planar perspective projection relation is shown in formula (I):

$$x' \sim M x = \begin{bmatrix} m_0 & m_1 & m_2 \\ m_3 & m_4 & m_5 \\ m_6 & m_7 & 1 \end{bmatrix} x \quad \text{(I)}$$

where x = (x, y, 1) and x' = (x', y', 1) are homogeneous coordinates and ~ denotes equality up to scale; rewritten as:

$$x' = \frac{m_0 x + m_1 y + m_2}{m_6 x + m_7 y + 1} \quad \text{(II)}$$

$$y' = \frac{m_3 x + m_4 y + m_5}{m_6 x + m_7 y + 1} \quad \text{(III)}$$
taking the right image as the target image, perspective projection transformation is performed on the left image to approximate the right image, and the transformation matrix is iteratively updated by M ← (E+D)M, where E is the identity matrix and

$$D = \begin{bmatrix} d_0 & d_1 & d_2 \\ d_3 & d_4 & d_5 \\ d_6 & d_7 & 0 \end{bmatrix} \quad \text{(V)}$$

in formula (V), d_0 to d_7 correspond to m_0 to m_7 in the matrix M and are the update parameters of each iteration;
at this time, resampling the left image I_1 with the new transformation x' ~ (E+D)Mx is equivalent to resampling the M-warped left image $\tilde{I}_1$ with the transformation x'' ~ (E+D)x, namely:

$$x'' = \frac{(1+d_0)x + d_1 y + d_2}{d_6 x + d_7 y + 1}, \qquad y'' = \frac{d_3 x + (1+d_4) y + d_5}{d_6 x + d_7 y + 1}$$

where x'' = (x'', y'', 1) is a homogeneous coordinate;
the motion of the pixels is estimated by minimizing the intensity error between the two images; the intensity error equations are:

$$E = \sum_i \left[\tilde{I}_1(x''_i) - I_0(x'_i)\right]^2 \quad \text{(X)}$$

$$E \approx \sum_i \left[g_i^T J_i d + e_i\right]^2 \quad \text{(XI)}$$

In formulas (X) and (XI), $g_i$ is the image gradient of the resampled left image $\tilde{I}_1$ at $x_i$, where $x_i$ ranges over the guideboard ROI area; $e_i = \tilde{I}_1(x'_i) - I_0(x'_i)$ is the intensity error of the corresponding points of the resampled left image and the target image $I_0$; $d = (d_0, d_1, \dots, d_7)$ is the motion update parameter; and $J_i = J_d(x_i)$ is the Jacobian of the resampled point coordinates $x''_i$ with respect to d, corresponding to the optical flow caused by the instantaneous motion of the three-dimensional plane, expressed as:

$$J_d(x) = \frac{\partial x''}{\partial d} = \begin{bmatrix} x & y & 1 & 0 & 0 & 0 & -x^2 & -xy \\ 0 & 0 & 0 & x & y & 1 & -xy & -y^2 \end{bmatrix} \quad \text{(XII)}$$
at this time, an analytical solution is obtained by the least-squares method:

$$A d = -b \quad \text{(XIII)}$$

where the Hessian matrix is

$$A = \sum_i J_i^T g_i g_i^T J_i \quad \text{(XIV)}$$

and the accumulated gradient is

$$b = \sum_i e_i J_i^T g_i \quad \text{(XV)}$$
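One iteration of the update described above (accumulate the Jacobian, Hessian and gradient terms, then solve A d = -b as in formula (XIII)) might be sketched as follows. This is a toy NumPy illustration that loops over all pixels of small grayscale float arrays; a real implementation would restrict to the ROI, smooth with a Gaussian, and iterate.

```python
import numpy as np

def gauss_newton_step(I0, I1w):
    """One iteration of the plane-registration update: build the 8x8 system
    A d = -b from image gradients and intensity errors, then solve for d."""
    gy, gx = np.gradient(I1w)            # gradient of the resampled left image
    ys, xs = np.mgrid[0:I0.shape[0], 0:I0.shape[1]]
    e = (I1w - I0).ravel()               # per-pixel intensity error
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for x, y, gxi, gyi, ei in zip(xs.ravel(), ys.ravel(),
                                  gx.ravel(), gy.ravel(), e):
        # Jacobian of the resampled coordinates w.r.t. d (small-motion form)
        J = np.array([[x, y, 1, 0, 0, 0, -x * x, -x * y],
                      [0, 0, 0, x, y, 1, -x * y, -y * y]], float)
        g = np.array([gxi, gyi])
        A += J.T @ np.outer(g, g) @ J    # Hessian accumulation
        b += ei * (J.T @ g)              # accumulated gradient
    d, *_ = np.linalg.lstsq(A, -b, rcond=None)
    return d
```

When the two images already coincide the intensity error is zero everywhere, so the solved update d is the zero vector and the iteration has converged.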
according to a preferred embodiment of the invention, in step D,
the imaging principle of the binocular camera is as follows:
$$Z = \frac{f\,b}{u_L - u_R} = \frac{f\,b}{disp} \quad \text{(XVI)}$$

$$X = \frac{Z\,u_L}{f} \quad \text{(XVII)}$$

where X is the transverse distance from the camera to the center of the guideboard, from which the lane in which the vehicle is located is calculated using the distance information between the guideboard center and the lanes in the database; Z is the normal distance from the camera to the guideboard plane, i.e. the distance from the vehicle to the guideboard;
O_L and O_R are the optical centers of the left and right apertures of the binocular camera; the distance between them is the baseline b, and f is the focal length; P(X, Y, Z) is a point in three-dimensional space, imaged by the binocular camera as P_L and P_R; after rectification, the x-axis imaging-plane coordinates of P_L and P_R are u_L and u_R, and the resulting disparity is disp = u_L - u_R;
The transverse and normal distances of the camera relative to the guideboard are obtained by combining the internal and external parameters obtained by the calibration of the camera, namely the distance of the vehicle relative to the guideboard and the lane where the vehicle is located are obtained, and the self-positioning of the vehicle is realized.
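Under the binocular imaging model above, the distance computation is only a few lines. The numeric values in the example are arbitrary, and this sketch assumes u_L and u_R are measured from each camera's principal point.

```python
import numpy as np

def position_from_disparity(u_L, u_R, f, b):
    """Similar-triangle binocular model: normal distance Z from the disparity
    disp = u_L - u_R, lateral distance X from Z and the left-image coordinate.
    f is the focal length in pixels, b the baseline in meters."""
    disp = u_L - u_R
    Z = f * b / disp      # normal distance from the vehicle to the guideboard plane
    X = Z * u_L / f       # transverse distance from the left camera to the board centre
    return X, Z

# e.g. f = 800 px, baseline 0.12 m, u_L = 100 px, u_R = 60 px
X, Z = position_from_disparity(100.0, 60.0, 800.0, 0.12)
```

A larger disparity means a closer guideboard; comparing X against the lane distances stored in the database then yields the lane in which the vehicle is located.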
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the gradient-based guideboard registration positioning method when executing the computer program.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the gradient-based guideboard registration positioning method.
The beneficial effects of the invention are as follows:
1. The invention uses the latest-version YOLO (You Only Look Once) network model to perform coarse positioning of the guideboard and combines it with color extraction, thereby solving the problem of detecting the guideboard target area in complex scenes, improving the speed and reliability of the detection result, ensuring the integrity of the guideboard area, and effectively eliminating interference from other similar areas.
2. The invention makes full use of the fact that the guideboard is planar: the guideboard is matched as a whole to obtain the perspective transformation matrix between the left and right guideboard areas, and the overall optical flow between the planes is calculated for stereo matching, avoiding the redundancy of point-by-point calculation; the registration result is more robust, the detection result more accurate, and the computation faster.
3. The invention provides a simple database system with a simple structure, small data volume and easy later maintenance; the database content mainly comprises the position information of the guideboards, their ground colors and the lane information of the roads at the guideboards, so that lane-level positioning of the vehicle can be realized.
Drawings
FIG. 1 is a flow diagram of a gradient-based guideboard registration positioning method of the present invention;
FIG. 2 is a schematic view of a blue rectangular road sign;
FIG. 3 (a) is a schematic illustration of a home-made standard guideboard designated by the reference numeral A1;
FIG. 3 (b) is a schematic illustration of a home-made standard guideboard designated by the reference numeral A2;
FIG. 3 (c) is a schematic illustration of a homemade standard guideboard labeled B3;
FIG. 3 (d) is a schematic illustration of a homemade standard guideboard designated by the reference numeral B4;
FIG. 3 (e) is a schematic illustration of a home-made standard guideboard labeled C5;
FIG. 3 (f) is a schematic illustration of a home-made standard guideboard designated by the reference numeral C6;
FIG. 4 (a) is a schematic diagram of a YOLOv8 network model;
FIG. 4 (b) is a schematic diagram of the structure of the YOLOv8 network model;
FIG. 4 (c) is a detailed schematic of the Conv module of YOLOv 8;
FIG. 4 (d) is a schematic diagram of the detailed structure of the C2f module of YOLOv 8;
FIG. 4 (e) is a schematic diagram of the detailed structure of the Bottleneck module of YOLOv 8;
FIG. 4 (f) is a detailed schematic of the SPPF module of YOLOv 8;
FIG. 4 (g) is a detailed schematic diagram of the detection module of YOLOv 8;
FIG. 5 is a schematic diagram of the effect of YOLOv8 on detecting a blue rectangular guideboard;
FIG. 6 is a schematic illustration of the effect of YOLOv8 detection on homemade standard guideboards;
FIG. 7 is a schematic illustration of an ROI image of a blue rectangular guideboard;
FIG. 8 is a schematic of an ROI image of a homemade standard guideboard;
FIG. 9 is a schematic diagram of error variation during iterative optimization;
fig. 10 is a schematic diagram of an imaging model of a binocular camera.
Detailed Description
The invention is further illustrated, but not limited, by the following examples and figures of the specification.
Example 1
In the gradient-based guideboard registration positioning method, the guideboard is any guideboard in the database, mounted on the right side of the road and perpendicular to the road surface. A blue rectangular road guideboard (shown in FIG. 2) and a self-made standard guideboard are taken as examples; the self-made standard guideboard is a square planar indicator with a white ground color, square red areas at the four corners, and indicator characters (letters and numbers) marked in black. As shown in FIG. 1, the method comprises the following steps:
A. building a database
The database includes the following information for each guideboard: geographic coordinates, distance information between the center of the guideboard and the lane, and ground color, wherein the geographic coordinates refer to longitude and latitude of the guideboard; the distance information between the center of the guideboard and the lane refers to the transverse distance between the center of the guideboard and each lane line; the ground color refers to the color of the guideboard; the invention adopts the binocular camera, is applicable to different guideboards, and does not need to collect the sizes of the guideboards in advance.
B. Rough positioning and extracting method for guideboard
a. Installing a binocular camera in front of a vehicle to acquire road images in real time; the optical axes of the binocular cameras are parallel and the same as the running direction of the vehicle, the focal lengths of the left camera and the right camera of the binocular cameras are equal, and the positive directions of the x axes are coincident;
b. performing target detection on the road image obtained in step a using the trained YOLOv8 network model, realizing coarse positioning of the target guideboard and identifying the category of the current guideboard; seven guideboard categories are used in this embodiment: one blue rectangular guideboard and six self-made standard guideboards whose marks are A1, A2, B3, B4, C5 and C6, shown in FIG. 3(a), 3(b), 3(c), 3(d), 3(e) and 3(f) respectively;
c. converting the coarse positioning area image obtained in step b from RGB space to HSV space; the three color components in RGB space are highly correlated, difficult to analyze, and easily affected by illumination, whereas HSV space can remove the influence of illumination by adjusting saturation and brightness, so that a given color can be separated more accurately;
d. performing threshold processing in HSV space on the image obtained in step c, and obtaining a mask image through morphological operations and the connected-domain aspect-ratio constraint, thereby obtaining the guideboard region ROI image;
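The RGB-to-HSV conversion of step c can be sketched without a vision library. This NumPy version is an illustration, not the patent's code; it produces H in degrees in [0, 360) and S, V in [0, 1], matching the threshold ranges used in step d.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Convert an RGB image with channels in [0, 255] to HSV with
    H in [0, 360) degrees and S, V in [0, 1]."""
    rgb = rgb.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)
    c = v - rgb.min(axis=-1)                       # chroma
    s = np.where(v > 0, c / np.where(v > 0, v, 1), 0)
    # piecewise hue; c == 0 means hue is undefined, so it is set to 0
    with np.errstate(invalid="ignore", divide="ignore"):
        h = np.select(
            [c == 0, v == r, v == g],
            [0.0,
             (60 * (g - b) / c) % 360,
             60 * (b - r) / c + 120],
            default=60 * (r - g) / c + 240)
    return np.stack([h, s, v], axis=-1)
```

A saturated blue pixel maps to H = 240, S = 1, V = 1, which falls inside the 200 < H < 280 threshold used for the blue rectangular guideboard.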
C. gradient-based registration optimization
According to the obtained mask image, four pairs of initial corresponding points are preliminarily obtained, so that an initial perspective transformation matrix is obtained, sub-pixel level registration is carried out on the left and right guideboard region ROI images after Gaussian smoothing by utilizing a gradient-based optimization algorithm, and an accurate perspective transformation matrix is obtained through iterative updating;
D. vehicle pose calculation
Using the accurate perspective transformation matrix obtained by the iterative updating in the above steps, the overall displacement of the guideboard, i.e. the parallax, is obtained from the center-point coordinates of the guideboard according to formula (I); the position information of the vehicle is then derived, including the lateral and normal distances of the camera relative to the guideboard, i.e. the distance of the vehicle from the guideboard and the lane in which the vehicle is located.
Example 2
The gradient-based guideboard registration positioning method according to embodiment 1 is different in that:
the specific implementation process of the step b comprises the following steps:
(1) Training a YOLOv8 network model:
acquiring a training set: selecting a picture containing the guideboard, labeling a label, wherein the label comprises coordinate information of the guideboard in the picture and the category of the guideboard;
Because the original model cannot reliably identify targets such as guideboards, and in view of the practical problem faced by the invention, 100 pictures containing each guideboard are selected for training, covering a variety of guideboard scenes so that detection accuracy in new scenes can be ensured. The labels used during training are the coordinate information of the guideboard in the picture and the category of the guideboard; the required guideboard coordinates and category information can then be obtained in the inference stage, facilitating the next operation.
As shown in fig. 4 (a) and fig. 4 (b), the YOLOv8 network model includes a Backbone unit, a Neck unit and a Head unit;
the Backbone unit comprises a convolution module, a C2f module and an SPPF module; as shown in fig. 4 (c), the convolution module (Conv module) comprises a convolution layer, a batch normalization layer and a SiLU activation function layer; the specific structure of the C2f module is shown in fig. 4 (d): it comprises a convolution module, a Bottleneck module and a residual structure module, with the specific structure of the Bottleneck module shown in fig. 4 (e); the specific structure of the SPPF module is shown in fig. 4 (f): it comprises a convolution layer and a pooling layer;
inputting a picture scaled to a height and width of 640, initial feature maps with heights and widths of 80, 40 and 20 are obtained through the Backbone unit;
the Neck unit comprises a convolution module, a C2f module and an up-sampling layer; after passing through the Neck unit, intermediate feature maps with heights and widths of 80, 40 and 20 are obtained.
The Head unit comprises a detection module (Detect module), as shown in fig. 4 (g), wherein the detection module comprises a convolution module and a convolution layer; the predicted target category information and target coordinate information are obtained after passing through the Head unit;
scaling (scaling into pictures with heights and widths of 640) and convolution operation are carried out on each picture in the training set based on the Backbone unit, so that an initial feature map is obtained; performing secondary extraction on the obtained initial feature map based on a Neck unit to obtain intermediate feature maps with different scales; inputting the obtained intermediate feature graphs with different scales into a Head unit to obtain guideboard coordinates predicted by a YOLOv8 network model;
calculating the loss from the guideboard coordinates predicted by the YOLOv8 network model and the real guideboard coordinates, obtaining the gradient for optimizing the YOLOv8 network model from the loss (fig. 9 is a schematic diagram of the error change during iterative optimization), and updating the weights of the YOLOv8 network model, so that the loss continuously decreases and the accuracy of network prediction continuously increases, thereby obtaining a well-trained YOLOv8 network model;
b, performing target detection on the road image obtained in the step a by using a trained YOLOv8 network model, realizing rough positioning of a target guideboard, identifying the category of the current guideboard, and obtaining database information;
(2) In the actual reasoning test stage, inputting the road image obtained in the step a into a trained YOLOv8 network model to obtain predicted rough positioning coordinates of the guideboard and the type of the guideboard; FIG. 5 is a schematic diagram of the effect of YOLOv8 on detecting a blue rectangular guideboard; FIG. 6 is a schematic representation of the effect of YOLOv8 detection on homemade standard guideboards.
(3) And setting 1 in the rough positioning area and 0 in the other areas obtained by rough positioning coordinates of the guideboard to obtain a rough positioning area image.
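The box-to-mask operation of step (3) can be sketched as follows; the function name and the (H, W) shape convention are illustrative, not taken from the patent:

```python
import numpy as np

def coarse_region_mask(shape, box):
    """shape: (H, W) of the road image; box: (x1, y1, x2, y2) coarse
    positioning coordinates predicted by the YOLOv8 model."""
    mask = np.zeros(shape, dtype=np.uint8)
    x1, y1, x2, y2 = box
    mask[y1:y2, x1:x2] = 1   # 1 inside the coarse positioning area, 0 elsewhere
    return mask
```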
In step c, the three color components in RGB space are highly correlated, difficult to analyze and easily affected by illumination, while HSV space can separate a given color more accurately by adjusting saturation and brightness to eliminate the influence of illumination. In HSV space, pixels meeting the following threshold ranges are isolated:
according to priori knowledge and experimental determination, the threshold value range of the saturation S is 0.35< S <1, and the threshold value range of the brightness V is 0.35< V <1; the threshold value range of the hue H is determined by the guideboard, 200< H <280 is set for a rectangular blue guideboard, and H >330 or H <30 is set for a self-made standard guideboard to extract a quadrangle red area.
In step d, thresholding refers to: setting 255 for pixels meeting a threshold range, and setting 0 for the rest pixels to obtain a preliminary mask image;
morphological operations refer to: calling a morphy library function, removing external noise points and internal holes, solving the possible edge discontinuity condition in a closed operation mode, eliminating most of interference areas, and ensuring that the guideboard area is a complete communication area;
the aspect-ratio constraint of the connected domain refers to: constraining the candidate areas according to the aspect ratio and area size of the target area; experiments show that a final, interference-free target area can thus be obtained;
because the four corner areas of the self-made red standard guideboard used in the invention are red squares, namely the aspect ratio is 1, the aspect ratio can be limited to be more than 0.8 and less than 1.2; meanwhile, as the red color of the guideboard in the rough positioning area is the largest red area, area constraint can be carried out, and four areas with the largest area are reserved as final target areas;
So far, all interference has been eliminated and four target areas have been extracted. The minimum circumscribed rectangle of each is obtained and set to 255, giving a mask image of the whole guideboard area; the mask image is ANDed with the original guideboard image, finally yielding the ROI image of the guideboard area. Fig. 7 is a schematic ROI image of a blue rectangular guideboard; fig. 8 is a schematic ROI image of a self-made standard guideboard.
The specific implementation process of the step C comprises the following steps:
extracting the vertex coordinates of the minimum circumscribed rectangle in the left and right target mask images as four pairs of initial corresponding points, and calling the getPerspectiveTransform function in OpenCV to obtain the initial perspective transformation matrix M; the plane perspective projection transformation relation is shown in formula (I):

$$\mathbf{x}' \sim M\mathbf{x},\quad M=\begin{pmatrix}m_0&m_1&m_2\\m_3&m_4&m_5\\m_6&m_7&1\end{pmatrix}\tag{I}$$
where $\mathbf{x}=(x,y,1)$ and $\mathbf{x}'=(x',y',1)$ are homogeneous coordinates and $\sim$ denotes equality up to an unknown scale; rewritten as:

$$x'=\frac{m_0x+m_1y+m_2}{m_6x+m_7y+1},\qquad y'=\frac{m_3x+m_4y+m_5}{m_6x+m_7y+1}$$
taking the right image as the target image, perspective projection transformation is performed on the left image to approximate the right image; to optimize the eight parameters of the projection matrix, the transformation matrix is iteratively updated using $M\leftarrow(E+D)M$, where $E$ is the identity matrix and

$$D=\begin{pmatrix}d_0&d_1&d_2\\d_3&d_4&d_5\\d_6&d_7&0\end{pmatrix}\tag{XXII}$$

in formula (XXII), $d_0$ to $d_7$ correspond to $m_0$ to $m_7$ in the matrix $M$ and are the update parameters of each iteration;
at this time, the left image $I_1$ is resampled with the new transformation $\mathbf{x}''\sim(E+D)M\mathbf{x}$, which is equivalent to resampling the already-warped left image $\tilde I_1$ with the transformation $\mathbf{x}''\sim(E+D)\mathbf{x}'$, namely:

$$x''=\frac{(1+d_0)x'+d_1y'+d_2}{d_6x'+d_7y'+1},\qquad y''=\frac{d_3x'+(1+d_4)y'+d_5}{d_6x'+d_7y'+1}$$

where $\mathbf{x}''=(x'',y'',1)$ is a homogeneous coordinate;
to recover an accurate perspective transformation relationship, the motion of the pixels is estimated by minimizing the intensity error between the two images; the intensity error equations are as follows:

$$E(\mathbf{d})=\sum_i\left[\tilde I_1(\mathbf{x}''_i)-I_0(\mathbf{x}'_i)\right]^2\tag{XXVII}$$

$$E(\mathbf{d})\approx\sum_i\left[\mathbf{g}_i^{T}J_i\mathbf{d}+e_i\right]^2\tag{XXVIII}$$
in formulas (XXVII) and (XXVIII), $\mathbf{g}_i=\nabla\tilde I_1(\mathbf{x}'_i)$ is the image gradient of the resampled left image $\tilde I_1$ at $\mathbf{x}_i$, where $\mathbf{x}_i$ ranges over the guideboard ROI area, and $e_i=\tilde I_1(\mathbf{x}'_i)-I_0(\mathbf{x}'_i)$ is the intensity error between corresponding points of the resampled left image $\tilde I_1$ and the target image $I_0$;
$\mathbf{d}=(d_0,d_1,\ldots,d_7)$ is the motion update parameter, and $J_i=J_{\mathbf{d}}(\mathbf{x}_i)$ is the Jacobian of the resampled point coordinates $\mathbf{x}''_i$ with respect to $\mathbf{d}$, corresponding to the optical flow caused by the instantaneous motion of a three-dimensional plane:

$$J_{\mathbf{d}}(\mathbf{x})=\frac{\partial\mathbf{x}''}{\partial\mathbf{d}}=\begin{pmatrix}x&y&1&0&0&0&-x'x&-x'y\\0&0&0&x&y&1&-y'x&-y'y\end{pmatrix}^{T}$$
at this time, an analytical solution is obtained by the least squares method:

$$A\mathbf{d}=-\mathbf{b}\tag{XXX}$$

wherein the Hessian matrix is:

$$A=\sum_i J_i\mathbf{g}_i\mathbf{g}_i^{T}J_i^{T}$$

and the cumulative gradient is:

$$\mathbf{b}=\sum_i e_i J_i\mathbf{g}_i$$
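One iteration of this update can be sketched in NumPy as below. This is a simplified sketch, not the patented implementation: it uses nearest-neighbour resampling (a real implementation would interpolate sub-pixel values after Gaussian smoothing), approximates the gradient of the resampled image by the gradient of $I_1$ at the warped points, and omits the coarse-to-fine iteration control:

```python
import numpy as np

def warp_points(M, xs, ys):
    """Apply the homography M of formula (I) to point arrays."""
    w = M[2, 0] * xs + M[2, 1] * ys + M[2, 2]
    xp = (M[0, 0] * xs + M[0, 1] * ys + M[0, 2]) / w
    yp = (M[1, 0] * xs + M[1, 1] * ys + M[1, 2]) / w
    return xp, yp

def update_step(I0, I1, M, roi_mask):
    """One least-squares update M <- (E + D) M, solving A d = -b."""
    ys, xs = np.nonzero(roi_mask)
    xp, yp = warp_points(M, xs.astype(float), ys.astype(float))
    # nearest-neighbour resampling of I1 at the warped points
    xi = np.clip(np.rint(xp).astype(int), 1, I1.shape[1] - 2)
    yi = np.clip(np.rint(yp).astype(int), 1, I1.shape[0] - 2)
    e = I1[yi, xi] - I0[ys, xs]                   # intensity errors e_i
    gx = 0.5 * (I1[yi, xi + 1] - I1[yi, xi - 1])  # approx. gradient g_i of the
    gy = 0.5 * (I1[yi + 1, xi] - I1[yi - 1, xi])  # resampled left image
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for xpi, ypi, ei, gxi, gyi in zip(xp, yp, e, gx, gy):
        # Jacobian of the updated point w.r.t. d, evaluated at d = 0
        J = np.array([[xpi, ypi, 1, 0, 0, 0, -xpi * xpi, -xpi * ypi],
                      [0, 0, 0, xpi, ypi, 1, -ypi * xpi, -ypi * ypi]], float)
        jg = J.T @ np.array([gxi, gyi])           # J_i^T g_i
        A += np.outer(jg, jg)                     # Hessian accumulation
        b += ei * jg                              # cumulative gradient
    d = np.linalg.lstsq(A, -b, rcond=None)[0]     # solve A d = -b
    D = np.array([[d[0], d[1], d[2]],
                  [d[3], d[4], d[5]],
                  [d[6], d[7], 0.0]])
    return (np.eye(3) + D) @ M                    # M <- (E + D) M
```

When the two images already agree over the ROI, the errors vanish and the update leaves M unchanged, which is the fixed point the iteration converges to.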
in step D, it is available according to the imaging principle of a binocular camera:
$$Z=\frac{fb}{u_L-u_R}=\frac{fb}{disp}$$

$$X=\frac{u_L\,b}{disp}$$
wherein X is the transverse distance from the camera to the center of the guideboard, and the lane where the vehicle is located is calculated according to the distance information between the center of the guideboard and the lane in the database; z refers to the normal distance from the camera to the guideboard plane, i.e., the distance from the vehicle to the guideboard;
the imaging model of the binocular camera is shown in FIG. 10, O L 、O R Is the center of the left aperture and the right aperture of the binocular camera, O L 、O R The distance between the two is the base line b of the binocular camera, the square frame is an imaging plane, and f is a focal length; p (X, Y, Z) is a point in three-dimensional space (taking the optical center of the left camera as the origin coordinate), and P (X, Y, Z) is imaged in each binocular camera and is recorded as P L And P R After correction, P L And P R The coordinate of the x-axis of the imaging plane is u L And u R (since the origin is the principal point of the image, u R Negative), the parallax disp=u obtained in the above step L -u R
The transverse and normal distances of the camera relative to the guideboard are obtained by combining the internal and external parameters obtained by the calibration of the camera, namely the distance of the vehicle relative to the guideboard and the lane where the vehicle is located are obtained, and the self-positioning of the vehicle is realized.
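The step-D computation can be sketched as follows; the function name is illustrative, with $f$ in pixels, $b$ in metres, and $u_L$, $u_R$ the rectified x-coordinates as defined above:

```python
def stereo_position(f, b, u_left, u_right):
    """Pinhole stereo model: lateral distance X and normal distance Z
    from the disparity disp = u_L - u_R."""
    disp = u_left - u_right   # overall displacement from the registration step
    Z = f * b / disp          # normal distance camera -> guideboard plane
    X = b * u_left / disp     # lateral distance to the guideboard center
    return X, Z
```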
Example 3
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the gradient-based guideboard registration positioning method of embodiment 1 or 2 when the computer program is executed.
Example 4
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the gradient-based guideboard registration positioning method of embodiment 1 or 2.

Claims (8)

1. The guideboard registration positioning method based on the gradient is characterized by comprising the following steps:
A. building a database
The database includes the following information for each guideboard: geographic coordinates, distance information between the center of the guideboard and the lane and ground color, wherein the geographic coordinates refer to longitude and latitude of the guideboard; the distance information between the center of the guideboard and the lane refers to the transverse distance between the center of the guideboard and each lane line; the ground color refers to the color of the guideboard;
B. rough positioning and extracting method for guideboard
a. Installing a binocular camera in front of a vehicle to acquire road images in real time;
b. carrying out target detection on the road image obtained in step a using a trained YOLOv8 network model, realizing rough positioning of the target guideboard, identifying the category of the current guideboard, obtaining its database information, and solving the rough positioning area image;
c. converting the rough positioning area image obtained in step b from RGB space into HSV space;
d. performing threshold processing in HSV space on the image processed in step c, and obtaining a mask image through morphological operation and the aspect-ratio constraint of the connected domain, thereby obtaining the guideboard region ROI image;
C. gradient-based registration optimization
According to the obtained mask image, four pairs of initial corresponding points are preliminarily obtained, so that an initial perspective transformation matrix is obtained, sub-pixel level registration is carried out on the left and right guideboard region ROI images after Gaussian smoothing by utilizing a gradient-based optimization algorithm, and an accurate perspective transformation matrix is obtained through iterative updating;
D. vehicle pose calculation
Obtaining the overall displacement of the guideboard, namely obtaining parallax, and further obtaining the position information of the vehicle, wherein the position information comprises the transverse and normal distances of the camera relative to the guideboard, namely the distance of the vehicle relative to the guideboard and the lane where the vehicle is located.
2. The gradient-based guideboard registration positioning method of claim 1, wherein the specific implementation process of the step b comprises:
(1) Training a YOLOv8 network model:
acquiring a training set: selecting a picture containing the guideboard, labeling a label, wherein the label comprises coordinate information of the guideboard in the picture and the category of the guideboard;
the YOLOv8 network model comprises a Backbone unit, a Neck unit and a Head unit;
the Backbone unit comprises a convolution module, a C2f module and an SPPF module, wherein the convolution module comprises a convolution layer, a batch normalization layer and a SiLU activation function layer; the C2f module comprises a convolution module, a Bottleneck module and a residual structure module; the SPPF module comprises a convolution layer and a pooling layer;
the Neck unit comprises a convolution module, a C2f module and an up-sampling layer;
the Head unit comprises a detection module, wherein the detection module comprises a convolution module and a convolution layer;
scaling and convolving each picture in the training set based on a Backbone unit, so as to obtain an initial feature map; performing secondary extraction on the obtained initial feature map based on a Neck unit to obtain intermediate feature maps with different scales; inputting the obtained intermediate feature graphs with different scales into a Head unit to obtain guideboard coordinates predicted by a YOLOv8 network model;
calculating the loss through the guideboard coordinates predicted by the YOLOv8 network model and the real guideboard coordinates, obtaining the gradient for optimizing the YOLOv8 network model through the loss, and updating the weight of the YOLOv8 network model, so that the loss continuously decreases and the accuracy of network prediction continuously increases, thereby obtaining a trained YOLOv8 network model;
b, performing target detection on the road image obtained in the step a by using a trained YOLOv8 network model, realizing rough positioning of a target guideboard, identifying the category of the current guideboard, and obtaining database information;
(2) In the actual reasoning test stage, inputting the road image obtained in the step a into a trained YOLOv8 network model to obtain predicted rough positioning coordinates of the guideboard and the type of the guideboard;
(3) And setting 1 in the rough positioning area and 0 in the other areas obtained by rough positioning coordinates of the guideboard to obtain a rough positioning area image.
3. The method of gradient-based guideboard registration positioning of claim 1, wherein, in step c,
in HSV space, pixels are isolated that meet the following threshold ranges:
the threshold value range of the saturation S is 0.35< S <1, and the threshold value range of the brightness V is 0.35< V <1; the threshold value range of the hue H is determined by the guideboard, 200< H <280 is set for a rectangular blue guideboard, and H >330 or H <30 is set for a self-made standard guideboard to extract a quadrangle red area.
4. The method of gradient-based guideboard registration positioning of claim 1, wherein in step d,
thresholding refers to: setting 255 for pixels meeting a threshold range, and setting 0 for the rest pixels to obtain a preliminary mask image;
morphological operations refer to: calling a morphism library function, removing external noise points and internal holes, solving the possible edge discontinuity condition in a closed operation mode, and eliminating most of interference areas;
the aspect ratio constraint of the connected domain refers to: constraining the area according to the aspect ratio and the area size of the target area to obtain a final target area without interference;
obtaining the minimum circumscribed rectangle of the target area, setting the minimum circumscribed rectangle as 255, obtaining a mask image of the guideboard area, performing AND operation on the mask image and the original guideboard image, and finally obtaining the ROI image of the guideboard area.
5. The gradient-based guideboard registration positioning method according to claim 1, wherein the specific implementation process of the step C includes:
extracting the vertex coordinates of the minimum circumscribed rectangle in the left and right target mask images as four pairs of initial corresponding points to obtain the initial perspective transformation matrix M, wherein the plane perspective projection transformation relation is shown in formula (I):

$$\mathbf{x}'\sim M\mathbf{x},\quad M=\begin{pmatrix}m_0&m_1&m_2\\m_3&m_4&m_5\\m_6&m_7&1\end{pmatrix}\tag{I}$$

where $\mathbf{x}=(x,y,1)$ and $\mathbf{x}'=(x',y',1)$ are homogeneous coordinates and $\sim$ denotes equality up to an unknown scale; rewritten as:

$$x'=\frac{m_0x+m_1y+m_2}{m_6x+m_7y+1},\qquad y'=\frac{m_3x+m_4y+m_5}{m_6x+m_7y+1}$$
taking the right image as the target image, performing perspective projection transformation on the left image to approximate the right image, and iteratively updating the transformation matrix using $M\leftarrow(E+D)M$, where $E$ is the identity matrix and

$$D=\begin{pmatrix}d_0&d_1&d_2\\d_3&d_4&d_5\\d_6&d_7&0\end{pmatrix}\tag{V}$$

in formula (V), $d_0$ to $d_7$ correspond to $m_0$ to $m_7$ in the matrix $M$ and are the update parameters of each iteration;
at this time, the left image $I_1$ is resampled with the new transformation $\mathbf{x}''\sim(E+D)M\mathbf{x}$, which is equivalent to resampling the already-warped left image $\tilde I_1$ with the transformation $\mathbf{x}''\sim(E+D)\mathbf{x}'$, namely:

$$x''=\frac{(1+d_0)x'+d_1y'+d_2}{d_6x'+d_7y'+1},\qquad y''=\frac{d_3x'+(1+d_4)y'+d_5}{d_6x'+d_7y'+1}$$

where $\mathbf{x}''=(x'',y'',1)$ is a homogeneous coordinate;
the motion of the pixels is estimated by minimizing the intensity error between the two images; the intensity error equations are as follows:

$$E(\mathbf{d})=\sum_i\left[\tilde I_1(\mathbf{x}''_i)-I_0(\mathbf{x}'_i)\right]^2\tag{X}$$

$$E(\mathbf{d})\approx\sum_i\left[\mathbf{g}_i^{T}J_i\mathbf{d}+e_i\right]^2\tag{XI}$$
in formulas (X) and (XI), $\mathbf{g}_i=\nabla\tilde I_1(\mathbf{x}'_i)$ is the image gradient of the resampled left image $\tilde I_1$ at $\mathbf{x}_i$, where $\mathbf{x}_i$ ranges over the guideboard ROI area, and $e_i=\tilde I_1(\mathbf{x}'_i)-I_0(\mathbf{x}'_i)$ is the intensity error between corresponding points of the resampled left image $\tilde I_1$ and the target image $I_0$;
$\mathbf{d}=(d_0,d_1,\ldots,d_7)$ is the motion update parameter, and $J_i=J_{\mathbf{d}}(\mathbf{x}_i)$ is the Jacobian of the resampled point coordinates $\mathbf{x}''_i$ with respect to $\mathbf{d}$, corresponding to the optical flow caused by the instantaneous motion of a three-dimensional plane, expressed as:

$$J_{\mathbf{d}}(\mathbf{x})=\frac{\partial\mathbf{x}''}{\partial\mathbf{d}}=\begin{pmatrix}x&y&1&0&0&0&-x'x&-x'y\\0&0&0&x&y&1&-y'x&-y'y\end{pmatrix}^{T}\tag{XII}$$
at this time, an analytical solution is obtained by the least squares method:

$$A\mathbf{d}=-\mathbf{b}\tag{XIII}$$

wherein the Hessian matrix is:

$$A=\sum_i J_i\mathbf{g}_i\mathbf{g}_i^{T}J_i^{T}\tag{XIV}$$

and the cumulative gradient is:

$$\mathbf{b}=\sum_i e_i J_i\mathbf{g}_i\tag{XV}$$
6. the method of gradient-based guideboard registration and positioning of claim 1, wherein, in step D,
the imaging principle of the binocular camera is as follows:
$$Z=\frac{fb}{u_L-u_R}=\frac{fb}{disp}$$

$$X=\frac{u_L\,b}{disp}$$
wherein X is the transverse distance from the camera to the center of the guideboard, and the lane where the vehicle is located is calculated according to the distance information between the center of the guideboard and the lane in the database; z refers to the normal distance from the camera to the guideboard plane, i.e., the distance from the vehicle to the guideboard;
$O_L$ and $O_R$ are the optical centers of the left and right apertures of the binocular camera; the distance between them is the baseline $b$, and $f$ is the focal length. $P(X,Y,Z)$ is a point in three-dimensional space, imaged in the two cameras as $P_L$ and $P_R$; after rectification, the x-axis coordinates of $P_L$ and $P_R$ on the imaging planes are $u_L$ and $u_R$, and the obtained parallax is $disp=u_L-u_R$.
The transverse and normal distances of the camera relative to the guideboard are obtained by combining the internal and external parameters obtained by the calibration of the camera, namely the distance of the vehicle relative to the guideboard and the lane where the vehicle is located are obtained, and the self-positioning of the vehicle is realized.
7. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the gradient-based guideboard registration positioning method of any of claims 1-6 when the computer program is executed.
8. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor implements the steps of the gradient-based guideboard registration positioning method of any of claims 1-6.
CN202310229650.2A 2023-03-10 2023-03-10 Guideboard registration positioning method based on gradient Pending CN116152342A (en)


Publications (1)

Publication Number Publication Date
CN116152342A true CN116152342A (en) 2023-05-23


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681885A (en) * 2023-08-03 2023-09-01 国网安徽省电力有限公司超高压分公司 Infrared image target identification method and system for power transmission and transformation equipment
CN116895030A (en) * 2023-09-11 2023-10-17 西华大学 Insulator detection method based on target detection algorithm and attention mechanism



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination