CN117994147A - Low-illumination image contrast enhancement method based on deep neural network - Google Patents

Low-illumination image contrast enhancement method based on deep neural network

Info

Publication number
CN117994147A
Authority
CN
China
Prior art keywords
image
low
brightness
neural network
enhanced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410244654.2A
Other languages
Chinese (zh)
Inventor
赵金雄
张驯
李林明
狄磊
魏峰
王海
赵红
刘怡彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
STATE GRID GASU ELECTRIC POWER RESEARCH INSTITUTE
Original Assignee
STATE GRID GASU ELECTRIC POWER RESEARCH INSTITUTE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by STATE GRID GASU ELECTRIC POWER RESEARCH INSTITUTE filed Critical STATE GRID GASU ELECTRIC POWER RESEARCH INSTITUTE
Priority to CN202410244654.2A
Publication of CN117994147A
Legal status: Pending (Current)


Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a low-illumination image contrast enhancement method based on a deep neural network, relating to the technical field of image processing and comprising the following steps: S1, acquiring an original image and screening it to obtain a low-illumination image; S2, scaling the low-illumination image to a fixed size and extracting its brightness information to obtain a brightness channel image; S3, learning the brightness channel image with the deep neural network to obtain a brightness enhancement curve; S4, adjusting the brightness channel image according to the brightness enhancement curve to obtain an enhanced brightness channel image; S5, merging the enhanced brightness channel image with the color information of the original image and outputting the final enhanced image. The invention can adjust the brightness channel image in real time, dynamically adjusting the brightness of each pixel according to the characteristics of the image; it remarkably enhances image contrast and processing efficiency, effectively improves the visual effect of the image, and reduces calculation error.

Description

Low-illumination image contrast enhancement method based on deep neural network
Technical Field
The invention relates to the technical field of image processing, in particular to a low-illumination image contrast enhancement method based on a deep neural network.
Background
In recent years, image information has been used ever more widely in daily life, and in some application scenarios the requirements on image quality are increasingly high. Image contrast is one of the key factors affecting image quality.
Current adaptive image contrast enhancement algorithms fall mainly into two categories: gray-level transformation methods and histogram adjustment methods. Gray-level transformation methods, such as logarithmic and exponential transformations, improve contrast by adjusting the dynamic range of the image gray levels, but the resulting improvement in visual effect is limited. Histogram adjustment methods, such as histogram equalization and histogram specification, mostly process the whole image globally; while they may enhance image detail, they also introduce problems such as noise amplification and artifacts in the enhanced image. Moreover, although the conventional adaptive histogram equalization method can effectively improve the visual effect of an image, the algorithm is generally complex, computationally heavy, and inefficient.
Disclosure of Invention
The invention aims to provide a low-illumination image contrast enhancement method based on a deep neural network, which can remarkably enhance the image contrast and effectively improve the visual effect of an image.
The technical scheme of the invention is as follows:
in a first aspect, the present application provides a low-illumination image contrast enhancement method based on a deep neural network, comprising the steps of:
S1, acquiring an original image, and screening the original image to obtain a low-illumination image;
S2, scaling the low-illumination image to a fixed size and then extracting brightness information of the low-illumination image to obtain a brightness channel image;
S3, learning the brightness channel image based on the deep neural network to obtain a brightness enhancement curve;
S4, adjusting the brightness channel image according to the brightness enhancement curve to obtain an enhanced brightness channel image;
S5, merging the enhanced brightness channel image with the color information of the original image, and outputting a final enhanced image.
Further, step S1 includes:
S11, acquiring an original image, calculating the blur degree of the image by using the Laplacian operator, and filtering the image;
S12, calculating the standard deviation of the pixel intensity in the filtered image through the cv2.meanStdDev function;
S13, calculating the variance of the pixel intensity from the standard deviation to obtain the blur metric of the original image;
S14, screening the original images according to the blur metric of the original image to obtain clear, normal pictures that serve as the low-illumination images.
Further, in step S3, the calculation formula for learning the luminance channel image based on the deep neural network to obtain the luminance enhancement curve includes:
LE(I(x); α) = I(x) + αI(x)(1 - I(x))
where I(x) is the input image, x is the pixel coordinate, α is a learnable parameter, and LE(I(x); α) is the enhanced image of I(x).
Further, in step S4, the calculation formula for adjusting the luminance channel image according to the luminance enhancement curve includes:
LE_n(x) = LE_{n-1}(x) + α_n·LE_{n-1}(x)(1 - LE_{n-1}(x))
where LE_n(x) is the n-th luminance channel image, LE_{n-1}(x) is the (n-1)-th luminance channel image, x is the pixel coordinate, n denotes the number of iterations, and α_n is the curve parameter at each pixel position.
Further, step S5 includes:
S51, calculating the total loss of the enhanced brightness channel image;
S52, setting an image error based on the total loss of the enhanced brightness channel image;
And S53, merging the enhanced brightness channel image with the color information of the original image according to the image error, and outputting a final enhanced image.
Further, in step S51, the total loss of the enhanced luminance channel image comprises the spatial consistency loss, the exposure control loss, the color consistency loss, and the illumination smoothness loss; the calculation formula is as follows:
L = λ_1·L_SC + λ_2·L_EC + λ_3·L_CC + λ_4·L_TV
where L is the total loss of the enhanced luminance channel image, λ_1 is the spatial consistency loss weight, L_SC is the spatial consistency loss, λ_2 is the exposure control loss weight, L_EC is the exposure control loss, λ_3 is the color consistency loss weight, L_CC is the color consistency loss, λ_4 is the illumination smoothness loss weight, and L_TV is the illumination smoothness loss.
Further, in step S52, the calculation formula of the image error is:
L_spa = (1/K) Σ_{i=1..K} Σ_{j∈Ω(i)} ( |Y_i - Y_j| - |I_i - I_j| )²
where L_spa is the image error, K is the number of pixels, i indexes the pixels, Ω(i) is the 4-neighborhood of the i-th pixel, Y_i and I_i are the values of the enhanced image and the input image at pixel i, and Y_j and I_j are the corresponding values at a neighboring pixel j.
In a second aspect, the present application provides an electronic device comprising:
A memory for storing one or more programs;
A processor;
wherein the one or more programs, when executed by the processor, implement the low-illumination image contrast enhancement method based on a deep neural network according to any implementation of the first aspect.
In a third aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a deep neural network based low illumination image contrast enhancement method as in any of the first aspects above.
Compared with the prior art, the invention has at least the following advantages or beneficial effects:
(1) According to the low-illumination image contrast enhancement method based on the deep neural network, the brightness enhancement curve is obtained by learning the brightness channel image through the deep neural network, so that the brightness channel image can be adjusted in real time, the brightness of each pixel can be dynamically adjusted according to the characteristics of the image, the image contrast can be remarkably enhanced, and the visual effect of the image can be effectively improved;
(2) The method combines the color information of the enhanced brightness channel image and the color information of the original image, reduces calculation errors and further enhances image contrast;
(3) The method has small calculated amount when processing the image, and remarkably improves the image processing efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a step diagram of the low-illumination image contrast enhancement method based on a deep neural network of the present invention;
Fig. 2 is a schematic block diagram of an electronic device.
Icon: 101. a memory; 102. a processor; 103. a communication interface.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
It should be noted that, in this document, the term "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The various embodiments and features of the embodiments described below may be combined with one another without conflict.
Example 1
Referring to fig. 1, fig. 1 is a step diagram of a low-illumination image contrast enhancement method based on a deep neural network according to an embodiment of the present application.
The invention provides a low-illumination image contrast enhancement method based on a deep neural network, which comprises the following steps:
S1, acquiring an original image, and screening the original image to obtain a low-illumination image;
S2, scaling the low-illumination image to a fixed size and then extracting brightness information of the low-illumination image to obtain a brightness channel image;
S3, learning the brightness channel image based on the deep neural network to obtain a brightness enhancement curve;
S4, adjusting the brightness channel image according to the brightness enhancement curve to obtain an enhanced brightness channel image;
S5, merging the enhanced brightness channel image with the color information of the original image, and outputting a final enhanced image.
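The five steps above can be sketched end to end. The patent does not name a specific color space or mechanism for separating brightness from color, so the BT.601 luma split, the luminance-gain color merge, the iteration count, and the `curve_fn` callback standing in for the trained network are all illustrative assumptions (the fixed-size resize of step S2 is omitted for brevity):

```python
import numpy as np

def enhance_low_light(rgb, curve_fn, n_iter=8):
    """End-to-end sketch of steps S2-S5 on an RGB array scaled to [0, 1].

    curve_fn stands in for the trained deep neural network: it takes a
    luminance map and returns a per-pixel curve parameter map (alpha).
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    # S2: extract brightness (BT.601 luma) as the luminance channel image.
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # S3/S4: apply the learned brightness enhancement curve iteratively.
    le = y
    for _ in range(n_iter):
        alpha = curve_fn(le)
        le = le + alpha * le * (1.0 - le)
    # S5: merge the enhanced luminance with the original color information
    # by rescaling each RGB channel with the per-pixel luminance gain.
    gain = le / np.maximum(y, 1e-6)
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```

With a constant alpha map of 1, each iteration pushes dark pixels toward 1 while leaving pure black and pure white fixed, which is the qualitative behavior the enhancement curve is designed to have.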
As a preferred embodiment, step S1 includes:
S11, acquiring an original image, calculating the blur degree of the image by using the Laplacian operator, and filtering the image;
S12, calculating the standard deviation of the pixel intensity in the filtered image through the cv2.meanStdDev function;
S13, calculating the variance of the pixel intensity from the standard deviation to obtain the blur metric of the original image;
S14, screening the original images according to the blur metric of the original image to obtain clear, normal pictures that serve as the low-illumination images.
After the variance of the pixel intensity is calculated from the standard deviation, images with a smaller variance are discarded and those with a larger variance are kept as normal images; that is, the variance serves as the blur metric used to detect and screen the low-illumination images.
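Steps S11-S13 amount to computing the variance of the Laplacian response, which is high for sharp images and near zero for blurry or flat ones. The OpenCV calls named in the text (cv2.Laplacian, cv2.meanStdDev) compute exactly this; the sketch below reproduces them in plain NumPy so it is self-contained, and the screening threshold in `screen_images` is an assumption, not a value from the patent:

```python
import numpy as np

# The standard 3x3 Laplacian kernel used by cv2.Laplacian.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def blur_metric(gray):
    """Variance of the Laplacian response (steps S11-S13)."""
    g = np.asarray(gray, dtype=np.float64)
    h, w = g.shape
    # 'valid' 2-D convolution with the 3x3 Laplacian kernel; the kernel
    # is symmetric, so correlation and convolution coincide.
    resp = sum(LAPLACIAN[di, dj] * g[di:h - 2 + di, dj:w - 2 + dj]
               for di in range(3) for dj in range(3))
    std = resp.std()   # cv2.meanStdDev would return (mean, std) here
    return std ** 2    # variance = std^2 is the blur metric

def screen_images(grays, threshold):
    """Step S14: keep images whose Laplacian variance exceeds the
    threshold (sharp pictures); blurry ones are discarded."""
    return [g for g in grays if blur_metric(g) > threshold]
```

A flat image scores exactly zero, while any image with sharp edges scores high, so a single threshold separates the two populations described in the embodiment.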
As a preferred embodiment, in step S3, the calculation formula for learning the luminance channel image based on the deep neural network to obtain the luminance enhancement curve includes:
LE(I(x); α) = I(x) + αI(x)(1 - I(x))
where I(x) is the input image, x is the pixel coordinate, α is a learnable parameter, and LE(I(x); α) is the enhanced image of I(x).
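The curve itself is a one-line quadratic mapping. A minimal sketch, with the function name chosen here for illustration; the patent only fixes the formula, and the [0, 1] input range and the remark about α in [-1, 1] follow the usual convention for curves of this form:

```python
import numpy as np

def light_enhancement_curve(I, alpha):
    """LE(I(x); α) = I(x) + α·I(x)·(1 - I(x)).

    I is a luminance map scaled to [0, 1]; alpha may be a scalar or a
    per-pixel map predicted by the network (the learnable parameter).
    For alpha in [-1, 1] the curve maps [0, 1] into [0, 1] and keeps
    the endpoints 0 and 1 fixed.
    """
    I = np.asarray(I, dtype=np.float64)
    return I + alpha * I * (1.0 - I)
```

Positive alpha brightens mid-tones (LE(0.5; 1) = 0.75) while black and white stay put, which is what lets the method raise dark regions without clipping highlights.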
In a preferred embodiment, in step S4, the calculation formula for adjusting the luminance channel image according to the luminance enhancement curve includes:
LE_n(x) = LE_{n-1}(x) + α_n·LE_{n-1}(x)(1 - LE_{n-1}(x))
where LE_n(x) is the n-th luminance channel image, LE_{n-1}(x) is the (n-1)-th luminance channel image, x is the pixel coordinate, n denotes the number of iterations, and α_n is the curve parameter at each pixel position.
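The iteration just re-applies the curve with a fresh parameter map each round. A sketch under the assumption (common in curve-estimation networks of this kind, though not stated in the patent) that all n parameter maps are available up front:

```python
import numpy as np

def iterate_curve(I, alphas):
    """Iterative adjustment LE_n(x) = LE_{n-1}(x) + α_n·LE_{n-1}(x)·(1 - LE_{n-1}(x)),
    starting from LE_0 = I.

    alphas is a sequence of per-pixel parameter maps (or scalars),
    one per iteration n.
    """
    le = np.asarray(I, dtype=np.float64)
    for alpha_n in alphas:
        le = le + alpha_n * le * (1.0 - le)
    return le
```

Each round with α_n = 1 compounds the brightening: 0.5 → 0.75 → 0.9375 → ..., so a handful of iterations already lifts deep shadows substantially.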
As a preferred embodiment, step S5 includes:
S51, calculating the total loss of the enhanced brightness channel image;
S52, setting an image error based on the total loss of the enhanced brightness channel image;
And S53, merging the enhanced brightness channel image with the color information of the original image according to the image error, and outputting a final enhanced image.
As a preferred embodiment, in step S51, the total loss of the enhanced luminance channel image comprises the spatial consistency loss, the exposure control loss, the color consistency loss, and the illumination smoothness loss; the calculation formula is as follows:
L = λ_1·L_SC + λ_2·L_EC + λ_3·L_CC + λ_4·L_TV
where L is the total loss of the enhanced luminance channel image, λ_1 is the spatial consistency loss weight, L_SC is the spatial consistency loss, λ_2 is the exposure control loss weight, L_EC is the exposure control loss, λ_3 is the color consistency loss weight, L_CC is the color consistency loss, λ_4 is the illumination smoothness loss weight, and L_TV is the illumination smoothness loss.
As a preferred embodiment, in step S52, the calculation formula of the image error is:
L_spa = (1/K) Σ_{i=1..K} Σ_{j∈Ω(i)} ( |Y_i - Y_j| - |I_i - I_j| )²
where L_spa is the image error, K is the number of pixels, i indexes the pixels, Ω(i) is the 4-neighborhood of the i-th pixel, Y_i and I_i are the values of the enhanced image and the input image at pixel i, and Y_j and I_j are the corresponding values at a neighboring pixel j.
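The symbols above match the spatial consistency loss used by curve-estimation enhancement networks: it penalizes changes in local contrast between neighboring pixels of the enhanced and input images. The implementation below is a reconstruction under that assumption; `np.roll` wraps at the image border, so boundary handling is simplified relative to a padded implementation:

```python
import numpy as np

def spatial_consistency_loss(Y, I):
    """L_spa = (1/K) · Σ_i Σ_{j∈Ω(i)} (|Y_i - Y_j| - |I_i - I_j|)²

    Y is the enhanced luminance image, I the input luminance image,
    and Ω(i) the 4-neighborhood (up, down, left, right) of pixel i.
    """
    Y = np.asarray(Y, dtype=np.float64)
    I = np.asarray(I, dtype=np.float64)
    K = Y.size
    total = 0.0
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        # Difference between each pixel and one of its 4 neighbors.
        dY = Y - np.roll(Y, shift, axis=axis)
        dI = I - np.roll(I, shift, axis=axis)
        total += np.sum((np.abs(dY) - np.abs(dI)) ** 2)
    return total / K
```

The loss is zero whenever the enhancement preserves every neighbor-to-neighbor contrast exactly (for instance Y = I), and grows as the enhanced image flattens or exaggerates local gradients.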
Example 2
Referring to fig. 2, fig. 2 is a schematic block diagram of an electronic device according to an embodiment of the present application.
An electronic device comprises a memory 101, a processor 102 and a communication interface 103, wherein the memory 101, the processor 102 and the communication interface 103 are directly or indirectly electrically connected with each other to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules that are stored within the memory 101 for execution by the processor 102 to perform various functional applications and data processing. The communication interface 103 may be used for communication of signaling or data with other node devices.
The Memory 101 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
The processor 102 may be an integrated circuit chip with signal processing capabilities. The processor 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It will be appreciated that the configuration shown in the figures is illustrative only and that a deep neural network based low-illumination image contrast enhancement method may also include more or fewer components than shown in the figures or have a different configuration than shown in the figures. The components shown in the figures may be implemented in hardware, software, or a combination thereof.
In the embodiments provided in the present application, it should be understood that the disclosed method may be implemented in other manners as well. The above-described embodiments are merely illustrative, for example, of the flowchart or block diagrams in the figures, which illustrate the architecture, functionality, and operation of possible implementations of methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In summary, according to the low-illumination image contrast enhancement method based on the deep neural network provided by the embodiment of the application, the brightness enhancement curve is obtained by learning the brightness channel image through the deep neural network, so that the brightness channel image can be adjusted in real time, the brightness of each pixel can be dynamically adjusted according to the characteristics of the image, the image contrast can be remarkably enhanced, the visual effect of the image can be effectively improved, the calculated amount is small, and the image processing efficiency is remarkably improved; meanwhile, the application combines the color information of the enhanced brightness channel image and the original image, reduces the calculation error and further enhances the image contrast.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (9)

1. A low-illumination image contrast enhancement method based on a deep neural network, characterized by comprising the following steps:
S1, acquiring an original image, and screening the original image to obtain a low-illumination image;
S2, scaling the low-illumination image to a fixed size and then extracting brightness information of the low-illumination image to obtain a brightness channel image;
S3, learning the brightness channel image based on the deep neural network to obtain a brightness enhancement curve;
S4, adjusting the brightness channel image according to the brightness enhancement curve to obtain an enhanced brightness channel image;
S5, merging the enhanced brightness channel image with the color information of the original image, and outputting a final enhanced image.
2. The method for enhancing contrast of a low-illumination image based on a deep neural network as claimed in claim 1, wherein step S1 comprises:
S11, acquiring an original image, calculating the blur degree of the image by using the Laplacian operator, and filtering the image;
S12, calculating the standard deviation of the pixel intensity in the filtered image through the cv2.meanStdDev function;
S13, calculating the variance of the pixel intensity from the standard deviation to obtain the blur metric of the original image;
S14, screening the original images according to the blur metric of the original image to obtain clear, normal pictures that serve as the low-illumination images.
3. The method for enhancing contrast of a low-illumination image based on a deep neural network as claimed in claim 1, wherein in step S3, the calculation formula for learning the luminance channel image based on the deep neural network to obtain the luminance enhancement curve includes:
LE(I(x); α) = I(x) + αI(x)(1 - I(x))
where I(x) is the input image, x is the pixel coordinate, α is a learnable parameter, and LE(I(x); α) is the enhanced image of I(x).
4. The method for enhancing contrast of a low-illumination image based on a deep neural network as claimed in claim 3, wherein in step S4, the calculation formula for adjusting the brightness channel image according to the brightness enhancement curve includes:
LE_n(x) = LE_{n-1}(x) + α_n·LE_{n-1}(x)(1 - LE_{n-1}(x))
where LE_n(x) is the n-th luminance channel image, LE_{n-1}(x) is the (n-1)-th luminance channel image, x is the pixel coordinate, n denotes the number of iterations, and α_n is the curve parameter at each pixel position.
5. The method for enhancing contrast of a low-illumination image based on a deep neural network as claimed in claim 1, wherein step S5 comprises:
S51, calculating the total loss of the enhanced brightness channel image;
S52, setting an image error based on the total loss of the enhanced brightness channel image;
And S53, merging the enhanced brightness channel image with the color information of the original image according to the image error, and outputting a final enhanced image.
6. The method for enhancing contrast of a low-illumination image based on a deep neural network as claimed in claim 5, wherein in step S51, the total loss of the enhanced luminance channel image comprises the spatial consistency loss, the exposure control loss, the color consistency loss, and the illumination smoothness loss; the calculation formula is as follows:
L = λ_1·L_SC + λ_2·L_EC + λ_3·L_CC + λ_4·L_TV
where L is the total loss of the enhanced luminance channel image, λ_1 is the spatial consistency loss weight, L_SC is the spatial consistency loss, λ_2 is the exposure control loss weight, L_EC is the exposure control loss, λ_3 is the color consistency loss weight, L_CC is the color consistency loss, λ_4 is the illumination smoothness loss weight, and L_TV is the illumination smoothness loss.
7. The method for enhancing contrast of a low-illumination image based on a deep neural network as claimed in claim 5, wherein in step S52, the calculation formula of the image error is:
L_spa = (1/K) Σ_{i=1..K} Σ_{j∈Ω(i)} ( |Y_i - Y_j| - |I_i - I_j| )²
where L_spa is the image error, K is the number of pixels, i indexes the pixels, Ω(i) is the 4-neighborhood of the i-th pixel, Y_i and I_i are the values of the enhanced image and the input image at pixel i, and Y_j and I_j are the corresponding values at a neighboring pixel j.
8. An electronic device, comprising:
A memory for storing one or more programs;
A processor;
wherein the one or more programs, when executed by the processor, implement the low-illumination image contrast enhancement method based on a deep neural network according to any one of claims 1-7.
9. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements a depth neural network based low-intensity image contrast enhancement method as claimed in any one of claims 1 to 7.
CN202410244654.2A, filed 2024-03-01: Low-illumination image contrast enhancement method based on deep neural network. Status: Pending.

Priority Applications (1)

Application Number: CN202410244654.2A · Priority Date: 2024-03-01 · Filing Date: 2024-03-01 · Title: Low-illumination image contrast enhancement method based on deep neural network

Publications (1)

Publication Number: CN117994147A · Publication Date: 2024-05-07

Family ID: 90897491 · Country: CN


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination