CN113538242A - Image interpolation model training method based on residual error guide strategy - Google Patents
- Publication number
- CN113538242A (application CN202110830807.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- residual
- interpolation
- random forest
- level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T3/4007: Interpolation-based scaling, e.g. bilinear interpolation (under G06T3/40, Scaling the whole image or part thereof)
- G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/70
- G06T2207/20081: Training; Learning
- G06T2207/20212: Image combination
- G06T2207/20224: Image subtraction
Abstract
The application discloses an image interpolation model training method based on a residual guiding strategy, an image interpolation method based on the residual guiding strategy, a computer device, and a readable storage medium. Exploiting the hierarchical structure of random forests, an image interpolation model is constructed and trained with a residual guiding strategy. A random forest is trained with a pre-interpolation image as feature data and residuals as label data, the forest growing synchronously layer by layer during training. The initial residual is the difference between the high-resolution image and the pre-interpolation image, and each later residual is the difference between the previous-level residual and its estimated residual, so the residual is updated at every iteration and each level's estimated residual refines the previous level's residual. As the number of training levels increases, the image residual converges to zero, finally yielding an image interpolation model determined by the mapping relation of each level. Interpolating a low-resolution image with this model gives a good interpolation result.
Description
Technical Field
The present application relates to the field of computer technology, and in particular to an image interpolation model training method based on a residual guiding strategy, an image interpolation method based on a residual guiding strategy, a computer device, and a readable storage medium.
Background
Owing to limitations of hardware design and cost, image acquisition devices may capture some regions of interest of a digital image at low resolution. Image interpolation restores a low-resolution image to a high-resolution one while preserving the detail and structure of the original low-resolution image as far as possible.
Although traditional bilinear and bicubic methods can perform image interpolation, obvious artifacts appear at image edges, and the results contain noisy and blurred regions. Improving interpolation performance requires more prior information, as in edge-guided interpolation methods and image interpolation methods based on local or non-local pixels or image blocks. However, analysis of the residual images obtained by comparing a reference image with the results of various methods shows that interpolation methods proposed in recent years perform well in smooth regions but poorly in edge regions.
In summary, the effect of current image interpolation methods is not ideal, and how to improve the image interpolation effect is a problem to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide an image interpolation model training method based on a residual guiding strategy, an image interpolation method based on the residual guiding strategy, computer equipment and a readable storage medium, which are used for solving the problem that the interpolation effect of the current image interpolation scheme is not ideal. The specific scheme is as follows:
in a first aspect, the present application provides an image interpolation model training method based on a residual guiding strategy, including:
acquiring a high-resolution image; down-sampling the high-resolution image to obtain a low-resolution image; generating a pre-interpolation image according to the low-resolution image;
subtracting the pre-interpolation image from the high-resolution image to obtain an initial residual;
training a random forest by using the pre-interpolation image and the initial residual; in the training process, the random forest grows synchronously layer by layer, the initial residual is used as the first-level residual, at any level of the random forest a mapping relation between the pre-interpolation image and the current-level residual is learned to generate an estimated residual, and the next-level residual is obtained by subtracting the estimated residual from the current-level residual;
and when the training termination condition is reached, outputting the random forest determined by the mapping relation of each level as an image interpolation model.
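The data-preparation steps of the first aspect (down-sample, pre-interpolate, take the difference) can be sketched as follows. Nearest-neighbour pre-interpolation is an assumption standing in for the unspecified preset interpolation algorithm:

```python
import numpy as np

def make_training_pair(high, scale=2):
    """Down-sample the high-resolution image, pre-interpolate the
    low-resolution image back to full size (nearest-neighbour here,
    since the scheme leaves the preset algorithm open), and take the
    difference as the initial residual."""
    low = high[::scale, ::scale]                 # down-sampling
    pre = np.kron(low, np.ones((scale, scale)))  # pre-interpolation image
    initial_residual = high - pre                # label data for level 1
    return pre, initial_residual

high = np.arange(16, dtype=float).reshape(4, 4)
pre, res = make_training_pair(high)
# the pre-interpolation image plus the initial residual recovers the truth
assert np.allclose(pre + res, high)
```

Note that the pixels copied directly from the low-resolution image (the fixed points discussed later) have zero initial residual by construction.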
Optionally, the random forest is divided into multiple groups of random forests, and the training of the random forests by using the pre-interpolation images and the initial residuals includes:
generating a feature vector of the pre-interpolated image;
grouping the feature vectors according to fixed-point distribution patterns, wherein the number of groups of the feature vectors is equal to the number of groups of the random forests;
and when training the random forest, training a group of random forests by using each group of the feature vectors and corresponding residual vectors in the initial residual.
Optionally, the generating the feature vector of the pre-interpolation image includes:
filtering the pre-interpolation image by utilizing a one-dimensional first-order gradient operator and a one-dimensional second-order gradient operator to generate four corresponding characteristic images; and sampling the four characteristic images to obtain a characteristic vector of each sampling position.
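A minimal sketch of this feature-generation step. The specific 3-tap kernels are assumptions, since the scheme only requires one-dimensional first- and second-order gradient operators:

```python
import numpy as np

def filter_1d(img, kernel, axis):
    # 'same'-size centered correlation along one axis (zero padding)
    k = np.asarray(kernel, dtype=float)
    def line(v):
        return np.convolve(v, k[::-1], mode="same")
    return np.apply_along_axis(line, axis, img.astype(float))

def feature_images(pre):
    """Four feature images from the pre-interpolation image: horizontal
    and vertical first-order gradients, and horizontal and vertical
    second-order gradients."""
    g1 = [-1, 0, 1]   # assumed 1-D first-order gradient operator
    g2 = [1, -2, 1]   # assumed 1-D second-order gradient operator
    return [filter_1d(pre, g1, 1), filter_1d(pre, g1, 0),
            filter_1d(pre, g2, 1), filter_1d(pre, g2, 0)]

def feature_vectors(pre):
    # stack the four filter responses at every sampling position
    F = np.stack(feature_images(pre), axis=-1)
    return F.reshape(-1, 4)   # one 4-D feature vector per position
```

On a horizontal unit ramp the first-order horizontal response is constant in the interior and the second-order response vanishes, which is the expected behaviour of gradient features.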
Optionally, the sampling mode of the four characteristic images is specifically: sampling at intervals with a step size of 1;
correspondingly, there are 4 fixed-point distribution patterns, and the feature vectors and the random forests are each divided into 4 groups.
Optionally, the image interpolation model specifically includes K levels of random forests; the pre-interpolation image of the first-level random forest is an image generated by a preset interpolation algorithm, and for any k ∈ [2, K], the pre-interpolation image of the k-th-level random forest is the image obtained by interpolating sequentially through the previous k−1 levels of random forests.
Optionally, in the image interpolation model, the high-resolution images of different levels of random forests are different.
Optionally, the random forest grows synchronously according to layers, including:
judging whether unprocessed target nodes exist at any level of the random forest;
if such a node exists, generating a first linear transformation from the feature vectors contained in the target node to the residual vectors contained in the target node, and further generating a second linear transformation from the feature vectors contained in the target node to a target residual vector, wherein the target residual vector is the residual vector that intersects the target node;
if not, judging whether the splitting termination condition is reached;
if yes, determining that the target node is a leaf node and recording the second linear transformation of the target node, the second linear transformations of all leaf nodes finally constituting the mapping relation between the pre-interpolation image and the sum of the residuals of all levels;
if not, determining that the target node belongs to the internal node, and entering the next level through node splitting; in the node splitting process, splitting parameters are randomly selected, the target node is split, the optimal splitting parameter is determined according to the error reduction amount before and after splitting, and the optimal splitting parameter of the target node is recorded.
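The split-selection rule described above can be sketched as follows. Axis-aligned splits on a randomly drawn feature and threshold, scored by sum-of-squares error, are assumptions; the scheme does not fix the split parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

def best_random_split(X, R, n_trials=8):
    """Draw random split parameters (feature index, threshold), split
    the node's samples, and keep the parameters with the largest error
    reduction before vs. after the split. Error is the sum of squared
    deviations of the residual vectors from their mean."""
    def sse(r):
        return ((r - r.mean(axis=0)) ** 2).sum() if len(r) else 0.0
    before = sse(R)
    best = None
    for _ in range(n_trials):
        f = rng.integers(X.shape[1])                    # random feature
        t = rng.uniform(X[:, f].min(), X[:, f].max())   # random threshold
        mask = X[:, f] <= t
        if mask.all() or not mask.any():
            continue                                    # degenerate split
        reduction = before - (sse(R[mask]) + sse(R[~mask]))
        if best is None or reduction > best[0]:
            best = (reduction, f, t)
    return best   # (error reduction, feature index, threshold) or None
```

On two well-separated clusters the error reduction equals the full residual energy, since each child node becomes constant.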
In a second aspect, the present application provides an image interpolation method based on a residual guiding strategy, including:
acquiring a low-resolution image to be interpolated;
generating a pre-interpolation image according to the low-resolution image;
inputting the pre-interpolation image into a trained random forest; at any level of the random forest, generating an estimated residual according to the mapping relation, learned during training, between the pre-interpolation image and the residual of the current level;
and generating an interpolation image of the low-resolution image according to the estimation residual and the pre-interpolation image of each level.
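The fusion step can be sketched as follows; `level_maps` is a hypothetical list of per-level prediction functions standing in for the mapping relations learned in training:

```python
import numpy as np

def interpolate(pre, level_maps):
    """Interpolation stage: each trained level predicts an estimated
    residual from the pre-interpolation image; summing the per-level
    estimates and adding them to the pre-interpolation image yields
    the interpolated image."""
    out = pre.astype(float).copy()
    for level in level_maps:
        out += level(pre)       # add this level's estimated residual
    return out

# toy stand-ins for two trained levels (hypothetical, for illustration)
maps = [lambda p: np.ones_like(p), lambda p: 2 * np.ones_like(p)]
```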
In a third aspect, the present application provides a computer device, comprising:
a memory: for storing a computer program;
a processor: for executing the computer program to implement the residual guiding strategy based image interpolation model training method as described above, and/or the residual guiding strategy based image interpolation method as described above.
In a fourth aspect, the present application provides a readable storage medium having stored thereon a computer program for implementing, when executed by a processor, a residual guiding strategy-based image interpolation model training method as described above, and/or a residual guiding strategy-based image interpolation method as described above.
In summary, the present application provides an image interpolation model training method based on a residual guiding strategy, including: acquiring a high-resolution image; down-sampling the high-resolution image to obtain a low-resolution image; generating a pre-interpolation image from the low-resolution image; subtracting the pre-interpolation image from the high-resolution image to obtain an initial residual; and training a random forest with the pre-interpolation image and the initial residual. During training, the random forest grows synchronously layer by layer: the initial residual serves as the first-level residual, each level of the random forest learns a mapping relation between the pre-interpolation image and the current-level residual to generate an estimated residual, and the next-level residual is obtained by subtracting the estimated residual from the current-level residual. When the training termination condition is reached, the random forest determined by the mapping relations of all levels is output as the image interpolation model.
The method thus exploits the hierarchical structure of random forests to construct and train an image interpolation model with a residual guiding strategy. Specifically, a random forest is trained with the pre-interpolation image as feature data and the residual as label data, the forest growing synchronously layer by layer during training. The initial residual is the difference between the high-resolution image and the pre-interpolation image, and each subsequent residual is the difference between the previous-level residual and the previous-level estimated residual, so the residual is updated at every iteration and each level's estimated residual refines the previous level's residual. As the number of training levels increases, the image residual converges to zero, and the final image interpolation model is determined by the mapping relations of all levels. Interpolating a low-resolution image with this model markedly improves the interpolation effect.
In addition, the application also provides an image interpolation method based on the residual guiding strategy, computer equipment and a readable storage medium, and the technical effect of the method corresponds to that of the method, and the method is not repeated herein.
Drawings
In order to explain the embodiments of the present application or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is an overall flowchart of a first embodiment of an image interpolation model training method based on a residual guiding strategy provided in the present application;
fig. 2 is a schematic diagram of a data preprocessing process in a first embodiment of an image interpolation model training method based on a residual error guiding strategy provided in the present application;
FIG. 3 is a schematic diagram of a random forest layer-by-layer training process provided by the present application;
fig. 4 is a flowchart of a first embodiment of an image interpolation method based on a residual guiding strategy according to the present application;
fig. 5 is a schematic diagram of a process for interpolating an image according to a training result according to the present application.
Detailed Description
In order that those skilled in the art may better understand the disclosure, a detailed description is given below with reference to the accompanying drawings. It is to be understood that the described embodiments are only a part, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
In general, the goal of image interpolation is to transform a low-resolution image into a high-resolution image while preserving as much of the detail and structure of the original low-resolution image as possible. In this application, image interpolation is realized through a machine learning model. Training a model means that the model learns the mapping relationship between feature data X and label data Y; however, the learning capability of the chosen model M may be limited, so its prediction capability may hit a bottleneck, and applying a residual guiding strategy can help the model break through this limitation.
The residual, as the difference between the true value and the predicted value, reflects where the model's prediction falls short and provides a basis for revising the model. Using the residual instead of the true value as the label does not change the training procedure itself, but the residual has one notable property: it can be updated across iterations. The model is therefore extended under the guidance of the residual; each new model refines the previous residual, and the final model obtained after repeated iteration produces a better prediction result. Under this strategy, the estimated value output by the model keeps approaching the true value. If the first-level model M^(1) cannot predict perfectly, a residual, denoted ε, remains, and a second model M^(2) is trained on this residual. If M^(2) compensates for part of M^(1)'s error, a new residual is produced whose components are, ideally, smaller than before. This continues until, at the n-th model, the intensity of every component of the residual ε is almost 0; that is, once the iteration completes, the model's prediction approaches the true value. This is the overall idea of the residual guiding strategy.
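A toy illustration of this idea, with depth-1 regression stumps standing in for the per-level models; since each stump is fitted to the current residual and then subtracted from it, the residual energy is non-increasing level by level:

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) on scalar feature x to
    residual r: choose the threshold with the smallest remaining error,
    predict the per-side mean. A stand-in for one level's model."""
    best = None
    for t in np.unique(x)[:-1]:
        left, right = r[x <= t], r[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lo, hi = best
    return lambda z: np.where(z <= t, lo, hi)

def train_residual_guided(x, y, n_levels=4):
    """Each level learns a map from the (fixed) features to the current
    residual; the next-level residual is the current residual minus the
    estimate, so the residual shrinks level by level."""
    residual = y.copy()          # initial residual (truth minus zero prediction)
    models, energy = [], []
    for _ in range(n_levels):
        m = fit_stump(x, residual)
        residual = residual - m(x)      # refined residual for the next level
        models.append(m)
        energy.append(float((residual ** 2).mean()))
    return models, energy
```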
The core of the application is to provide an image interpolation model training method based on a residual guiding strategy, an image interpolation method based on the residual guiding strategy, a computer device, and a readable storage medium. The model is constructed and trained by exploiting the hierarchical structure of random forests and applying a residual guiding strategy. Theoretically, as the number of training levels increases, the image residual converges to zero, which improves the interpolation effect. During training, each level stores a corresponding regression function, i.e., the mapping relation between low-resolution image blocks and residual image blocks at the current stage. In the interpolation stage, the residuals are predicted using the mapping relation of each level; given the convergence property of the training, each level's estimated residual refines the previous level's residual, and fusing the estimated residuals of all levels completes the reconstruction of the image while ensuring its quality. Extensive experimental results show that the method provides interpolated images with high accuracy and good subjective quality.
A first embodiment of the image interpolation model training method based on the residual guiding strategy provided by the present application is described below.
Referring to fig. 1, an embodiment includes the following steps:
s11, acquiring a high-resolution image; down-sampling the high-resolution image to obtain a low-resolution image; generating a pre-interpolation image according to the low-resolution image;
s12, carrying out difference on the high-resolution image and the pre-interpolation image to obtain an initial residual error;
S13, training the random forest with the pre-interpolation image and the initial residual; in the training process, the random forest grows synchronously layer by layer, the initial residual is used as the first-level residual, at any level of the random forest a mapping relation between the pre-interpolation image and the current-level residual is learned to generate an estimated residual, and the next-level residual is obtained by subtracting the estimated residual from the current-level residual;
and S14, when the training termination condition is reached, outputting the random forest determined by the mapping relations of all levels as the image interpolation model.
Specifically, to improve efficiency, the pre-interpolation image and the initial residual are preprocessed before being used as training data. The preprocessing includes, but is not limited to: sampling the pre-interpolation image to obtain feature vectors for training, and sampling the initial residual to obtain residual vectors for training.
To further improve the interpolation effect, the training data can be grouped according to a certain rule; correspondingly, the random forests are divided into multiple groups, the number of groups of training data equalling the number of groups of random forests, and during training each group of training data trains one group of random forests. As a specific implementation, since image interpolation must keep the pixel values at fixed-point positions unchanged before and after interpolation (i.e., the fixed-point pixels of the interpolated image correspond one-to-one to values in the low-resolution image), the feature vectors may be grouped according to the fixed-point distribution pattern contained in the sampling result of each pre-interpolation image block. The distribution pattern of fixed points in a sampling result is determined mainly by the sampling rule; if sampling is performed at intervals with a step size of 1, there are 4 fixed-point distribution patterns, so the feature vectors fall into 4 groups and the random forests are likewise divided into 4 groups.
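For 2x interpolation with stride-1 patch sampling, the four fixed-point distribution patterns correspond to the parity of a patch's top-left corner, as this small sketch illustrates (the parity encoding is an assumption for illustration):

```python
def phase_group(row, col):
    """For 2x interpolation the fixed points sit at even-even positions,
    so a patch sampled at (row, col) with step 1 contains one of four
    fixed-point distribution patterns, determined by the parity of its
    top-left corner. Feature vectors are grouped accordingly, giving
    4 groups of feature vectors and hence 4 groups of random forests."""
    return 2 * (row % 2) + (col % 2)   # group index in {0, 1, 2, 3}

# every sampling position falls into exactly one of the 4 groups
groups = {phase_group(r, c) for r in range(6) for c in range(6)}
assert groups == {0, 1, 2, 3}
```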
Grouping the training data and the random forests into several groups is one way to improve the interpolation effect. In practical applications, such measures may be combined or used separately.
To improve the interpolation effect, the image interpolation model may include multiple levels of random forests, i.e., the model is structured as cascaded random forests. Each level is trained in the manner described above. Note that, during training, the pre-interpolation image of the first-level random forest is generated by a preset interpolation algorithm, and this embodiment does not limit which interpolation algorithm is selected; for any k ∈ [2, K], the pre-interpolation image of the k-th-level random forest is the image obtained by interpolating sequentially through the previous k−1 levels of random forests.
On the basis, in order to avoid overfitting, different high-resolution images are selected for random forests of different levels.
In summary, in the image interpolation model training method based on the residual guiding strategy provided by this embodiment, the pre-interpolation image, together with the initial residual generated by subtracting the pre-interpolation image from the high-resolution image, is used as the input of the random forest. The random forest grows layer by layer during training; after each layer grows, the residual is refined, i.e., a new residual is generated, and this new residual guides the growth of the next layer. The initial residual is the difference between the high-resolution image and the pre-interpolation image, and each later-level residual is the difference between the previous-level residual and the previous-level estimated residual. Following the residual guiding idea, an image interpolation model based on random forests is constructed and trained, and the trained model can improve the image interpolation quality.
On the basis of the first embodiment, the preprocessing of the pre-interpolation image and the initial residual is described in detail below. This is provided as one feasible preprocessing method; the embodiment does not limit which preprocessing method is adopted.
As shown in fig. 2, the pre-processing procedure of the pre-interpolated image includes the following steps:
s21, filtering and sampling the pre-interpolation image to obtain a feature vector of each sampling position;
specifically, the filtering and sampling process may specifically be: filtering the pre-interpolation image by utilizing a one-dimensional first-order gradient operator and a one-dimensional second-order gradient operator to generate four corresponding characteristic images; and sampling the four characteristic images to obtain a characteristic vector of each sampling position.
S22, carrying out edge detection on the pre-interpolation image to obtain an edge image; sampling the edge image to obtain an edge image block;
s23, sampling the pre-interpolation image to obtain a pre-interpolation image block;
S24, screening the feature vectors of all sampling positions according to the edge image block at each sampling position, retaining the feature vectors whose edge pixel intensity value is greater than 0;
S25, grouping the screened feature vectors according to the fixed-point distribution patterns contained in the pre-interpolation image blocks; the number of groups of feature vectors is denoted H, and the random forest is divided into H groups accordingly;
S26, for any h ∈ [1, H], reducing the dimension of the h-th group of feature vectors to obtain the feature vectors of the pre-interpolation image for training.
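Steps S24 and S26 can be sketched as follows; PCA via SVD is an assumed choice of dimension-reduction method, which the embodiment leaves open:

```python
import numpy as np

def screen_by_edges(features, edge_strength):
    """Step S24 (sketch): keep only the feature vectors whose edge
    image block has pixel intensity greater than 0, since interpolation
    errors concentrate in edge regions."""
    return features[edge_strength > 0]

def reduce_dim(features, k):
    """Step S26 (sketch): dimension reduction of one group of feature
    vectors. PCA via SVD is an assumption; the text only says the
    features are reduced in dimension."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T   # project onto the top-k components
```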
Fig. 2 illustrates a generation process of a first-level residual, that is, a difference is made between a high-resolution image and a pre-interpolation image to obtain an initial residual. Fig. 2 also illustrates the initial residual pre-processing process, which includes the following steps:
s31, sampling the initial residual error to obtain a residual error image block at each sampling position;
S32, for any h ∈ [1, H], determining the residual image blocks corresponding to the h-th group of feature vectors of the pre-interpolation image to obtain the h-th group of residual vectors of the initial residual. That is, the residual vectors and the dimension-reduced feature vectors are matched by sampling position, associating the feature vectors processed in S26 with the residual vectors one by one.
As described above, a set of training data is used to train a set of random forests, i.e., a set of feature vectors and their corresponding residual vectors train a set of random forests. In detail, the training process is: for any h ∈ [1, H], training the h-th group of random forests with the h-th group of feature vectors of the pre-interpolation image and the h-th group of residual vectors of the initial residual.
In fig. 2, the pre-interpolation image, the edge image, and the residual image are sampled in the same manner. As a specific embodiment, the sampling rule may be: sampling at intervals with a step size of 1. In this case, there are 4 fixed-point distribution patterns in the sampling result of the pre-interpolation image, the number of groups of feature vectors is 4 (i.e., H = 4), and the random forest is correspondingly divided into 4 groups.
On the basis of the first embodiment, the training process of the random forest is described in detail below. This is provided only as one feasible training method; the embodiment does not limit which training method is adopted.
As described in S13, in the training process of this embodiment the random forest grows synchronously layer by layer. As shown in FIG. 3, at the first level the pre-interpolation image X and the first-level residual R^(1) serve as training data; the first-level nodes of the random forest learn the mapping relation between the two, from which the estimated residual of the first-level residual R^(1) is obtained. Subtracting the estimated residual from the first-level residual R^(1) gives the refined residual F, which serves as the second-level residual R^(2). In the second-level random forest, the pre-interpolation image X and the second-level residual R^(2) serve as training data, and so on.
In the above S13, the process of learning the mapping relationship between the pre-interpolated image and the residual error of the current level at any level of the random forest to generate the estimated residual error specifically includes the following steps:
S40, for any h ∈ [1, H], initializing the root nodes of all decision trees of the h-th group of random forests with the h-th group of feature vectors of the pre-interpolation image and the h-th group of residual vectors of the initial residual;
s41, controlling all decision trees to grow synchronously according to layers, judging whether unprocessed target nodes exist at any level of the random forest, if yes, entering S42, and if not, entering S43;
s42, generating a first linear transformation from the feature vector contained in the target node to the residual vector contained in the target node, and further generating a second linear transformation from the feature vector contained in the target node to the target residual vector, wherein the target residual vector is a residual vector which has an intersection with the target node in the residual vectors contained in the root node;
s43, judging whether the splitting termination condition is reached, if so, entering S44, otherwise, entering S45;
s44, determining that the target node is a leaf node and recording its second linear transformation; the second linear transformations of all leaf nodes together constitute the mapping relation between the pre-interpolated image and the residual of the current level;
s45, determining that the target node is an internal node and entering the next level through node splitting; in the node splitting process, splitting parameters are randomly selected, the target node is split, the optimal splitting parameter is determined according to the error reduction before and after splitting, and the optimal splitting parameter of the target node is recorded.
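Steps S40–S45 describe a breadth-first, layer-synchronous growth loop: every node of the current layer is processed before the next layer begins. A minimal sketch, with a placeholder split rule standing in for the randomized split search of S45 and plain lists standing in for the feature/residual data:

```python
from collections import deque

def grow_by_layers(root_data, max_level, min_samples=4):
    """Grow one tree breadth-first, one layer at a time, as in S41-S45.

    `root_data` is a list of samples; `split` below is a stand-in for the
    patent's randomized split-parameter search.
    """
    def split(samples):
        mid = len(samples) // 2          # placeholder split rule
        return samples[:mid], samples[mid:]

    leaves = []
    frontier = deque([(root_data, 0)])   # (samples in node, layer index)
    while frontier:
        samples, level = frontier.popleft()
        # S43: splitting terminates at max depth or when the node is small.
        if level >= max_level or len(samples) < min_samples:
            leaves.append(samples)       # S44: record as a leaf node
            continue
        left, right = split(samples)     # S45: internal node splits
        frontier.append((left, level + 1))
        frontier.append((right, level + 1))
    return leaves

leaves = grow_by_layers(list(range(16)), max_level=2)
```

With 16 samples and a depth cap of 2, the loop produces four leaves of four samples each; in the patent, each leaf would additionally store its second linear transformation.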
It can be seen that in this embodiment a node not only computes the linear transformation between the feature vectors it contains and the residual vectors it contains, but also superimposes onto it the linear transformations of all its ancestor nodes, thereby computing the linear transformation between its own feature vectors and the target residual vectors, i.e. those residual vectors that intersect the node. Consequently, only leaf nodes need to record a linear transformation; internal nodes do not. Likewise, since leaf nodes never split again, they need not record an optimal splitting parameter; only internal nodes record one.
In summary, because residuals are additive, the linear transformations computed from them are additive as well: the linear transformation computed at each level is superimposed onto those of its child nodes (the process of generating the second linear transformation from the first linear transformation described above), and, viewed the other way, the linear transformation computed at a child node refines that of its parent.
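The superposition of ancestor transformations can be checked directly. This sketch (NumPy; a plain least-squares fit stands in for whatever regularized fit the patent's unreproduced formula uses) fits a root map, passes the refined residual down to a child holding a subset of the samples, and verifies that the child's second transformation — its own map plus the ancestor's — approximates the original residual at least as well as the root alone:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 20))   # feature vectors held by the root (20 samples)
R = rng.normal(size=(3, 20))   # residual vectors held by the root

def least_squares_map(F, T):
    # First linear transformation of a node: W minimizing ||T - W F||_F.
    return T @ np.linalg.pinv(F)

W_root = least_squares_map(X, R)

# A child node holds a subset of the root's samples.
X_c, R_c = X[:, :10], R[:, :10]
# The child fits its *remaining* residual (what the ancestor left over) ...
W_child = least_squares_map(X_c, R_c - W_root @ X_c)
# ... and its second transformation superimposes the ancestor map (cf. S522).
W_second = W_root + W_child

err_root = np.linalg.norm(R_c - W_root @ X_c)
err_second = np.linalg.norm(R_c - W_second @ X_c)
```

By least-squares optimality the refined error never exceeds the ancestor's error on the child's samples, which is the sense in which the child "refines" its parent.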
The second embodiment of the image interpolation model training method based on the residual guiding strategy provided by the present application is described in detail below.
In the second embodiment, the image interpolation model is a K-level cascade random forest, and each level of random forest is divided into four groups of random forests. The inputs and outputs of example two are as follows:
inputting: a training image data set, the maximum height (maximum number of levels) L of each random forest, the number N of decision trees contained in each random forest, and the number of levels K of the cascaded random forest.
And (3) outputting: the trained cascaded random forests {F^(k)}, k = 1, ..., K, where each random forest contains N decision trees, i.e. F^(k) = {T_1^(k), ..., T_N^(k)}.
This embodiment divides the whole training process into three phases: the method comprises a data preparation stage, a first-stage random forest training stage and other stages of random forest training stages. Each stage is described below.
First, a data preparation phase
S401, converting the high-resolution images from the RGB color space to the YCbCr color space; training is then performed only on the Y-channel images.
S402, down-sampling each high-resolution image {I_Y} at one-pixel intervals to simulate a low-resolution image acquired under real conditions.
S403, performing pre-interpolation on the low-resolution image with the Bicubic algorithm so that it has the same size as the original image; the pre-interpolated image {I_X} replaces the low-resolution image and participates in training as feature data.
S404, taking the difference between the corresponding high-resolution image I_Y and the pre-interpolated image I_X to obtain the residual image {I_R}, which replaces the high-resolution image {I_Y} and participates in training as label data.
S405, detecting the edges of the pre-interpolated image with the Canny edge detection function in Matlab to obtain the edge image {I_E}.
S406, filtering the pre-interpolated image with a one-dimensional first-order gradient operator and a one-dimensional second-order gradient operator to generate four corresponding feature images. Image blocks are sampled from the four feature images at a step of 1, producing four image blocks of size 5 × 5 at each position. The image blocks are vectorized (from 5 × 5 to 25 × 1) and then stitched (from four 25 × 1 vectors to one 100 × 1 vector) to obtain the stitched vector x_i used as a feature vector for training.
The first order gradient operator and the second order gradient operator are in the following forms:
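A sketch of the feature extraction in S406. The exact operator taps appear only in an unreproduced figure, so the first-order tap [-1, 0, 1] and second-order tap [1, 0, -2, 0, 1] commonly used in patch-based super-resolution are assumptions here; the 5 × 5 patch size and the 4 × 25 → 100 stitching follow the text:

```python
import numpy as np

# Assumed operator taps (the patent's figure is not reproduced).
g1 = np.array([-1.0, 0.0, 1.0])            # first-order gradient
g2 = np.array([1.0, 0.0, -2.0, 0.0, 1.0])  # second-order gradient

def filter_image(img, kernel, axis):
    # Same-size 1-D convolution along rows (axis=1) or columns (axis=0).
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), axis, img)

def feature_vectors(img, patch=5):
    # Four feature images: each operator applied horizontally and vertically.
    feats = [filter_image(img, g1, 1), filter_image(img, g1, 0),
             filter_image(img, g2, 1), filter_image(img, g2, 0)]
    h, w = img.shape
    vecs = []
    for i in range(h - patch + 1):         # stride-1 patch sampling (S406)
        for j in range(w - patch + 1):
            # Vectorize each 5x5 patch (25x1) and stitch the four (100x1).
            vecs.append(np.concatenate(
                [f[i:i + patch, j:j + patch].ravel() for f in feats]))
    return np.stack(vecs, axis=1)          # one column per position

X = feature_vectors(np.arange(64.0).reshape(8, 8))
```

An 8 × 8 test image yields (8 − 5 + 1)² = 16 positions, i.e. a 100 × 16 feature matrix, matching the X = [x_1, ..., x_D] layout of S408.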
s407, sampling the pre-interpolated image {I_X}, the residual image {I_R} and the edge image {I_E} in the same way to obtain pre-interpolated image blocks, residual image blocks and edge image blocks.
S408, each feature vector x_i has a corresponding residual image block r_i. The feature vectors are assembled into a matrix X = [x_1, x_2, ..., x_D] and the residual image blocks into a residual matrix; together they form the group {X, R^(1)} that jointly participates in training the random forest, where D is the number of image blocks and the superscript indicates the number of residual-refinement iterations performed, i.e. the layer index within the decision tree. The residual matrix takes the form R^(1) = [r_1, r_2, ..., r_D].
s409, using the edge image blocks, selecting the feature vectors whose edge pixels have intensity values greater than 0, and retaining those feature vectors and their corresponding residual image blocks.
And S410, grouping the feature vectors according to the distribution pattern of the fixed points contained in the sampling result of the pre-interpolated image blocks. There are only four distribution patterns, so the feature vectors are divided into four groups.
S411, applying PCA to reduce the dimensionality of each group of feature vectors separately, and storing the four PCA matrices P_j (j = 1, 2, 3, 4) together with the dimension-reduced feature matrices.
S412, the training data used to train the four groups of random forests at each level are thus obtained.
It can be understood that, since the feature vectors are divided into four groups by fixed-point pattern, there are four groups of training data matrices; each group contains a matrix of stitched, dimension-reduced feature vectors and the corresponding residual matrix. Each level of the cascade contains four groups of random forests, corresponding to the four fixed-point patterns. Viewed as a whole, all feature vectors X jointly train a first-level random forest; in detail, X is divided by pattern into four groups X_1, X_2, X_3, X_4, which separately train the four groups of random forests within that level.
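The per-group PCA of S410–S411 can be sketched as follows (NumPy SVD; the random group assignment and the 30 retained dimensions are illustrative stand-ins, since the patent does not state the reduced dimensionality):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 400))          # 400 feature vectors of dimension 100
groups = rng.integers(0, 4, size=400)    # stand-in for the 4 fixed-point modes

def pca_matrix(Xg, keep):
    # Principal directions of the column-sample matrix Xg via SVD.
    Xc = Xg - Xg.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :keep].T                 # keep x 100 projection matrix

# One PCA matrix P_j per group, as in S411.
P = {j: pca_matrix(X[:, groups == j], keep=30) for j in range(4)}
X_red = {j: P[j] @ X[:, groups == j] for j in range(4)}
```

Each group keeps its own projection matrix because, at interpolation time (S79), the stored P_j must be reused on new feature vectors of the same group.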
Second, training phase of first-level random forest
The first-level random forest F^(1) is trained according to the residual guiding strategy as follows:
s51, for the first-level random forest F^(1) to be trained, let F_j^(1) denote the set of decision trees in the j-th group; the root nodes are initialized with the data {X_j, R_j^(1)}.
S52, the N decision trees in the j-th group grow synchronously by layer; while all decision trees in the random forest are training the l-th layer (l = 1, 2, ..., L − 1), any unprocessed node α among them is split.
S53, refining the residual while the current layer l is less than L − 1: for the n-th decision tree, if some node β holds an unrefined residual, the residual is refined as follows: using the linear transformation W_β obtained for node β, estimate the residual of the features X_β it contains, where each residual vector can be estimated as r̂_i = W_β x_i.
Thereafter, refinement of the residual in node β is completed by subtracting the estimate: R_β^(l+1) = R_β^(l) − R̂_β^(l).
In the above S52, the node splitting process specifically includes:
s521, node α contains the data {X_α, R_α}; the linear transformation from X_α to R_α, i.e. the first linear transformation mentioned hereinbefore, is obtained by solving the least-squares problem W̃_α = argmin_W ‖R_α − W X_α‖².
s522, the ancestor nodes of node α have already been trained and their linear transformations obtained; summing those transformations with W̃_α gives W_α = Σ_{i=0}^{l} W̃_{α^(i)}, the linear transformation from X_α to R'_α, i.e. the second linear transformation mentioned before, where R'_α denotes the residual vectors in R^(1) that intersect node α. Here α^(i) refers to an ancestor node of node α: α^(0) is its root node, α^(l−1) is its parent node, and α^(l) is node α itself.
S523, if the number of feature vectors contained in node α is less than 200, or the current layer l equals L − 1, splitting does not continue; node α is marked as a leaf node and W_α is stored for use in the interpolation stage.
S524, randomly selecting a series of splitting parameters Θ_1, Θ_2, ..., Θ_P, where Θ_p = {θ_1, θ_2, τ} and θ_j (j = 1, 2) indexes a row of X_α. The i-th feature vector is classified according to the result of the splitting function S(x_i, Θ_p) = x_i[θ_1] − x_i[θ_2] − τ: if S(x_i, Θ_p) ≥ 0 it is assigned to child node β, otherwise to child node γ.
As a specific embodiment, P = 6. τ represents a threshold gray value; since gray values are normalized to [0, 1] in practical applications, τ ∈ [0, 1].
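The splitting function of S524 routes each feature vector by comparing two of its entries against a threshold. A sketch, with illustrative indices θ_1 = 4, θ_2 = 7 and τ = 0:

```python
import numpy as np

def split_node(X, theta1, theta2, tau):
    """S524: route each feature vector by S(x) = x[theta1] - x[theta2] - tau."""
    s = X[theta1, :] - X[theta2, :] - tau
    beta = X[:, s >= 0]                  # child beta receives S(x) >= 0
    gamma = X[:, s < 0]                  # child gamma receives the rest
    return beta, gamma

rng = np.random.default_rng(3)
X = rng.uniform(size=(100, 50))          # 50 normalized feature vectors
beta, gamma = split_node(X, theta1=4, theta2=7, tau=0.0)
```

The split is a simple pairwise pixel comparison, so it is cheap to evaluate both when trying the P random candidates during training and when routing vectors at interpolation time.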
S525, selecting the optimal splitting parameter theta by using the error reduction amount before and after splittingpFinally, selecting the error reduction GαMaximum parameter ΘPThe node alpha is split. Error calculation methodThe formula is as follows:
wherein the content of the first and second substances,to take advantage of linear transformations held in the node deltaFor residual error RδEstimation error of DδIs the number of feature vectors in the node delta.
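The split-quality criterion of S525 can be sketched as follows. The patent's exact error formula is shown only in an unreproduced equation, so this sketch scores a candidate split by the drop in summed least-squares fitting error, which matches a D-weighted form of per-node errors up to the common 1/D_α factor:

```python
import numpy as np

def node_error(X, R):
    # Residual fitting error of a node with its own least-squares map.
    W = R @ np.linalg.pinv(X)
    return np.linalg.norm(R - W @ X) ** 2

def error_reduction(X, R, mask):
    """Score a split: parent fitting error minus the summed errors of the
    two children it would produce; larger is a better split (cf. S525)."""
    e_parent = node_error(X, R)
    e_children = (node_error(X[:, mask], R[:, mask]) +
                  node_error(X[:, ~mask], R[:, ~mask]))
    return e_parent - e_children

rng = np.random.default_rng(4)
X = rng.normal(size=(10, 40))
R = rng.normal(size=(4, 40))
mask = X[0, :] - X[1, :] >= 0.0          # one candidate split S(x) = x[0]-x[1]
gain = error_reduction(X, R, mask)
```

Because each child refits its own map, the summed child error never exceeds the parent error, so the gain of any split is non-negative; the training loop simply keeps the candidate with the largest gain.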
After the n-th decision tree in the j-th random forest completes the splitting of its l-th layer, the random forest F_j^(1) is updated.
In the same random forest, because the splitting parameters of the decision tree in the node splitting process are randomly selected, the training results are different even if the input of different decision trees is the same. That is to say, the training results of different decision trees in the same random forest are different, and the training results specifically refer to: the optimal splitting parameters stored in the root node and the internal node and the mapping relation stored in the leaf node.
In addition, within the same group of random forests the mapping relations obtained in the individual decision trees have the same dimensions and are therefore additive. Suppose a group of random forests contains two decision trees; a feature vector x input to the forest is routed to some leaf node of tree 1 and some leaf node of tree 2, so it corresponds to two mapping relations W_1 and W_2. The two mappings can each be applied to x, giving the estimates r̂_1 = W_1 x and r̂_2 = W_2 x, whose sum completes the mapping of x; alternatively the mappings themselves can be added to obtain W = W_1 + W_2, from which r̂ = W x is obtained directly.
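This additivity can be checked directly: applying two leaf mappings and summing the results equals applying their sum once, by linearity. A sketch with random stand-in mappings:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=30)                  # one reduced feature vector
W1 = rng.normal(size=(25, 30))           # leaf mapping reached in tree 1
W2 = rng.normal(size=(25, 30))           # leaf mapping reached in tree 2

# Apply each mapping and add the results ...
r_sum = W1 @ x + W2 @ x
# ... or add the mappings first and apply once: identical by linearity.
r_combined = (W1 + W2) @ x
```

This is why leaf nodes only need to store a matrix: combining trees never requires re-running them, just matrix addition.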
It should be noted that the residual refinement mentioned in the present embodiment is a process of determining a next-level residual according to a current-level residual.
Third, training phase of the rest level random forest
It is noted that, for training the k-th level (k > 1) random forest F^(k), the high-resolution images must be distinct from the training data used to train F^(1), ..., F^(k−1), and the pre-interpolated images are generated by interpolating sequentially through F^(1), ..., F^(k−1). The remaining training steps are the same as in the first-level random forest training process and are not repeated here.
Finally, the K-level cascaded random forest {F^(k)}, k = 1, ..., K, i.e. the image interpolation model determined by the mapping relations of all levels, is output. In practical applications K may take the value 4.
In summary, the image interpolation model training method based on the residual guiding strategy provided by this embodiment aims to obtain a high-resolution image from a low-resolution image while ensuring that both the objective indices and the subjective impression of the interpolated image improve substantially. This embodiment mainly describes the offline training stage: during the construction of each decision tree, node splitting and data refinement are performed iteratively, where data refinement comprises data partitioning and residual updating, and the updated residual is used to train the next level. In addition, a cascade strategy is introduced to further improve interpolation quality; viewed at a higher level, the cascade likewise uses the image residual to guide the training of the model.
The above two embodiments describe the training process of the image interpolation model, and the following describes the image interpolation process of the image interpolation model trained in the above manner.
First, a first embodiment of an image interpolation method based on a residual guiding strategy provided in the present application is described, as shown in fig. 4, the embodiment includes the following steps:
s61, acquiring a low-resolution image to be interpolated;
s62, generating a pre-interpolation image according to the low-resolution image;
s63, inputting the pre-interpolation image into the trained random forest; at any level of the random forest, generating an estimated residual error according to a mapping relation between a pre-interpolation image learned in a training process and a residual error of a current level;
and S64, generating an interpolation image of the low-resolution image according to the estimation residual error of each layer and the pre-interpolation image.
In summary, in the online image interpolation stage a given low-resolution image passes through the trained cascaded random forests in sequence and is thereby interpolated. As shown in FIG. 5, within each level of random forest the image, in the form of feature vectors, is partitioned from top to bottom in each decision tree; each feature vector reaches a leaf node, an estimated residual is generated with the linear transformation stored in that leaf node, and the recombined estimated residual is superimposed on the pre-interpolated image, yielding the interpolation result of the current level.
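The online cascade of S61–S64 reduces to a loop that repeatedly estimates a residual and adds it back. A sketch, with a toy callable standing in for each trained level (each toy level recovers half of the remaining error toward a known target):

```python
import numpy as np

def cascade_interpolate(pre_interp, levels):
    """Pass an image through the cascade: each level estimates a residual
    from the current image and superimposes it (S63-S64). `levels` holds
    stand-in estimators (callables) in place of trained random forests."""
    img = pre_interp
    for estimate_residual in levels:
        img = img + estimate_residual(img)
    return img

# Toy stand-in: each "level" recovers half of the remaining error.
target = np.full((4, 4), 10.0)
levels = [lambda img: 0.5 * (target - img)] * 4
out = cascade_interpolate(np.zeros((4, 4)), levels)
```

After four such levels the remaining error shrinks by a factor of 2⁴, mirroring how each real level of the cascade corrects what the previous levels left behind.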
The following describes a second embodiment of the image interpolation method based on the residual guiding strategy provided in the present application.
The inputs and outputs of example two are as follows:
And (3) outputting: the interpolated image.
This embodiment divides the whole interpolation process into two stages: a data preparation stage and an image interpolation stage. These two stages will be described separately below.
First, a data preparation phase
S71, converting the low resolution image from the RGB color space to the YCbCr color space.
S72, for the Y-channel image I_Y, generating a pre-interpolated image. Specifically, when interpolating with the first-level random forest, the low-resolution image is pre-interpolated with the Bicubic algorithm to obtain the pre-interpolated image; when the random forests of the other levels are used, the interpolation result of the previous level is used directly as the pre-interpolated image.
S73, detecting the edges of the pre-interpolated image with the Canny edge detection function in Matlab to obtain the edge image {I_E}.
S74, filtering the pre-interpolated image with the one-dimensional first-order gradient operator and second-order gradient operator to generate four corresponding feature images; sampling image blocks from the four feature images at a step of 1, producing four image blocks of size 5 × 5 at each position; vectorizing and stitching the image blocks to obtain the stitched vector x_i.
S75, sampling the pre-interpolated image {I_X} and the edge image {I_E} in the same way to obtain pre-interpolated image blocks and edge image blocks.
S76, assembling the feature vectors into a matrix X = [x_1, x_2, ..., x_D], where D is the number of image blocks.
And S77, using the edge image blocks, selecting the feature vectors whose edge pixels have intensity values greater than 0 and retaining them.
And S78, grouping the feature vectors according to the distribution pattern of the fixed points contained in the sampling result of the pre-interpolated image blocks. There are only four distribution patterns, so the feature vectors are divided into four groups.
S79, reducing the dimensionality of each group of feature vectors with the PCA matrices obtained in the training phase, and keeping the dimension-reduced feature matrices.
Second, an image interpolation stage
The k-th level random forest F^(k) performs image interpolation on the processed feature matrices as follows:
s81, for the random forest F^(k), let F_j^(k) denote the set of decision trees in the j-th group; the root nodes are initialized with the corresponding feature matrix X_j;
s82, the feature matrix is passed from top to bottom until the leaf nodes are reached. Specifically, when the feature matrix X_α arrives at an internal node α, it is not passed down as is; it is partitioned according to the optimal splitting parameter Θ recorded at node α;
s83, when the feature matrix is passed to a leaf node ρ, the linear transformation W_ρ stored in the leaf node is used to generate the estimated residual;
S84, each decision tree finally outputs an estimated residual matrix. To distinguish the predictions of different decision trees, the residual matrix estimated by the n-th decision tree is denoted R̂_n; the estimated residual of the random forest F^(k) is then obtained from the per-tree estimates R̂_1, ..., R̂_N.
S85, recombining the predicted residual vectors into a residual image and handling the overlapping regions, specifically as follows: prepare two zero matrices of the same size as the interpolated image; as the image blocks are placed, one matrix accumulates the residual image, while the other keeps a per-position count of the overlaps; the final residual image is then obtained by element-wise averaging.
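The overlap handling in S85 can be sketched with an accumulator and a count matrix; pixels covered by several blocks are averaged element-wise:

```python
import numpy as np

def recombine(patches, positions, shape, patch=5):
    """S85: accumulate predicted residual patches into one image and divide
    by a per-pixel overlap count to average the overlapping regions."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for p, (i, j) in zip(patches, positions):
        acc[i:i + patch, j:j + patch] += p
        cnt[i:i + patch, j:j + patch] += 1.0
    cnt[cnt == 0] = 1.0                  # leave uncovered pixels at zero
    return acc / cnt

# Two overlapping constant patches average to their mean in the overlap.
patches = [np.full((5, 5), 2.0), np.full((5, 5), 4.0)]
residual = recombine(patches, [(0, 0), (0, 2)], (5, 7))
```

With patches of value 2 and 4 placed two columns apart, the overlap region averages to 3 while the non-overlapping columns keep their own values.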
S86, superimposing the final residual image on the pre-interpolated image I_X to obtain the interpolated image of the k-th level random forest F^(k).
If k < K, the interpolated image is used as the pre-interpolated image of the next-level random forest F^(k+1); otherwise, the interpolated image is restored to a color image. Specifically, the Cb-channel and Cr-channel images of the low-resolution image are interpolated with the Bicubic algorithm, and the three channel images are merged and converted back to the RGB color space.
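The color restoration at the end of the pipeline can be sketched as follows; nearest-neighbour zooming via np.kron stands in for the Bicubic interpolation of the Cb/Cr channels, and the final YCbCr→RGB conversion is omitted for brevity:

```python
import numpy as np

def restore_color(y_interp, cb_low, cr_low, upscale):
    """Only the Y channel goes through the model; the chroma channels are
    upscaled with a cheap interpolator (nearest-neighbour here stands in
    for Bicubic) and the three planes are stacked back together."""
    cb = np.kron(cb_low, np.ones((upscale, upscale)))  # nearest-neighbour zoom
    cr = np.kron(cr_low, np.ones((upscale, upscale)))
    return np.stack([y_interp, cb, cr], axis=-1)       # YCbCr planes

ycbcr = restore_color(np.zeros((8, 8)), np.ones((4, 4)), np.ones((4, 4)), 2)
```

Restricting the learned model to luminance is the usual choice in this literature: the eye is far more sensitive to Y-channel detail, so cheap chroma interpolation costs little perceptual quality.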
Experiments prove that compared with other mainstream image interpolation algorithms, the random forest image interpolation method based on the residual error guiding strategy of the embodiment has the advantage that the objective index of the image interpolation result is obviously improved.
In addition, the present application also provides a computer device, comprising:
a memory: for storing a computer program;
a processor: for executing the computer program for implementing the residual guiding strategy based image interpolation model training method as described above, and/or the residual guiding strategy based image interpolation method as described above.
Finally, the present application provides a readable storage medium having stored thereon a computer program for implementing, when being executed by a processor, the residual guiding strategy based image interpolation model training method as described above, and/or the residual guiding strategy based image interpolation method as described above.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above detailed descriptions of the solutions provided in the present application, and the specific examples applied herein are set forth to explain the principles and implementations of the present application, and the above descriptions of the examples are only used to help understand the method and its core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. An image interpolation model training method based on a residual guiding strategy is characterized by comprising the following steps:
acquiring a high-resolution image; down-sampling the high-resolution image to obtain a low-resolution image; generating a pre-interpolation image according to the low-resolution image;
performing difference on the high-resolution image and the pre-interpolation image to obtain an initial residual error;
training a random forest by using the pre-interpolation image and the initial residual error; in the training process, the random forest grows synchronously according to layers, the initial residual is used as a first-level residual, a mapping relation between the pre-interpolation image and the current-level residual is learned at any level of the random forest to generate an estimated residual, and the current-level residual and the estimated residual are subjected to subtraction to obtain a next-level residual;
and when the training termination condition is reached, outputting the random forest determined by the mapping relation of each level as an image interpolation model.
2. A method as recited in claim 1, wherein the random forest is divided into groups of random forests, the training of the random forests using the pre-interpolated images and the initial residuals comprising:
generating a feature vector of the pre-interpolated image;
grouping the feature vectors according to a fixed point distribution mode, wherein the grouping number of the feature vectors is equal to that of the random forest;
and when training the random forest, training a group of random forests by using each group of the feature vectors and corresponding residual vectors in the initial residual.
3. The method of claim 2, wherein the generating the feature vector for the pre-interpolated image comprises:
filtering the pre-interpolation image by utilizing a one-dimensional first-order gradient operator and a one-dimensional second-order gradient operator to generate four corresponding characteristic images; and sampling the four characteristic images to obtain a characteristic vector of each sampling position.
4. The method according to claim 3, wherein the four characteristic images are sampled in a manner specifically as follows: sampling at intervals with step length of 1;
correspondingly, the number of the fixed point distribution modes is 4, and the grouping number of the feature vectors and the grouping number of the random forests are both 4.
5. The method according to claim 1, characterized in that the image interpolation model specifically comprises K levels of the random forest; the pre-interpolation image of the first-level random forest is an image generated by using a preset interpolation algorithm, and for any k ∈ [2, K], the pre-interpolation image of the k-th level random forest is an image obtained by sequentially carrying out interpolation through the previous k − 1 levels of random forests.
6. A method as claimed in claim 5, wherein the high resolution images of different levels of random forests are different in the image interpolation model.
7. A method as claimed in any one of claims 1 to 6, wherein the random forest is grown in layers simultaneously, comprising:
judging whether unprocessed target nodes exist at any level of the random forest;
if the residual vector exists, generating a first linear transformation from the feature vector contained in the target node to the residual vector contained in the target node, and further generating a second linear transformation from the feature vector contained in the target node to a target residual vector, wherein the target residual vector is a residual vector intersected with the target node;
if not, judging whether the splitting termination condition is reached;
if yes, determining that the target node belongs to leaf nodes and recording the second linear transformation of the target node, the second linear transformations of all leaf nodes constituting the mapping relation between the pre-interpolation image and the residual of each level;
if not, determining that the target node belongs to the internal node, and entering the next level through node splitting; in the node splitting process, splitting parameters are randomly selected, the target node is split, the optimal splitting parameter is determined according to the error reduction amount before and after splitting, and the optimal splitting parameter of the target node is recorded.
8. An image interpolation method based on a residual guiding strategy is characterized by comprising the following steps:
acquiring a low-resolution image to be interpolated;
generating a pre-interpolation image according to the low-resolution image;
inputting the pre-interpolation image into a trained random forest; generating an estimated residual error at any level of the random forest according to a mapping relation between a pre-interpolation image learned in a training process and a residual error of a current level;
and generating an interpolation image of the low-resolution image according to the estimation residual and the pre-interpolation image of each level.
9. A computer device, comprising:
a memory: for storing a computer program;
a processor: for executing the computer program to implement the residual guiding strategy based image interpolation model training method according to any one of claims 1 to 7 and/or the residual guiding strategy based image interpolation method according to claim 8.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program, which when being executed by a processor is adapted to implement the residual guiding strategy-based image interpolation model training method according to any one of claims 1 to 7 and/or the residual guiding strategy-based image interpolation method according to claim 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110830807.8A CN113538242A (en) | 2021-07-22 | 2021-07-22 | Image interpolation model training method based on residual error guide strategy |
PCT/CN2021/125577 WO2023000526A1 (en) | 2021-07-22 | 2021-10-22 | Image interpolation model training method based on residual-guided policy |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113538242A true CN113538242A (en) | 2021-10-22 |
Family
ID=78120492
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106910161A (en) * | 2017-01-24 | 2017-06-30 | 华南理工大学 | A kind of single image super resolution ratio reconstruction method based on depth convolutional neural networks |
CN108447020A (en) * | 2018-03-12 | 2018-08-24 | 南京信息工程大学 | A kind of face super-resolution reconstruction method based on profound convolutional neural networks |
US20200151852A1 (en) * | 2018-11-09 | 2020-05-14 | Hong Kong Applied Science And Technology Research Institute Co., Ltd. | Systems and methods for super-resolution synthesis based on weighted results from a random forest classifier |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101107256B1 (en) * | 2007-03-27 | 2012-01-19 | 삼성전자주식회사 | Method and apparatus for adaptively converting frame rate based on motion vector and display device with adaptive frame rate conversion capability |
CN109564677B (en) * | 2018-11-09 | 2022-09-27 | 香港应用科技研究院有限公司 | Super-resolution synthesis system and method based on random forest classifier weighting result |
CN109741255A (en) * | 2018-12-12 | 2019-05-10 | 深圳先进技术研究院 | PET image super-resolution reconstruction method, device, equipment and medium based on decision tree |
Non-Patent Citations (3)

Title |
---|
RUN SU et al.: "Single image super-resolution via a progressive mixture model", 2020 IEEE International Conference on Image Processing (ICIP), pages 508-512 |
LIU Zhenpeng et al.: "FS-CRF: an outlier detection model based on feature segmentation and cascaded random forest", Computer Science, vol. 47, no. 8, pages 185-188 |
TU Panpan et al.: "Research on a gear edge detection method based on mathematical morphology", Journal of Mechanical & Electrical Engineering, vol. 36, no. 08, pages 839-850 |
Also Published As
Publication number | Publication date |
---|---|
WO2023000526A1 (en) | 2023-01-26 |
CN116091893A (en) | Method and system for deconvolution of seismic image based on U-net network | |
CN113096020B (en) | Calligraphy font creation method for generating confrontation network based on average mode |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||