CN112102334A - Test data generation method based on segmentation variation - Google Patents

Test data generation method based on segmentation variation Download PDF

Info

Publication number
CN112102334A
CN112102334A
Authority
CN
China
Prior art keywords
image
test data
segmentation
seed
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010966125.5A
Other languages
Chinese (zh)
Other versions
CN112102334B (en
Inventor
张智轶
王璞
周玉倩
陈兵
陶传奇
黄志球
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202010966125.5A
Publication of CN112102334A
Application granted
Publication of CN112102334B
Status: Active

Classifications

    • G06T7/11 — Region-based segmentation
    • G06T7/174 — Segmentation; Edge detection involving the use of two or more images
    • G06T7/187 — Segmentation; Edge detection involving region growing, region merging or connected component labelling
    • G06T2207/20021 — Dividing image into blocks, subimages or windows
    • G06T2207/20112 — Image segmentation details
    • G06T2207/30212 — Military


Abstract

The invention discloses a test data generation method based on segmentation mutation. First, a certain amount of seed image test data is collected according to the software under test, and the feature region of the seed image test data that needs mutation is determined in combination with domain knowledge; this feature region is called an image component. The image component is then segmented with a region-based image segmentation method combined with the value of an energy function. Next, the segmented image component is mutated with mutation rules (including pixel color transformation, adding or subtracting 1 to pixel values, and the like) to obtain a new, mutated image component. Finally, the new image component is combined with the segmented seed image test data to obtain new image test data.

Description

Test data generation method based on segmentation variation
Technical Field
The invention relates to a test data generation method in a software test technology, in particular to a test data generation method based on segmentation variation, and belongs to the technical field of computers.
Background
At present, military image recognition information services can effectively assist users in quickly identifying image information. For an evaluation organization, however, such services pose problems: test images are few, and part of the test data may depend on the manufacturers' training data, which affects the reliability of test results and limits the defect-finding capability. How to obtain sufficient and reliable test data therefore becomes a key issue.
Disclosure of Invention
The technical problem to be solved by the invention: for an evaluation organization, the test images of military information services are few, and part of the test data may also depend on the manufacturers' training data, so that the reliability of the test results is affected and the defect-finding capability is low.
The invention adopts the following technical scheme for solving the technical problems:
a test data generation method based on segmentation variation comprises the following steps:
1) randomly selecting a plurality of image test data from the factory test data set of the software under test as seed image test data;
2) determining the feature region requiring mutation in the seed image test data according to the domain knowledge of the software under test, and segmenting this feature region out of the seed image test data; the segmented region is recorded as an image component;
3) mutating the image component of step 2) with a mutation rule to generate a new image component;
4) combining the new image component of step 3) into the seed image test data segmented in step 2) by means of an image stitching technique to form new image test data.
Further, the seed image test data is segmented by using an image segmentation method based on the region in the step 2).
Further, the mutation rule comprises the change of image pixels, the addition of an additional connected region, the size enlargement and reduction of an image component and the affine transformation of the image component.
Further, the change of the image pixel includes a pixel color change, a pixel gray value change, a pixel value increase or decrease.
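A few of the mutation rules listed above can be sketched in Python with NumPy. The function names, parameter defaults, and the choice of nearest-neighbour upscaling are illustrative assumptions, not the patent's exact implementation (the description later notes the rules are realized with the opencv library).

```python
import numpy as np

def mutate_pixel_values(component: np.ndarray, delta: int = 1) -> np.ndarray:
    """Pixel-value mutation: add (or, with a negative delta, subtract) a
    small amount from every pixel, clipped to the valid 8-bit gray range."""
    return np.clip(component.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def mutate_gray_transform(component: np.ndarray) -> np.ndarray:
    """A simple pixel gray-value transformation: invert the gray levels."""
    return (255 - component.astype(np.int16)).astype(np.uint8)

def mutate_scale(component: np.ndarray, factor: int = 2) -> np.ndarray:
    """Enlarge the image component by an integer factor via pixel
    repetition (nearest-neighbour upscaling)."""
    return np.kron(component, np.ones((factor, factor), dtype=component.dtype))
```

Each function maps an image component (a 2-D gray array) to a mutated component of the same kind, so rules can be drawn at random and applied uniformly.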
Further, in 2), the seed image test data is segmented by using an image segmentation method based on a region and an energy function, specifically:
2.1) establishing a gray level histogram of a target and a background in the seed image test data;
2.2) repeatedly applying the region-based image segmentation method to the seed image test data to obtain a segmentation result set $X = \{X_1, X_2, \ldots, X_M\}$, where $X_m$ is the result of the $m$-th segmentation of the seed image test data, $m = 1, 2, \ldots, M$;
2.3) establishing an energy function and selecting the segmentation result in $X$ with the minimum energy function value as the image component, where the energy function is established as follows:

$$E(X_m) = E_1(X_m) + \lambda \cdot E_2(X_m)$$

where $\lambda$ is a preset weight factor with $0 \le \lambda < 1$;

$$E_1(X_m) = \sum_{p \in X_m} E_p(x_p), \qquad E_p(x_p) = \begin{cases} -\ln \Pr(l_p \mid O), & x_p = 1 \\ -\ln \Pr(l_p \mid B), & x_p = 0 \end{cases}$$

$x_p$ is the label of any pixel $p$ in $X_m$:

$$x_p = \begin{cases} 1, & p \text{ belongs to the target} \\ 0, & p \text{ belongs to the background} \end{cases}$$

$O$ and $B$ denote the gray-level histogram distributions of the target and background respectively, $l_p$ is the gray value of pixel $p$, $\Pr(l_p \mid O)$ is the probability that $l_p$ belongs to $O$ under the conditions determined by $O$, and $\Pr(l_p \mid B)$ the probability that $l_p$ belongs to $B$ under the conditions determined by $B$;

$$E_2(X_m) = \sum_{(p,q) \in N} E_{(p,q)}(p,q), \qquad E_{(p,q)}(p,q) = \alpha \exp\!\big((l_p - l_q)^2\big)$$

where $N$ is the set of adjacent pixel pairs in $X_m$, $q$ is a pixel adjacent to $p$, $l_q$ is the gray value of pixel $q$, and $\alpha$ is a preset scale factor.
Compared with the prior art, the invention adopting the above technical scheme has the following technical effect: it generates sufficient new image test data from a small set of seed images without relying on the manufacturers' training data, thereby improving the reliability of test results and the defect-finding capability of an evaluation organization.
Drawings
FIG. 1 is a schematic diagram of a process framework of the present invention;
FIG. 2 is seed image test data;
FIG. 3 is a graph of the result of the segmentation variation in FIG. 2.
Detailed Description
The technical scheme of the invention is further explained in detail by combining the attached drawings:
the invention designs a test data generation method based on segmentation variation, which realizes the generation of new image test data by performing segmentation variation on seed image test data by utilizing a segmentation variation technology as shown in figure 1, and comprises the following steps:
1) Randomly select a plurality of image test data from the factory test data set of the software under test as seed image test data, as shown in Fig. 2.
2) Segmentation mutation. First, the feature region requiring mutation in the seed image test data is determined according to the domain knowledge of the software under test; this feature region is called an image component. For example, in military image recognition software, if the seed image test data is a picture of an airplane, then the airplane's appearance characteristics given by the domain knowledge, such as the wing type (monoplane or biplane) or the propeller type, determine the feature regions to mutate, i.e., the wing part and the propeller part.
Before image segmentation, we first establish gray histograms of the target and the background (in an airplane picture, for example, the airplane is the target and the rest is the background). A gray histogram is a function of the gray-level distribution, a statistic of the gray levels in an image: it counts, over all pixels of the digital image, the occurrence frequency of each gray value, i.e., the color distribution of target and background. Then the seed test data image is segmented with a region-based image segmentation method, and the image component is obtained after segmentation.
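The histogram step above can be sketched as follows; the boolean-mask interface and the 256-bin layout are assumptions for illustration.

```python
import numpy as np

def gray_histograms(image: np.ndarray, object_mask: np.ndarray):
    """Build normalized 256-bin gray histograms of the target and the
    background; `object_mask` is a boolean array marking target pixels."""
    h_obj, _ = np.histogram(image[object_mask], bins=256, range=(0, 256))
    h_bg, _ = np.histogram(image[~object_mask], bins=256, range=(0, 256))
    # Normalize counts into the distributions Pr(l | O) and Pr(l | B).
    return (h_obj / max(h_obj.sum(), 1), h_bg / max(h_bg.sum(), 1))
```

The two returned arrays serve directly as the distributions $O$ and $B$ used by the energy function described later.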
To make the segmentation result more reliable (in practice a segmentation is never perfectly accurate, e.g., a small part of the background may be segmented together with the target), we segment multiple times (10 times, balancing overall execution efficiency), obtaining a set of candidate image components $X = \{X_1, X_2, \ldots, X_{10}\}$. For each image component $X_m$ ($m = 1, 2, \ldots, 10$) we judge the overall attribution of its pixels (i.e., whether the pixels of the component belong to the target or the background), and select as the final image component the one whose pixels belong to the target with the highest overall probability (here, the probability that any pixel of the component belongs to the target). This component is then mutated to generate a new image component. The mutation rules include pixel color transformation, pixel gray-value transformation, adding or subtracting 1 to pixel values, adding extra connected regions, enlarging or reducing the image component, affine transformation of the image component, and so on, implemented with the opencv library in Python.
3) New image test data generation. The newly generated image component of step 2) is stitched back into the segmented seed image test data (implemented with the opencv library in Python combined with an image stitching technique) to form new image test data. Taking an airplane as an example: a seed image test datum t containing an airplane is taken from the seed image test data set; a feature region of t requiring mutation, such as the wing part, is determined in combination with domain knowledge; the image component corresponding to this feature region is segmented out of t using an image segmentation technique combined with the value of the energy function; the image component is then mutated with a mutation rule to obtain a mutated image component; finally, the mutated image component is stitched with the segmented t to generate new image test data, as shown in Fig. 3 (the twin tail fins of an FA-18 are mutated into a single tail fin, generating a new F16-A test data image).
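A minimal sketch of the stitch-back step, assuming the mutated component is simply pasted back at the rectangle it was cut from; the feature-point registration described later is replaced here by direct placement, and the function name is a hypothetical illustration.

```python
import numpy as np

def paste_component(seed_image: np.ndarray, component: np.ndarray,
                    top_left: tuple) -> np.ndarray:
    """Paste the mutated component back into the segmented seed image at
    the rectangle it was cut from, returning the new test image."""
    out = seed_image.copy()  # leave the original seed datum untouched
    r, c = top_left
    h, w = component.shape[:2]
    out[r:r + h, c:c + w] = component
    return out
```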
How to judge the overall attribution of all pixels in each image component is explained below.
Generally, to judge whether a pixel belongs to the target or the background, two factors are considered. The first is the pixel's own label (expressed through its gray value): the more probable it is that the gray value follows the target's gray-histogram distribution, the more likely the pixel belongs to the target rather than the background. The second is the attribution of the surrounding pixels: if the surrounding pixels all belong to the target, the pixel is also likely to belong to the target, and the more similar its label is to those of the surrounding pixels, the greater this likelihood. Combining the two factors yields an energy function: the smaller its value, the higher the probability that the pixels of the image component belong, as a whole, to the target.
For an image component $X_m$, let $p$ be any pixel of $X_m$ and $x_p$ its label, with label 1 for the target and 0 for the background; $x_p$ takes values as follows:

$$x_p = \begin{cases} 1, & p \text{ belongs to the target} \\ 0, & p \text{ belongs to the background} \end{cases}$$
the energy function is established as follows:
$$E(X_m) = E_1(X_m) + \lambda \cdot E_2(X_m)$$
where $\lambda$ is a weight factor between $E_1$ and $E_2$ that determines their relative influence on the overall energy value. Generally $0 \le \lambda < 1$, and $\lambda$ may take any value in this range; with $\lambda = 0$, for example, only the first factor is considered.
The function $E_1$ represents the first factor, i.e., the influence of the pixel itself on its attribution; the specific expression is as follows:

$$E_1(X_m) = \sum_{p \in X_m} E_p(x_p)$$

$$E_p(x_p) = \begin{cases} -\ln \Pr(l_p \mid O), & x_p = 1 \\ -\ln \Pr(l_p \mid B), & x_p = 0 \end{cases}$$
where $O$ and $B$ denote the gray-histogram distributions of the target and the background respectively, and $l_p$ is the gray value of pixel $p$. $\Pr(l_p \mid O)$ is the probability, given $O$, that $l_p$ belongs to $O$, i.e., the probability that pixel $p$ belongs to the target; similarly, $\Pr(l_p \mid B)$ is the probability, given $B$, that $l_p$ belongs to $B$, i.e., that pixel $p$ belongs to the background. It can be seen that when $\Pr(l_p \mid O) > \Pr(l_p \mid B)$, i.e., the pixel is more likely to belong to the target than to the background, $E_p(x_p)$ is smaller, and therefore $E_1(X_m)$ is smaller.
The function $E_2$ represents the second factor, i.e., the influence of the surrounding pixels on a pixel's attribution; the specific expression is as follows:

$$E_2(X_m) = \sum_{(p,q) \in N} E_{(p,q)}(p,q), \qquad E_{(p,q)}(p,q) \propto \exp\!\big((l_p - l_q)^2\big)$$
where $N$ is the set of adjacent pixel pairs in the image component $X_m$, $p$ and $q$ are two adjacent pixels of $X_m$, and $l_p$, $l_q$ are their gray values; the symbol $\propto$ denotes proportionality, and $\exp$ is the exponential function with base $e$. It can be seen that the more similar the pixels $p$ and $q$ (the closer their gray values), the smaller $\exp((l_p - l_q)^2)$, and therefore the smaller $E_2(X_m)$. In the present invention, $E_{(p,q)}(p,q) = \alpha \exp((l_p - l_q)^2)$, where $\alpha$ is a preset scale factor.
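The full energy function can be sketched as follows. The histogram-array interface, the small $\varepsilon$ guarding against $\ln 0$, and the restriction to horizontal and vertical neighbour pairs are assumptions; note also that with raw 8-bit gray values $\exp((l_p - l_q)^2)$ overflows quickly, so in practice gray differences should be scaled or normalized.

```python
import numpy as np

def energy(labels, gray, pr_obj, pr_bg, lam=0.5, alpha=1.0, eps=1e-12):
    """E(X_m) = E1(X_m) + lam * E2(X_m) for one candidate segmentation.
    labels: x_p in {0, 1}; gray: l_p; pr_obj/pr_bg: 256-entry histograms
    giving Pr(l | O) and Pr(l | B)."""
    # E1: data term, -ln Pr(l_p | O) for target pixels and
    # -ln Pr(l_p | B) for background pixels (eps guards against log 0).
    p_o = pr_obj[gray]
    p_b = pr_bg[gray]
    e1 = np.where(labels == 1,
                  -np.log(p_o + eps),
                  -np.log(p_b + eps)).sum()
    # E2: smoothness term alpha * exp((l_p - l_q)^2), summed over
    # horizontally and vertically adjacent pixel pairs.
    g = gray.astype(np.float64)
    e2 = alpha * (np.exp((g[:, 1:] - g[:, :-1]) ** 2).sum()
                  + np.exp((g[1:, :] - g[:-1, :]) ** 2).sum())
    return e1 + lam * e2
```

With this function, step 2.3) reduces to computing `energy` for every candidate segmentation and keeping the one with the minimum value.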
The technical means used in the present invention will be specifically described below.
The specific flow of the test data generation algorithm based on the segmentation variation is as follows:
1) input of algorithm
A. A seed image test data set G;
B. mutation rule Mu: pixel color transformation, pixel gray value transformation, pixel value increase or decrease, additional connected region addition, image component size enlargement and reduction, and image component affine transformation.
C. Domain knowledge K.
D. A set of feature regions C.
2) Output of the algorithm: a new image test data set TS.
3) Initialization: initialize TS and C to be empty.
4) Randomly select a seed image test datum G0 from G; according to G0 and the domain knowledge K, determine the feature regions requiring mutation and add them to the feature region set C. Then, while the test-reinforcement requirement is not met:
If C is not empty: randomly select a feature region C0 from C; segment C0 out of G0 using the region-based image segmentation method combined with the energy function; randomly select a mutation rule from Mu and mutate C0 to obtain the mutated feature region D0; stitch D0 with the segmented seed test data image G0 to obtain a new image test datum TS0; add TS0 to TS.
If C is empty: the generation fails, and the user is prompted that no feature region is available for segmentation-mutation generation.
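The generation loop above can be sketched as follows. The helper interfaces `find_regions`, `segment`, and `stitch` are hypothetical stand-ins for the domain-knowledge, segmentation, and stitching steps, and `budget` models the test-reinforcement requirement; none of these names come from the patent.

```python
import random

def generate_test_data(seeds, mutation_rules, find_regions, segment, stitch,
                       budget=3):
    """Sketch of the segmentation-mutation generation algorithm."""
    ts = []                                      # new test data set TS
    while len(ts) < budget:
        g0 = random.choice(seeds)                # pick a seed image G0
        c = find_regions(g0)                     # feature region set C
        if not c:                                # C empty: generation fails
            raise ValueError(
                "no feature region available for segmentation mutation")
        c0 = random.choice(c)                    # pick a feature region C0
        component, remainder = segment(g0, c0)   # cut the component out of G0
        rule = random.choice(mutation_rules)     # pick a mutation rule from Mu
        d0 = rule(component)                     # mutated component D0
        ts.append(stitch(remainder, d0))         # stitch back -> TS0, add to TS
    return ts
```

Injecting the three helpers as parameters keeps the loop testable with stubs while leaving the segmentation and stitching machinery pluggable.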
Image segmentation technology
Image segmentation is a technique and a process for dividing an image into a plurality of specific regions with unique properties and extracting a region of interest. It is a key step from image processing to image analysis. The existing image segmentation methods mainly include the following categories: a threshold-based segmentation method, a region-based segmentation method, an edge-based segmentation method, a particular theory-based segmentation method, and the like. From a mathematical point of view, image segmentation is the process of dividing a digital image into mutually disjoint regions. The process of image segmentation is also a labeling process, i.e. pixels belonging to the same region are assigned the same number.
In the invention, we combine the energy function with image segmentation to segment the seed image test data, adopting one of the region-based segmentation methods: region growing based on regional gray-level difference. The specific operation is as follows:
1) Scan the image line by line and find a pixel that has not yet been assigned to any region.
2) Taking this pixel as the center, examine its neighborhood pixels, i.e., compare each pixel in the neighborhood with it one by one; if the gray difference is smaller than a threshold T, merge them.
3) Taking a newly merged pixel as the center, return to step 2) to examine the neighborhood of the new pixel, until the region cannot be expanded any further.
4) Repeat steps 1)-3) until no unassigned pixel can be found, ending the whole growing process.
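The steps above can be sketched as a single-region grower (steps 2)-3)); the line-by-line scan of step 1) would call it once for each still-unassigned pixel. The 4-neighbour connectivity is an assumption.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow one region from `seed`: repeatedly absorb 4-neighbours whose
    gray value differs from the current pixel by less than `threshold`."""
    h, w = len(image), len(image[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            # merge an unassigned neighbour when the gray difference < T
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - image[r][c]) < threshold):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```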
Combining the image segmentation method and the energy function, the segmentation result when the energy function takes the minimum value is obtained as the final segmentation result, i.e. the "image component" to be mutated.
Image splicing technology
Image stitching is a technique for stitching a plurality of images with overlapping parts (possibly acquired at different times, from different viewing angles, or by different sensors) into a seamless panoramic or high-resolution image. Image registration is a key branch of image stitching technology.
In the invention, an image registration method based on feature points is adopted: a transformation matrix between the image sequences is constructed from the matched point pairs, thereby completing the stitching of the panoramic image. The specific operation is as follows:
1) feature points in each image are detected.
2) A match between the feature points is calculated.
3) An initial value of an inter-image transformation matrix is calculated.
4) Iteratively refine the transformation matrix H.
5) Guided matching: using the estimated H, define a search area near the epipolar line to further determine feature-point correspondences.
6) Repeat steps 4) and 5) until the number of corresponding points is stable.
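Step 3) above, the initial transform estimate from matched point pairs, can be sketched in simplified form. A least-squares 2x3 affine fit stands in for the full 3x3 homography H that a real pipeline would estimate and iteratively refine; the function name and interface are assumptions.

```python
import numpy as np

def estimate_transform(src_pts, dst_pts):
    """Least-squares estimate of a 2x3 affine transform mapping matched
    feature points src -> dst (simplified stand-in for the homography H)."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    # Design matrix [x y 1]; solve A @ P = dst for the six affine parameters.
    a = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(a, dst, rcond=None)
    return params.T        # 2x3 matrix M with dst ~ M @ [x, y, 1]^T
```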
After the image component obtained by segmentation has been mutated with a mutation rule, it is stitched with the remaining seed image test data left after segmentation, thereby obtaining new image test data.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can understand that the modifications or substitutions within the technical scope of the present invention are included in the scope of the present invention, and therefore, the scope of the present invention should be subject to the protection scope of the claims.

Claims (5)

1. A test data generation method based on segmentation variation is characterized in that a segmentation variation technology is utilized to perform variation on seed image test data so as to generate new test data; the method comprises the following steps:
1) randomly selecting a plurality of image test data in a factory test data set of the software to be tested as seed image test data;
2) determining a characteristic region needing variation in the seed image test data according to the corresponding domain knowledge of the tested software, and segmenting the characteristic region from the seed image test data to be recorded as an image component;
3) carrying out mutation on the image assembly in the step 2) by using a mutation rule to generate a new image assembly;
4) and combining the new image assembly in the step 3) into the seed test data image segmented in the step 2) by utilizing an image splicing technology to form new image test data.
2. The method as claimed in claim 1, wherein the step 2) comprises segmenting the seed image test data by using a region-based image segmentation method.
3. The method of claim 1, wherein the mutation rules include changes in image pixels, addition of extra connected regions, scaling up and down of image components, and affine transformation of image components.
4. The method of claim 3, wherein the change in the image pixel comprises a pixel color change, a pixel gray value change, a pixel value increase or decrease.
5. The method for generating test data based on segmentation variation as claimed in claim 1, wherein 2) the seed image test data is segmented by using a region-based image segmentation method and an energy function, specifically:
2.1) establishing a gray level histogram of a target and a background in the seed image test data;
2.2) repeatedly applying the region-based image segmentation method to the seed image test data to obtain a segmentation result set $X = \{X_1, X_2, \ldots, X_M\}$, where $X_m$ is the result of the $m$-th segmentation of the seed image test data, $m = 1, 2, \ldots, M$;
2.3) establishing an energy function and selecting the segmentation result in $X$ with the minimum energy function value as the image component, where the energy function is established as follows:

$$E(X_m) = E_1(X_m) + \lambda \cdot E_2(X_m)$$

in which $\lambda$ is a preset weight factor with $0 \le \lambda < 1$;

$$E_1(X_m) = \sum_{p \in X_m} E_p(x_p), \qquad E_p(x_p) = \begin{cases} -\ln \Pr(l_p \mid O), & x_p = 1 \\ -\ln \Pr(l_p \mid B), & x_p = 0 \end{cases}$$

$x_p$ is the label of any pixel $p$ in $X_m$:

$$x_p = \begin{cases} 1, & p \text{ belongs to the target} \\ 0, & p \text{ belongs to the background} \end{cases}$$

$O$ and $B$ denote the gray-level histogram distributions of the target and background respectively, $l_p$ is the gray value of pixel $p$, $\Pr(l_p \mid O)$ is the probability that $l_p$ belongs to $O$ under the conditions determined by $O$, and $\Pr(l_p \mid B)$ the probability that $l_p$ belongs to $B$ under the conditions determined by $B$;

$$E_2(X_m) = \sum_{(p,q) \in N} E_{(p,q)}(p,q), \qquad E_{(p,q)}(p,q) = \alpha \exp\!\big((l_p - l_q)^2\big)$$

where $N$ is the set of adjacent pixel pairs in $X_m$, $q$ is a pixel adjacent to $p$, $l_q$ is the gray value of pixel $q$, and $\alpha$ is a preset scale factor.
CN202010966125.5A 2020-09-15 2020-09-15 Test data generation method based on segmentation variation Active CN112102334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010966125.5A CN112102334B (en) 2020-09-15 2020-09-15 Test data generation method based on segmentation variation


Publications (2)

Publication Number Publication Date
CN112102334A true CN112102334A (en) 2020-12-18
CN112102334B CN112102334B (en) 2024-05-17

Family

ID=73759025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010966125.5A Active CN112102334B (en) 2020-09-15 2020-09-15 Test data generation method based on segmentation variation

Country Status (1)

Country Link
CN (1) CN112102334B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561909A (en) * 2020-12-28 2021-03-26 南京航空航天大学 Image countermeasure sample generation method based on fusion variation
CN112561909B (en) * 2020-12-28 2024-05-28 南京航空航天大学 Fusion variation-based image countermeasure sample generation method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101847263A (en) * 2010-06-04 2010-09-29 西安电子科技大学 Unsupervised image division method based on multi-target immune cluster integration
CN108257133A (en) * 2016-12-28 2018-07-06 南宁市浩发科技有限公司 A kind of image object dividing method
CN110348277A (en) * 2018-11-30 2019-10-18 浙江农林大学 A kind of tree species image-recognizing method based under natural background



Also Published As

Publication number Publication date
CN112102334B (en) 2024-05-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant