CN110347600B - Convolutional neural network-oriented variation coverage testing method and computer storage medium - Google Patents

Convolutional neural network-oriented variation coverage testing method and computer storage medium

Info

Publication number
CN110347600B
CN110347600B (application CN201910623892.3A)
Authority
CN
China
Prior art keywords
neural network
model
convolutional neural
test
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910623892.3A
Other languages
Chinese (zh)
Other versions
CN110347600A (en)
Inventor
姚奕
刘佳洛
赵潇
黄松
吴开舜
邓超
陈文科
刘伟豪
刘峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Army Engineering University of PLA
Original Assignee
Army Engineering University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Army Engineering University of PLA filed Critical Army Engineering University of PLA
Priority to CN201910623892.3A priority Critical patent/CN110347600B/en
Publication of CN110347600A publication Critical patent/CN110347600A/en
Application granted granted Critical
Publication of CN110347600B publication Critical patent/CN110347600B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3676 Test management for coverage analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3688 Test management for test execution, e.g. scheduling of test suites

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a convolutional neural network-oriented variation coverage test method and a computer storage medium. The method comprises the following steps: 1) setting n mutation operators and injecting them respectively into the convolutional neural network program P under test to obtain a mutant program set {P1, P2, P3, …, Pn}; 2) training the mutant program set {P1, P2, P3, …, Pn} with the training data set D to obtain a mutant model set {M1, M2, M3, …, Mn}; 3) testing the original model M and the mutant model set {M1, M2, M3, …, Mn} with the test data set T; 4) comparing the test accuracies of all the models and selecting the model with the highest accuracy. The invention overcomes the shortcoming that traditional testing methods can hardly guarantee the test adequacy of convolutional neural network applications; it can effectively improve the test adequacy of the convolutional neural network, makes testing of the neural network model more effective, can find a locally optimal model according to the test accuracy, and effectively guarantees the quality and safety of convolutional neural network applications.

Description

Convolutional neural network-oriented variation coverage testing method and computer storage medium
Technical Field
The present invention relates to software testing methods and computer storage media, and more particularly, to a convolutional neural network-oriented mutation coverage testing method and computer storage media.
Background
Convolutional neural networks have achieved great success in practical applications such as image classification and recognition and natural language processing, and they are also being introduced into many safety-critical fields. However, because of errors that have recently occurred in convolutional neural network systems, increasing attention is being paid to the security and reliability of convolutional neural network applications. Current testing methods are mainly white-box differential testing algorithms that systematically generate adversarial examples covering all neurons in the network. As for test adequacy, existing software test adequacy methods and criteria cannot be applied directly to the testing of convolutional neural networks, because convolutional neural networks have the following properties:
Characteristic 1: data sensitivity. The control logic and operation rules of traditional software are implemented by software developers through coding, whereas the most distinctive characteristic of a convolutional neural network is its data sensitivity. Its control logic and operation rules are learned from a training data set, and changes in the training data may cause changes in the software under test. Test cases constructed during testing that trigger operational defects are likely to be used to retrain the model, so the software under test is likely to change; the change in control logic caused by this is unknown, and retesting is needed;
Characteristic 2: lack of interpretability. In a deep neural network each layer represents features, and because the number of layers is large, developers do not know what feature each layer represents. Although convolutional neural networks achieve high accuracy in many application scenarios, it is difficult to explain which features in the training data play the key role, and the control logic and operation rules of the software cannot be obtained;
Characteristic 3: the program under test is parameterized. In traditional software testing the object under test is understandable code, whereas the object under test in a convolutional neural network application is a series of weight parameters and bias parameters. The object under test is more abstract, which brings more challenges to testers.
Traditional software test adequacy criteria are mostly based on control flow, because the control logic and operation rules of traditional software are programmed by programmers. For a convolutional neural network, the control logic and operation rules are learned from a training data set, so traditional test adequacy criteria are difficult to apply when assessing the test adequacy of convolutional neural network applications.
Disclosure of Invention
Purpose of the invention: the invention aims to provide a convolutional neural network-oriented variation coverage test method and a computer storage medium that overcome the difficulty that traditional testing methods can hardly guarantee the test adequacy of convolutional neural network applications, effectively improve the test adequacy of a convolutional neural network, make testing of the neural network model more effective, and can find a locally optimal model according to the test accuracy.
Technical scheme: the convolutional neural network-oriented variation coverage test method of the invention comprises the following steps:
(1) Setting n mutation operators, and respectively injecting the n mutation operators into the convolutional neural network program P under test to obtain a mutant program set {P1, P2, P3, …, Pn};
(2) Training the mutant program set {P1, P2, P3, …, Pn} with the training data set D to obtain a mutant model set {M1, M2, M3, …, Mn};
(3) Testing the original model M and the mutant model set {M1, M2, M3, …, Mn} with the test data set T;
(4) Comparing the test accuracies of all the models, and selecting the model with the highest accuracy.
Further, the number of mutation operators in step (1) is 9; the mutation operators comprise changing the activation function, changing the pooling mode, reducing the number of convolution layers, increasing the number of convolution kernels, reducing the number of convolution kernels, increasing the convolution kernel size, reducing the convolution kernel size, increasing the number of fully connected layers and reducing the number of fully connected layers.
Further, changing the activation function means changing the original activation function to the elu activation function.
Further, changing the pooling mode means changing the original 'valid' pooling mode to the 'same' pooling mode.
Further, increasing the convolution kernel size means changing the original convolution kernel size to 7 × 7.
Further, reducing the convolution kernel size means changing the original convolution kernel size to 3 × 3.
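As an illustration of how such operator injection could be realized, the following is a minimal sketch that assumes the program P under test is written with TensorFlow/Keras; the patent does not prescribe a framework, and the configuration keys, baseline hyperparameter values and helper functions below are hypothetical illustrations, not part of the claimed method.

    # Minimal sketch (assumption: the CNN program P under test uses tf.keras).
    # Each mutation operator edits the hyperparameters used to build the network,
    # which corresponds to injecting the operator into the training program P.
    import copy
    from tensorflow.keras import layers, models

    # Illustrative baseline configuration of the original program P.
    BASE_CONFIG = {
        "activation": "relu",     # CAF changes this to "elu"
        "pool_padding": "valid",  # CMP changes this to "same"
        "kernel_size": 5,         # ECKS -> 7 x 7, CCKS -> 3 x 3
        "num_filters": 6,         # ACK doubles, RCK halves
        "num_conv_layers": 2,     # RCL removes one convolution layer
        "num_dense_layers": 1,    # AFCL adds one, RFCL removes one
    }

    def mutate(config, operator):
        """Return a mutated copy of the build configuration."""
        c = copy.deepcopy(config)
        if operator == "CAF":
            c["activation"] = "elu"
        elif operator == "CMP":
            c["pool_padding"] = "same"
        elif operator == "ECKS":
            c["kernel_size"] = 7
        elif operator == "CCKS":
            c["kernel_size"] = 3
        elif operator == "ACK":
            c["num_filters"] *= 2
        elif operator == "RCK":
            c["num_filters"] //= 2
        elif operator == "RCL":
            c["num_conv_layers"] -= 1
        elif operator == "AFCL":
            c["num_dense_layers"] += 1
        elif operator == "RFCL":
            c["num_dense_layers"] -= 1
        return c

    def build_model(config, num_classes=10):
        """Build a small LeNet-style CNN from a configuration; the input shape
        is inferred from the data on the first call to fit/evaluate."""
        model = models.Sequential()
        for _ in range(config["num_conv_layers"]):
            model.add(layers.Conv2D(config["num_filters"], config["kernel_size"],
                                    activation=config["activation"], padding="same"))
            model.add(layers.MaxPooling2D(pool_size=2,
                                          padding=config["pool_padding"]))
        model.add(layers.Flatten())
        for _ in range(config["num_dense_layers"]):
            model.add(layers.Dense(84, activation=config["activation"]))
        model.add(layers.Dense(num_classes, activation="softmax"))
        return model
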
The computer storage medium of the present invention stores thereon a computer program which, when executed by a computer processor, implements the above-described convolutional neural network-oriented variation coverage test method.
Advantageous effects: the invention provides a model coverage method based on mutation coverage testing that can improve the test adequacy of the convolutional neural network under test, making the testing of the neural network model more effective and thereby guaranteeing the quality and safety of convolutional neural network applications. It can also verify whether the design of the neural network model is reasonable, i.e. whether the developers of the convolutional neural network have selected a locally optimal model within this class of models, and it plays a useful role in improving the test adequacy of models.
Drawings
FIG. 1 is a flow chart of the method of the present embodiment;
FIG. 2 is a graph of test accuracy for each model as a function of iteration number.
Detailed Description
The method flow of the embodiment of the invention is shown in Fig. 1. The inputs are the convolutional neural network program P under test, the training data set D and the test data set T, and the method comprises the following steps:
Step 1: respectively injecting the n configured mutation operators into the convolutional neural network program P under test to obtain a series of mutant programs P';
Step 2: training the series of mutant programs P' with the training data set D to obtain a series of mutant models {M1, M2, M3, …, Mn};
Step 3: testing the model set M', composed of the original model M and the obtained mutant models {M1, M2, M3, …, Mn}, with the test data set T;
Step 4: comparing the experimental results, observing whether the test accuracy of the original model is the highest among these models, and selecting the model with the highest test accuracy.
By this method it is judged whether the convolutional neural network model under test is the local optimum of the constructed mutant model set, i.e. whether its accuracy is the highest within the constructed mutant model set.
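The four steps above can be sketched end to end as follows. This is again a hypothetical TensorFlow/Keras illustration that reuses the assumed BASE_CONFIG, mutate and build_model helpers from the earlier sketch; it shows one plausible realization of the workflow rather than the patented implementation.

    import tensorflow as tf

    OPERATORS = ["CAF", "CMP", "RCL", "ACK", "RCK", "ECKS", "CCKS", "AFCL", "RFCL"]

    def train_and_evaluate(config, train_data, test_data, epochs=20):
        """Train one (original or mutant) model on D and return its accuracy on T."""
        (x_train, y_train), (x_test, y_test) = train_data, test_data
        model = build_model(config)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x_train, y_train, epochs=epochs, batch_size=128, verbose=0)
        _, accuracy = model.evaluate(x_test, y_test, verbose=0)
        return accuracy

    def variation_coverage_test(train_data, test_data):
        """Steps 1-4: build the mutants, train them, test all models, pick the best."""
        results = {"origin": train_and_evaluate(BASE_CONFIG, train_data, test_data)}
        for op in OPERATORS:                                  # step 1: inject operator
            cfg = mutate(BASE_CONFIG, op)
            results[op] = train_and_evaluate(cfg, train_data, test_data)  # steps 2-3
        best = max(results, key=results.get)                  # step 4: highest accuracy
        return results, best

    # Example usage on the MNIST data set used in the embodiment below:
    (x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
    x_tr, x_te = x_tr[..., None] / 255.0, x_te[..., None] / 255.0
    accuracies, best_model = variation_coverage_test((x_tr, y_tr), (x_te, y_te))

In this sketch, best_model names the member of the mutant model class with the highest test accuracy, i.e. the locally optimal model in the sense used below.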
In this embodiment the method is applied to the convolutional neural network LeNet-5 model, and the data set is the commonly used MNIST data set. A total of 6 categories of mutation operators (9 mutation operators in all) are designed for the program; the 9 mutation operators are injected respectively into the convolutional neural network program P under test, and training with the original training set yields 9 mutant models. These 9 mutant models together with the original model are collectively called the mutant model set. The model with the highest classification accuracy is selected from the 10 models as the locally optimal model, and the test adequacy of the convolutional neural network is improved by this method.
A convolutional neural network model may be affected by many parameters; 9 mutation operators are selected for the convolutional neural network model, and the model is changed directly by injecting the mutations into the convolutional neural network program P under test. The 9 mutation operators are shown in Table 1.
TABLE 1 Mutation operators of the convolutional neural network
Mutation operator | Description
Changing the activation function (CAF) | Change the original relu to elu
Changing the pooling mode (CMP) | Change the original 'valid' to 'same'
Reducing the number of convolution layers (RCL) | Remove one convolution layer
Increasing the number of convolution kernels (ACK) | Double the number of original convolution kernels
Reducing the number of convolution kernels (RCK) | Halve the number of original convolution kernels
Increasing the convolution kernel size (ECKS) | Change the original convolution kernel size to 7 × 7
Reducing the convolution kernel size (CCKS) | Change the original convolution kernel size to 3 × 3
Increasing the number of fully connected layers (AFCL) | Add one fully connected layer
Reducing the number of fully connected layers (RFCL) | Remove one fully connected layer
The different mutation operators are injected into the original convolutional neural network training program to obtain the corresponding mutant CNN programs {P1′, P2′, …, Pn′}; running each mutant program {P1′, P2′, …, Pn′} yields the corresponding mutant models {M1′, M2′, …, Mn′}, on which the test set T of the original model M is then executed.
For a mutant convolutional neural network model, if a test case t ∈ T can be classified correctly by the original convolutional neural network model M but not by the mutant convolutional neural network model M′, the test case t kills the mutant M′. In traditional mutation testing the mutation score is the ratio of killed mutants to all mutants. However, the traditional mutation score metric is not suitable for mutation testing of a convolutional neural network system, because the test set T is comparatively large, so it is very easy for some test case t ∈ T to kill each mutant M′. For this reason, for mutation testing of the convolutional neural network, the designed mutation operators are injected into the convolutional neural network training program, the resulting programs are retrained with the training set data to generate the corresponding mutant convolutional neural network models {M1′, M2′, …, Mn′}, and then the test set T of the original model M is executed on each mutant model {M1′, M2′, …, Mn′} and the results are analysed. The more mutant models that can be obtained with this method, the more adequate the testing. At the same time, the test accuracies of all the models can be compared, and the model with the highest accuracy is selected as the locally optimal model.
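For contrast, the traditional notion of a test case killing a mutant and the traditional mutation score discussed above can be written out as follows; this is a hypothetical sketch in the same assumed Keras setting, with original_model and mutant_models as placeholder names.

    import numpy as np

    def traditional_mutation_score(original_model, mutant_models, x_test, y_test):
        """Traditional mutation score: the fraction of mutants killed by at least
        one test case that the original model M classifies correctly but the
        mutant M' does not (the metric argued above to be too easy to satisfy)."""
        orig_pred = np.argmax(original_model.predict(x_test, verbose=0), axis=1)
        orig_correct = orig_pred == y_test              # cases M classifies correctly
        killed = 0
        for mutant in mutant_models:
            mut_pred = np.argmax(mutant.predict(x_test, verbose=0), axis=1)
            # t kills M' if t is classified correctly by M but not by M'
            if np.any(orig_correct & (mut_pred != y_test)):
                killed += 1
        return killed / len(mutant_models)
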
The invention regards {M, M1′, M2′, …, Mn′} as models of the same class, called the mutant model class, and uses it as the object under test, so as to improve the coverage of the object under test and to some extent improve the test adequacy of the model. For traditional software, a reliable test case set can be constructed using many test methods (equivalence classes, boundary values and metamorphic testing), thereby improving the statement coverage, branch coverage and condition coverage of the software code and improving test adequacy. For a convolutional neural network model, a single test case can sometimes 'cover' the entire model, so a reliable test case set may not exist. However, drawing on the existing theory of the reliable test case set, the invention proposes the concept of a reliable mutant model set: if a mutant model set is sufficient for a set of mutation operators, the set is called a reliable mutant model set. Therefore, the test adequacy of each target convolutional neural network model can be improved on the basis of the reliable mutant model set. Whether the original model is locally optimal is then tested; if it is not, the original model is insufficiently tested with respect to model coverage. With this method, a specific optimal model can be selected from this class of models, and test adequacy is improved by improving model coverage.
TABLE 2 Test accuracy of each model
Corresponding model | Test accuracy
Original model (origin) | 98.06%
Changing the activation function (CAF) | 98.00%
Changing the pooling mode (CMP) | 98.01%
Reducing the number of convolution layers (RCL) | 96.26%
Increasing the number of convolution kernels (ACK) | 98.65%
Reducing the number of convolution kernels (RCK) | 97.65%
Increasing the convolution kernel size (ECKS) | 98.00%
Reducing the convolution kernel size (CCKS) | 98.20%
Increasing the number of fully connected layers (AFCL) | 99.03%
Reducing the number of fully connected layers (RFCL) | 97.06%
The test accuracy of each model is shown in Table 2. The model obtained after increasing the number of fully connected layers (AFCL) is the locally optimal model in this class of models, which indicates that the original model was insufficiently tested. At the same time, the model coverage of the original model is improved by this method: in the process of constructing the original model, the model obtained after increasing the number of fully connected layers is selected instead of the original model. Compared with the other mutation operators, increasing the number of fully connected layers increases the accuracy of the mutant model. The curve of each model's test accuracy as a function of the number of iterations is shown in Fig. 2, which plots the test accuracy of the mutant model class formed by the 9 mutant models and the original LeNet-5 model after all test samples have been iterated 20 times. Fig. 2 leads to the same conclusion: the model obtained after increasing the number of fully connected layers is the locally optimal model, and the original model test is insufficient. Through this experiment, the test adequacy of the original model is improved, and the validity of the model coverage criterion is further verified.
The invention can be applied to AI software such as face recognition, text analysis and image classification, improving the test adequacy of the convolutional neural network model in the process and thereby improving the accuracy of the software and guaranteeing its quality and safety. For example, when convolutional neural network image classification software is tested, the method can effectively improve the test adequacy of the convolutional neural network model, and at the same time a locally optimal model of the convolutional neural network can be obtained, improving the accuracy and reliability of the software.
The embodiments of the present invention, if implemented in the form of software functional modules and sold or used as independent products, may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented, or the part contributing to the prior art may be embodied, in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. The storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
Accordingly, embodiments of the present invention also provide a computer storage medium having a computer program stored thereon. When the computer program is executed by a processor, the method for testing the variation coverage of the convolutional neural network can be realized. For example, the computer storage medium is a computer-readable storage medium.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (2)

1. A convolutional neural network-oriented variation coverage test method is characterized by comprising the following steps:
(1) Setting n mutation operators, and respectively injecting the n mutation operators into the convolutional neural network program P under test to obtain a mutant program set {P1, P2, P3, …, Pn};
(2) Training the mutant program set {P1, P2, P3, …, Pn} with the training data set D to obtain a mutant model set {M1, M2, M3, …, Mn};
(3) Testing the original model M and the mutant model set {M1, M2, M3, …, Mn} with the test data set T;
(4) Comparing the test accuracies of all the models, and selecting the model with the highest accuracy;
the number of the mutation operators in step (1) is 9, and the mutation operators comprise changing the activation function, changing the pooling mode, reducing the number of convolution layers, increasing the number of convolution kernels, reducing the number of convolution kernels, increasing the convolution kernel size, reducing the convolution kernel size, increasing the number of fully connected layers and reducing the number of fully connected layers;
changing the activation function is changing the original activation function into the elu activation function;
changing the pooling mode is changing the original 'valid' pooling mode into the 'same' pooling mode;
increasing the convolution kernel size is changing the original convolution kernel size to 7 × 7;
reducing the convolution kernel size is changing the original convolution kernel size to 3 × 3.
2. A computer storage medium having a computer program stored thereon, characterized in that: the computer program, when executed by a computer processor, implements the method of claim 1.
CN201910623892.3A 2019-07-11 2019-07-11 Convolutional neural network-oriented variation coverage testing method and computer storage medium Active CN110347600B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910623892.3A CN110347600B (en) 2019-07-11 2019-07-11 Convolutional neural network-oriented variation coverage testing method and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910623892.3A CN110347600B (en) 2019-07-11 2019-07-11 Convolutional neural network-oriented variation coverage testing method and computer storage medium

Publications (2)

Publication Number Publication Date
CN110347600A (en) 2019-10-18
CN110347600B (en) 2023-04-07

Family

ID=68175695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910623892.3A Active CN110347600B (en) 2019-07-11 2019-07-11 Convolutional neural network-oriented variation coverage testing method and computer storage medium

Country Status (1)

Country Link
CN (1) CN110347600B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113051150A (en) * 2019-12-27 2021-06-29 中国人民解放军陆军工程大学 Metamorphic test method and system for image classifier
CN111881033A (en) * 2020-07-23 2020-11-03 深圳慕智科技有限公司 Deep learning model quality evaluation method based on operation environment error analysis
CN111858340A (en) * 2020-07-23 2020-10-30 深圳慕智科技有限公司 Deep neural network test data generation method based on stability transformation
CN113268423A (en) * 2021-05-24 2021-08-17 南京工业大学 Deep learning mutation operator reduction method
CN113485932A (en) * 2021-07-16 2021-10-08 深圳市网联安瑞网络科技有限公司 Deep learning code defect detection method, system, product, equipment and terminal

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105302719A (en) * 2015-10-26 2016-02-03 北京科技大学 Mutation test method and apparatus
WO2017177661A1 (en) * 2016-04-15 2017-10-19 乐视控股(北京)有限公司 Convolutional neural network-based video retrieval method and system
CN108009525A (en) * 2017-12-25 2018-05-08 北京航空航天大学 A kind of specific objective recognition methods over the ground of the unmanned plane based on convolutional neural networks
CN108664391A (en) * 2018-03-13 2018-10-16 北京邮电大学 A kind of Fault Classification, mutation testing method and apparatus towards program state

Also Published As

Publication number Publication date
CN110347600A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN110347600B (en) Convolutional neural network-oriented variation coverage testing method and computer storage medium
Wegener et al. Verifying timing constraints of real-time systems by means of evolutionary testing
Martín et al. MOCDroid: multi-objective evolutionary classifier for Android malware detection
CN117951701A (en) Method for determining flaws and vulnerabilities in software code
Kanewala et al. Techniques for testing scientific programs without an oracle
CN111931179B (en) Cloud malicious program detection system and method based on deep learning
Wittkopp et al. A2log: Attentive augmented log anomaly detection
Iqbal et al. Extending learning classifier system with cyclic graphs for scalability on complex, large-scale boolean problems
Le Thi My Hanh et al. Mutation-based test data generation for simulink models using genetic algorithm and simulated annealing
Rabheru et al. DeepTective: Detection of PHP vulnerabilities using hybrid graph neural networks
Zhan et al. The state problem for test generation in simulink
Chen et al. Failure detection and localization in component based systems by online tracking
Taylor et al. Using behaviour inference to optimise regression test sets
CN107247663B (en) Redundancy variant identification method
Nazari et al. Using cgan to deal with class imbalance and small sample size in cybersecurity problems
Kumar et al. Notice of Retraction: Generation of efficient test data using path selection strategy with elitist GA in regression testing
Zhu et al. Discovering boundary values of feature-based machine learning classifiers through exploratory datamorphic testing
Berend Distribution awareness for AI system testing
CN114880637B (en) Account risk verification method and device, computer equipment and storage medium
US11734612B2 (en) Obtaining a generated dataset with a predetermined bias for evaluating algorithmic fairness of a machine learning model
CN112069508B (en) Method, system, device and medium for positioning vulnerability API (application program interface) parameters of machine learning framework
Amin et al. Improving software reuse prediction using feature selection algorithms
Leotta et al. How do implementation bugs affect the results of machine learning algorithms?
CN117827667A (en) Test coverage rate improving device
Masamba et al. Supervised learning for coverage-directed test selection in simulation-based verification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant