CN113706477B - Defect category identification method, device, equipment and medium - Google Patents


Info

Publication number
CN113706477B
CN113706477B (application CN202110912056.4A)
Authority
CN
China
Prior art keywords
defect
network
defect type
image
training
Prior art date
Legal status
Active
Application number
CN202110912056.4A
Other languages
Chinese (zh)
Other versions
CN113706477A (en)
Inventor
陈晓炬
杜松
王邦军
杨怀宇
李磊
Current Assignee
Nanjing Xurui Software Technology Co ltd
Original Assignee
Nanjing Xurui Software Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Xurui Software Technology Co ltd filed Critical Nanjing Xurui Software Technology Co ltd
Priority to CN202110912056.4A
Publication of CN113706477A
Application granted
Publication of CN113706477B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/08: Learning methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application provides a defect type identification method, device, equipment and medium. The method comprises the following steps: acquiring an image to be processed; inputting the image to be processed into a first network of a pre-trained defect type recognition model and determining a feature vector of the image, wherein the feature vector comprises a plurality of features characterizing the whole and/or part of the image; and inputting the feature vector into a second network of the defect type recognition model and determining the defect category of the image, wherein the defect categories comprise surface defects and internal defects. With this method, newly added defect categories and old defect categories in the image to be processed can be distinguished easily, thereby improving the accuracy of defect type identification.

Description

Defect category identification method, device, equipment and medium
Technical Field
The application belongs to the field of industrial vision, and particularly relates to a defect type identification method, device, equipment and medium.
Background
In industrial settings, for example in a production-line working environment, new defect categories arise continuously. Because a new defect category may be highly similar to an old one, and because the edge device can store only a limited number of sample images of the old defect categories, the new and old defect categories are difficult to distinguish when identifying the defect category of an image, which limits the accuracy of defect type identification.
To address the above problem, the prior art generally offers two methods. One is fine-tuning, which does not train on samples of the old defect categories, so the trained model may suffer catastrophic forgetting when identifying old defect categories. The other is joint training, i.e., training the model on all samples of both the new and old defect categories; however, because the numbers of new and old samples are unbalanced, the resulting model is limited in identifying old defect categories, and the training time is long and the training cost is high, which is unfavorable for putting the model into production. Therefore, the prior art still cannot improve the accuracy of defect type identification.
Disclosure of Invention
The embodiment of the application provides a defect type identification method, device, equipment and medium, which improve the accuracy of defect type identification.
In a first aspect, an embodiment of the present application provides a defect type identification method, comprising: acquiring an image to be processed; inputting the image to be processed into a first network of a pre-trained defect type recognition model and determining a feature vector of the image, wherein the feature vector comprises a plurality of features characterizing the whole and/or part of the image; and inputting the feature vector into a second network of the defect type recognition model and determining the defect category of the image, wherein the defect categories comprise surface defects and internal defects.
In some embodiments of the first aspect, the first network comprises an adaptive aggregation network and the second network comprises a bias correction network.
In some embodiments of the first aspect, before inputting the image to be processed into the first network of the pre-trained defect type recognition model, the method further comprises: acquiring a training sample set, wherein the training sample set comprises a plurality of sample image groups, each sample image group comprising a sample image and its corresponding label defect category; and training a preset defect type recognition model with the sample image groups in the training sample set to obtain the defect type recognition model. In some embodiments of the first aspect, the training sample set includes a first training sample set and a second training sample set, wherein the first training sample set includes a plurality of first label defect category groups in one-to-one correspondence with a plurality of preset proportions, each first label defect category group includes a plurality of first sample image groups, and each first sample image group includes a first sample image and its corresponding first label defect category;
the second training sample set comprises a plurality of second label defect category groups, each second label defect category group comprises a plurality of second sample image groups with the same preset proportion, wherein each second sample image group comprises a second sample image and a corresponding second label defect category thereof.
In some embodiments of the first aspect, the training comprises: inputting the first sample image groups into the first network of the preset defect type recognition model and determining a reference feature vector of each first sample image, wherein the reference feature vector comprises a plurality of features characterizing the whole and/or part of the first sample image; inputting the reference feature vectors and the second sample image groups into the second network of the preset defect type recognition model and determining the reference defect category of each first sample image and second sample image, wherein the reference defect categories comprise reference surface defects and reference internal defects; determining a loss function value of the preset defect type recognition model from the reference defect category of a target sample image and the label defect category of the target sample image, wherein the target sample image is any sample image in the sample image groups; and, if the loss function value does not satisfy the training stop condition, adjusting the model parameters of the defect type recognition model and continuing to train the parameter-adjusted model with the sample image groups until the loss function value satisfies the preset training condition, thereby obtaining the defect type recognition model.
In a second aspect, an embodiment of the present application provides a defect type identification apparatus, comprising: an acquisition module for acquiring an image to be processed; a determining module for inputting the image to be processed into a first network of a pre-trained defect type recognition model and determining a feature vector of the image, wherein the feature vector comprises a plurality of features characterizing the whole and/or part of the image; the determining module being further configured to input the feature vector into a second network of the defect type recognition model and determine the defect category of the image, wherein the defect categories comprise surface defects and internal defects.
In some embodiments of the second aspect, the first network comprises an adaptive aggregation network and the second network comprises a bias correction network. In some embodiments of the second aspect, the acquisition module is configured to acquire a training sample set, where the training sample set includes a plurality of sample image groups, each sample image group including a sample image and its corresponding label defect category; the apparatus further comprises a training module configured to train a preset defect type recognition model with the sample image groups in the training sample set to obtain the defect type recognition model.
In a third aspect, there is provided a defect class identifying apparatus comprising: a memory for storing computer program instructions; a processor for reading and executing computer program instructions stored in a memory to perform the defect classification identification method provided in any of the optional embodiments of the first and second aspects.
In a fourth aspect, there is provided a computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the defect classification method provided by any of the alternative embodiments of the first and second aspects.
In the embodiments of the present application, after the image to be processed is acquired, it is input into the first network of the pre-trained defect type recognition model to determine its feature vector, and the feature vector is input into the second network of the model to obtain the defect category of the image. In this way, newly added defect categories and old defect categories in the image to be processed can be distinguished easily, improving the accuracy of defect type recognition.
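The two-stage pipeline summarized above (a first network extracts a feature vector, a second network maps it to a defect category) can be sketched as follows. This is a minimal illustrative stand-in, not the patented networks: `extract_features`, `classify`, and the toy features and prototypes are all invented for the example.

```python
def extract_features(image):
    """First-network stand-in: map an image (list of pixel rows) to a
    feature vector with one whole-image feature and per-row local features."""
    flat = [p for row in image for p in row]
    whole = sum(flat) / len(flat)                   # whole-image feature
    local = [sum(row) / len(row) for row in image]  # local (per-row) features
    return [whole] + local

def classify(feature_vector, prototypes):
    """Second-network stand-in: nearest-prototype defect category."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda c: dist(feature_vector, prototypes[c]))

def identify_defect_category(image, prototypes):
    """Run the two stages in sequence, as in the method described above."""
    return classify(extract_features(image), prototypes)
```

A caller would supply category prototypes, e.g. `{"surface_defect": [...], "internal_defect": [...]}`, and pass each image to `identify_defect_category`.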
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below; a person skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a training model in a defect class identification method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a training model flow in another defect type identification method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a model structure of a first network in a defect classification recognition model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of training parameters of a first network in a defect class identification model according to an embodiment of the present application;
fig. 5 is a flowchart of a defect type identifying method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a defect type identifying device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a defect type identifying apparatus according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below to make the objects, technical solutions and advantages of the present application more apparent, and to further describe the present application in conjunction with the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative of the application and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by showing examples of the present application.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone.
In order to solve the problem that in the prior art, new defect types and old defect types are not easy to distinguish, and therefore defect type identification accuracy cannot be improved, the embodiment of the application provides a defect type identification method, device, equipment and medium.
It should be noted that, in the defect type recognition method provided in the embodiments of the present application, the image to be processed is processed by a pre-trained defect type recognition model, so the defect type recognition model needs to be trained before the image to be processed is input into the first network of the pre-trained defect type recognition model. Accordingly, a specific implementation of the training method for the defect type recognition model provided in the embodiments of the present application is described below with reference to the accompanying drawings.
The training method for the defect type recognition model provided by the embodiment of the application can be realized by the following steps:
1. a sample set is obtained.
The training sample set comprises a plurality of sample image groups, and each sample image group comprises a sample image and its corresponding label defect category.
In one embodiment, as shown in fig. 1, the acquiring a training sample set may specifically include the following steps:
s110, acquiring a plurality of sample images.
The sample image may be an image currently acquired by a camera provided in the electronic device, or may be an image stored in the electronic device. Correspondingly, the sample image can be acquired through a camera of the electronic device, or can be acquired directly from an image database of the electronic device. The electronic device may be a device having an image capturing function.
S120, labeling label defect types corresponding to the sample images one by one.
The label defect categories include surface defects and internal defects, where surface defects are defects appearing on the surface of an object and internal defects are defects of the internal structure of an object. Taking sample images of a liquid crystal panel as an example, a label defect category may be a surface defect appearing on the panel surface, such as a scratch defect or an abrasion defect, or an internal defect in the internal structure of the panel, such as a color particle defect or a bubble defect.
In one example, the label defect categories of the sample images may be annotated by a device or manually.
S130, determining a training sample set according to the acquired sample images and the labeled label defect types corresponding to each sample image.
The training sample set comprises a plurality of sample image groups, and each sample image group comprises a sample image and a corresponding label defect category thereof. Specifically, each acquired sample image and the corresponding label defect category marked by the person or the equipment are combined to obtain a plurality of sample image groups, so that a training sample set is determined.
In addition, in order to achieve the effect of easily distinguishing the new defect type from the old defect type and improving the accuracy of defect type identification, the training sample set may include all samples of the new defect type and some samples of the old defect type.
In this way, label defect categories corresponding one-to-one to the plurality of sample images are obtained by labeling the sample images. After each sample image and its corresponding label defect category are obtained, the sample images and their label defect categories are combined into a training sample set, which facilitates subsequent model training and thus yields an accurate model.
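As a toy illustration of steps S110 to S130, pairing each sample image with its labeled defect category could look like the following sketch; the file names and category labels are invented for the example:

```python
def build_training_sample_set(images, labels):
    """Combine each sample image with its label defect category into an
    (image, label) sample image group; the list of groups is the training set."""
    if len(images) != len(labels):
        raise ValueError("each sample image needs exactly one label defect category")
    return list(zip(images, labels))

# Invented example data standing in for S110 (acquire) and S120 (label):
sample_images = ["panel_001.png", "panel_002.png", "panel_003.png"]
label_defect_categories = ["scratch", "bubble", "colour_particle"]

# S130: determine the training sample set from images and labels.
training_sample_set = build_training_sample_set(sample_images, label_defect_categories)
```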
In one embodiment, the training sample set may be subdivided into a first training set and a second training set. The first training sample set comprises a plurality of first label defect category groups corresponding to a plurality of preset proportions one by one, each first label defect category group comprises a plurality of first sample image groups, and each first sample image group comprises a first sample image and a first label defect category corresponding to the first sample image group;
the second training sample set comprises a plurality of second label defect category groups, each second label defect category group comprises a plurality of second sample image groups with the same preset proportion, wherein each second sample image group comprises a second sample image and a corresponding second label defect category thereof.
In one example, taking liquid crystal panel images, sample images of scratch defects, abrasion defects and bubble defects may be acquired at a preset ratio of 1:2:3, for example, 100 sample images whose label defect category is scratch defect, 200 whose label defect category is abrasion defect, and 300 whose label defect category is bubble defect. The 100 sample images labeled scratch defect can form one first label defect category group, in which each scratch-defect sample image together with its corresponding label defect category (scratch defect) forms a first sample image group. The 200 sample images labeled abrasion defect can form another first label defect category group, and so on.
In another embodiment, still taking liquid crystal panel images as an example, sample images of the label defect categories scratch defect, abrasion defect and bubble defect may be acquired according to the same preset proportion, for example, 100 sample images labeled scratch defect, 100 labeled abrasion defect, and 100 labeled bubble defect. The second label defect category groups and second sample image groups are determined in the same way as the first label defect category groups and first sample image groups described above, and are not repeated here.
In addition, in one embodiment, the second training sample set may acquire sample images of each of the second tag defect categories from the first training sample set at the same preset ratio.
In this way, the first network of the preset defect type recognition model can be trained on the first training sample set, i.e., the unbalanced training sample set, and the second network can be trained on the second training sample set, i.e., the balanced training sample set. The preset model is thus trained better, a more accurate defect type recognition model is obtained, and the accuracy of defect type identification is improved.
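Under the arrangement described above (the second, balanced set drawing the same number of images per category from the first, unbalanced set), the split could be sketched as follows; the counts, labels and file names are illustrative only:

```python
import random

def build_balanced_subset(first_set, per_category, seed=0):
    """first_set: dict mapping label defect category -> list of sample images.
    Return a balanced dict with `per_category` images sampled per category."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    return {label: rng.sample(images, per_category)
            for label, images in first_set.items()}

# First training sample set: unbalanced, 1:2:3 preset ratio (invented data).
first_training_set = {
    "scratch":  [f"scratch_{i}.png" for i in range(100)],
    "abrasion": [f"abrasion_{i}.png" for i in range(200)],
    "bubble":   [f"bubble_{i}.png" for i in range(300)],
}

# Second training sample set: balanced, 100 images per category.
second_training_set = build_balanced_subset(first_training_set, per_category=100)
```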
2. And training a preset defect type recognition model by using a sample image group in the training sample set to obtain the defect type recognition model.
As shown in fig. 2, this step may include the steps of:
s210, inputting the first sample image group into a first network in a preset defect type identification model, and determining a reference feature vector of the first sample image.
The first sample image group includes a first sample image and its corresponding first label defect category; the sample images described above include the first sample images, and the first sample images are acquired in the same way as the sample images, which is not repeated here. The first network in the preset defect type recognition model may be a network that effectively maintains both the stability and the plasticity of defect type identification, i.e., that effectively identifies new defect categories while retaining old ones; it may include, for example, an adaptive aggregation network. The reference feature vector comprises a plurality of features characterizing the whole and/or part of the first sample image; it may include, for example, the contrast and brightness of the whole sample image, and the contrast, brightness, size, position and color of the target defect.
Specifically, a first sample image group is input into a first network in a preset defect type identification model, and a plurality of features are extracted from the first sample image through the first network to determine a reference feature vector of the first sample image.
In this way, the first network in the preset defect type recognition model, such as the adaptive aggregation network, can accurately distinguish a newly added defect category from an old one even when their similarity is large, improving the accuracy of defect type identification. For example, a scratch defect is quite similar to an abrasion defect, and a bubble defect is quite similar to a color particle defect. The adaptive aggregation network can effectively amplify the differences between defect categories, so that defect categories such as scratch defects, bubble defects and color particle defects can be distinguished accurately.
In particular, the structure and principle of the adaptive aggregation network are described in detail below. The adaptive aggregation network comprises a two-level feature extraction structure and an adaptive aggregation weight learning strategy; the two-level feature extraction structure comprises a stable module, a plastic module and neuron-level scaling weights. As shown in fig. 3, each residual block in the network can be split into two components: a plastic block, whose parameters are fully trainable in order to adapt to the new defect categories, and a stable block, whose parameters are partially fixed in order to maintain recognition of the old defect categories. Here x is a feature map, α_s is the adaptive weight of the stable block, and α_p is the adaptive weight of the plastic block. The stable block has fewer learnable parameters, while the plastic block has more. Let p and s denote the network parameters of the plastic block and the stable block, respectively: p contains all convolution weights, while s contains only the neuron-level scaling weights.
The scaling weights can be applied to the old-class network model θ_base, i.e., the model obtained by training on the previous 0 to i-1 stages of old defect categories. Because θ_base is an existing model, the number of network parameters in s is much smaller than the number in p. For example, when θ_base uses 3×3 convolution kernels, the number of learnable scaling parameters in s is only 1/(3×3) of the number of full convolution weights.
The neuron-level scaling weights in the adaptive aggregation network work as follows: for the stable block, its network parameters are learned at stage 0 and then frozen in the other N stages; in these N stages, the weight parameters within the block are scaled using a small set of neuron-level scaling weights.
The aim is to preserve the internal structural pattern of the neurons while adapting the knowledge of the whole block to the newly added class data. Let layer k of the network contain R weights, denoted W_k = {w_k^r}, r = 1, …, R, abbreviated W_k. For W_k, the stable block additionally learns R scaling weights, denoted S_k. Meanwhile, let the input and output feature maps of layer k be X_{k-1} and X_k, respectively. Applying the scaling weights S_k to W_k can then be expressed as:
X_k = (W_k ⊙ S_k) X_{k-1}  (1)
where ⊙ denotes element-wise multiplication.
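Equation (1) can be checked numerically with a toy example, treating W_k as a small weight matrix, S_k as its neuron-level scaling weights, and X_{k-1} as an input vector (all values are invented for illustration):

```python
def elementwise(w, s):
    """W_k ⊙ S_k: element-wise product of a weight matrix and its scaling weights."""
    return [[wv * sv for wv, sv in zip(wrow, srow)] for wrow, srow in zip(w, s)]

def matvec(m, x):
    """Apply the (scaled) weight matrix to the input feature vector X_{k-1}."""
    return [sum(mv * xv for mv, xv in zip(row, x)) for row in m]

W_k = [[1.0, 2.0],
       [3.0, 4.0]]      # full convolution weights (toy 2x2 matrix)
S_k = [[0.5, 0.5],
       [1.0, 1.0]]      # neuron-level scaling weights learned by the stable block
X_prev = [1.0, 1.0]     # input feature map X_{k-1}

X_k = matvec(elementwise(W_k, S_k), X_prev)  # equation (1): X_k = (W_k ⊙ S_k) X_{k-1}
```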
Assuming the overall network has K layers in total, the scaling weights at all neuron levels can be expressed by equation (2):
S = {S_k}, k = 1, …, K  (2)
Fig. 3 also shows the feature extraction and aggregation process of the network, which spans all residual layers in the adaptive aggregation network.
Let F^[k](·) denote the feature extraction transform function of the residual block at layer k. Given a batch of training data x^[0], after passing through the stable block and the plastic block of the k-th residual block respectively, the feature maps are given by equation (3):
x_s^[k] = F_s^[k](x^[k-1]), x_p^[k] = F_p^[k](x^[k-1])  (3)
where x_s^[k] and x_p^[k] are the feature maps of the stable block and the plastic block at the k-th layer, respectively, and F_s^[k](·) and F_p^[k](·) are the feature extraction transform functions of the stable block and the plastic block at the k-th layer, respectively.
Let α_s^[k] and α_p^[k] denote the aggregation weights of the stable block and the plastic block at the k-th layer, respectively. The output feature map of the k-th layer is then given by equation (4):
x^[k] = α_s^[k] x_s^[k] + α_p^[k] x_p^[k]  (4)
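The aggregation in equation (4) amounts to a weighted sum of the two block outputs; a toy sketch with invented feature values:

```python
def aggregate(x_stable, x_plastic, alpha_s, alpha_p):
    """Equation (4): combine stable-block and plastic-block feature maps
    with aggregation weights alpha_s and alpha_p."""
    return [alpha_s * s + alpha_p * p for s, p in zip(x_stable, x_plastic)]

x_s = [1.0, 2.0, 3.0]   # stable-block feature map x_s^[k]
x_p = [3.0, 2.0, 1.0]   # plastic-block feature map x_p^[k]
x_out = aggregate(x_s, x_p, alpha_s=0.5, alpha_p=0.5)  # layer output x^[k]
```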
in addition, it should be appreciated that the adaptive aggregation network needs to optimize two sets of learnable parameters at each incremental stage: (a) Neuron-level scaling weights for the stable block and convolution weights for the moldable block; (b) feature set weight α. The former belongs to the network weight parameter, and the latter belongs to the super parameter. In the present invention, we express the whole optimization process as a two-layer optimization process.
That is, in the adaptive aggregation network, the network parameters s, p are trained with the aggregation weights α treated as fixed hyperparameters; conversely, the aggregation weights α are updated while the network parameters s, p are temporarily fixed. The optimality of [s, p] thus places constraints on α, and vice versa.
Ideally, in the i-th incremental stage, the goal of model training is to learn the optimal aggregation weights α and network parameters s, p that minimize the classification loss on the training sample set; here, the training set for the adaptive aggregation network is the first training sample set. The overall loss of the two-level optimization strategy can be expressed as equations (5) and (6):
α_i = argmin_α L(α; ŝ_i, p̂_i; ε_0:i-1 ∪ ε_i)  (5)
[ŝ_i, p̂_i] = argmin_{s,p} L(s, p; α_i; ε_0:i-1 ∪ ε_i)  (6)
where L(·) denotes the loss function, which may be a cross-entropy loss; ε_0:i-1 ∪ ε_i denotes the first training sample set; α_i is the aggregation weight at the i-th incremental stage; s_i and p_i are the network parameters of the stable block and the plastic block at the i-th incremental stage, respectively; and ŝ_i and p̂_i are the estimated values of the network parameters of the stable block and the plastic block at the i-th incremental stage, respectively.
In this bi-level optimization process, the phase in which the aggregation weights α are trained with the network parameters s, p temporarily fixed may be called the upstream phase; the phase in which s, p are trained with the aggregation weights α fixed is called the downstream phase.
As shown in FIG. 4, when training the aggregation weights, the second training sample set is used to adaptively learn and update α_i, which balances the stable block and the plastic block; when training the network parameters, the first training sample set is used to train the network parameters [s_i, p_i].
In this way, the parameters of the adaptive aggregation network can be trained on the training sets, yielding a more accurate adaptive aggregation network.
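A minimal numerical sketch of this alternating bi-level optimization, using scalar stand-ins for the network parameter and the aggregation weight (both quadratic losses are illustrative, not the patent's actual losses):

```python
# Downstream phase: update the network parameter w on the first
# (training) sample set with the aggregation weight alpha fixed.
# Upstream phase: update alpha on the second (validation-style) sample
# set with w fixed. Both losses are illustrative quadratics.

def train_loss(alpha, w):   # stands in for L on the first sample set
    return (alpha * w - 1.0) ** 2

def val_loss(alpha, w):     # stands in for L on the second sample set
    return (alpha * w - 0.9) ** 2

alpha, w, lr = 0.5, 0.5, 0.1
for _ in range(100):
    grad_w = 2.0 * (alpha * w - 1.0) * alpha   # d train_loss / d w
    w -= lr * grad_w                           # downstream phase
    grad_a = 2.0 * (alpha * w - 0.9) * w       # d val_loss / d alpha
    alpha -= lr * grad_a                       # upstream phase
```

The alternation settles the product alpha·w between the two targets, mirroring how the optimality of [s, p] constrains α and vice versa.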
S220, inputting the reference feature vector and the second sample image group into a second network in a preset defect type identification model, and determining the respective reference defect types of the first sample image and the second sample image.
The second network may be a network that weakens the differences between defect types through a linear structure when the newly added defect types differ greatly from the old defect types; for example, it may be a bias correction network. The reference defect types include reference surface defects and reference internal defects. Taking an acquired image of a liquid crystal panel as an example, the reference defect type contained in the image may be a reference surface defect appearing on the surface of the object, such as a scratch defect, or a reference internal defect of the internal structure of the object, such as a color particle defect or a bubble defect.
Specifically, the reference feature vectors and the second sample image groups are input into the second network of the preset defect type identification model, and the parameters of the second network are estimated from the input second sample image groups. The parameter-estimated second network can then identify the respective reference defect types of the first sample images and the second sample images, based on the features of the first sample images contained in the reference feature vectors and the features of the second sample images acquired while training the second network.
Therefore, the differences between defect types can be weakened by the second network of the defect type identification model. This avoids the inaccurate identification that arises when the first network favors defect types with large sample sizes in cases where the newly added defect types differ greatly from the old ones, and thereby improves the accuracy of defect type identification.
Furthermore, when the difference between the newly added defect types and the old defect types is small and hard to distinguish, the feature-strengthening property of the first network of the defect type identification model amplifies the difference between them, so that the newly added and old defect types can be accurately distinguished. Conversely, the second network of the defect type identification model compensates for the inaccurate identification caused by the first network when the difference between the newly added and old defect types is large. Together, the two networks accurately distinguish newly added defect types from old ones and improve the accuracy of defect type identification.
In addition, it should be noted that the first network and the second network of the preset defect type identification model are connected through the fully connected layer of the classifier. Specifically, after the first network of the preset defect type identification model outputs the reference feature vectors, a bias correction network added after the fully connected layer of the classifier performs defect type identification. The bias correction network is trained by freezing the already-trained adaptive aggregation network and classifier, then estimating the bias parameters using the second training sample set.
Since the old-class data has a small sample size, the bias correction network is designed as a simple linear model with few parameters to correct the bias introduced by the adaptive aggregation network. The newly added defect classes (i, …, i+m) are corrected by retaining the output logits (0, …, i-1) of the old defect classes and applying a linear model, where i and m are both positive integers, as shown in the following equation (7):

Out_k = O_k,           for 0 ≤ k ≤ i-1
Out_k = α·O_k + β,     for i ≤ k ≤ i+m    (7)

wherein Out_k is the output logit of the bias correction network, α and β are the bias parameters of the newly added defect classes, and O_k is the output of the k-th class. The bias parameters (α, β) are shared by all the newly added defect classes, which allows them to be estimated on the validation set, namely the second training sample set. When the bias parameters are optimized, the adaptive aggregation network and the classifier are frozen. The classification loss may be calculated using a softmax function to optimize the bias parameters, as shown in the following equation (8):

L_b = -Σ_k δ_{y=k} · log(softmax(Out_k))    (8)
wherein δ_{y=k} is the preset coefficient of the k-th defect class, whose value lies in the range [-1, 1].
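A hedged sketch of equations (7) and (8) in Python (the logit values, parameter values, and class split below are made up for illustration):

```python
import numpy as np

def bias_correct(logits, num_old, a, b):
    # Equation (7): keep the logits of the old classes unchanged and
    # apply the linear model a * o + b to the newly added classes.
    out = logits.copy()
    out[num_old:] = a * logits[num_old:] + b
    return out

def softmax_ce(logits, label):
    # Equation (8): cross-entropy over the softmax of corrected logits.
    z = logits - logits.max()            # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

o = np.array([2.0, 1.0, 3.0, 4.0])       # classes 0-1 old, 2-3 new
out = bias_correct(o, num_old=2, a=0.5, b=-0.2)
loss = softmax_ce(out, label=3)
```

Only the two scalars a and b are optimized at this stage, with the feature extractor and classifier frozen, which is why a small validation set suffices to estimate them.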
S230, determining a loss function value of a preset defect type recognition model according to the reference defect type of the target sample image and the label defect type of the target sample image.
The target sample image is any sample image in the sample image groups. Specifically, the loss function value of the preset defect type identification model is determined based on the reference defect type finally obtained for the target sample image and its previously manually annotated label defect type.
S240, when the loss function value does not meet the training stop condition, adjusting the model parameters of the defect type recognition model, and training the parameter-adjusted defect type recognition model with the sample image groups, to obtain the trained defect type recognition model.
In order to obtain a trained defect type recognition model, when the loss function value does not meet the training stop condition, the model parameters of the defect type recognition model are adjusted, and the parameter-adjusted defect type recognition model is trained with the sample image groups until the loss function value meets the training stop condition, thereby obtaining an accurate defect type recognition model.
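The train-until-stop logic of S240 can be sketched as follows (the decreasing per-epoch loss sequence is hypothetical):

```python
def train_until_converged(model_step, stop_loss, max_epochs=100):
    # Repeat parameter adjustment and training until the loss function
    # value meets the training-stop condition (loss <= stop_loss).
    loss = float("inf")
    for epoch in range(max_epochs):
        loss = model_step()        # one round of training + adjustment
        if loss <= stop_loss:
            return epoch, loss
    return max_epochs, loss

# Hypothetical per-epoch losses of a model that is converging:
losses = iter([0.50, 0.30, 0.08])
epoch, final = train_until_converged(lambda: next(losses), stop_loss=0.10)
```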
Based on the defect type recognition model obtained in the above embodiment, the present application further provides a specific embodiment of the defect type recognition method, described in detail below with reference to fig. 5.
Fig. 5 is a flowchart of a defect type identification method according to an embodiment of the present application.
As shown in fig. 5, the method may be performed by a defect type identification device, and the defect type identification method may include the following steps:
s510, acquiring an image to be processed.
The image to be processed may be an image currently captured by a camera provided in the electronic device, or an image to be processed stored in the electronic device. Correspondingly, the image to be processed may be acquired through the camera of the electronic device, or obtained directly from the image library of the electronic device. The electronic device here refers to a device with a camera function.
S520, inputting the image to be processed into a first network of a pre-trained defect type recognition model, and determining the feature vector of the image to be processed.
Wherein the feature vector comprises a plurality of features for characterizing the whole and/or part of the image to be processed. The feature vector may include, for example, the contrast, brightness, etc. of the entire sample image, as well as a plurality of features of the contrast, brightness, size, location, color, etc. of the target defect.
Therefore, according to the first network in the pre-trained defect type recognition model, newly added defect types and old defect types can be accurately distinguished even when the similarity between them is large, improving the accuracy of defect type identification. For example, different scratch-like defects are highly similar to one another, and a bubble defect is highly similar to a color particle defect. The first network can therefore effectively amplify the differences between defect types to accurately distinguish the newly added defect types from the old ones.
S530, inputting the feature vector into a second network of the defect type recognition model, and determining the defect type of the image to be processed.
Wherein the defect categories include surface defects and internal defects. Surface defects are characterized as defects that appear on the surface of an object, and internal defects are characterized as defects of the internal structure of the object.
In one embodiment, taking a plurality of sample images of a liquid crystal panel as an example, the defect type may be a surface defect such as a scratch defect appearing on the surface of the liquid crystal panel, or an internal defect such as a color particle defect or a bubble defect existing in the internal structure of the liquid crystal panel.
In some embodiments, the first network of defect class identification models comprises an adaptive aggregation network and the second network of defect class identification models comprises a bias correction network.
After the image to be processed is acquired, the acquired image to be processed is input into a first network of a pre-trained defect type recognition model to determine the feature vector of the image to be processed, and the feature vector is input into a second network of the pre-trained defect type recognition model to obtain the defect type of the image to be processed, so that the newly added defect type and the old defect type in the image to be processed can be distinguished conveniently, and the accuracy of defect type recognition is improved.
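The two-stage inference of S510-S530 can be sketched end to end; everything below (the statistics-based "feature extractor", the toy classifier weights, and the class names) is a hypothetical stand-in for the trained first and second networks:

```python
import numpy as np

def first_network(image):
    # Stand-in feature extractor: global image statistics as features.
    return np.array([image.mean(), image.std(), image.max(), image.min()])

def second_network(features, weight, bias):
    # Stand-in classifier: linear layer followed by argmax.
    logits = weight @ features + bias
    return int(np.argmax(logits))

CLASSES = ["scratch", "bubble", "color_particle"]   # illustrative labels

img = np.random.default_rng(0).random((32, 32))     # image to be processed
feat = first_network(img)                            # S520: feature vector
pred = second_network(feat, np.eye(3, 4), np.zeros(3))  # S530: defect type
label = CLASSES[pred]
```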
Based on the same inventive concept, the embodiment of the application also provides a defect type identifying device, described in detail below with reference to fig. 6.
Fig. 6 is a schematic structural diagram of a defect type identifying device according to an embodiment of the present application.
As shown in fig. 6, the defect class identification device 600 may include: an acquisition module 610 and a determination module 620.
An acquiring module 610, configured to acquire an image to be processed;
a determining module 620, configured to input the image to be processed into a first network of pre-trained defect class identification models, and determine a feature vector of the image to be processed, where the feature vector includes a plurality of features for characterizing an entirety and/or a part of the image to be processed;
the determining module 620 is further configured to input the feature vector into the second network of the defect class identification model, and determine a defect class of the image to be processed, where the defect class includes a surface defect and an internal defect.
In some embodiments, the first network comprises an adaptive aggregation network and the second network comprises a bias correction network.
In some embodiments, the obtaining module is configured to obtain a training sample set, where the training sample set includes a plurality of sample image groups, each sample image group including a sample image and its corresponding tag defect class;
The device also comprises a training module:
the training module is used for training a preset defect type recognition model by utilizing the sample image group in the training sample set to obtain the defect type recognition model.
In some embodiments, the training sample set includes a first training sample set and a second training sample set, where the first training sample set includes a plurality of first label defect class groupings corresponding to a plurality of preset proportions one to one, each first label defect class grouping includes a plurality of first sample image groups, each first sample image group includes a first sample image and its corresponding first label defect class;
the second training sample set comprises a plurality of second label defect category groups, each second label defect category group comprises a plurality of second sample image groups with the same preset proportion, wherein each second sample image group comprises a second sample image and a corresponding second label defect category thereof.
In some embodiments, the training module is specifically configured to:
for each sample image group, the following steps are performed:
inputting a first sample image group into a first network in a preset defect type identification model, and determining a reference feature vector of each first sample image, wherein the reference feature vector comprises a plurality of features for representing the whole and/or part of the first sample image;
Inputting the reference feature vector and the second sample image group into a second network in a preset defect type identification model, and determining the respective reference defect types of the first sample image and the second sample image, wherein the reference defect types comprise reference surface defects and reference internal defects;
determining a loss function value of a preset defect type recognition model according to a reference defect type of a target sample image and a label defect type of the target sample image, wherein the target sample image is any one of a sample image group;
and under the condition that the loss function value does not meet the training stop condition, adjusting the model parameters of the defect type recognition model, and training the defect type recognition model after parameter adjustment by using the sample image group until the loss function value meets the preset training condition, so as to obtain the defect type recognition model.
After the image to be processed is acquired, the acquired image to be processed is input into a first network of a pre-trained defect type recognition model to determine the feature vector of the image to be processed, and the feature vector is input into a second network of the pre-trained defect type recognition model to obtain the defect type of the image to be processed, so that the newly added defect type and the old defect type in the image to be processed can be distinguished conveniently, and the accuracy of defect type recognition is improved.
Each module in the defect type identifying device provided in the embodiment of the present application may implement the method steps in the embodiment shown in fig. 5, and may achieve the technical effects corresponding to the method steps, which are not described herein for brevity.
Fig. 7 is a schematic structural diagram of a defect type identifying apparatus according to an embodiment of the present application.
As shown in fig. 7, the defect type identifying device 700 in the present embodiment includes an input device 701, an input interface 702, a central processor 703, a memory 704, an output interface 705, and an output device 706. The input interface 702, the central processing unit 703, the memory 704, and the output interface 705 are connected to each other through a bus 710, and the input device 701 and the output device 706 are connected to the bus 710 through the input interface 702 and the output interface 705, respectively, and further connected to other components of the defect type identifying device 700.
Specifically, the input device 701 receives input information from the outside, and transmits the input information to the central processor 703 through the input interface 702; the central processor 703 processes the input information based on computer executable instructions stored in the memory 704 to generate output information, temporarily or permanently stores the output information in the memory 704, and then transmits the output information to the output device 706 through the output interface 705; the output device 706 outputs the output information to the outside of the defect class identification device 700 for use by the user.
In one embodiment, the defect class identification device 700 shown in fig. 7 includes: a memory 704 for storing a program; the processor 703 is configured to execute a program stored in the memory, so as to perform a method of the embodiment shown in fig. 5 provided in the embodiment of the present application.
Embodiments of the present application also provide a computer-readable storage medium having computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement the method of the embodiment shown in fig. 5 provided in the embodiments of the present application.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and one skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, Read-Only Memory (ROM), flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be different from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, which are intended to be included in the scope of the present application.

Claims (10)

1. A defect type identifying method, comprising:
acquiring an image to be processed;
inputting the image to be processed into a first network of a pre-trained defect type recognition model, and determining a feature vector of the image to be processed, wherein the feature vector comprises a plurality of features for representing the whole and/or part of the image to be processed;
Inputting the feature vector into a second network of the defect type recognition model, and determining the defect type of the image to be processed, wherein the defect type comprises surface defects and internal defects;
the first network of defect type identification models comprises k residual blocks, each residual block comprises a stable block and a plastic block, the stable block is used for identifying old defect type samples, and the plastic block is used for adapting to newly added defect type samples;
the image to be processed is input into a first network of a pre-trained defect type recognition model, and the feature vector of the image to be processed is determined, so that the following formula is satisfied:

x^[k] = α_s^[k] · F_s^[k](x^[k-1]) + α_p^[k] · F_p^[k](x^[k-1])

wherein x_s^[k] and x_p^[k] are the feature vectors of the stable block and the plastic block in the k-th layer, respectively, and F_s^[k] and F_p^[k] are the feature extraction transform functions of the stable block and the plastic block at the k-th layer, respectively;
α_s^[k] and α_p^[k] represent the aggregation weights of the stable block and the plastic block in the k-th layer, respectively, and x^[k] is the feature vector of the image to be processed;
the loss function of the first network of the defect class identification model satisfies the following formulas:

α_i* = argmin_{α_i} L(α_i, ŝ_i, p̂_i; ε_0:i-1 ∪ ε_i)

[ŝ_i, p̂_i] = argmin_{s_i, p_i} L(α_i, s_i, p_i; ε_0:i-1 ∪ ε_i)

wherein L(·) represents a loss function, which may be a cross-entropy loss; ε_0:i-1 ∪ ε_i represents a first training sample set; α_i is the aggregation weight of the i-th incremental stage; s_i and p_i are the network parameters of the stable block and the plastic block at the i-th incremental stage, respectively; and ŝ_i and p̂_i are the estimated values of the network parameters of the stable block and the plastic block at the i-th incremental stage, respectively;
the second network of the defect class identification model is capable of retaining the output logits (0, …, i-1) of the old defect class samples and applying a linear model to correct the newly added defect class samples (i, …, i+m), where i and m are both positive integers, the second network of the defect class identification model satisfying the following formula:

Out_k = O_k,           for 0 ≤ k ≤ i-1
Out_k = α·O_k + β,     for i ≤ k ≤ i+m

wherein Out_k is the output logit of the second network of the defect class identification model, α and β are the bias parameters of the newly added defect class samples, and O_k is the output of the k-th class; the bias parameters α, β are shared by all the newly added defect class samples and can be estimated through the training sample set.
2. The method of claim 1, wherein the first network comprises an adaptive aggregation network and the second network comprises a bias correction network.
3. The method of claim 1, wherein prior to inputting the image to be processed into the first network of pre-trained defect class identification models, the method further comprises:
Acquiring a training sample set, wherein the training sample set comprises a plurality of sample image groups, and each sample image group comprises a sample image and a corresponding label defect category thereof;
and training a preset defect type recognition model by using the sample image group in the training sample set to obtain the defect type recognition model.
4. The method of claim 3, wherein the training sample set comprises a first training sample set and a second training sample set, wherein the first training sample set comprises a plurality of first label defect class groupings in one-to-one correspondence with a plurality of preset proportions, each of the first label defect class groupings comprising a plurality of first sample image groups, each of the first sample image groups comprising a first sample image and its corresponding first label defect class;
the second training sample set comprises a plurality of second label defect category groups, each second label defect category group comprises a plurality of second sample image groups with the same preset proportion, and each second sample image group comprises a second sample image and a second label defect category corresponding to the second sample image group.
5. The method of claim 4, wherein training a preset defect class identification model using the set of sample images in the training sample set to obtain a trained defect class identification model, comprises:
For each sample image group, the following steps are performed:
inputting the first sample image group into a first network in a preset defect type identification model, and determining a reference feature vector of each first sample image, wherein the reference feature vector comprises a plurality of features for representing the whole and/or part of the first sample image;
inputting the reference feature vector and the second sample image group into a second network in a preset defect type identification model, and determining the reference defect type of each of the first sample image and the second sample image, wherein the reference defect type comprises a reference surface defect and a reference internal defect;
determining a loss function value of the preset defect type recognition model according to a reference defect type of a target sample image and a label defect type of the target sample image, wherein the target sample image is any one of the sample image groups;
and under the condition that the loss function value does not meet the training stop condition, adjusting the model parameters of the defect type recognition model, and training the parameter-adjusted defect type recognition model with the sample image groups until the loss function value meets the preset training condition, so as to obtain the defect type recognition model.
6. A defect type identifying apparatus, comprising:
the acquisition module is used for acquiring the image to be processed;
a determining module, configured to input the image to be processed into a first network of a pre-trained defect class identification model, and determine a feature vector of the image to be processed, where the feature vector includes a plurality of features for characterizing an entirety and/or a part of the image to be processed;
the determining module is further configured to input the feature vector into a second network of the defect type recognition model, and determine a defect type of the image to be processed, where the defect type includes a surface defect and an internal defect;
the first network of defect type identification models comprises k residual blocks, each residual block comprises a stable block and a plastic block, the stable block is used for identifying old defect type samples, and the plastic block is used for adapting to newly added defect type samples;
the image to be processed is input into a first network of a pre-trained defect type recognition model, and the feature vector of the image to be processed is determined, so that the following formula is satisfied:

x^[k] = α_s^[k] · F_s^[k](x^[k-1]) + α_p^[k] · F_p^[k](x^[k-1])

wherein x_s^[k] and x_p^[k] are the feature vectors of the stable block and the plastic block in the k-th layer, respectively, and F_s^[k] and F_p^[k] are the feature extraction transform functions of the stable block and the plastic block at the k-th layer, respectively;
α_s^[k] and α_p^[k] represent the aggregation weights of the stable block and the plastic block in the k-th layer, respectively, and x^[k] is the feature vector of the image to be processed;
the loss function of the first network of the defect class identification model satisfies the following formulas:

α_i* = argmin_{α_i} L(α_i, ŝ_i, p̂_i; ε_0:i-1 ∪ ε_i)

[ŝ_i, p̂_i] = argmin_{s_i, p_i} L(α_i, s_i, p_i; ε_0:i-1 ∪ ε_i)

wherein L(·) represents a loss function, which may be a cross-entropy loss; ε_0:i-1 ∪ ε_i represents a first training sample set; α_i is the aggregation weight of the i-th incremental stage; s_i and p_i are the network parameters of the stable block and the plastic block at the i-th incremental stage, respectively; and ŝ_i and p̂_i are the estimated values of the network parameters of the stable block and the plastic block at the i-th incremental stage, respectively;
the second network of the defect class identification model is capable of retaining the output logits (0, …, i-1) of the old defect class samples and applying a linear model to correct the newly added defect class samples (i, …, i+m), where i and m are both positive integers, the second network of the defect class identification model satisfying the following formula:

Out_k = O_k,           for 0 ≤ k ≤ i-1
Out_k = α·O_k + β,     for i ≤ k ≤ i+m

wherein Out_k is the output logit of the second network of the defect class identification model, α and β are the bias parameters of the newly added defect class samples, and O_k is the output of the k-th class; the bias parameters α, β are shared by all the newly added defect class samples and can be estimated through the training sample set.
7. The apparatus of claim 6, wherein the first network comprises an adaptive aggregation network and the second network comprises a bias correction network.
8. The apparatus according to claim 6, comprising:
the acquisition module is used for acquiring a training sample set, wherein the training sample set comprises a plurality of sample image groups, and each sample image group comprises a sample image and a corresponding label defect category thereof;
the device further comprises a training module:
the training module is used for training a preset defect type recognition model by using the sample image group in the training sample set to obtain the defect type recognition model.
9. A defect class identification device, the device comprising: a processor and a memory storing computer program instructions;
the processor reads and executes the computer program instructions to implement the defect class identification method according to any of claims 1-5.
10. A computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the defect classification method of any of claims 1-5.
CN202110912056.4A 2021-08-10 2021-08-10 Defect category identification method, device, equipment and medium Active CN113706477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110912056.4A CN113706477B (en) 2021-08-10 2021-08-10 Defect category identification method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN113706477A CN113706477A (en) 2021-11-26
CN113706477B true CN113706477B (en) 2024-02-13

Family

ID=78652065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110912056.4A Active CN113706477B (en) 2021-08-10 2021-08-10 Defect category identification method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113706477B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024020994A1 (en) * 2022-07-29 2024-02-01 宁德时代新能源科技股份有限公司 Training method and training device for defect detection model of battery cell

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583489A (en) * 2018-11-22 2019-04-05 中国科学院自动化研究所 Defect classifying identification method, device, computer equipment and storage medium
CN109671071A (en) * 2018-12-19 2019-04-23 南京市测绘勘察研究院股份有限公司 Underground pipeline defect location and grade determination method based on deep learning
CN110097543A (en) * 2019-04-25 2019-08-06 东北大学 Hot-rolled strip surface defect detection method based on generative adversarial network
CN112036517A (en) * 2020-11-05 2020-12-04 中科创达软件股份有限公司 Image defect classification method and device and electronic equipment
CA3053894A1 (en) * 2019-07-19 2021-01-19 Inspectorio Inc. Defect prediction using historical inspection data
CN113155851A (en) * 2021-04-30 2021-07-23 西安交通大学 Copper-clad plate surface defect visual online detection method and device based on deep learning

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN108846841A (en) * 2018-07-02 2018-11-20 北京百度网讯科技有限公司 Display screen quality determining method, device, electronic equipment and storage medium
US11256967B2 (en) * 2020-01-27 2022-02-22 Kla Corporation Characterization system and method with guided defect discovery

Non-Patent Citations (2)

Title
Wafer map defect recognition based on transfer learning and deep forest; Shen Zongli; Yu Jianbo; Journal of Zhejiang University (Engineering Science) (06); full text *
Research on bottle-bottom defect detection method for an empty-bottle inspection robot; Fan Tao; Zhu Qing; Wang Yaonan; Zhou Xian'en; Liu Yuanqiang; Journal of Electronic Measurement and Instrumentation (09); full text *

Also Published As

Publication number Publication date
CN113706477A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN111563706A (en) Multivariable logistics freight volume prediction method based on LSTM network
CN111882055B (en) Method for constructing target detection self-adaptive model based on cycleGAN and pseudo label
CN111079780B (en) Training method for space diagram convolution network, electronic equipment and storage medium
CN108875933B (en) Over-limit learning machine classification method and system for unsupervised sparse parameter learning
CN111239137B (en) Grain quality detection method based on transfer learning and adaptive deep convolution neural network
CN109034175B (en) Image processing method, device and equipment
CN113706477B (en) Defect category identification method, device, equipment and medium
CN110598753A (en) Defect identification method based on active learning
CN111274821B (en) Named entity identification data labeling quality assessment method and device
CN115146761A (en) Defect detection model training method and related device
CN116129219A (en) SAR target class increment recognition method based on knowledge robust-rebalancing network
CN114386527B (en) Category regularization method and system for domain adaptive target detection
CN108875962A (en) Core ridge regression on-line study method based on fixed budget
CN116091389A (en) Image detection method based on classification model, electronic equipment and medium
CN112598082B (en) Method and system for predicting generalized error of image identification model based on non-check set
CN114646328A (en) Method, device, equipment and medium for determining path information
CN113869463A (en) Long tail noise learning method based on cross enhancement matching
CN113487577A (en) GRU-CNN combined model-based rapid Gamma adjustment method, system and application
CN113076823A (en) Training method of age prediction model, age prediction method and related device
CN111639542A (en) License plate recognition method, device, equipment and medium
CN113313179B (en) Noise image classification method based on l2p norm robust least square method
CN117422960B (en) Image recognition continuous learning method based on meta learning
CN115082955B (en) Deep learning global optimization method, recognition method, device and medium
CN114882298B (en) Optimization method and device for confrontation complementary learning model
CN113591781B (en) Image processing method and system based on service robot cloud platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant