CN113095400A - Deep learning model training method for machine vision defect detection - Google Patents
- Publication number
- CN113095400A (Application No. CN202110384472.1A)
- Authority
- CN
- China
- Prior art keywords
- defect
- image
- images
- appearance
- product
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
Abstract
The invention discloses a deep learning model training method for machine vision defect detection. The method trains a model from a small number of defective product appearance images and a large number of normal product appearance images, using a negative sample amplification method to expand the few defective images. Specifically, the method comprises the following steps: cropping the defective region out of a defective product appearance image to form a defect image; applying amplification processing to the defect images to produce a large number of artificial defect images; fusing each artificial defect image into a random position in a normal product appearance image to form an artificial defective product appearance image; and adding the artificial defective product appearance images to the training sample set for model training. The proposed negative sample amplification method can rapidly expand a scarce set of negative samples, is particularly suited to augmenting appearance defect samples of products with irregular appearance, markedly improves model training, and raises the detection rate of defective products.
Description
Technical Field
The invention relates to the technical field of deep learning, in particular to a deep learning model training method for machine vision defect detection.
Background
In industrial production, defect detection of workpieces or products is an essential step: identifying whether a workpiece or product has defects and, if so, of what type. In traditional industry, surface defect inspection still largely relies on manual inspection, which is affected by the individual condition of the workers; efficiency and quality are hard to guarantee, and manual inspection can no longer meet industrial requirements.
Many manufacturers invest considerable manpower, material and financial resources in quality inspection, researching how to improve product surface defect detection. Machine vision uses computer image processing to detect product appearance defects automatically. The technology has matured, and its cost has fallen to a level that even small manufacturers can afford. A machine vision inspection system can achieve an accuracy and consistency unattainable by manual labour. Manufacturers can therefore meet their inspection needs with machine-vision-based surface defect detection, overcoming the low sampling rate, low accuracy, poor real-time performance and low efficiency of manual inspection.
Traditional machine vision typically uses image processing algorithms — threshold segmentation, morphological operations, connected region extraction and the like — to extract image regions that may contain product defects, and then classifies those regions with algorithms such as Bayesian networks or support vector machines to decide whether they are defects. Its drawback is that it is not end-to-end: the characteristics of each defect type must be analysed, and a machine learning algorithm tuned through repeated testing and parameter adjustment before a well-performing algorithm is obtained. This process requires considerable manpower and places high experience demands on the algorithm engineer.
Deep learning has developed rapidly in recent years and has greatly improved the accuracy of image recognition and object detection. Applying deep learning to machine vision has become a clear trend. Deep learning enables end-to-end algorithms: no elaborate hand-crafted image processing pipeline needs to be designed, and a model can be trained given enough training samples. The input to the deep learning model is an image; the output is the classification of an object or the location of a target in the image.
However, applying deep learning to defect detection faces a shortage of training samples. Modern industrial processes are mature, so products with defective appearance make up only a small fraction of output; most product surfaces are defect-free. For example, the wood subset of the open-source MVTec Anomaly Detection dataset contains 246 normal wood appearance images but only 60 defective ones, spread across 5 classes: colour defects, wormholes, liquid drips, scratches, and combined defects. Detecting defects with a deep learning model, however, requires a large number of defect-containing product appearance images as training samples. Moreover, the normal appearance that forms the background of each defective image differs from that of other normal images, so training only on the defect-containing images would mean both few samples and incomplete coverage of normal appearance information. The insufficient number of defect-containing product appearance images is a prominent obstacle to applying deep learning models in industrial vision.
Disclosure of Invention
To address the problems in the prior art, the invention provides a negative sample amplification method, and a deep learning model training method for machine vision defect detection based on it.
The negative sample amplification method provided by the invention comprises the following steps: first, extract defect information from a small number of negative samples; next, apply amplification processing to the defect information to obtain artificial defect information; finally, blend the artificial defect information into a large number of positive samples at random, forming a large number of artificial negative samples.
Further, each piece of negative sample defect information has corresponding defect label data.
Further, the defect information amplification processing includes, but is not limited to, one or more of: enlargement, reduction, rotation, Gaussian blur, mean blur, USM sharpening, Laplacian sharpening, and added salt-and-pepper noise.
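The patent names these operations but gives no implementation. As an illustration only, two of them — added salt-and-pepper noise and enlargement/reduction by nearest-neighbour resampling — might be sketched in plain NumPy (the function names and the `amount` parameter are ours, not the patent's):

```python
import numpy as np

def add_salt_pepper(img, amount=0.02, rng=None):
    """Flip a random fraction of pixels to pure black (pepper) or white (salt)."""
    rng = np.random.default_rng(rng)
    out = img.copy()
    r = rng.random(img.shape[:2])
    out[r < amount / 2] = 0        # pepper
    out[r > 1 - amount / 2] = 255  # salt
    return out

def zoom_nearest(img, factor):
    """Nearest-neighbour resize; factor=2.0 is a 200% enlargement, 0.5 a 50% reduction."""
    h, w = img.shape[:2]
    rows = (np.arange(int(h * factor)) / factor).astype(int)
    cols = (np.arange(int(w * factor)) / factor).astype(int)
    return img[rows][:, cols]
```

The blur and sharpening variants would follow the same pattern with a convolution kernel; in practice a library such as OpenCV or scipy.ndimage would normally be used rather than hand-rolled routines.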
The invention also discloses a deep learning model training method for machine vision defect detection. Model training is based on a small number of defective product appearance images and a large number of normal product appearance images, with the above negative sample amplification method applied to the few defective images. The method comprises the following steps:
S1, cropping the defective region out of the defective product appearance image to form a defect image;
S2, applying amplification processing to the defect images to form a large number of artificial defect images;
S3, fusing each artificial defect image into a random position in a normal product appearance image to form an artificial defective product appearance image;
and S4, adding the artificial defective product appearance images to the training sample set for model training.
Further, in step S3, the image fusion copies the pixels marked as defective in the defect image directly into the normal product appearance image.
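The patent gives no code for this pixel-copy fusion; the following is a minimal NumPy sketch, assuming a single-channel image and a mask whose non-zero pixels mark the defect (the function name and argument order are illustrative):

```python
import numpy as np

def fuse_defect(background, defect, mask, top, left):
    """Copy only the pixels marked as defective (mask != 0) onto the
    background at the given top-left offset; every other background
    pixel is left untouched, as step S3 describes."""
    out = background.copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w]  # a view into the copy
    region[mask != 0] = defect[mask != 0]
    return out
```

Because only masked pixels are overwritten, the surrounding normal texture of the host image is preserved, which is what makes the fused result plausible as a defective sample.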
The negative sample amplification method provided by the invention can rapidly expand a scarce set of negative samples, is particularly suited to augmenting appearance defect samples of products with irregular appearance, markedly improves model training, and raises the detection rate of defective products.
Drawings
FIG. 1 is a flow chart of a deep learning model training method for machine vision defect detection;
FIG. 2a is an image of the appearance of a normal wood material in example 1;
FIG. 2b is an image of the appearance of defective wood in example 1;
FIG. 2c is the defect mark image for the defective wood appearance of example 1, in which white pixels mark defect positions and black pixels mark normal appearance;
FIG. 3 is a defect image of example 1, wherein a defect is cut out from FIG. 2 b;
FIG. 4a is the image of FIG. 3 after a 200% zoom operation;
FIG. 4b is the image of FIG. 3 after a zoom out 50% operation;
FIG. 4c is the image of FIG. 3 after the Gaussian blur processing operation;
FIG. 4d is the image of FIG. 4c after a 200% zoom operation;
FIG. 4e is the image of FIG. 4c after a zoom out operation of 50%;
FIG. 5a is the artificial defective wood appearance image obtained by fusing FIG. 4a into a normal wood appearance image;
FIG. 5b is the artificial defective wood appearance image obtained by fusing FIG. 3 into a normal wood appearance image;
FIG. 5c is the artificial defective wood appearance image obtained by fusing FIG. 4b into a normal wood appearance image;
FIG. 6 shows several appearance images of a single kind of normal wood.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. The embodiments are presented for illustration and description; they are not intended to be exhaustive or to limit the invention to the forms disclosed, and many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described to best explain the principles of the invention and its practical application, and to enable others of ordinary skill in the art to understand the invention in its various embodiments, with such modifications as suit the particular use contemplated.
Example 1
This embodiment explains the technical solution of the invention (main flow shown in FIG. 1) using the wood subset of the open-source machine vision dataset MVTec Anomaly Detection.
First, a normal wood appearance image, a defective wood appearance image, and the latter's defect mark image are obtained from the MVTec Anomaly Detection dataset; see FIG. 2a, FIG. 2b and FIG. 2c respectively.
Secondly, a defect at a certain position in the defective wood appearance image of FIG. 2b is cropped out to obtain a defect image, and the defect is marked pixel by pixel: white indicates that the pixel at that position in the defect image is defective, black that it shows normal appearance. The resulting defect mark image is stored in a dedicated defect data file; the cropped defect image is shown in FIG. 3.
Next, the cropped defect image is amplified to form a large number of artificial defect images. For example, a conventional 200% enlargement of FIG. 3 gives FIG. 4a; a 50% reduction of FIG. 3 gives FIG. 4b; Gaussian blur applied to FIG. 3 gives FIG. 4c; a 200% enlargement of FIG. 4c gives FIG. 4d; and a 50% reduction of FIG. 4c gives FIG. 4e. Applying further processing to FIGS. 4a-e — rotations with different parameters, USM sharpening, added salt-and-pepper noise, and so on — yields still more artificial defect images. Of course, whenever an enlargement, reduction or rotation is applied to a defect image, the same operation must be applied synchronously to its defect mark data.
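The requirement in the last sentence — geometric operations must be applied to the image and its mark data in lockstep — can be illustrated with a small NumPy sketch (nearest-neighbour scaling here stands in for whatever resampling the patent's tooling actually uses; the function name is ours):

```python
import numpy as np

def scale_pair(defect_img, defect_mask, factor):
    """Apply the same nearest-neighbour scaling to a defect image and
    its pixel-wise defect mark so the annotation stays aligned."""
    def nn(a):
        h, w = a.shape[:2]
        rows = (np.arange(max(1, int(h * factor))) / factor).astype(int)
        cols = (np.arange(max(1, int(w * factor))) / factor).astype(int)
        return a[rows][:, cols]
    return nn(defect_img), nn(defect_mask)
```

Nearest-neighbour is the natural choice for the mask: interpolating a binary mark would create invalid in-between values along the defect boundary.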
Then a random position in a normal wood appearance image is selected, and the pixels marked as defective in FIGS. 4a, 3 and 4b are copied into the normal wood appearance image, generating the artificial defective wood appearance images of FIGS. 5a, 5b and 5c respectively. Whenever an artificial defective wood appearance image is generated, the defect data file corresponding to that artificial defect image is generated at the same time.
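Putting the pieces together, the random-position paste plus the simultaneous generation of the per-sample defect data could look like the following sketch (same single-channel, non-zero-mask assumptions as above; the `rng` seeding and function name are ours):

```python
import numpy as np

def make_artificial_sample(normal_img, defect_img, defect_mask, rng=None):
    """Paste the masked defect pixels at a random position inside a
    normal appearance image and return both the artificial defective
    image and its matching full-size defect mark."""
    rng = np.random.default_rng(rng)
    H, W = normal_img.shape[:2]
    h, w = defect_mask.shape
    top = int(rng.integers(0, H - h + 1))   # random placement, kept in bounds
    left = int(rng.integers(0, W - w + 1))
    out = normal_img.copy()
    out[top:top + h, left:left + w][defect_mask != 0] = defect_img[defect_mask != 0]
    full_mask = np.zeros((H, W), dtype=np.uint8)
    full_mask[top:top + h, left:left + w] = defect_mask
    return out, full_mask
```

Returning the full-size mask alongside the fused image mirrors the patent's requirement that the defect data file be produced together with each artificial sample.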
The example above performs negative sample augmentation on only one defect in FIG. 2b. The same process can be applied to the other defects in FIG. 2b and to the defects of other samples in the dataset, yielding a large number of artificial defective wood appearance images that do not duplicate one another but carry new information. FIG. 6 shows several appearance images of a single kind of normal wood: even the normal appearance exhibits varied textures. A deep learning model must learn the features of these normal textures in order to recognise similar images as normal, and must learn to distinguish normal texture from defects across the many possible combinations of the two. For such application scenarios, the artificial defective product appearance images generated by the invention therefore do not merely repeat existing information but contain information useful to the deep learning model.
Finally, the training method of the invention was applied to the wood subset of the open-source MVTec Anomaly Detection dataset, with 9 defective wood appearance images held out as test images. Mask R-CNN was used as the deep learning model, and training was completed under identical parameter configurations. With the model trained using the invention, defects are identified in all 9 images, and the average identification accuracy measured in defective pixels exceeds 90%. Without negative sample amplification, training on the existing samples alone, the model identifies defects in only 3 of the 9 test images.
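The patent does not define its "identification accuracy rate taking the defective pixels as a unit". One plausible reading is per-pixel recall over the ground-truth defect mark, which would be computed along these lines (our interpretation, not the patent's formula):

```python
import numpy as np

def pixel_recall(pred_mask, gt_mask):
    """Fraction of ground-truth defect pixels that the model also
    marks as defective."""
    gt = gt_mask != 0
    if not gt.any():
        return 1.0  # no defect pixels to find
    return float((pred_mask[gt] != 0).mean())
```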
It should be understood that the described embodiments are merely some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art and related arts based on these embodiments without creative effort fall within the protection scope of the invention.
Claims (5)
1. A negative sample amplification method, characterised by comprising the following steps: first, extracting defect information from a small number of negative samples; next, applying amplification processing to the defect information to obtain artificial defect information; and finally, blending the artificial defect information into a large number of positive samples at random to form a large number of artificial negative samples.
2. The negative sample amplification method of claim 1, wherein each piece of negative sample defect information has corresponding defect label data.
3. The negative sample amplification method of claim 1 or 2, wherein the defect information amplification processing includes, but is not limited to, one or more of: enlargement, reduction, rotation, Gaussian blur, mean blur, USM sharpening, Laplacian sharpening, and added salt-and-pepper noise.
4. A deep learning model training method for machine vision defect detection, in which model training is based on a small number of defective product appearance images and a large number of normal product appearance images, characterised in that the negative sample amplification method of any one of claims 1 to 3 is used to amplify the small number of defective product appearance images, the method comprising the following steps:
S1, cropping the defective region out of the defective product appearance image to form a defect image;
S2, applying amplification processing to the defect images to form a large number of artificial defect images;
S3, fusing each artificial defect image into a random position in a normal product appearance image to form an artificial defective product appearance image;
and S4, adding the artificial defective product appearance images to the training sample set for model training.
5. The method of claim 4, wherein in step S3 the image fusion copies the pixels marked as defective in the defect image directly into the normal product appearance image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110384472.1A CN113095400A (en) | 2021-04-09 | 2021-04-09 | Deep learning model training method for machine vision defect detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113095400A true CN113095400A (en) | 2021-07-09 |
Family
ID=76676051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110384472.1A Pending CN113095400A (en) | 2021-04-09 | 2021-04-09 | Deep learning model training method for machine vision defect detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113095400A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023241276A1 (en) * | 2022-06-15 | 2023-12-21 | 华为云计算技术有限公司 | Image editing method and related device |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110599453A (en) * | 2019-08-08 | 2019-12-20 | 武汉精立电子技术有限公司 | Panel defect detection method and device based on image fusion and equipment terminal |
CN110852373A (en) * | 2019-11-08 | 2020-02-28 | 深圳市深视创新科技有限公司 | Defect-free sample deep learning network training method based on vision |
CN110853035A (en) * | 2020-01-15 | 2020-02-28 | 征图新视(江苏)科技股份有限公司 | Sample generation method based on deep learning in industrial visual inspection |
CN111047576A (en) * | 2019-12-12 | 2020-04-21 | 珠海博明视觉科技有限公司 | Surface defect sample generation tool |
CN111507349A (en) * | 2020-04-15 | 2020-08-07 | 深源恒际科技有限公司 | Dynamic data enhancement method in OCR (optical character recognition) model training |
CN111709948A (en) * | 2020-08-19 | 2020-09-25 | 深兰人工智能芯片研究院(江苏)有限公司 | Method and device for detecting defects of container |
CN111814850A (en) * | 2020-06-22 | 2020-10-23 | 浙江大华技术股份有限公司 | Defect detection model training method, defect detection method and related device |
CN111950630A (en) * | 2020-08-12 | 2020-11-17 | 深圳市烨嘉为技术有限公司 | Small sample industrial product defect classification method based on two-stage transfer learning |
CN112581462A (en) * | 2020-12-25 | 2021-03-30 | 北京邮电大学 | Method and device for detecting appearance defects of industrial products and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210709 ||