CN114782406A - RESNEXT50 deep segmentation network-based automobile gluing visual detection method - Google Patents
- Publication number
- CN114782406A (application number CN202210558935.6A)
- Authority
- CN
- China
- Prior art keywords
- network
- resnext50
- unet
- gluing
- visual detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
Abstract
The invention belongs to the technical field of automatic visual detection and discloses an automobile gluing visual detection method based on a RESNEXT50 deep segmentation network, comprising the following steps: S1, the number of repeated convolution groups in each convolution layer of Resnext50 is adjusted from 32 to 16, simplifying the network structure; S2, the connections between the down-sampling encoding module and the up-sampling decoding module at the same Unet layer use a copy-and-splice scheme; S3, when the Unet input and output layers process a large image, it is split into patches matching the training scale. The disclosed visual detection algorithm based on the RESNEXT50 deep segmentation network achieves high accuracy in the automatic visual detection of automobile gluing and identifies even poor-quality adhesive tape images well; the improved network also offers faster inference and can meet the stricter cycle times of online automatic gluing detection.
Description
Technical Field
The invention belongs to the technical field of automatic visual detection, and particularly relates to an automobile gluing visual detection method based on a RESNEXT50 deep segmentation network.
Background
As the gluing process is applied more widely in automobile production, quality control of gluing has become increasingly important. Gluing serves several important functions in automobile manufacturing, such as waterproofing and dust prevention, weld assistance, and structural connection. Gluing quality has so far been checked mainly by manual inspection, but as gluing is used ever more widely, the limited accuracy and efficiency of manual inspection often lead to misjudged or missed defects. The demand for automated, intelligent detection of gluing quality in automobile production is therefore rising steadily.
Intelligent, automated detection of gluing quality in automobile production currently relies mainly on machine vision, in one of two ways. In the online approach, the glue gun carries a vision acquisition device that captures adhesive tape images during gluing, and a visual detection method analyses the captured images to detect quality online. In the offline approach, after gluing is finished, the vision acquisition device photographs the applied tape as a whole and the images are analysed in the same way offline. In either case, the visual detection method applied to the adhesive tape images largely determines the accuracy and stability of the automatic detection. Most mainstream visual detection methods today are traditional image segmentation methods or simple machine-learning methods. In many automatic gluing-detection settings, factors such as illumination and shooting angle degrade the captured images, so these methods misjudge with relatively high probability, reducing detection accuracy and affecting automobile production quality.
Disclosure of Invention
The invention aims to provide an automobile gluing visual detection method based on a RESNEXT50 deep segmentation network that solves the problems identified in the background art.
In order to achieve the above purpose, the invention provides the following technical scheme: an automobile gluing visual detection method based on a RESNEXT50 deep segmentation network, comprising the following steps:
S1, the number of repeated convolution groups in each convolution layer of Resnext50 is adjusted from 32 to 16, simplifying the network structure;
S2, the connections between the down-sampling encoding module and the up-sampling decoding module at the same Unet layer use a copy-and-splice scheme;
S3, when the Unet input and output layers process a large image, it is split into patches matching the training scale;
S4, the ultra-shallow Unet-L4 network architecture is adopted, accelerating training and inference.
Preferably, the conventional RESNEXT50 of step S1 takes the 32×4d network form, and the improved RESNEXT50 takes the 16×4d network form.
Preferably, the simplified RESNEXT50 network of step S1 serves as the Backbone of a Unet semantic segmentation network; this U-shaped network consists of a down-sampling encoding part and an up-sampling decoding part, and after encoding and decoding the input image it outputs the semantic segmentation result image. The invention has the following beneficial effects:
the invention discloses a vision detection algorithm based on RESNEXT50 deep segmentation network for automobile gluing quality vision detection, which is a deep segmentation network for improving and optimizing automobile gluing detection, and the realization is firstly to adjust the structural parameters of RESNEXT50 deep segmentation network, so that the operation speed is improved under the condition of not reducing the accuracy rate; secondly, forming a new semantic segmentation U-shaped deep network by taking the improved RESNEXT50 network as a Backbone according to a Unet-L4 network model;
the technology shows higher accuracy in the automatic visual detection process of automobile gluing and has good identification effect on poor adhesive tape images; the improved network has higher arithmetic reasoning speed and can meet the detection beat of more severe online gluing automatic visual detection.
Drawings
FIG. 1 is a diagram of the basic architecture of the Unet-L4 (4-layer U-shaped network) of the present invention;
FIG. 2 is a flow chart of segmentation and recognition of an input rubber strip image according to the present invention;
FIG. 3 is a comparison graph of the working accuracy and processing time of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
RESNEXT50 deep network improvements
For gluing-detection purposes, the deep segmentation network adjusts the structural parameters of the conventional RESNEXT50 (32×4d) network and fuses the adjusted network, as the Backbone, into an improved U-shaped semantic segmentation network with a Unet structure, so as to identify and segment the adhesive tape region in an image and achieve the final detection. The adjustment is shown in Table 1:
Table 1: structural parameter adjustment of the RESNEXT50 network
The invention's improvement to RESNEXT50 lies mainly in adjusting the number of repeated convolution groups in each convolution layer (Conv), from 32 groups to 16, simplifying the network structure. Experiments show that for the present gluing-image analysis the accuracy is essentially the same as with the original RESNEXT50 network, but halving the number of convolution groups per layer improves inference speed by about 30%, as shown in FIG. 3. The network can therefore support a faster image-analysis cycle, or accept higher-resolution input at the same cycle time to improve detection accuracy.
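The effect of this adjustment on model size can be seen from the weight count of the grouped 3×3 convolution inside a ResNeXt bottleneck block. The sketch below is illustrative only (it is not part of the patent) and considers a single stage whose bottleneck width equals groups × width-per-group:

```python
# Illustrative only: weight count of the grouped 3x3 convolution in a
# ResNeXt bottleneck, comparing the original 32x4d form with the 16x4d form.
def grouped_conv3x3_params(groups, width_per_group, k=3):
    """Weights in a grouped k x k conv whose input/output width is groups * width_per_group."""
    channels = groups * width_per_group
    # Each of the `groups` groups maps channels/groups inputs to channels/groups outputs.
    per_group = (channels // groups) * (channels // groups) * k * k
    return per_group * groups

p32 = grouped_conv3x3_params(32, 4)  # original RESNEXT50 (32x4d) stage: 4608 weights
p16 = grouped_conv3x3_params(16, 4)  # adjusted 16x4d stage: 2304 weights
```

Halving the group count at a fixed per-group width also halves the bottleneck width, so the grouped-convolution weights are halved, which is consistent with the roughly 30% inference speed-up reported above.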
Unet semantic segmentation network improvements
The invention uses the improved RESNEXT50 network as the Backbone to build a Unet semantic segmentation network. This U-shaped network consists of a down-sampling encoding part and an up-sampling decoding part, and outputs the semantic segmentation result image after encoding and decoding the input image.
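In a U-shaped network of this kind, the encoder features at each layer are copied and concatenated onto the decoder features of the same layer, the "copy and splice" skip connection of step S2. A minimal sketch, assuming concatenation along the channel axis as in the original U-Net (this is our reading, not code from the patent):

```python
# Minimal sketch (assumption, not the patent's implementation): "copy and
# splice" concatenates encoder feature channels onto decoder feature channels.
def copy_and_splice(encoder_feats, decoder_feats):
    """Each argument is a list of channel maps; concatenation is along channels."""
    return list(encoder_feats) + list(decoder_feats)

enc = [[[1, 2], [3, 4]]]           # 1 encoder channel, a 2x2 feature map
dec = [[[5, 6], [7, 8]]]           # 1 decoder channel, a 2x2 feature map
fused = copy_and_splice(enc, dec)  # 2 channels feed the next decoder conv
```

Concatenation (rather than element-wise addition) preserves the encoder's fine spatial detail alongside the decoder's coarser semantics, at the cost of a wider input to the following convolution.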
As shown in FIG. 1, the U-shaped network adopts the Unet-L4 (4-layer U-shaped network) basic architecture; a shallow structure is used because the application has few samples, and the skip connection between the down-sampling encoding module and the up-sampling decoding module at the same layer uses a copy-and-splice scheme. Also as shown in FIG. 1, to keep image scales consistent, a large image may be split into patches matching the training scale, which preserves the network's inference quality.
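The Patch segmentation described above can be sketched as simple tiling. The code below is illustrative (the patent does not give an implementation) and assumes the image dimensions are exact multiples of the patch size:

```python
# Hedged sketch: split a large input image into patches matching the
# training scale. The image is a nested list (rows x cols); we assume
# the image size is an exact multiple of the patch size.
def split_into_patches(image, patch_h, patch_w):
    rows, cols = len(image), len(image[0])
    patches = []
    for r in range(0, rows, patch_h):
        for c in range(0, cols, patch_w):
            patches.append([row[c:c + patch_w] for row in image[r:r + patch_h]])
    return patches

img = [[r * 6 + c for c in range(6)] for r in range(4)]  # a 4x6 "image"
tiles = split_into_patches(img, 2, 2)                    # 6 patches of 2x2
```

In practice the patch predictions would be stitched back into a full-size segmentation mask in the reverse order; real images whose sides are not multiples of the patch size would need padding or overlapping tiles, which this sketch omits.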
After the network is constructed, it is applied in an automatic visual detection system for automobile gluing to segment and identify input adhesive tape images. As shown in FIG. 2, the steps are as follows:
s1, deploying automatic gluing detection equipment on site, collecting an adhesive tape image, and inputting the image into a training server for later use.
And S2, marking the input image by using image marking software to form a marked sample set.
And S3, because the input image and the corresponding labeled sample are generally fewer in number, sample enhancement needs to be carried out, including horizontal and vertical turning, random cutting, random rotation, deformation scaling and the like.
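The flip and rotation enhancements of step S3 can be sketched on nested-list images as follows (illustrative only; random cropping and deformation scaling are omitted, and the helper names are ours, not the patent's):

```python
# Illustrative sample-enhancement helpers on a nested-list image.
def hflip(image):
    """Horizontal flip: reverse each row."""
    return [row[::-1] for row in image]

def vflip(image):
    """Vertical flip: reverse the row order."""
    return image[::-1]

def rot90(image):
    """Rotate 90 degrees clockwise: reverse rows, then transpose."""
    return [list(row) for row in zip(*image[::-1])]

img = [[1, 2], [3, 4]]
augmented = [img, hflip(img), vflip(img), rot90(img)]  # 4 samples from 1
```

Each labelled mask must be transformed identically to its image so that image and label stay aligned after enhancement.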
S4, divide the enhanced input images and labelled sample set into a training set, a test set, and a verification set, input them into the network for training, and obtain a network model on convergence.
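The split in step S4 can be sketched as a shuffled partition. The 8:1:1 ratio and the helper below are illustrative assumptions; the patent does not specify ratios:

```python
import random

# Illustrative sketch of step S4: shuffle the labelled samples and partition
# them into training, test, and verification sets. The 8:1:1 split is an
# assumption, not stated in the patent.
def split_dataset(samples, train_ratio=0.8, test_ratio=0.1, seed=0):
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # fixed seed for a reproducible split
    n_train = int(len(samples) * train_ratio)
    n_test = int(len(samples) * test_ratio)
    train = samples[:n_train]
    test = samples[n_train:n_train + n_test]
    val = samples[n_train + n_test:]
    return train, test, val

train, test, val = split_dataset(range(100))  # 80 / 10 / 10 samples
```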
S5, load the network model into the image-analysis software of the automatic gluing detection equipment.
S6, start the automatic gluing detection equipment. Online equipment photographs the adhesive tape during gluing and analyses the image immediately to obtain a judgment result; offline equipment photographs the finished adhesive tape as a whole and likewise analyses it immediately to obtain a judgment result.
S7, once the equipment obtains the judgment result, it sends the result to the product quality-control end over an industrial communication protocol as the production line requires, thereby achieving gluing quality control.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that various changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (3)
1. An automobile gluing visual detection method based on a RESNEXT50 deep segmentation network, characterized by comprising the following steps:
S1, the number of repeated convolution groups in each convolution layer of Resnext50 is adjusted from 32 to 16, simplifying the network structure;
S2, the connections between the down-sampling encoding module and the up-sampling decoding module at the same Unet layer use a copy-and-splice scheme;
S3, when the Unet input and output layers process a large image, it is split into patches matching the training scale;
S4, the ultra-shallow Unet-L4 network architecture is adopted, accelerating training and inference.
2. The automobile gluing visual detection method based on a RESNEXT50 deep segmentation network of claim 1, characterized in that: in step S1, the conventional RESNEXT50 takes the 32×4d network form and the improved RESNEXT50 takes the 16×4d network form.
3. The automobile gluing visual detection method based on a RESNEXT50 deep segmentation network of claim 1, characterized in that: the simplified RESNEXT50 network of step S1 serves as the Backbone of a Unet semantic segmentation network; this U-shaped network consists of a down-sampling encoding part and an up-sampling decoding part, and outputs the semantic segmentation result image after encoding and decoding the input image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210558935.6A CN114782406A (en) | 2022-05-21 | 2022-05-21 | RESNEXT50 deep segmentation network-based automobile gluing visual detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210558935.6A CN114782406A (en) | 2022-05-21 | 2022-05-21 | RESNEXT50 deep segmentation network-based automobile gluing visual detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114782406A true CN114782406A (en) | 2022-07-22 |
Family
ID=82408105
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210558935.6A Pending CN114782406A (en) | 2022-05-21 | 2022-05-21 | RESNEXT50 deep segmentation network-based automobile gluing visual detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114782406A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117392053A (en) * | 2023-08-28 | 2024-01-12 | 上海贝特威自动化科技有限公司 | Car gluing visual detection method based on BiSeNetV2 bilateral segmentation network |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106093057A (en) * | 2016-06-14 | 2016-11-09 | 上海贝特威自动化科技有限公司 | A kind of online gluing detection method |
CN108921225A (en) * | 2018-07-10 | 2018-11-30 | 深圳市商汤科技有限公司 | A kind of image processing method and device, computer equipment and storage medium |
CN112037161A (en) * | 2019-05-17 | 2020-12-04 | 上海贝特威自动化科技有限公司 | Gluing analysis method based on area automatic threshold analysis |
CN112329800A (en) * | 2020-12-03 | 2021-02-05 | 河南大学 | Salient object detection method based on global information guiding residual attention |
CN112489054A (en) * | 2020-11-27 | 2021-03-12 | 中北大学 | Remote sensing image semantic segmentation method based on deep learning |
CN112634190A (en) * | 2019-09-24 | 2021-04-09 | 上海汽车集团股份有限公司 | Gluing detection method and device |
CN113344901A (en) * | 2021-06-25 | 2021-09-03 | 北京市商汤科技开发有限公司 | Gluing defect detection method and device, storage medium and electronic equipment |
CN113344932A (en) * | 2021-06-01 | 2021-09-03 | 电子科技大学 | Semi-supervised single-target video segmentation method |
CN113449727A (en) * | 2021-07-19 | 2021-09-28 | 中国电子科技集团公司第二十八研究所 | Camouflage target detection and identification method based on deep neural network |
-
2022
- 2022-05-21 CN CN202210558935.6A patent/CN114782406A/en active Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106093057A (en) * | 2016-06-14 | 2016-11-09 | 上海贝特威自动化科技有限公司 | A kind of online gluing detection method |
CN108921225A (en) * | 2018-07-10 | 2018-11-30 | 深圳市商汤科技有限公司 | A kind of image processing method and device, computer equipment and storage medium |
CN112037161A (en) * | 2019-05-17 | 2020-12-04 | 上海贝特威自动化科技有限公司 | Gluing analysis method based on area automatic threshold analysis |
CN112634190A (en) * | 2019-09-24 | 2021-04-09 | 上海汽车集团股份有限公司 | Gluing detection method and device |
CN112489054A (en) * | 2020-11-27 | 2021-03-12 | 中北大学 | Remote sensing image semantic segmentation method based on deep learning |
CN112329800A (en) * | 2020-12-03 | 2021-02-05 | 河南大学 | Salient object detection method based on global information guiding residual attention |
CN113344932A (en) * | 2021-06-01 | 2021-09-03 | 电子科技大学 | Semi-supervised single-target video segmentation method |
CN113344901A (en) * | 2021-06-25 | 2021-09-03 | 北京市商汤科技开发有限公司 | Gluing defect detection method and device, storage medium and electronic equipment |
CN113449727A (en) * | 2021-07-19 | 2021-09-28 | 中国电子科技集团公司第二十八研究所 | Camouflage target detection and identification method based on deep neural network |
Non-Patent Citations (5)
Title |
---|
KAIMING HE et al.: "Deep Residual Learning for Image Recognition", arXiv:1512.03385v1 *
OLAF RONNEBERGER et al.: "U-Net: Convolutional Networks for Biomedical Image Segmentation", arXiv:1505.04597 *
SAINING XIE et al.: "Aggregated Residual Transformations for Deep Neural Networks", arXiv:1611.05431v2 *
LIU Junhong: "Research and implementation of a deep-learning-based segmentation algorithm for prohibited items in X-ray security-inspection images", China Master's Theses Full-text Database, Engineering Science and Technology I *
HU Zhuangwen: "Research on an intelligent automatic gluing detection system based on machine vision", China Master's Theses Full-text Database, Engineering Science and Technology II *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20220722 |