CN117237366B - Method for detecting anti-fog performance of film - Google Patents


Info

Publication number
CN117237366B
CN117237366B (application CN202311524604.1A)
Authority
CN
China
Prior art keywords
image
fog
film
matching degree
antifogging property
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311524604.1A
Other languages
Chinese (zh)
Other versions
CN117237366A (en)
Inventor
吕江鹏
郭涛
焦福星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Kaida Group Co., Ltd.
Original Assignee
Fujian Kaida Group Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Kaida Group Co., Ltd.
Priority to CN202311524604.1A
Publication of CN117237366A
Application granted
Publication of CN117237366B
Legal status: Active


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of film performance detection and discloses a method for detecting the anti-fog performance of a film, comprising the following steps: using a standard logarithmic near visual acuity chart as the background image, train on image data of films rated anti-fog grade one or films that do not fog at room temperature, and construct an image recognition model; acquire image information of the anti-fog film, preprocess it with bilateral filtering, and pass it through the constructed model to obtain the degree to which the film matches a grade-one anti-fog image; perform contour detection on the image and fit circles to the fog-droplet contours to obtain a fitting radius; binarize the image and compute the white and black pixel areas; and combine the matching degree, fitting radius and area ratio to obtain the anti-fog grade of the film. The method is efficient and accurate and can replace prolonged manual inspection.

Description

Method for detecting anti-fog performance of film
Technical Field
The invention relates to the technical field of film performance detection, and in particular to a method for detecting the anti-fog performance of a film.
Background
The temperature difference between packaged contents and the external environment causes fogging, which has driven the wide development of anti-fog functional films, typified by cold anti-fog and hot anti-fog films. The anti-fog agent in such a film is surface-active: it renders the film surface hydrophilic and lowers the contact angle between the film and water droplets, giving the film its anti-fog performance. Anti-fog performance testing currently lacks a long-duration detection method: inspectors tire during long sessions, grading by the naked eye alone is inaccurate, and the anti-fog failure curve cannot be recorded precisely.
Disclosure of Invention
The invention aims to provide a method for detecting the anti-fog performance of a film that supports continuous long-duration detection with high efficiency and high precision.
To achieve this aim, the invention adopts the following technical scheme:
a method for detecting the antifogging property of a film comprises the following steps:
s1, training image data of a film which is at a first-level anti-fog level or in a normal temperature state and does not generate fog by taking a standard logarithmic near vision chart in GB/T11533-2011 annex B as a background image, and constructing an image recognition model;
s2, acquiring image information of the anti-fog film, and identifying through a constructed image identification model after bilateral filtering pretreatment to acquire the matching degree of the anti-fog film and the image with the anti-fog grade of one level;
s3, performing contour detection on the image, and performing circular fitting after obtaining the contour of the fog drops to obtain a fitting radius;
s4, performing binarization processing on the image, and calculating the white pixel area and the black pixel area, wherein the white pixel area is divided by the sum of the white pixel area and the black pixel area to obtain the white area ratio;
s5, the film anti-fog grade is obtained by combining the matching degree, the fitting radius and the white area ratio.
Preferably, the training in step S1 proceeds as follows: select the Darknet-53 network structure with a stride of 2 for downsampling; the first 52 layers perform feature extraction and the final layer outputs the prediction. After training, an image recognition model that recognizes the anti-fog background image is obtained.
Preferably, the bilateral filtering parameters in step S2 are set as follows: color standard deviation 1-100 and spatial-distance standard deviation 3-8.
Preferably, the contour detection in step S3 uses Canny edge detection with an operator kernel size of 3 or 5, a low threshold of 1-254, a high threshold of 151-1000, and a contour area of 1-50.
Preferably, the binarization in step S4 uses the Otsu thresholding method.
Preferably, the anti-fog grade in step S5 is determined as follows: a matching degree of 60-100 with a fitting radius of 0 corresponds to grade one of the plastic-film anti-fog performance test standard GB/T 31726-2015; a matching degree of 60-100 with a fitting radius of 15-20 corresponds to grade two; a matching degree of 20-60 with a fitting radius of 10-15 and a white area ratio of 0.5 corresponds to grade three; a matching degree of 0-20 with a fitting radius of 5-10 and a white area ratio of 0.5-0.7 corresponds to grade four; and a matching degree of 0-20 with a fitting radius of 10-15 and a white area ratio of 0.7-1 corresponds to grade five.
Beneficial effects:
Based on image data processing, the invention combines the matching degree, fitting radius and area ratio to obtain the anti-fog grade of the film. This improves the efficiency of anti-fog performance detection with excellent precision, reduces labor cost, and can replace prolonged manual inspection, overcoming the technical drawbacks of manual identification: fatigue during long detection sessions, inaccurate grading by the naked eye alone, high labor cost and low efficiency.
Drawings
FIG. 1 is a diagram of the image recognition model of the invention.
FIG. 2 is a schematic diagram of the target detection training process of the invention.
FIG. 3 compares image pixel values before and after mean filtering.
FIG. 4 compares an image before and after the bilateral filtering preprocessing of the invention.
FIG. 5 shows fog-droplet contour detection and circle fitting according to the invention.
FIG. 6 is a binarized image of the invention at anti-fog grade four.
FIG. 7 is a binarized image of the invention at anti-fog grade five.
FIG. 8 is a flow chart of the anti-fog grade recognition of the invention.
Detailed Description
This embodiment provides a method for detecting the anti-fog performance of a film, comprising the following steps:
S1, using the standard logarithmic near visual acuity chart of GB/T 11533-2011 Annex B as the background image, train on image data of films rated anti-fog grade one under GB/T 31726-2015 or films that do not fog at room temperature, and construct an image recognition model. The specific training method is as follows:
1. Data preparation: first collect and label image data of films rated anti-fog grade one or films that do not fog at room temperature, and generate the label files.
2. Model architecture: a YOLOv3 deep convolutional neural network is used, with the Darknet-53 network structure; downsampling uses a stride of 2, the first 52 layers perform feature extraction, and the final layer outputs the prediction. After training, an image recognition model that recognizes the anti-fog background image is obtained.
For basic image feature extraction, YOLOv3 adopts the Darknet-53 structure, which contains 53 convolution layers and borrows the residual-network approach to build a deeper hierarchy.
As in FIG. 1, the whole model comprises 5 groups of residual components. Taking a 256×256 input as an example: a 3×3×32 convolution layer outputs 256×256×32; a 3×3×64 stride-2 convolution layer outputs 128×128×64; one residual component outputs 128×128×64; a 3×3×128 stride-2 convolution layer then outputs 64×64×128; two residual components output 64×64×128; and, continuing step by step, the final group of residual components outputs 8×8×1024. This is the concrete Darknet-53 framework.
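For illustration, a minimal PyTorch sketch of the first stages of this stem follows. It is a sketch under assumptions, not the patent's implementation; the ConvBNLeaky and Residual helper names are invented for this example.

```python
import torch
import torch.nn as nn

class ConvBNLeaky(nn.Module):
    """Convolution followed by batch normalization and LeakyReLU."""
    def __init__(self, c_in, c_out, k, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, stride, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Residual(nn.Module):
    """Darknet residual component: 1x1 bottleneck, 3x3 conv, skip connection."""
    def __init__(self, c):
        super().__init__()
        self.block = nn.Sequential(ConvBNLeaky(c, c // 2, 1),
                                   ConvBNLeaky(c // 2, c, 3))

    def forward(self, x):
        return x + self.block(x)

# First two stages described in the text:
# 256x256x3 -> 3x3x32 conv -> 256x256x32
# -> 3x3x64 stride-2 conv -> 128x128x64 -> one residual component -> 128x128x64
stem = nn.Sequential(
    ConvBNLeaky(3, 32, 3),
    ConvBNLeaky(32, 64, 3, stride=2),
    Residual(64),
)
print(stem(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 64, 128, 128])
```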
3. Training process: for each training sample, the input image first passes through the convolution layers for feature extraction. Anchor (prior) boxes are then applied on the feature map for target detection. Each anchor box is matched to the ground-truth box according to their IoU (intersection over union), and the corresponding localization and classification errors are computed. Finally, the network weights are updated by back-propagation so that the network better predicts the target object.
First the image is automatically divided into an S×S grid; the grid cell containing the center of the box around a letter E is responsible for detecting that eye-chart "E", as shown in FIG. 2. The letter E inside the box and the background outside it are then converted into matrix information, and the bounding box (x, y, w, h) together with the confidence IoU forms a five-dimensional tensor for training, where x and y are the coordinates of the center point, w and h are the width and height, and IoU is the intersection over union of the predicted and ground-truth boxes.
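As a concrete illustration of this matching step, the sketch below computes IoU for two boxes in the (x, y, w, h) center-point format used above; the function name and example values are hypothetical.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) center-format boxes."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

print(iou((50, 50, 40, 40), (60, 60, 40, 40)))  # ~0.391
```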
S2, acquire image information of the anti-fog film, preprocess it with bilateral filtering, and pass it through the constructed image recognition model to obtain the degree to which the film matches a grade-one anti-fog image. The bilateral filtering parameters are set to a color standard deviation of 30 and a spatial-distance standard deviation of 3.
To support the subsequent image processing operations, noise is usually removed from the image beforehand so that it does not compromise accuracy; filtering smooths the image for this purpose. As shown in FIG. 3, the pixel in the second row and second column of the left image clearly differs from its neighbors; mean filtering averages the surrounding pixel values to remove the noise, producing the right image.
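The mean-filtering idea of FIG. 3 can be shown in a few lines; the pixel values here are illustrative only.

```python
import numpy as np

# 3x3 neighborhood whose center pixel is an obvious outlier (noise).
patch = np.array([[10, 10, 10],
                  [10, 90, 10],
                  [10, 10, 10]], dtype=float)
# Mean filtering replaces the center value with the neighborhood average.
print(patch.mean())  # approximately 18.9, the denoised center value
```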
The principle of filtering is to replace the value of the current pixel with an approximation derived from the surrounding pixels. Conventional filters, such as mean, median, Gaussian and box filters, consider only spatial information and tend to blur edges. Bilateral filtering combines spatial position with pixel color weights, so edge information is preserved rather than blurred.
In flat regions, where image pixel values change little, the spatial-domain weight dominates: like other filters, bilateral filtering handles spatial information with a Gaussian-weighted average, taking the weighted mean of all pixels within a preset spatial distance to obtain the intensity of the center pixel. At an edge, the color intensity, similarity and depth distance between pixels differ markedly, so the pixel-color-domain weight rises sharply; combining the spatial-domain and pixel-color-domain weights effectively preserves the edge information. When processing a boundary, bilateral filtering combines the spatial-domain and pixel-color-domain weights and traverses the image with the following formula to obtain the filtered image:

$$I_i = \frac{1}{W_i} \sum_{j} G_\alpha(\lVert i - j \rVert)\, G_\beta(\lvert I_j - I_i \rvert)\, I_j, \qquad W_i = \sum_{j} G_\alpha(\lVert i - j \rVert)\, G_\beta(\lvert I_j - I_i \rvert)$$

where $I_i$ is the filtered image, $I_j$ is the input original image, $W_i$ is the normalization factor, $G_\alpha$ is the spatial-domain weight, $G_\beta$ is the pixel-color-domain weight, $i$ is the center pixel of the convolution template, and $j$ is a pixel of the convolution template.
S3, perform contour detection on the image, extract the fog-droplet contours, and fit circles to obtain a fitting radius: Canny operator edge detection identifies the droplet contour edges; improved Zernike moments provide sub-pixel edge localization; the annular region of the droplet contour is extracted; and finally circle fitting with the RANSAC algorithm generates the circular contour and yields the radius, completing the measurement, as shown in FIG. 5.
The Canny edge detection flow is as follows: 1) Gaussian filtering, to suppress noise; 2) compute the first-order partial derivatives dx and dy of the image in the horizontal and vertical directions with a Sobel/Prewitt operator, where x and y are the horizontal and vertical pixel coordinates; 3) from dx and dy obtain the gradient image, with gradient magnitude Grad = sqrt(dx² + dy²) and gradient direction angle = tan⁻¹(dy/dx); 4) non-maximum suppression: keep as edges only the pixels that are local maxima along their gradient direction; the local maxima of the gradient in the corresponding direction are found by partitioning on angle and interpolating, all other points are set to zero, and the original image points corresponding to the local maxima in the gradient image are the edges; 5) double-threshold refinement: to filter out interference from gray-level fluctuations, weak edges are removed by screening edge points with two thresholds: first filter with the low threshold LowT to obtain contour_L; then take the pixels in contour_L above the high threshold HighT as seeds and grow them by 8-connectivity within contour_L; the resulting contour_H is the extracted contour.
The application uses Canny edge detection with an operator kernel size of 3, a low threshold of 180, a high threshold of 250, and a contour area of 1-50.
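A sketch of this contour step with the parameters just stated follows. cv2.minEnclosingCircle is used here as a simple stand-in for the Zernike-moment sub-pixel refinement and RANSAC circle fit described above, which OpenCV does not provide directly, and taking the largest radius as the fitting radius is an assumption.

```python
import cv2

gray = cv2.imread("antifog_film_filtered.png", cv2.IMREAD_GRAYSCALE)
# Canny with kernel size 3, low threshold 180, high threshold 250.
edges = cv2.Canny(gray, 180, 250, apertureSize=3)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
radii = []
for c in contours:
    if 1 <= cv2.contourArea(c) <= 50:  # keep droplet-sized contours only
        (_x, _y), r = cv2.minEnclosingCircle(c)
        radii.append(r)
fit_radius = max(radii) if radii else 0.0  # 0 when no droplets are detected
print(fit_radius)
```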
S4, binarize the image with the Otsu thresholding method and compute the white and black pixel areas; the white pixel area divided by the sum of the two gives the white area ratio.
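A short sketch of the Otsu binarization and white-area-ratio computation (file name assumed):

```python
import cv2

gray = cv2.imread("antifog_film_filtered.png", cv2.IMREAD_GRAYSCALE)
# Otsu selects the threshold automatically; the first return value is that threshold.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
white = int((binary == 255).sum())
black = int((binary == 0).sum())
white_ratio = white / (white + black)  # white area / (white + black area)
print(white_ratio)
```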
S5, combine the matching degree, fitting radius and white area ratio to obtain the anti-fog grade of the film, as shown in FIG. 8.
The anti-fog grade is judged as follows (transcribed as a function in the sketch after this list):
(1) a matching degree of 60-100 with a fitting radius of 0 corresponds to grade one of GB/T 31726-2015: the clarity of the visual acuity chart is fully consistent with that before the test;
(2) a matching degree of 60-100 with a fitting radius of 15-20 corresponds to grade two: good transparency, a small number of unevenly distributed large water drops, and over 50% of the chart area as clear as before the test;
(3) a matching degree of 20-60 with a fitting radius of 10-15 and a white area ratio of 0.5 corresponds to grade three: essentially transparent, with many water drops, and the chart characters appear deformed;
(4) a matching degree of 0-20 with a fitting radius of 5-10 and a white area ratio of 0.5-0.7 corresponds to grade four: semi-transparent, with many small water drops, and only a small fraction, below 0.1, of the chart is visible;
(5) a matching degree of 0-20 with a fitting radius of 10-15 and a white area ratio of 0.7-1 corresponds to grade five: completely opaque, with nothing visible.
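These rules transcribe directly into a small function, sketched below; the function name and the handling of boundary values and of combinations outside the five listed ranges are assumptions, since only the listed combinations are specified.

```python
def antifog_grade(match, radius, white_ratio):
    """Map (matching degree, fitting radius, white area ratio) to a
    GB/T 31726-2015 anti-fog grade; None if no listed rule applies."""
    if 60 <= match <= 100 and radius == 0:
        return 1
    if 60 <= match <= 100 and 15 <= radius <= 20:
        return 2
    if 20 <= match < 60 and 10 <= radius < 15 and white_ratio <= 0.5:
        return 3
    if 0 <= match < 20 and 5 <= radius < 10 and 0.5 < white_ratio <= 0.7:
        return 4
    if 0 <= match < 20 and 10 <= radius <= 15 and 0.7 < white_ratio <= 1:
        return 5
    return None

print(antifog_grade(80, 0, 0.0))    # 1
print(antifog_grade(10, 12, 0.85))  # 5
```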
Films of known anti-fog grades one to five were each subjected to manual identification and system identification, and the accuracy statistics were compiled (the comparison table is not reproduced in this text).
While the basic principles, main features and advantages of the invention have been shown and described, those skilled in the art will understand that the invention is not limited by the foregoing embodiments, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention as defined by the appended claims and their equivalents.

Claims (4)

1. A method for detecting the anti-fog performance of a film, characterized by comprising the following steps:
S1, using the standard logarithmic near visual acuity chart of GB/T 11533-2011 Annex B as the background image, train on image data of films rated anti-fog grade one or films that do not fog at room temperature, and construct an image recognition model;
S2, acquire image information of the anti-fog film, preprocess it with bilateral filtering, and pass it through the constructed image recognition model to obtain the degree to which the film matches a grade-one anti-fog image (the matching degree);
S3, perform contour detection on the image, extract the fog-droplet contours, and fit circles to them to obtain a fitting radius;
S4, binarize the image and compute the white and black pixel areas; the white pixel area divided by the sum of the two gives the white area ratio;
S5, combine the matching degree, fitting radius and white area ratio to obtain the anti-fog grade of the film;
the training method in the step S1 is as follows: selecting a Darknet-53 network structure, setting a step length value to be 2, adopting a downsampling mode, using the first 52 layers as feature extraction, and using the last layer to output a predicted value; after training is completed, an image recognition model for recognizing the anti-fog background image is obtained;
for each training sample, firstly, extracting the characteristics of an input image through a convolution layer, and then, carrying out target detection on a characteristic image by adopting an anchor frame; for each anchor frame, matching according to IoU between the anchor frame and the real tag frame, and calculating corresponding positioning errors and classification errors; finally, updating the weight of the network through a back propagation algorithm, so that the target object can be predicted better;
when processing a boundary, bilateral filtering combines the spatial-domain and pixel-color-domain weights and traverses the image with the following formula to obtain the filtered image:

$$I_i = \frac{1}{W_i} \sum_{j} G_\alpha(\lVert i - j \rVert)\, G_\beta(\lvert I_j - I_i \rvert)\, I_j, \qquad W_i = \sum_{j} G_\alpha(\lVert i - j \rVert)\, G_\beta(\lvert I_j - I_i \rvert)$$

where $I_i$ is the filtered image, $I_j$ is the input original image, $W_i$ is the normalization factor, $G_\alpha$ is the spatial-domain weight, $G_\beta$ is the pixel-color-domain weight, $i$ is the center pixel of the convolution template, and $j$ is a pixel of the convolution template;
the method for determining the anti-fog level in the step S5 is as follows:
when the matching degree is 60-100 and the fitting radius is 0, the matching degree corresponds to one level in the plastic film antifogging property test standard GB/T31726-2015;
when the matching degree is 60-100 and the fitting radius is 15-20, the second grade in the plastic film antifogging property test standard GB/T31726-2015 is corresponding;
when the matching degree is 20-60, the fitting radius is 10-15, and the white area ratio is 0.5, the three stages in the plastic film antifogging property test standard GB/T31726-2015 are corresponding;
when the matching degree is 0-20, the fitting radius is 5-10, and the white area ratio is 0.5-0.7, the four stages in the plastic film antifogging property test standard GB/T31726-2015 are corresponding;
when the matching degree is 0-20, the fitting radius is 10-15, and the white area ratio is 0.7-1, five grades in the plastic film antifogging property test standard GB/T31726-2015 are corresponding.
2. The method for detecting the anti-fog performance of a film according to claim 1, wherein the bilateral filtering parameters in step S2 are set as follows: color standard deviation 1-100 and spatial-distance standard deviation 3-8.
3. The method for detecting the anti-fog performance of a film according to claim 1, wherein the contour detection in step S3 uses Canny edge detection with an operator kernel size of 3 or 5, a low threshold of 1-254, a high threshold of 151-1000, and a contour area of 1-50.
4. The method for detecting the anti-fog performance of a film according to claim 1, wherein the binarization in step S4 uses the Otsu thresholding method.
CN202311524604.1A 2023-11-16 2023-11-16 Method for detecting anti-fog performance of film Active CN117237366B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311524604.1A 2023-11-16 2023-11-16 Method for detecting anti-fog performance of film


Publications (2)

Publication Number Publication Date
CN117237366A CN117237366A (en) 2023-12-15
CN117237366B true CN117237366B (en) 2024-02-06

Family

ID=89088438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311524604.1A 2023-11-16 2023-11-16 Method for detecting anti-fog performance of film (Active)

Country Status (1)

Country Link
CN (1) CN117237366B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732536A (en) * 2015-03-18 2015-06-24 广东顺德西安交通大学研究院 Sub-pixel edge detection method based on improved morphology
CN107578418A (en) * 2017-09-08 2018-01-12 华中科技大学 Indoor scene contour detection method fusing color and depth information
CN112634256A (en) * 2020-12-30 2021-04-09 杭州三坛医疗科技有限公司 Circle detection and fitting method and device, electronic equipment and storage medium
CN112819772A (en) * 2021-01-28 2021-05-18 南京挥戈智能科技有限公司 High-precision rapid pattern detection and identification method
CN113436212A (en) * 2021-06-22 2021-09-24 广西电网有限责任公司南宁供电局 Method for extracting the inner contour in image-based detection of the engagement state of a circuit breaker static contact
CN114627269A (en) * 2022-03-10 2022-06-14 东华大学 Virtual reality security monitoring platform based on deep-learning target detection
JP2022187308A (en) * 2021-06-07 2022-12-19 シャープディスプレイテクノロジー株式会社 X-ray imaging apparatus and control method of x-ray imaging apparatus
CN116189136A (en) * 2022-11-27 2023-05-30 长春理工大学 Deep learning-based traffic signal lamp detection method in rainy and snowy weather


Also Published As

Publication number Publication date
CN117237366A (en) 2023-12-15

Similar Documents

Publication Publication Date Title
CN110490914B (en) Image fusion method based on brightness self-adaption and significance detection
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN107657606B (en) Method and device for detecting brightness defect of display device
CN110349126A (en) A kind of Surface Defects in Steel Plate detection method based on convolutional neural networks tape label
CN104794721B (en) A kind of quick optic disk localization method based on multiple dimensioned spot detection
CN107680054A (en) Multisource image anastomosing method under haze environment
CN109840483B (en) Landslide crack detection and identification method and device
CN108682012B (en) 3D curved surface glass surface flatness defect detection method based on line scanning laser
CN112734761B (en) Industrial product image boundary contour extraction method
CN116990323B (en) High-precision printing plate visual detection system
CN114792316B (en) Method for detecting spot welding defects of bottom plate of disc brake shaft
CN112926652B (en) Fish fine granularity image recognition method based on deep learning
CN113313107B (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
CN116883408B (en) Integrating instrument shell defect detection method based on artificial intelligence
CN116740061B (en) Visual detection method for production quality of explosive beads
CN113221881B (en) Multi-level smart phone screen defect detection method
CN117094914A (en) Smart city road monitoring system based on computer vision
CN110458019B (en) Water surface target detection method for eliminating reflection interference under scarce cognitive sample condition
CN112396580B (en) Method for detecting defects of round part
CN117237366B (en) Method for detecting anti-fog performance of film
CN117058018A (en) Method for repairing suspended impurity vision shielding area facing underwater structure detection
CN116363136A (en) On-line screening method and system for automatic production of motor vehicle parts
CN116524269A (en) Visual recognition detection system
CN113470015B (en) Water body shaking detection and analysis method and system based on image processing
CN113313678A (en) Automatic sperm morphology analysis method based on multi-scale feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant