CN111145177B - Image sample generation method, specific scene target detection method and system thereof


Info

Publication number
CN111145177B
CN111145177B
Authority
CN
China
Prior art keywords
image
target
security inspection
sample
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010267813.2A
Other languages
Chinese (zh)
Other versions
CN111145177A (en)
Inventor
李一清
周凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Zhuoyun Intelligent Technology Co ltd
Original Assignee
Zhejiang Zhuoyun Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Zhuoyun Intelligent Technology Co ltd filed Critical Zhejiang Zhuoyun Intelligent Technology Co ltd
Priority to CN202010267813.2A priority Critical patent/CN111145177B/en
Publication of CN111145177A publication Critical patent/CN111145177A/en
Application granted granted Critical
Publication of CN111145177B publication Critical patent/CN111145177B/en
Priority to PCT/CN2020/113998 priority patent/WO2021203618A1/en
Priority to US17/910,346 priority patent/US20230162342A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image sample generation method for deep learning, a specific scene target detection method, and a system thereof. The image sample generation method obtains real-shot security inspection images of the target scene based on an analysis of the security inspection place, obtains labeled target security inspection images, determines the images to be fused, and processes them with a new fusion algorithm to obtain new samples. It requires neither shooting a large number of target images in the real scene on site nor manually labeling real-shot images in a complex environment. The algorithm is simple and can flexibly and quickly generate new sample images tailored to a given place. Comparison shows that the new samples obtained by the method are almost identical to real-shot images containing the detection target, and look especially realistic in the color image mode; the labeling accuracy is high, which improves the efficiency and accuracy of the target detection task in intelligent security inspection.

Description

Image sample generation method, specific scene target detection method and system thereof
Technical Field
The invention belongs to the technical field of security inspection, and particularly relates to an image sample generation method and system for deep learning and a specific scene target detection method.
Background
X-rays are electromagnetic radiation with a wavelength shorter than that of visible light; they penetrate solids and liquids far better than visible light and can even pass through a steel plate of a certain thickness. When X-rays pass through an article, internal structures of different material compositions, densities and thicknesses absorb them to different degrees: the greater the density and thickness, the more X-rays are absorbed; the smaller the density and thickness, the fewer are absorbed. The pixel values of the generated image represent the density of the object, so the intensity of the rays transmitted through the object reflects its internal structure. Generally, to show the material composition of the inspected object more intuitively, the system colors the security image obtained by fluoroscopy: organic substances are rendered orange, inorganic substances blue, and mixtures green, the specific shade being determined by the degree to which the object absorbs X-rays; the higher the absorption, the darker the color, and the lower the absorption, the lighter the color. The acquired X-ray image therefore carries not only shape features but also material-dependent colors, and both can be used for analysis and identification of articles. Radiation imaging is the mainstream technology in the security inspection systems widely used in many countries: the object under inspection is irradiated with rays (such as X-rays), a radiographic image of the object is produced by computer processing of the signals received by the detector, and a security inspector observing the X-ray image judges, from the shapes and color bands of common prohibited articles, whether suspicious contraband is present. This manual interpretation method has low efficiency, a high miss rate and high labor cost.
In recent years, with the continuous development of artificial intelligence, deep learning has made breakthroughs in classification, identification, detection, segmentation, tracking and other tasks in computer vision. Compared with traditional machine vision methods, a deep convolutional neural network trained on big data learns useful features from large amounts of data and offers high speed, high precision and low cost. However, a large part of the reason deep learning outperforms traditional methods is precisely that it rests on large amounts of data, and this is especially true in the security inspection field. The current mainstream way to ease deep learning's dependence on data sets is data augmentation, but to improve the detection performance of the model without increasing the data volume, hard samples affected by external factors, such as the placement angle of the detection target and the background environment, must be restored into security inspection images of the real scene; only then can training the detection network raise the precision and recall of contraband detection, which drives up the cost of acquiring and labeling data.
At present, labeled sample data mainly consists of a large number of real-shot field images that are annotated manually. On the one hand, obtaining a large number of real-shot field images is difficult; on the other hand, manual annotation is inefficient, labor-intensive, strongly affected by human factors and not very accurate, so it is hard to produce in a short time the large amount of labeled data a training model requires. To address these problems, the applicant's invention patents with application numbers 201910228142.6 and 201911221349.7 provide methods for generating simulated hard samples that resemble real ones, but the existing methods still suffer from complex algorithms, inflexible application across different scenes, and sample quality that can be improved further in practice.
Disclosure of Invention
The invention aims to solve the technical problems of the background art and provides an image sample generation method for deep learning, a system thereof and a specific scene target detection method. They solve the problems that deep learning training sample data is difficult to acquire and label and is needed in large volume, use a simple algorithm to quickly provide effective training samples for contraband detection, and can flexibly adapt to target detection tasks in different scenes, improving the efficiency and accuracy of target detection performed with deep learning during security inspection.
According to an aspect of the present invention, there is provided an image sample generation method for deep learning, including:
S1: carrying out scene composition analysis on the to-be-detected articles in the security inspection place;
S2: acquiring real-shot security inspection images of the target scene in the corresponding composition proportions to form a scene data set;
S3: obtaining labeled target security inspection images to form a target data set;
S4: the pixel gray values of the ith feature layer in the security inspection images of S2 and S3 are processed as follows:
a_norm[i][j][k] = a[i][j][k] / MAX_PIXEL_VAL[i]
wherein i = 1, 2, 3; a_norm[i][j][k] is the processed pixel gray value of the ith feature layer; a[i][j][k] is the pixel gray value of the ith feature layer; and MAX_PIXEL_VAL[i] is the theoretical maximum gray value of the ith feature layer;
S5: determining the images to be fused, which comprise at least one real-shot security inspection image of the target scene from S2 and at least one target image from S3; the number of images to be fused is denoted N, an integer greater than or equal to 2;
S6: carrying out size normalization on the images to be fused;
S7: fusing the images obtained in S6 to form a new sample, the fusion method being:
on a pixel point (i, j, k) of the new sample, when the corresponding pixel points of all N images satisfy a_mean_n[j][k] ≥ t, the pixel value of the new sample is taken as the mean of the normalized values,
a_new[i][j][k] = (a_norm_1[i][j][k] + a_norm_2[i][j][k] + … + a_norm_N[i][j][k]) / N;
on the remaining pixel points, the pixel value of the new sample is set to the product of the normalized values,
a_new[i][j][k] = a_norm_1[i][j][k] × a_norm_2[i][j][k] × … × a_norm_N[i][j][k];
wherein t is the background color threshold, 0 < t < 1; n denotes the nth image; a_norm_n is the nth image to be fused after normalization, a_norm_n[i][j][k] being the gray value of its ith feature layer pixel in the jth row and kth column; and a_mean_n[j][k] is the mean gray value over the feature layers of that pixel;
S8: iterating S5, S6 and S7 multiple times until a sufficient number of samples has been acquired to compose the training sample set.
Preferably, the targets are of one or more kinds, and one or more in number;
preferably, the images in S2 and S3 may be further subjected to data enhancement and then respectively merged into the scene data set and the target data set; the enhancement method comprises a geometric transformation operation and/or a pixel transformation operation.
Preferably, the data in S2 and S3 are preprocessed, and the preprocessing includes, but is not limited to, one or more of pixel gray-scale processing, denoising, background differentiation, and artifact removing.
According to still another aspect of the present invention, there is provided an image sample generation system for deep learning, for implementing the above method, the system comprising: a scene data set, a target data set, a preprocessing module, a to-be-fused-image preprocessing module, an image fusion module and a generated sample library.
According to still another aspect of the present invention, there is provided a method for detecting an object in a specific scene, including:
Step 1: acquiring a security inspection image of an article and preprocessing the image;
Step 2: extracting image features of the preprocessed image through a preset convolutional neural network;
Step 3: obtaining the target area of the security inspection image through a preset target detection model, the preset target detection model being obtained by training on image samples obtained by the above image sample generation method for deep learning;
Step 4: outputting the detection result of the security inspection image, including information such as the type and position of the contraband.
Compared with the prior art, the invention has at least the following beneficial effects: it obtains real-shot security inspection images of the target scene based on an analysis of the security inspection place, obtains labeled target images, determines the images to be fused, and processes them with a new fusion algorithm to obtain new samples. It requires neither shooting a large number of real-shot images on site nor labeling real-shot images manually; the algorithm is simple and can flexibly and quickly generate new sample images tailored to a place, with highly realistic samples and high labeling accuracy. It provides a large amount of usable labeled sample data for model training and solves the sample collection problem for hard-to-obtain articles, such as pistols and explosives, in the existing contraband identification field. Verification and comparison show that the new samples obtained by the method are almost identical to real-shot images containing the detection target and look especially realistic in the color image mode, further improving the efficiency and accuracy of the target detection task in intelligent security inspection.
Drawings
Fig. 1 is a flowchart of an image sample generation method based on deep learning according to an embodiment of the present invention.
Fig. 2 is an X-ray image obtained by the sample generation method according to the embodiment of the present invention.
Fig. 3 is an X-ray image of an authentic article.
Detailed Description
To make the technical solutions in one or more embodiments of this specification better understood, the technical solutions are described below clearly and completely with reference to the accompanying drawings of one or more embodiments of this specification. Obviously, the described embodiments are only a part of the embodiments of this specification, not all of them. All other embodiments that a person of ordinary skill in the art can derive from the embodiments in this specification without inventive effort fall within the protection scope of this specification.
First, the terms involved in one or more embodiments of the present invention are explained.
Contraband: articles whose manufacture, purchase, use, possession, storage, or import and export are prohibited by law, such as weapons, ammunition and explosive articles (e.g., explosives, detonators, fuse cords, etc.).
Security inspection image: an image acquired by security inspection equipment. The security inspection equipment or security inspection machine related to the invention is not limited to X-ray security inspection equipment; any security inspection equipment and/or security inspection machine that works by imaging, such as terahertz imaging equipment, falls within the protection scope of the invention.
Example 1: In order to solve the above technical problem, as shown in fig. 1, an image sample generation method based on deep learning according to the invention comprises: S1: acquiring real-shot security inspection images of the target scene to form a scene data set.
Specifically, the target scene comprises objects needing security inspection, such as luggage, express packages, bags and goods, which may appear in places such as airports, train stations, bus stations, government buildings, embassies, conference centers, exhibition centers, hotels, shopping malls, major events, post offices, schools, logistics facilities, industrial inspection sites and express transfer yards. If the target is contraband (e.g., a gun or an explosive), the target scene refers to the container in which the contraband sits, i.e., any place where the contraband can be stored. Generally, the type of scene is related to the place: for example, luggage is the main scene in places such as airports and train stations, while the scene corresponding to an express transfer station is the express parcel. As a general phenomenon, express transfer stations in different geographical locations also have different scenes; for example, the scenes of transfer stations in Haining are generally express parcels containing clothes, while the scenes of transfer stations in Kunshan are mostly express parcels containing electronic devices.
The imaging effect differs across scenes. Taking an X-ray security inspection device as an example, the principle is as follows: X-rays are electromagnetic radiation with a wavelength shorter than that of visible light; they penetrate solids and liquids far better than visible light and can even pass through a steel plate of a certain thickness. When X-rays pass through an article, internal structures of different material compositions, densities and thicknesses absorb them to different degrees: the greater the density and thickness, the more X-rays are absorbed; the smaller the density and thickness, the fewer are absorbed. The pixel values of the generated image represent the density of the object, so the intensity of the rays transmitted through the object reflects its internal structure. Generally, to show the material composition of the inspected object more intuitively, the system colors the security image obtained by fluoroscopy: organic substances are rendered orange, inorganic substances blue, and mixtures green, the specific shade being determined by the degree to which the object absorbs X-rays; the higher the absorption, the darker the color, and the lower the absorption, the lighter the color. The acquired X-ray image therefore carries not only shape features but also material-dependent colors, and both can be used for analysis and identification of articles.
Combining the above discussion of target scenes and imaging effects, the selection of the scene data in the samples can likewise be weighted according to the place. For example, for a contraband detection network provided for transfer stations, such as Haining, whose main goods are garments, data whose scene is express parcels containing garments can be used as samples, or such samples can be made by the method of the invention, when training the detection network. Therefore, when acquiring real-shot security inspection images of the target scene, a scene composition analysis can be performed on the articles to be inspected at the security inspection place, and target scene images in the corresponding proportions can be selected.
The security image can be acquired by using an X-ray security inspection device or other security inspection devices such as a terahertz security inspection device, and the embodiment does not limit the specific type or model of the security inspection device, and any device capable of obtaining the security image can be used for security inspection.
S2: and obtaining the target image with the label to form a target data set.
Specifically, the number of targets is one or more. The target image is also captured by a security inspection device; preferably, no scene is set for the target, and the security inspection image contains only the target. In the security inspection field, for example, contraband is the general term for targets in the embodiments of the invention. An annotator annotates each target so that it becomes a labeled target, the annotation content comprising a rectangular frame and the category of the target. Preferably, the more target data, the better.
Preferably, the images in S1 and S2 of this embodiment may be further subjected to data enhancement and then respectively merged into the scene data set and the target data set. The enhancement method comprises a geometric transformation operation and/or a pixel transformation operation; the geometric transformation operation comprises one or more of rotation, scaling and cropping, and preferably the annotation information is transformed synchronously during geometric transformation; the pixel transformation operation comprises one or more of noise addition, blur transformation, perspective, brightness and contrast operations. The rotation operation rotates the image clockwise or counterclockwise by a certain angle to reduce the probability that a tilted image fails to be recognized. The scaling operation: for an image sample generated by matting, a scaling ratio is input, the matted image is scaled, and the scaled image is then compressed back to the original size. The cropping operation: cropping the matted image sample reduces the probability of recognition failure caused by missing or occluded image regions. Further, the noise addition operation generates a noise matrix from the mean and the Gaussian covariance, adds the noise to the original image matrix, and checks the validity of the pixel value at each point. The blur transformation is implemented with the blur function of OpenCV, i.e., a blurred block is added to the original image. The perspective operation transforms the four corner points of the original image into four new points according to the input perspective ratio, and then applies the perspective to every point of the original image according to the mapping relation between the four points before and after the transformation. The brightness and contrast operations are realized by adjusting the RGB values of each pixel.
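To make these operations concrete, the following is a minimal sketch in Python using OpenCV and NumPy; the function names, kernel sizes and noise parameters are illustrative choices and are not prescribed by the invention.

```python
import cv2
import numpy as np

def rotate(img, angle, border_value=(255, 255, 255)):
    # Rotate clockwise/counterclockwise about the image center; the newly
    # exposed area is filled with the (white) X-ray background color.
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h), borderValue=border_value)

def add_gaussian_noise(img, mean=0.0, sigma=8.0):
    # Generate a noise matrix from the mean and Gaussian covariance, add it
    # to the original image matrix, and clip each pixel to a legal value.
    noise = np.random.normal(mean, sigma, img.shape)
    return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)

def blur_patch(img, x, y, w, h, ksize=9):
    # Emulate the "blurred block" transform: blur one rectangular region only.
    out = img.copy()
    out[y:y + h, x:x + w] = cv2.blur(out[y:y + h, x:x + w], (ksize, ksize))
    return out

def adjust_brightness_contrast(img, alpha=1.0, beta=0.0):
    # Per-pixel RGB adjustment: new = alpha * old + beta.
    return cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
```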
As an embodiment of the present invention, the data in S1 and S2 are preprocessed, and the processing manner includes, but is not limited to, one or more of pixel gray value processing, denoising, background differentiation, and artifact removing. As a preferred embodiment of the present invention, the pixel gray-scale values of the ith feature layer in the data of S1 and S2 are processed as follows:
a_norm[i][j][k] = a[i][j][k] / MAX_PIXEL_VAL[i]
wherein i = 1, 2, 3; a_norm[i][j][k] is the processed pixel gray value of the ith feature layer; a[i][j][k] is the pixel gray value of the ith feature layer; and MAX_PIXEL_VAL[i] is the theoretical maximum gray value of the ith feature layer.
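A minimal sketch of this per-layer normalization, under the assumption of 8-bit images with three feature layers so that the theoretical maximum gray value of every layer is 255 (other bit depths would change MAX_PIXEL_VAL accordingly):

```python
import numpy as np

# Assumed 8-bit, three-feature-layer images; per-layer theoretical maxima.
MAX_PIXEL_VAL = np.array([255.0, 255.0, 255.0])

def normalize_layers(a):
    # a: array of shape (H, W, 3), feature-layer index last.
    # a_norm[i][j][k] = a[i][j][k] / MAX_PIXEL_VAL[i], broadcast over layers.
    return a.astype(np.float64) / MAX_PIXEL_VAL
```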
S3: determining images to be fused, wherein the images to be fused comprise at least one real-time security check image of the target scene S1 and at least one target image S2, and the number of the images to be fused is counted as N, wherein N is an integer greater than or equal to 2. As a preferred implementation manner of the embodiment of the present invention, an image to be fused is composed of any one of the scene data set and the target data set, that is, N is 2.
S4: and carrying out size normalization on the image to be fused.
As an embodiment of the present invention, size normalization is performed on the selected images; the at least two X-ray images may be identical or different, and their sizes may be identical or different, all of which fall within the protection scope of the invention.
The length and width of the size-normalized image are set by the minimum bounding rectangle of the images to be fused. Taking two X-ray images as an example, w_new = max(w1, w2) and h_new = max(h1, h2), where the lengths and widths of the two X-ray images are (w1, h1) and (w2, h2). The size normalization of each image is realized by filling the newly added region of the image with the background color, so that the target in the original image is unchanged; the background color is related to the device that acquired the X-ray image and can be adjusted for the specific X-ray image.
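A sketch of this size normalization, assuming NumPy arrays and, as one possible choice, padding on the right and bottom edges (the method only requires that the new region be filled with the background color and the original content left unchanged):

```python
import numpy as np

def size_normalize(images, background=255):
    # w_new = max(w1, ..., wN), h_new = max(h1, ..., hN): minimum bounding size.
    h_new = max(img.shape[0] for img in images)
    w_new = max(img.shape[1] for img in images)
    padded = []
    for img in images:
        # New canvas filled with the (assumed white) background color.
        canvas = np.full((h_new, w_new) + img.shape[2:], background, dtype=img.dtype)
        canvas[:img.shape[0], :img.shape[1], ...] = img  # original content unchanged
        padded.append(canvas)
    return padded
```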
S5: and fusing the images obtained in the step S4 to form a new sample.
The specific fusion method is as follows:
on a pixel point (i, j, k) of the new sample, when the corresponding pixel points of all N images satisfy a_mean_n[j][k] ≥ t, the pixel value of the new sample is taken as the mean of the normalized values,
a_new[i][j][k] = (a_norm_1[i][j][k] + a_norm_2[i][j][k] + … + a_norm_N[i][j][k]) / N;
on the remaining pixel points, the pixel value of the new sample is set to the product of the normalized values,
a_new[i][j][k] = a_norm_1[i][j][k] × a_norm_2[i][j][k] × … × a_norm_N[i][j][k];
wherein t is the background color threshold, 0 < t < 1; n denotes the nth image; a_norm_n is the nth image to be fused after normalization, a_norm_n[i][j][k] being the gray value of its ith feature layer pixel in the jth row and kth column; and a_mean_n[j][k] is the mean gray value over the feature layers of that pixel.
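The per-pixel rule above can be vectorized as in the sketch below, which assumes normalized floating-point images and an illustrative background threshold t; it mirrors the two branches, averaging on pixels that are background in every image and multiplying elsewhere:

```python
import numpy as np

def fuse(images_norm, t=0.95):
    # images_norm: list of N arrays of shape (H, W, 3), values in [0, 1].
    stack = np.stack(images_norm)               # (N, H, W, 3)
    a_mean = stack.mean(axis=3)                 # a_mean_n[j][k], shape (N, H, W)
    all_bg = (a_mean >= t).all(axis=0)          # background in all N images
    fused = stack.prod(axis=0)                  # product branch
    fused[all_bg] = stack.mean(axis=0)[all_bg]  # mean branch on background
    return fused
```

Multiplication matches the physics of X-ray imaging, where the transmittances of stacked materials combine multiplicatively, while averaging on all-background pixels preserves a natural background texture.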
S6: the S3, S4, S5 are iterated multiple times until a sufficient number of samples are acquired as a sample composition for training.
Preferably, the composition of the images to be fused can be targeted to different places, consistent with the idea of step S1 of this embodiment of the invention. For example, for the detection network samples of an airport, the composition proportion of real-shot security inspection images of target scenes among the images to be fused is selected according to the daily scene proportions, for example 60% large luggage and 30% bags and sacks as the target scenes.
The method for generating an image sample based on deep learning in embodiment 1 of this specification obtains real-shot security inspection images of the target scene based on an analysis of the security inspection place, obtains labeled target images, determines the images to be fused, and processes them with a new fusion algorithm to obtain new samples. It requires neither shooting a large number of target images in the real scene on site nor manually labeling real-shot images in a complex environment; the algorithm is simple and can flexibly and quickly generate new sample images tailored to a place. Comparison shows that the new sample obtained by the method of this embodiment, shown in figure 2, is almost identical to the real-shot image containing the detection target in figure 3; the effect is especially realistic for color images, the realism and labeling accuracy are high, and a large amount of usable labeled sample data is provided for model training.
Example 2: an image sample generation system based on deep learning, comprising: the system comprises a scene data set, a target data set, a preprocessing module, an image to be fused preprocessing module and an image fusion module, and a sample library is generated.
Specifically, the scene data set is composed of the live-action security check image of the target scene described in embodiment 1, and the target data set is composed of the target image with the label described in embodiment 1.
The X-ray image of the article can be acquired by using an X-ray security inspection device; the articles comprise luggage, express packages, large goods and the like.
As an embodiment of the present invention, the data in the scene data set and the target data set are preprocessed, and the processing manner includes, but is not limited to, one or more of pixel gray value processing, denoising, background differentiation, and artifact removing. As a preferred embodiment of the present invention, the pixel gray-scale values of the ith feature layer in the data of the scene data set and the target data set are processed as follows:
a_norm[i][j][k] = a[i][j][k] / MAX_PIXEL_VAL[i]
wherein i = 1, 2, 3; a_norm[i][j][k] is the processed pixel gray value of the ith feature layer; a[i][j][k] is the pixel gray value of the ith feature layer; and MAX_PIXEL_VAL[i] is the theoretical maximum gray value of the ith feature layer.
Preferably, data enhancement is performed on the images in the scene data set and the target data set, and the enhanced images also become components of the scene data set and the target data set respectively. The enhancement method comprises a geometric transformation operation and/or a pixel transformation operation; preferably, the geometric transformation operation comprises one or more of rotation, scaling and cropping, and the pixel transformation operation comprises one or more of noise addition, blur transformation, perspective, brightness and contrast operations. The rotation operation rotates the image clockwise or counterclockwise by a certain angle to reduce the probability that a tilted image fails to be recognized. The scaling operation: for an image sample generated by matting, a scaling ratio is input, the matted image is scaled, and the scaled image is then compressed back to the original size. The cropping operation: cropping the matted image sample reduces the probability of recognition failure caused by missing or occluded image regions. Further, the noise addition operation generates a noise matrix from the mean and the Gaussian covariance, adds the noise to the original image matrix, and checks the validity of the pixel value at each point. The blur transformation is implemented with the blur function of OpenCV, i.e., a blurred block is added to the original image. The perspective operation transforms the four corner points of the original image into four new points according to the input perspective ratio, and then applies the perspective to every point of the original image according to the mapping relation between the four points before and after the transformation. The brightness and contrast operations are realized by adjusting the RGB values of each pixel.
The to-be-fused-image preprocessing module is used for randomly selecting at least one image from the scene data set and at least one image from the target data set and normalizing the sizes of the images.
The size normalization module is used for performing size normalization on N (N ≥ 2) X-ray images arbitrarily selected from the original samples; the at least two X-ray images may be identical or different, and their sizes may be identical or different, all of which fall within the protection scope of the invention. This embodiment repeats the arbitrary selection continuously until the required number and quality of samples are achieved.
The length and width of the size-normalized image are set by the minimum bounding rectangle of the images to be fused. Taking two X-ray images as an example, w_new = max(w1, w2) and h_new = max(h1, h2), where the lengths and widths of the two X-ray images are (w1, h1) and (w2, h2). The size normalization of each image is realized by filling the newly added region of the image with the background color, so that the target in the original image is unchanged; the background color is related to the device that acquired the X-ray image and can be adjusted for the specific X-ray image.
The image fusion module is used for fusing the pixel points at each position of the images obtained by the to-be-fused-image preprocessing module, the fusion method being as follows:
on a pixel point (i, j, k) of the new sample, when the corresponding pixel points of all N images satisfy a_mean_n[j][k] ≥ t, the pixel value of the new sample is taken as the mean of the normalized values,
a_new[i][j][k] = (a_norm_1[i][j][k] + a_norm_2[i][j][k] + … + a_norm_N[i][j][k]) / N;
on the remaining pixel points, the pixel value of the new sample is set to the product of the normalized values,
a_new[i][j][k] = a_norm_1[i][j][k] × a_norm_2[i][j][k] × … × a_norm_N[i][j][k];
wherein t is the background color threshold, 0 < t < 1; n denotes the nth image; a_norm_n is the nth image to be fused after normalization, a_norm_n[i][j][k] being the gray value of its ith feature layer pixel in the jth row and kth column; and a_mean_n[j][k] is the mean gray value over the feature layers of that pixel.
The generated sample library includes sample images generated by an image fusion module.
The number of sample images in the generated sample library is determined by the number of times the preprocessing module, the to-be-fused-image preprocessing module and the image fusion module are executed.
Example 3: corresponding to the image sample generation method based on the deep learning, according to the embodiment of the invention, the specific scene target detection method is also provided, and comprises the following steps:
Step 1: acquiring a security inspection image of an article and preprocessing the image; the preprocessing method includes, but is not limited to, one or more of image normalization, denoising, background differentiation and artifact removal.
The image is normalized to a predetermined size, for example 500 × 500 in this embodiment.
The image is denoised with a Gaussian smoothing algorithm: after Gaussian smoothing, the value of each point of the image is obtained by a weighted average of its own value and the other pixel values in its neighborhood. The specific operation is to scan each pixel in the image with a template and replace the value of the central pixel of the template with the weighted average gray value of the pixels in the neighborhood determined by the template. Gaussian smoothing removes fine noise from the image; although edge information is weakened to some extent, edges are preserved relative to the noise. The background difference algorithm takes the median gray value of the whole image (500 × 500) as the gray value of the background and then computes the absolute difference between the gray value of each pixel and the background: I_sub = |I_fg − bg|, where bg is the median of the entire image and I_fg is the gray value of the pixel. Since a foreign-object point differs from the background gray value more than a background point does, the absolute difference I_sub can be regarded as the probability that a pixel belongs to a foreign object: the larger the value, the more likely the corresponding pixel is a foreign-object point.
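A compact sketch of this preprocessing chain, assuming OpenCV and a grayscale view of the image; the 5 × 5 Gaussian kernel is an illustrative choice:

```python
import cv2
import numpy as np

def preprocess(img_bgr):
    img = cv2.resize(img_bgr, (500, 500))       # normalize to the preset size
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Gaussian smoothing: each pixel becomes a weighted average of its
    # template neighborhood, suppressing fine noise while keeping edges.
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)
    # Background difference: the median gray value of the whole image is the
    # background bg; I_sub = |I_fg - bg| scores how likely a pixel is foreign.
    bg = np.median(smooth)
    i_sub = np.abs(smooth.astype(np.float64) - bg)
    return smooth, i_sub
```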
Step 2: extracting image features of the preprocessed image through a preset convolutional neural network;
and step 3: obtaining a target area of a security inspection image through a preset target detection model; the preset target detection model is obtained by training the image sample obtained by the method of the embodiment 1 of the invention.
The training process of the preset target detection model mainly comprises the following steps:
1. collecting image samples obtained by the method of embodiment 1 of the invention and constructing a training data set;
2. presetting a deep learning network model comprising a feature extraction module, a target detection network and a loss calculation module, wherein the preset feature extraction module and the target detection network are both convolutional neural network models;
3. training the feature extraction module and the target detection network on the training data set to obtain the trained deep learning target detection model.
The training process comprises: inputting the image samples obtained by the method of embodiment 1 of the invention into the feature extraction module to extract image features, inputting the image features into the target detection network model to obtain candidate predictions for the image, inputting the candidate predictions into the loss calculation module to compute the loss function, and training the preset deep learning target detection model through the gradient back-propagation algorithm.
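Purely as an illustration, one such training loop might look as follows in PyTorch-style Python; the module, loader and loss names are placeholders rather than components defined by the invention:

```python
import torch

def train_detector(model, loss_fn, loader, epochs=10, lr=1e-3):
    # model combines the feature extraction module and the target detection
    # network; loader yields batches of generated samples with annotations.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            predictions = model(images)           # candidate predictions
            loss = loss_fn(predictions, targets)  # loss calculation module
            optimizer.zero_grad()
            loss.backward()                       # gradient back-propagation
            optimizer.step()
    return model
```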
Step 4: outputting the detection result of the security inspection image, including information such as the type and position of the contraband.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (5)

1. An image sample generation method for deep learning, comprising:
S1: carrying out scene composition analysis on the to-be-detected articles in the security inspection place;
S2: acquiring real-shot security inspection images of the target scene in the corresponding composition proportions to form a scene data set;
S3: obtaining labeled target security inspection images, captured by security inspection equipment, to form a target data set;
S4: the pixel gray values of the ith feature layer in the security inspection images of S2 and S3 are processed as follows:
a_norm[i][j][k] = a[i][j][k] / MAX_PIXEL_VAL[i]
wherein i = 1, 2, 3; a_norm[i][j][k] is the processed pixel gray value of the ith feature layer; a[i][j][k] is the pixel gray value of the ith feature layer; and MAX_PIXEL_VAL[i] is the theoretical maximum gray value of the ith feature layer;
S5: determining the images to be fused, which comprise at least one real-shot security inspection image of the target scene from S2 and at least one target security inspection image from S3; the number of images to be fused is denoted N, an integer greater than or equal to 2;
S6: carrying out size normalization on the images to be fused;
S7: fusing the images obtained in S6 to form a new sample, the fusion method being:
on a pixel point (i, j, k) of the new sample, when the corresponding pixel points of all N images satisfy a_mean_n[j][k] ≥ t, the pixel value of the new sample is taken as the mean of the normalized values,
a_new[i][j][k] = (a_norm_1[i][j][k] + a_norm_2[i][j][k] + … + a_norm_N[i][j][k]) / N;
on the remaining pixel points, the pixel value of the new sample is set to the product of the normalized values,
a_new[i][j][k] = a_norm_1[i][j][k] × a_norm_2[i][j][k] × … × a_norm_N[i][j][k];
wherein t is the background color threshold, 0 < t < 1; n denotes the nth image; a_norm_n is the nth image to be fused after normalization, a_norm_n[i][j][k] being the gray value of its ith feature layer pixel in the jth row and kth column; and a_mean_n[j][k] is the mean gray value over the feature layers of that pixel;
S8: iterating S5, S6 and S7 multiple times until a sufficient number of samples has been acquired to compose the training sample set.
2. An image sample generation method for deep learning as claimed in claim 1, wherein the labeled objects in S3 are one or more in kind and one or more in number.
3. The image sample generation method for deep learning as claimed in claim 1, wherein the images in S2 and S3 are subjected to data enhancement and then respectively merged into the scene data set and the target data set; the enhancement method comprises a geometric transformation operation and/or a pixel transformation operation.
4. An image sample generation system for deep learning, the system being configured to implement the method of any one of claims 1-3, comprising: a scene data generation module, a target data generation module, a preprocessing module, a to-be-fused-image preprocessing module, an image fusion module and a generated sample library, wherein the preprocessing module is used for processing the data obtained from the scene data generation module and the target data generation module.
5. A method for detecting an object in a specific scene, comprising:
Step 1: acquiring a security inspection image of an article and preprocessing the image;
Step 2: extracting image features of the preprocessed image through a preset convolutional neural network;
Step 3: obtaining the target area of the security inspection image through a preset target detection model, the preset target detection model being obtained by training on image samples obtained by the image sample generation method for deep learning of any one of claims 1 to 3;
Step 4: outputting the detection result of the security inspection image, including the type and position information of the contraband.
CN202010267813.2A 2020-04-08 2020-04-08 Image sample generation method, specific scene target detection method and system thereof Active CN111145177B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010267813.2A CN111145177B (en) 2020-04-08 2020-04-08 Image sample generation method, specific scene target detection method and system thereof
PCT/CN2020/113998 WO2021203618A1 (en) 2020-04-08 2020-09-08 Image sample generating method and system, and target detection method
US17/910,346 US20230162342A1 (en) 2020-04-08 2020-09-08 Image sample generating method and system, and target detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010267813.2A CN111145177B (en) 2020-04-08 2020-04-08 Image sample generation method, specific scene target detection method and system thereof

Publications (2)

Publication Number Publication Date
CN111145177A CN111145177A (en) 2020-05-12
CN111145177B true CN111145177B (en) 2020-07-31

Family

ID=70528817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010267813.2A Active CN111145177B (en) 2020-04-08 2020-04-08 Image sample generation method, specific scene target detection method and system thereof

Country Status (3)

Country Link
US (1) US20230162342A1 (en)
CN (1) CN111145177B (en)
WO (1) WO2021203618A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145177B (en) * 2020-04-08 2020-07-31 浙江啄云智能科技有限公司 Image sample generation method, specific scene target detection method and system thereof
CN111539957B (en) * 2020-07-07 2023-04-18 浙江啄云智能科技有限公司 Image sample generation method, system and detection method for target detection
CN111709948B (en) * 2020-08-19 2021-03-02 深兰人工智能芯片研究院(江苏)有限公司 Method and device for detecting defects of container
CN112235476A (en) * 2020-09-15 2021-01-15 南京航空航天大学 Test data generation method based on fusion variation
CN112488044A (en) * 2020-12-15 2021-03-12 中国银行股份有限公司 Picture processing method and device
CN112560698B (en) * 2020-12-18 2024-01-16 北京百度网讯科技有限公司 Image processing method, device, equipment and medium
CN115147671A (en) * 2021-03-18 2022-10-04 杭州海康威视系统技术有限公司 Object recognition model training method and device and storage medium
CN114648494B (en) * 2022-02-28 2022-12-06 扬州市苏灵农药化工有限公司 Pesticide suspending agent production control system based on factory digitization
CN114693968A (en) * 2022-03-23 2022-07-01 成都智元汇信息技术股份有限公司 Verification method and system based on intelligent image recognition box performance
CN114495017B (en) * 2022-04-14 2022-08-09 美宜佳控股有限公司 Image processing-based ground sundry detection method, device, equipment and medium
CN114821194B (en) * 2022-05-30 2023-07-25 深圳市科荣软件股份有限公司 Equipment running state identification method and device
CN115019112A (en) * 2022-08-09 2022-09-06 威海凯思信息科技有限公司 Target object detection method and device based on image and electronic equipment
CN116740220B (en) * 2023-08-16 2023-10-13 海马云(天津)信息技术有限公司 Model construction method and device, and photo generation method and device
CN117253144B (en) * 2023-09-07 2024-04-12 建研防火科技有限公司 Fire risk grading management and control method
CN116994002B (en) * 2023-09-25 2023-12-19 杭州安脉盛智能技术有限公司 Image feature extraction method, device, equipment and storage medium
CN117372275A (en) * 2023-11-02 2024-01-09 凯多智能科技(上海)有限公司 Image dataset expansion method and device and electronic equipment
CN117523341A (en) * 2023-11-23 2024-02-06 中船(北京)智能装备科技有限公司 Deep learning training image sample generation method, device and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463196A (en) * 2014-11-11 2015-03-25 中国人民解放军理工大学 Video-based weather phenomenon recognition method
CN108932735A (en) * 2018-07-10 2018-12-04 广州众聚智能科技有限公司 A method of generating deep learning sample
CN109948565A (en) * 2019-03-26 2019-06-28 浙江啄云智能科技有限公司 A kind of not unpacking detection method of the contraband for postal industry
CN109948562A (en) * 2019-03-25 2019-06-28 浙江啄云智能科技有限公司 A kind of safe examination system deep learning sample generating method based on radioscopic image
CN110910467A (en) * 2019-12-03 2020-03-24 浙江啄云智能科技有限公司 X-ray image sample generation method, system and application

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10333912A (en) * 1997-05-28 1998-12-18 Oki Electric Ind Co Ltd Fuzzy rule preparation method and device therefor
US11430140B2 (en) * 2018-09-18 2022-08-30 Caide Systems, Inc. Medical image generation, localizaton, registration system
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN111145177B (en) * 2020-04-08 2020-07-31 浙江啄云智能科技有限公司 Image sample generation method, specific scene target detection method and system thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463196A (en) * 2014-11-11 2015-03-25 中国人民解放军理工大学 Video-based weather phenomenon recognition method
CN108932735A (en) * 2018-07-10 2018-12-04 广州众聚智能科技有限公司 A method of generating deep learning sample
CN109948562A (en) * 2019-03-25 2019-06-28 浙江啄云智能科技有限公司 A kind of safe examination system deep learning sample generating method based on radioscopic image
CN109948565A (en) * 2019-03-26 2019-06-28 浙江啄云智能科技有限公司 A kind of not unpacking detection method of the contraband for postal industry
CN110910467A (en) * 2019-12-03 2020-03-24 浙江啄云智能科技有限公司 X-ray image sample generation method, system and application

Also Published As

Publication number Publication date
CN111145177A (en) 2020-05-12
US20230162342A1 (en) 2023-05-25
WO2021203618A1 (en) 2021-10-14

Similar Documents

Publication Publication Date Title
CN111145177B (en) Image sample generation method, specific scene target detection method and system thereof
Jain An evaluation of deep learning based object detection strategies for threat object detection in baggage security imagery
CN109948562B (en) Security check system deep learning sample generation method based on X-ray image
Rogers et al. Automated x-ray image analysis for cargo security: Critical review and future promise
CN109948565B (en) Method for detecting contraband in postal industry without opening box
US10013615B2 (en) Inspection methods and devices
CN104408482B (en) A kind of High Resolution SAR Images object detection method
CN109376667A (en) Object detection method, device and electronic equipment
CN111539957B (en) Image sample generation method, system and detection method for target detection
Rogers et al. A deep learning framework for the automated inspection of complex dual-energy x-ray cargo imagery
Mery et al. Computer vision for x-ray testing: Imaging, systems, image databases, and algorithms
CN110020647A (en) A kind of contraband object detection method, device and computer equipment
CN110488368A (en) A kind of contraband recognition methods and device based on dual intensity X-ray screening machine
CN108764082A (en) A kind of Aircraft Targets detection method, electronic equipment, storage medium and system
CN110910467B (en) X-ray image sample generation method, system and application
CN105510364A (en) Nondestructive testing system for industrial part flaws based on X rays and detection method thereof
CN111539251B (en) Security check article identification method and system based on deep learning
Zou et al. Dangerous objects detection of X-ray images using convolution neural network
Chumuang et al. Analysis of X-ray for locating the weapon in the vehicle by using scale-invariant features transform
Jaccard et al. Using deep learning on X-ray images to detect threats
CN113569981A (en) Power inspection bird nest detection method based on single-stage target detection network
CN115187842A (en) Target detection method of passive terahertz security inspection image based on mode conversion
CN110992324B (en) Intelligent dangerous goods detection method and system based on X-ray image
Zhu et al. AMOD-Net: Attention-based Multi-Scale Object Detection Network for X-Ray Baggage Security Inspection
CN115081469A (en) Article category identification method, device and equipment based on X-ray security inspection equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant