CN114049540A - Method, device, equipment and medium for detecting marked image based on artificial intelligence - Google Patents
- Publication number
- CN114049540A (application CN202111433325.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- marked
- area
- annotation
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2433—Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
Abstract
The invention relates to the technical field of artificial intelligence and provides a method, an apparatus, a device and a storage medium for detecting an annotated image based on artificial intelligence. The method comprises the following steps: cropping an annotation-region image from an original sample image; judging whether the annotation-region image is sharp and, if so, whether its annotation frame is qualified; adding the original sample image to a positive sample set when the frame is qualified, and to a negative sample set when the image is blurred or the frame is unqualified, and generating a training sample set from the two; and training a pre-constructed model on the training sample set to obtain an annotation detection model, inputting an annotation image to be detected into the annotation detection model to obtain a detection result, and feeding the detection result back to the user. The invention can detect whether the annotation frame of a sample is qualified. The invention also relates to blockchain technology: the annotation-region image may be stored in a node of a blockchain.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method, a device, equipment and a storage medium for detecting an annotated image based on artificial intelligence.
Background
Most sample data used to train intelligent recognition models depend on manually marking the to-be-recognized regions on images. For example, when training an OCR recognition model, character regions are manually framed in sample images, and whether those framed regions are qualified directly affects the recognition accuracy of the trained OCR model.
At present, whether sample data are qualified is mostly checked by manual spot checks. Although the prior art uses a target detection model (SSD) to detect whether a sample's labeling frame is qualified, that approach can only detect whether a labeling frame exists in the sample data; it cannot detect whether the position or size of the labeling frame is qualified.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, a device and a storage medium for detecting an annotated image based on artificial intelligence, which aim to solve the technical problem that the prior art cannot accurately detect whether the position or size of an annotation frame in sample data is qualified.
In order to achieve the above object, the present invention provides an annotated image detection method based on artificial intelligence, which comprises:
acquiring a preset number of original sample images, and intercepting an image of an annotation region from the original sample images;
judging whether the image of the marked area is a clear image or not based on the edge information of the image of the marked area, and judging whether the marked frame of the image of the marked area is qualified or not based on the coordinate information of the marked frame of the image of the marked area and the character coordinate information of the image of the marked area when the image of the marked area is judged to be the clear image;
when the marking frame of the marking area image is judged to be qualified, adding the original sample image to a positive sample set, when the marking area image is judged not to be a clear image, or when the marking frame of the marking area image is judged to be unqualified, adding the original sample image to a negative sample set, and generating a training sample set based on the positive sample set and the negative sample set;
training a pre-constructed model based on the training sample set to obtain an annotation detection model, reading an annotation image to be detected input by a user, inputting the annotation image to be detected into the annotation detection model to obtain a detection result, and feeding the detection result back to the user.
Preferably, the intercepting the image of the labeled region from the original sample image includes:
reading coordinate information of a labeling frame based on a pixel value of the labeling frame in the original sample image;
and positioning a target area of the image of the labeling area according to the coordinate information of the labeling frame, and intercepting the image of the labeling area corresponding to the labeling frame from the original sample image based on the target area.
Preferably, the determining whether the labeled region image is a clear image based on the edge information of the labeled region image includes:
performing re-blurring processing on the image of the marked area by using a Gaussian smoothing filter to obtain a re-blurred image;
respectively converting the marked area image and the re-blurred image into gray level images, and respectively performing edge detection on the gray level images to obtain an edge image corresponding to the marked area image and an edge image corresponding to the re-blurred image;
respectively carrying out blocking processing on the edge image corresponding to the marked area image and the edge image corresponding to the re-blurred image to obtain a plurality of sub-blocks, and respectively calculating the variance of each sub-block corresponding to the marked area image and the variance of each sub-block corresponding to the re-blurred image;
calculating the similarity of the marked region image and the re-blurred image based on the variance of each sub-block corresponding to the marked region image and the variance of each sub-block corresponding to the re-blurred image, and judging that the marked region image is a clear image when the similarity is greater than a preset threshold value.
Preferably, the determining whether the labeling frame of the labeling area image is qualified or not based on the coordinate information of the labeling frame of the labeling area image and the character coordinate information of the labeling area image includes:
detecting the boundary of the character area of the image of the marked area by utilizing a maximum stable extremum area algorithm, and reading the coordinate information of the boundary of the character area;
calculating the distance between each frame of the labeling frame and each corresponding boundary of the character area based on the coordinate information of the labeling frame and the coordinate information of the boundary of the character area;
and when the distances are within the range of the preset interval and the boundaries of the character areas are within the range of the marking frame, judging that the marking frame of the marking area image is qualified.
Preferably, the feeding back the detection result to the user includes:
when the detection result is that the to-be-detected marked image is unqualified, feeding back first prompt information to the user, wherein the first prompt information is used for indicating that the to-be-detected marked image is unqualified;
and when the detection result is that the to-be-detected annotated image is qualified, feeding back second prompt information to the user, wherein the second prompt information is used for indicating that the to-be-detected annotated image is qualified.
Preferably, the method further comprises:
when the detection result is that the to-be-detected marked image is unqualified, prohibiting the user from executing the marking operation of the next marked image corresponding to the to-be-detected marked image;
and when the detection result is that the to-be-detected labeled image is qualified, allowing the user to execute the labeling operation of the next labeled image corresponding to the to-be-detected labeled image.
Preferably, the pre-constructed model is a resnet50 model, and the training process of the label detection model includes:
labeling a preset detection label for each original sample image in the training sample set;
inputting each original sample image in the training sample set into a resnet50 model to obtain a prediction label of each original sample image in the training sample set;
reading a preset detection label of each original sample image in the training sample set;
and determining the structural parameters of the label detection model by minimizing the loss value between the prediction label and the preset detection label to obtain the trained label detection model.
In order to achieve the above object, the present invention further provides an artificial intelligence-based annotated image detection apparatus, comprising:
an intercepting module: the method comprises the steps of obtaining a preset number of original sample images, and intercepting an image of an annotation region from the original sample images;
a judging module: the image processing device is used for judging whether the image of the marked area is a clear image or not based on the edge information of the image of the marked area, and judging whether the marked frame of the image of the marked area is qualified or not based on the coordinate information of the marked frame of the image of the marked area and the character coordinate information of the image of the marked area when the image of the marked area is judged to be the clear image;
a generation module: the original sample image is added to a positive sample set when the marking frame of the marking area image is judged to be qualified, the original sample image is added to a negative sample set when the marking area image is judged not to be a clear image or the marking frame of the marking area image is judged to be unqualified, and a training sample set is generated based on the positive sample set and the negative sample set;
a detection module: the method is used for training a pre-constructed model based on the training sample set to obtain an annotation detection model, reading an annotation image to be detected input by a user, inputting the annotation image to be detected into the annotation detection model to obtain a detection result, and feeding the detection result back to the user.
In order to achieve the above object, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a program executable by the at least one processor to enable the at least one processor to perform any of the steps of the artificial intelligence based annotation image detection method described above.
To achieve the above object, the present invention also provides a computer-readable storage medium storing an artificial intelligence based annotation image detection program, which when executed by a processor, implements any of the steps of the artificial intelligence based annotation image detection method as described above.
According to the method, apparatus, device and storage medium for detecting an annotated image based on artificial intelligence provided by the invention, the annotation-region image cropped from each original sample image is judged for sharpness and its annotation frame is judged for qualification; the original sample images are accordingly divided into a positive sample set and a negative sample set to generate a training sample set, which is used to train the annotation detection model. The invention can thereby detect whether the annotation frames drawn by labeling personnel on sample data are qualified, remind the labeling personnel to label samples correctly, improve the quality of sample-data labeling, and improve the training precision of related models.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a preferred embodiment of the method for detecting an annotated image based on artificial intelligence according to the present invention;
FIG. 2 is a block diagram of an embodiment of an apparatus for detecting annotated images based on artificial intelligence according to the present invention;
FIG. 3 is a diagram of an electronic device according to a preferred embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention can acquire and process related data based on an artificial intelligence technology. Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
The invention provides an annotated image detection method based on artificial intelligence. Referring to fig. 1, a schematic method flow chart of an embodiment of the method for detecting an annotated image based on artificial intelligence is shown. The method may be performed by an electronic device, which may be implemented by software and/or hardware. The marked image detection method based on artificial intelligence comprises the following steps:
step S10: the method comprises the steps of obtaining a preset number of original sample images, and intercepting an image of an annotation region from the original sample images.
The sample data for training an OCR recognition model mainly depend on manually framing character areas in images, and whether the framed areas are qualified directly affects the recognition accuracy of the trained OCR recognition model. The present scheme is described taking as an example the detection of whether the framed regions of sample data for training an OCR recognition model are qualified. The sample data may be images containing text from different fields, such as reimbursement documents, invoices, physical examination reports and health records in the medical field.
In this embodiment, an artificially labeled original sample image may be obtained from a local database or a third-party database, where the original sample image has a labeling frame and characters framed by the labeling frame, and the labeling frame may be a labeling frame with a preset color (e.g., red) to frame the characters in the image. And intercepting a marked region image from the original sample image according to the coordinate information of the marked frame in the original sample image, wherein the marked region image is an image which takes the marked frame as an outer boundary and contains character information framed and selected by the marked frame.
In one embodiment, the intercepting an annotated zone image from the original sample image comprises:
reading coordinate information of a labeling frame based on a pixel value of the labeling frame in the original sample image;
and positioning a target area of the image of the labeling area according to the coordinate information of the labeling frame, and intercepting the image of the labeling area corresponding to the labeling frame from the original sample image based on the target area.
Labeling frames with the same RGB value are uniformly configured before labeling. The region of the labeling frame is identified from its pixel value; the horizontal coordinate, vertical coordinate, width and height of the labeling frame in the original sample image are obtained with the OpenCV toolkit; and the annotation-region image corresponding to the labeling frame is cropped from the original sample image using this coordinate information. It can be understood that the cropped annotation-region image is slightly higher and wider than the labeling frame itself.
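The box-location step above can be sketched with plain NumPy in place of the OpenCV toolkit the description mentions. The pure-red box color and the 5-pixel crop margin are assumptions for illustration, not values fixed by the patent:

```python
import numpy as np

def crop_annotation_region(image, box_rgb=(255, 0, 0), margin=5):
    """Locate the annotation frame by its configured RGB value and crop the
    region it encloses, expanded by a small margin so the crop is slightly
    larger than the frame itself (box_rgb and margin are assumed values)."""
    # Mask of pixels whose color exactly matches the annotation-frame color.
    mask = np.all(image == np.array(box_rgb, dtype=image.dtype), axis=-1)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no annotation frame found in this sample
    # Bounding box of the frame, widened by the margin and clipped to the image.
    top = max(ys.min() - margin, 0)
    bottom = min(ys.max() + margin + 1, image.shape[0])
    left = max(xs.min() - margin, 0)
    right = min(xs.max() + margin + 1, image.shape[1])
    return image[top:bottom, left:right]
```

A real pipeline would read the coordinates with OpenCV as described; this sketch only shows the pixel-value-to-coordinates logic.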
Step S20: judging whether the image of the marked area is a clear image or not based on the edge information of the image of the marked area, and judging whether the marked frame of the image of the marked area is qualified or not based on the coordinate information of the marked frame of the image of the marked area and the character coordinate information of the image of the marked area when the image of the marked area is judged to be the clear image.
In this embodiment, after the annotation-region image is cropped from the original sample image, it is necessary to detect whether it is a sharp image; if it is not sharp (i.e., it is blurred), the original sample image is an unqualified sample image. When the annotation-region image is detected to be sharp, whether its labeling frame is qualified is further detected from the coordinate information of the labeling frame and the character coordinate information of the annotation-region image, for example whether the labeling frame overlaps the character region and whether the spacing between the labeling frame and the characters inside it is too loose or too tight.
In one embodiment, the determining whether the annotated zone image is a sharp image based on the edge information of the annotated zone image includes:
performing re-blurring processing on the image of the marked area by using a Gaussian smoothing filter to obtain a re-blurred image;
respectively converting the marked area image and the re-blurred image into gray level images, and respectively performing edge detection on the gray level images to obtain an edge image corresponding to the marked area image and an edge image corresponding to the re-blurred image;
respectively carrying out blocking processing on the edge image corresponding to the marked area image and the edge image corresponding to the re-blurred image to obtain a plurality of sub-blocks, and respectively calculating the variance of each sub-block corresponding to the marked area image and the variance of each sub-block corresponding to the re-blurred image;
calculating the similarity of the marked region image and the re-blurred image based on the variance of each sub-block corresponding to the marked region image and the variance of each sub-block corresponding to the re-blurred image, and judging that the marked region image is a clear image when the similarity is greater than a preset threshold value.
Re-blurring the annotation-region image with a Gaussian smoothing filter yields a corresponding re-blurred image. Both images are converted to grayscale and binarized: the gray value of each point is mapped to 0 or 255 (for example, values above a threshold of 100 are set to 255 and values below it to 0), which greatly reduces the amount of data in the image and highlights the character outlines, facilitating further processing. Edge images are then extracted from the grayscale images with the Canny edge detection algorithm, and the edge images of the annotation-region image and of the re-blurred image are each partitioned into sub-blocks with a sliding window, whose size and stride can be set according to actual requirements. The variance of each sub-block is then computed: the sum of the squared differences between each pixel's gray value and the sub-block's mean gray value, divided by the total number of pixels in the sub-block. Finally, the variance of each sub-block of the annotation-region image is subtracted from the variance of the corresponding sub-block of the re-blurred image, the absolute values of these differences are averaged to give a similarity value, and the annotation-region image is judged to be a sharp image when this value is greater than a preset threshold (for example, 80%).
In one embodiment, the determining whether the annotation frame of the annotation region image is qualified based on the coordinate information of the annotation frame of the annotation region image and the character coordinate information of the annotation region image includes:
detecting the boundary of the character area of the image of the marked area by utilizing a maximum stable extremum area algorithm, and reading the coordinate information of the boundary of the character area;
calculating the distance between each frame of the labeling frame and each corresponding boundary of the character area based on the coordinate information of the labeling frame and the coordinate information of the boundary of the character area;
and when the distances are within the range of the preset interval and the boundaries of the character areas are within the range of the marking frame, judging that the marking frame of the marking area image is qualified.
Because the colors (or gray values) of the character areas are consistent, the character areas of the annotation-region image can be detected with the maximally stable extremal regions (MSER) algorithm. The abscissa and ordinate of the character-area boundary are read, and the distances between the upper, lower, left and right borders of the labeling frame and the corresponding boundaries of the character area are calculated. When each distance lies within a preset interval (for example, [5, 20] pixels) and the character-area boundaries all lie within the labeling frame (that is, the characters are entirely inside it), the labeling frame of the annotation-region image is judged to be qualified; otherwise it is judged to be unqualified. For example, if the labeling frame is too far from the characters, or too close to them, it is judged unqualified.
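The qualification rule can be sketched as follows, with a simple dark-pixel bounding box standing in for the MSER text detector (which is not reimplemented here); boxes are `(left, top, right, bottom)` tuples and the [5, 20]-pixel gap interval is taken from the example above:

```python
import numpy as np

def text_bounds(gray, thresh=128):
    """Bounding box of dark (text) pixels on a light background -- a crude
    stand-in for the MSER character-region detection of the description."""
    ys, xs = np.nonzero(gray < thresh)
    return (xs.min(), ys.min(), xs.max(), ys.max())

def box_is_qualified(frame_box, text_box, gap_range=(5, 20)):
    """The frame passes when the text lies fully inside it and every gap
    between a frame edge and the matching text edge falls in gap_range."""
    fl, ft, fr, fb = frame_box
    tl, tt, tr, tb = text_box
    # Per-edge gaps; a negative gap means the text spills outside the frame.
    gaps = (tl - fl, tt - ft, fr - tr, fb - tb)
    inside = all(g >= 0 for g in gaps)
    spaced = all(gap_range[0] <= g <= gap_range[1] for g in gaps)
    return inside and spaced
```

A frame hugging the text too tightly (gap below 5) or too loosely (gap above 20) fails, matching the "too far / too close" examples in the text.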
Step S30: and when judging that the labeling frame of the labeling area image is qualified, adding the original sample image to a positive sample set, when judging that the labeling area image is not a clear image, or when judging that the labeling frame of the labeling area image is not qualified, adding the original sample image to a negative sample set, and generating a training sample set based on the positive sample set and the negative sample set.
In this embodiment, the sample images in the positive sample set are original sample images that are sharp and whose labeling frames are qualified, and the sample images in the negative sample set are original sample images that are blurred or whose labeling frames are unqualified. A training sample set is generated from the positive and negative sample sets; each sample is labeled 1 or 0 to represent "qualified" or "unqualified", and the training sample set is split into a training set and a validation set according to a preset ratio (for example, 8:2) for model training. Further, the original sample images may also be augmented by randomly adding noise and color variations.
In one embodiment, the method further comprises: and adjusting the size of each original sample image in the training sample set to be a preset size.
The width of each original sample image in the training sample set is normalized to 224 with normalization ratio k (k = 224 / original image width); the height is multiplied by the same k, and the image is then zero-padded at the top and bottom to 224 pixels, giving a sample image of size 224 x 224.
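The width-normalize-then-pad step can be sketched as below; nearest-neighbour resampling stands in for whatever interpolation a real pipeline would use, and the even split of the padding between top and bottom is an assumption (the description only says padding is applied above and below):

```python
import numpy as np

def normalize_sample(gray, size=224):
    """Scale width to `size` with ratio k, scale height by the same k,
    then zero-pad top and bottom to a size x size square."""
    h, w = gray.shape
    k = size / w
    new_h = min(int(round(h * k)), size)
    # Nearest-neighbour index maps for the resize (illustrative only).
    row_idx = (np.arange(new_h) / k).astype(int).clip(0, h - 1)
    col_idx = (np.arange(size) / k).astype(int).clip(0, w - 1)
    resized = gray[row_idx][:, col_idx]
    pad_total = size - new_h
    top = pad_total // 2
    return np.pad(resized, ((top, pad_total - top), (0, 0)))
```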
Step S40: training a pre-constructed model based on the training sample set to obtain an annotation detection model, reading an annotation image to be detected input by a user, inputting the annotation image to be detected into the annotation detection model to obtain a detection result, and feeding the detection result back to the user.
In this embodiment, the annotation detection model is obtained by training the resnet50 model on the training sample set, and it can detect whether subsequently hand-labeled sample images are qualified. When a user manually labels a sample image for OCR training at a terminal, the to-be-detected annotated image is read and input into the annotation detection model; the detection result is obtained and fed back to the user. This reminds labeling personnel in real time to label correctly and improves the quality of the samples used to train the OCR recognition model. Specifically, the training process of the annotation detection model includes:
labeling a preset detection label for each original sample image in the training sample set;
inputting each original sample image in the training sample set into a resnet50 model to obtain a prediction label of each original sample image in the training sample set;
reading a preset detection label of each original sample image in the training sample set;
and determining the structural parameters of the label detection model by minimizing the loss value between the prediction label and the preset detection label to obtain the trained label detection model.
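The training steps above can be illustrated with a deliberately tiny stand-in: a one-parameter logistic classifier trained by minimizing the cross-entropy loss between predicted labels and the preset detection labels. This is only a sketch of the loss-minimization step — the patent's actual model is resnet50, and the optimizer, learning rate and epoch count below are assumptions:

```python
import math

def train_classifier(features, labels, lr=0.5, epochs=200):
    """Minimize binary cross-entropy between predicted and preset labels
    by stochastic gradient descent. A one-feature logistic model stands
    in for the resnet50 backbone used in the patent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted label
            grad = p - y                              # d(loss)/d(logit)
            w -= lr * grad * x
            b -= lr * grad
    return w, b

# toy data: large feature value -> qualified (1), small -> unqualified (0)
w, b = train_classifier([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])
predict = lambda x: 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0
print(predict(0.15), predict(0.85))  # 0 1
```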
In one embodiment, the feeding back the detection result to the user includes:
when the detection result is that the to-be-detected marked image is unqualified, feeding back first prompt information to the user, wherein the first prompt information is used for indicating that the to-be-detected marked image is unqualified;
and when the detection result is that the to-be-detected annotated image is qualified, feeding back second prompt information to the user, wherein the second prompt information is used for indicating that the to-be-detected annotated image is qualified.
When the image annotated by the user is unqualified, first prompt information is fed back to the user, for example, "The image you labeled is unqualified; please label it again"; when the image annotated by the user is qualified, second prompt information is fed back, for example, "The labeled image is qualified".
In one embodiment, the method further comprises:
when the detection result is that the to-be-detected marked image is unqualified, prohibiting the user from executing the marking operation of the next marked image corresponding to the to-be-detected marked image;
and when the detection result is that the to-be-detected labeled image is qualified, allowing the user to execute the labeling operation of the next labeled image corresponding to the to-be-detected labeled image.
When the to-be-detected annotated image is detected to be unqualified, the user is prohibited from annotating the next image until the rejected image has been re-annotated and passes the check. When the to-be-detected annotated image is detected to be qualified, the user is allowed to annotate the next annotated image corresponding to it.
Referring to fig. 2, a functional block diagram of an annotated image detection apparatus 100 based on artificial intelligence according to the present invention is shown.
The annotation image detection apparatus 100 based on artificial intelligence of the present invention can be installed in an electronic device. According to the implemented functions, the artificial intelligence based annotation image detection apparatus 100 can include an intercepting module 110, a judging module 120, a generating module 130 and a detecting module 140. A module according to the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device, can perform a fixed function, and are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the interception module 110: used for acquiring a preset number of original sample images and intercepting an annotation region image from the original sample images;
the judging module 120: used for judging whether the annotation region image is a clear image based on its edge information, and, when it is judged to be a clear image, judging whether its labeling frame is qualified based on the coordinate information of the labeling frame and the character coordinate information of the annotation region image;
the generation module 130: used for adding the original sample image to a positive sample set when the labeling frame of the annotation region image is judged to be qualified, adding the original sample image to a negative sample set when the annotation region image is judged not to be a clear image or its labeling frame is judged to be unqualified, and generating a training sample set based on the positive and negative sample sets;
the detection module 140: used for training a pre-constructed model based on the training sample set to obtain an annotation detection model, reading an annotation image to be detected input by a user, inputting it into the annotation detection model to obtain a detection result, and feeding the detection result back to the user.
In one embodiment, the intercepting an annotated zone image from the original sample image comprises:
reading coordinate information of a labeling frame based on a pixel value of the labeling frame in the original sample image;
and positioning a target area of the image of the labeling area according to the coordinate information of the labeling frame, and intercepting the image of the labeling area corresponding to the labeling frame from the original sample image based on the target area.
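Assuming the labeling-frame coordinates are expressed as an (x1, y1, x2, y2) pixel rectangle — a hypothetical convention, since the patent does not fix one — the interception step that crops the annotation region from the original sample image could be sketched as:

```python
def crop_annotation_region(image, frame):
    """Crop the annotation-region image delimited by the labeling frame.
    `image` is a row-major list of pixel rows; `frame` is (x1, y1, x2, y2)
    with (x1, y1) the top-left and (x2, y2) the exclusive bottom-right."""
    x1, y1, x2, y2 = frame
    return [row[x1:x2] for row in image[y1:y2]]

# toy 6x4 "image" whose pixel value encodes its position: 10*row + col
image = [[c + 10 * r for c in range(6)] for r in range(4)]
region = crop_annotation_region(image, (1, 1, 4, 3))
print(region)  # [[11, 12, 13], [21, 22, 23]]
```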
In one embodiment, the determining whether the annotated zone image is a sharp image based on the edge information of the annotated zone image includes:
performing re-blurring processing on the image of the marked area by using a Gaussian smoothing filter to obtain a re-blurred image;
respectively converting the marked area image and the re-blurred image into gray level images, and respectively performing edge detection on the gray level images to obtain an edge image corresponding to the marked area image and an edge image corresponding to the re-blurred image;
respectively carrying out blocking processing on the edge image corresponding to the marked area image and the edge image corresponding to the re-blurred image to obtain a plurality of sub-blocks, and respectively calculating the variance of each sub-block corresponding to the marked area image and the variance of each sub-block corresponding to the re-blurred image;
calculating the similarity of the marked region image and the re-blurred image based on the variance of each sub-block corresponding to the marked region image and the variance of each sub-block corresponding to the re-blurred image, and judging that the marked region image is a clear image when the similarity is greater than a preset threshold value.
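The blocking, variance and similarity steps can be sketched as follows, assuming the two edge images have already been produced by the Gaussian re-blur and edge detection described above (e.g., with an image library such as OpenCV). The min/max variance-ratio similarity used here is one plausible metric; the patent does not specify the exact formula, only that the similarity is compared against a preset threshold:

```python
def block_variances(edge_img, block=2):
    """Split an edge image into block x block sub-blocks and return the
    pixel variance of each sub-block, scanned row by row."""
    h, w = len(edge_img), len(edge_img[0])
    variances = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [edge_img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            mean = sum(vals) / len(vals)
            variances.append(sum((v - mean) ** 2 for v in vals) / len(vals))
    return variances

def edge_similarity(orig_edges, blurred_edges, block=2):
    """Similarity of two edge images as the mean min/max ratio of the
    corresponding per-block variances (blocks with no edge energy are skipped)."""
    v1 = block_variances(orig_edges, block)
    v2 = block_variances(blurred_edges, block)
    ratios = [min(a, b) / max(a, b) for a, b in zip(v1, v2) if max(a, b) > 0]
    return sum(ratios) / len(ratios) if ratios else 1.0

sharp_edges = [[0, 200], [200, 0]]
weak_edges = [[0, 50], [50, 0]]
print(edge_similarity(sharp_edges, sharp_edges))  # 1.0
```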
In one embodiment, the determining whether the annotation frame of the annotation region image is qualified based on the coordinate information of the annotation frame of the annotation region image and the character coordinate information of the annotation region image includes:
detecting the boundary of the character area of the image of the marked area by utilizing a maximum stable extremum area algorithm, and reading the coordinate information of the boundary of the character area;
calculating the distance between each frame of the labeling frame and each corresponding boundary of the character area based on the coordinate information of the labeling frame and the coordinate information of the boundary of the character area;
and when the distances are within the range of the preset interval and the boundaries of the character areas are within the range of the marking frame, judging that the marking frame of the marking area image is qualified.
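Under the assumption that both the labeling frame and the character-area boundary are (x1, y1, x2, y2) rectangles, and with illustrative values for the preset interval (the patent leaves the interval unspecified), the qualification check might be sketched as:

```python
def frame_qualified(frame, char_boundary, min_gap=2, max_gap=12):
    """Check a labeling frame against the character-area boundary.
    Qualified when the character area lies inside the frame AND every
    frame edge sits within [min_gap, max_gap] pixels of the corresponding
    character edge (the preset interval; values here are illustrative)."""
    fx1, fy1, fx2, fy2 = frame
    cx1, cy1, cx2, cy2 = char_boundary
    inside = fx1 <= cx1 and fy1 <= cy1 and cx2 <= fx2 and cy2 <= fy2
    gaps = [cx1 - fx1, cy1 - fy1, fx2 - cx2, fy2 - cy2]
    return inside and all(min_gap <= g <= max_gap for g in gaps)

print(frame_qualified((10, 10, 110, 40), (15, 15, 105, 35)))  # True
print(frame_qualified((10, 10, 110, 40), (10, 10, 110, 40)))  # False (gaps of 0)
```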
In one embodiment, the feeding back the detection result to the user includes:
when the detection result is that the to-be-detected marked image is unqualified, feeding back first prompt information to the user, wherein the first prompt information is used for indicating that the to-be-detected marked image is unqualified;
and when the detection result is that the to-be-detected annotated image is qualified, feeding back second prompt information to the user, wherein the second prompt information is used for indicating that the to-be-detected annotated image is qualified.
In one embodiment, the detection module 140 is further configured to:
when the detection result is that the to-be-detected marked image is unqualified, prohibiting the user from executing the marking operation of the next marked image corresponding to the to-be-detected marked image;
and when the detection result is that the to-be-detected labeled image is qualified, allowing the user to execute the labeling operation of the next labeled image corresponding to the to-be-detected labeled image.
In one embodiment, the pre-constructed model is a resnet50 model, and the training process of the label detection model includes:
labeling a preset detection label for each original sample image in the training sample set;
inputting each original sample image in the training sample set into a resnet50 model to obtain a prediction label of each original sample image in the training sample set;
reading a preset detection label of each original sample image in the training sample set;
and determining the structural parameters of the label detection model by minimizing the loss value between the prediction label and the preset detection label to obtain the trained label detection model.
Fig. 3 is a schematic diagram of an electronic device 1 according to a preferred embodiment of the invention.
The electronic device 1 includes, but is not limited to: a memory 11, a processor 12, a display 13 and a network interface 14. The electronic device 1 connects to a network through the network interface 14 to obtain raw data. The network may be a wireless or wired communication network such as an Intranet, the Internet, a Global System for Mobile communications (GSM) network, Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth or Wi-Fi.
The memory 11 includes at least one type of readable storage medium, such as a flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disk or optical disk. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk or memory of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card or Flash Card equipped on the electronic device 1. Of course, the memory 11 may also comprise both an internal storage unit and an external storage device of the electronic device 1. In this embodiment, the memory 11 is generally used for storing the operating system installed in the electronic device 1 and various application software, such as the program code of the artificial intelligence based annotation image detection program 10. Further, the memory 11 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 12 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 12 is typically used for controlling the overall operation of the electronic device 1, such as performing data interaction or communication related control and processing. In this embodiment, the processor 12 is configured to run the program code stored in the memory 11 or process data, for example, run the program code of the artificial intelligence based annotation image detection program 10.
The display 13 may be referred to as a display screen or display unit. In some embodiments, the display 13 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch screen, or the like. The display 13 is used for displaying information processed in the electronic device 1 and for displaying a visual work interface, e.g. displaying the results of data statistics.
The network interface 14 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), the network interface 14 typically being used for establishing a communication connection between the electronic device 1 and other electronic devices.
FIG. 3 shows only the electronic device 1 having the components 11-14 and the artificial intelligence based annotation image detection program 10, but it will be understood that not all of the shown components are required and that more or fewer components may alternatively be implemented.
Optionally, the electronic device 1 may further comprise a user interface, which may include a Display and an input unit such as a Keyboard, and optionally also a standard wired interface and a wireless interface. In some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch screen, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
The electronic device 1 may further include a Radio Frequency (RF) circuit, a sensor, an audio circuit, and the like, which are not described in detail herein.
In the above embodiment, the processor 12, when executing the artificial intelligence based annotation image detection program 10 stored in the memory 11, can implement the following steps:
acquiring a preset number of original sample images, and intercepting an image of an annotation region from the original sample images;
judging whether the image of the marked area is a clear image or not based on the edge information of the image of the marked area, and judging whether the marked frame of the image of the marked area is qualified or not based on the coordinate information of the marked frame of the image of the marked area and the character coordinate information of the image of the marked area when the image of the marked area is judged to be the clear image;
when the marking frame of the marking area image is judged to be qualified, adding the original sample image to a positive sample set, when the marking area image is judged not to be a clear image, or when the marking frame of the marking area image is judged to be unqualified, adding the original sample image to a negative sample set, and generating a training sample set based on the positive sample set and the negative sample set;
training a pre-constructed model based on the training sample set to obtain an annotation detection model, reading an annotation image to be detected input by a user, inputting the annotation image to be detected into the annotation detection model to obtain a detection result, and feeding the detection result back to the user.
The storage device may be the memory 11 of the electronic device 1, or may be another storage device communicatively connected to the electronic device 1.
For the detailed description of the above steps, please refer to the above description of fig. 2 regarding a functional block diagram of an embodiment of the artificial intelligence based annotation image detection apparatus 100 and fig. 1 regarding a flowchart of an embodiment of an artificial intelligence based annotation image detection method.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may be non-volatile or volatile. The computer readable storage medium may be any one or any combination of hard disks, multimedia cards, SD cards, flash memory cards, SMCs, Read Only Memories (ROMs), Erasable Programmable Read Only Memories (EPROMs), portable compact disc read only memories (CD-ROMs), USB memories, etc. The computer-readable storage medium includes a storage data area storing data created according to use of a blockchain node and a storage program area storing an artificial intelligence based annotation image detection program 10, and when executed by a processor, the artificial intelligence based annotation image detection program 10 implements operations of:
acquiring a preset number of original sample images, and intercepting an image of an annotation region from the original sample images;
judging whether the image of the marked area is a clear image or not based on the edge information of the image of the marked area, and judging whether the marked frame of the image of the marked area is qualified or not based on the coordinate information of the marked frame of the image of the marked area and the character coordinate information of the image of the marked area when the image of the marked area is judged to be the clear image;
when the marking frame of the marking area image is judged to be qualified, adding the original sample image to a positive sample set, when the marking area image is judged not to be a clear image, or when the marking frame of the marking area image is judged to be unqualified, adding the original sample image to a negative sample set, and generating a training sample set based on the positive sample set and the negative sample set;
training a pre-constructed model based on the training sample set to obtain an annotation detection model, reading an annotation image to be detected input by a user, inputting the annotation image to be detected into the annotation detection model to obtain a detection result, and feeding the detection result back to the user.
The specific implementation of the computer-readable storage medium of the present invention is substantially the same as the above-mentioned specific implementation of the method for detecting an annotated image based on artificial intelligence, and will not be described herein again.
In another embodiment, in order to further ensure the privacy and security of all the data involved, the data may be stored in a node of a blockchain; for example, the to-be-detected annotated image and the original sample images may be stored in blockchain nodes.
It should be noted that the blockchain in the present invention is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated by cryptographic methods, each containing the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer and the like.
It should be noted that the above-mentioned numbers of the embodiments of the present invention are merely for description and do not represent the relative merits of the embodiments. The terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article or method. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, apparatus, article or method that includes the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part contributing to the prior art, can be embodied as a software product stored in a storage medium (such as a ROM/RAM, magnetic disk or optical disk) as described above, including several instructions that enable a terminal device (such as a mobile phone, computer, electronic device or network device) to execute the methods of the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. An artificial intelligence based annotation image detection method, applied to an electronic device, characterized in that the method comprises the following steps:
acquiring a preset number of original sample images, and intercepting an image of an annotation region from the original sample images;
judging whether the image of the marked area is a clear image or not based on the edge information of the image of the marked area, and judging whether the marked frame of the image of the marked area is qualified or not based on the coordinate information of the marked frame of the image of the marked area and the character coordinate information of the image of the marked area when the image of the marked area is judged to be the clear image;
when the marking frame of the marking area image is judged to be qualified, adding the original sample image to a positive sample set, when the marking area image is judged not to be a clear image, or when the marking frame of the marking area image is judged to be unqualified, adding the original sample image to a negative sample set, and generating a training sample set based on the positive sample set and the negative sample set;
training a pre-constructed model based on the training sample set to obtain an annotation detection model, reading an annotation image to be detected input by a user, inputting the annotation image to be detected into the annotation detection model to obtain a detection result, and feeding the detection result back to the user.
2. The artificial intelligence based annotation image detection method of claim 1, wherein the intercepting an annotation region image from the original sample image comprises:
reading coordinate information of a labeling frame based on a pixel value of the labeling frame in the original sample image;
and positioning a target area of the image of the labeling area according to the coordinate information of the labeling frame, and intercepting the image of the labeling area corresponding to the labeling frame from the original sample image based on the target area.
3. The method for detecting the annotated image based on artificial intelligence of claim 1, wherein the determining whether the annotated zone image is a sharp image based on the edge information of the annotated zone image comprises:
performing re-blurring processing on the image of the marked area by using a Gaussian smoothing filter to obtain a re-blurred image;
respectively converting the marked area image and the re-blurred image into gray level images, and respectively performing edge detection on the gray level images to obtain an edge image corresponding to the marked area image and an edge image corresponding to the re-blurred image;
respectively carrying out blocking processing on the edge image corresponding to the marked area image and the edge image corresponding to the re-blurred image to obtain a plurality of sub-blocks, and respectively calculating the variance of each sub-block corresponding to the marked area image and the variance of each sub-block corresponding to the re-blurred image;
calculating the similarity of the marked region image and the re-blurred image based on the variance of each sub-block corresponding to the marked region image and the variance of each sub-block corresponding to the re-blurred image, and judging that the marked region image is a clear image when the similarity is greater than a preset threshold value.
4. The method for detecting the annotated image based on artificial intelligence of claim 1, wherein the step of determining whether the annotated frame of the annotated zone image is qualified or not based on the coordinate information of the annotated frame of the annotated zone image and the character coordinate information of the annotated zone image comprises:
detecting the boundary of the character area of the image of the marked area by utilizing a maximum stable extremum area algorithm, and reading the coordinate information of the boundary of the character area;
calculating the distance between each frame of the labeling frame and each corresponding boundary of the character area based on the coordinate information of the labeling frame and the coordinate information of the boundary of the character area;
and when the distances are within the range of the preset interval and the boundaries of the character areas are within the range of the marking frame, judging that the marking frame of the marking area image is qualified.
5. The artificial intelligence based annotation image detection method of claim 1, wherein the feeding back the detection result to the user comprises:
when the detection result is that the to-be-detected marked image is unqualified, feeding back first prompt information to the user, wherein the first prompt information is used for indicating that the to-be-detected marked image is unqualified;
and when the detection result is that the to-be-detected annotated image is qualified, feeding back second prompt information to the user, wherein the second prompt information is used for indicating that the to-be-detected annotated image is qualified.
6. The artificial intelligence based annotated image detection method of claim 5, wherein the method further comprises:
when the detection result is that the to-be-detected marked image is unqualified, prohibiting the user from executing the marking operation of the next marked image corresponding to the to-be-detected marked image;
and when the detection result is that the to-be-detected labeled image is qualified, allowing the user to execute the labeling operation of the next labeled image corresponding to the to-be-detected labeled image.
7. The artificial intelligence based annotation image detection method of any one of claims 1 to 6, wherein the pre-constructed model is a resnet50 model, and the training process of the annotation detection model comprises:
labeling a preset detection label for each original sample image in the training sample set;
inputting each original sample image in the training sample set into a resnet50 model to obtain a prediction label of each original sample image in the training sample set;
reading a preset detection label of each original sample image in the training sample set;
and determining the structural parameters of the label detection model by minimizing the loss value between the prediction label and the preset detection label to obtain the trained label detection model.
8. An apparatus for detecting an annotated image based on artificial intelligence, the apparatus comprising:
an intercepting module: used for acquiring a preset number of original sample images and intercepting an annotation region image from the original sample images;
a judging module: used for judging whether the annotation region image is a clear image based on its edge information, and, when it is judged to be a clear image, judging whether its labeling frame is qualified based on the coordinate information of the labeling frame and the character coordinate information of the annotation region image;
a generation module: used for adding the original sample image to a positive sample set when the labeling frame of the annotation region image is judged to be qualified, adding the original sample image to a negative sample set when the annotation region image is judged not to be a clear image or its labeling frame is judged to be unqualified, and generating a training sample set based on the positive and negative sample sets;
a detection module: used for training a pre-constructed model based on the training sample set to obtain an annotation detection model, reading an annotation image to be detected input by a user, inputting it into the annotation detection model to obtain a detection result, and feeding the detection result back to the user.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a program executable by the at least one processor to enable the at least one processor to perform the artificial intelligence based annotation image detection method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores an artificial intelligence based annotation image detection program which, when executed by a processor, implements the steps of the artificial intelligence based annotation image detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111433325.5A CN114049540B (en) | 2021-11-29 | Method, device, equipment and medium for detecting annotation image based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114049540A true CN114049540A (en) | 2022-02-15 |
CN114049540B CN114049540B (en) | 2024-10-22 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114648672A (en) * | 2022-02-25 | 2022-06-21 | 北京百度网讯科技有限公司 | Method and device for constructing sample image set, electronic equipment and readable storage medium |
CN114820456A (en) * | 2022-03-30 | 2022-07-29 | 图湃(北京)医疗科技有限公司 | Image processing method and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109902672A (en) * | 2019-01-17 | 2019-06-18 | 平安科技(深圳)有限公司 | Image labeling method and device, storage medium, computer equipment |
CN109934851A (en) * | 2019-03-28 | 2019-06-25 | 新华三技术有限公司 | A kind of mask method, device and machine readable storage medium |
CN111401371A (en) * | 2020-06-03 | 2020-07-10 | 中邮消费金融有限公司 | Text detection and identification method and system and computer equipment |
CN111881908A (en) * | 2020-07-20 | 2020-11-03 | 北京百度网讯科技有限公司 | Target detection model correction method, detection method, device, equipment and medium |
CN113239931A (en) * | 2021-05-17 | 2021-08-10 | 上海中通吉网络技术有限公司 | Logistics station license plate recognition method |
CN113326721A (en) * | 2020-02-29 | 2021-08-31 | 湖南超能机器人技术有限公司 | Image blur detection method and device based on sliding window re-blur |
CN113705691A (en) * | 2021-08-30 | 2021-11-26 | 平安国际智慧城市科技股份有限公司 | Image annotation checking method, device, equipment and medium based on artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109492643B | | Certificate identification method and device based on OCR, computer equipment and storage medium |
CN111476227B | | Target field identification method and device based on OCR and storage medium |
US10635946B2 | | Eyeglass positioning method, apparatus and storage medium |
CN109886928B | | Target cell marking method, device, storage medium and terminal equipment |
CN111325104B | | Text recognition method, device and storage medium |
CN110751500B | | Processing method and device for sharing pictures, computer equipment and storage medium |
CN110675940A | | Pathological image labeling method and device, computer equipment and storage medium |
CN108009536A | | Scanning-based exam paper grading method and system |
CN105373978A | | OCR-based manual exam paper grading device and method |
CN111274957A | | Webpage verification code identification method, device, terminal and computer storage medium |
CN111178147B | | Cracked-screen grading method, device, equipment and computer-readable storage medium |
CN110728687B | | File image segmentation method and device, computer equipment and storage medium |
CN112686131B | | Image processing method, device, equipment and storage medium |
CN113869017B | | Table image reconstruction method, device, equipment and medium based on artificial intelligence |
CN111144372A | | Vehicle detection method, device, computer equipment and storage medium |
CN111553334A | | Questionnaire image recognition method, electronic device, and storage medium |
CN110363222B | | Picture labeling method and device for model training, computer equipment and storage medium |
CN106940804B | | Method for automatically entering form data in a construction engineering material management system |
CN113724137A | | Image recognition method, device, equipment and storage medium based on image segmentation |
JP2019079347A | | Character estimation system, character estimation method, and character estimation program |
CN112749649A | | Method and system for intelligent recognition and generation of electronic contracts |
CN112085721A | | Damage assessment method, device, equipment and storage medium for flooded vehicles based on artificial intelligence |
CN109741273A | | Automatic processing and scoring method for low-quality mobile phone photographs |
CN112418206A | | Picture classification method based on position detection model and related equipment |
CN114386013A | | Automatic student status authentication method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |