CN115222739B - Defect labeling method, device, storage medium, equipment and computer program product - Google Patents

Defect labeling method, device, storage medium, equipment and computer program product

Info

Publication number
CN115222739B
CN115222739B (application CN202211142189.9A)
Authority
CN
China
Prior art keywords
defect
image
information
images
labeling
Prior art date
Legal status
Active
Application number
CN202211142189.9A
Other languages
Chinese (zh)
Other versions
CN115222739A (en)
Inventor
Inventor not disclosed
Current Assignee
Chengdu Shuzhilian Technology Co Ltd
Original Assignee
Chengdu Shuzhilian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Shuzhilian Technology Co Ltd
Priority to CN202211142189.9A
Publication of CN115222739A
Application granted
Publication of CN115222739B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Abstract

The embodiment of the application discloses a defect labeling method, device, storage medium, equipment and computer program product, relating to the technical field of artificial intelligence. The method comprises the following steps: inputting a target image into a defect recognition model to obtain a converted image; converting the converted image and the target image into a color gamut space to obtain a first converted image and a first target image respectively; performing a subtraction operation on the first converted image and the first target image to obtain a difference image; performing a search on the difference image to extract the defect information on it; and obtaining labeling information of the defect information according to the defect information. The model performs the identification, extraction and labeling automatically, replacing a large amount of manual labeling work. By combining defect images with defect-free images, the fusion captures the salient information of all source images, and the subtraction operation further highlights fine defects that are difficult to observe with the naked eye, improving the comprehensiveness and precision of defect identification and labeling and thus the quality of defect labeling.

Description

Defect labeling method, device, storage medium, equipment and computer program product
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a defect labeling method, apparatus, storage medium, device, and computer program product.
Background
In recent years the field of artificial intelligence (AI) has developed rapidly, and deep-learning algorithms are being applied at scale to assist industrial quality inspection. Deep-learning algorithms, however, place high demands on the quantity and quality of training samples; tens of thousands of training samples are often needed to ensure model accuracy. When defects are labeled in these training samples, the large volume and complexity of the data lead to low-quality defect labeling.
Disclosure of Invention
The present application mainly aims to provide a defect labeling method, apparatus, storage medium, device and computer program product that solve the problem of low-quality defect labeling in the prior art.
In order to achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
in a first aspect, an embodiment of the present application provides a defect labeling method, including the following steps:
inputting a target image into a defect recognition model and obtaining a converted image; the defect recognition model is trained on a set of defect-free images and a set of fused images, each fused image being obtained by fusing a defect-free image with a defect image, so that the defect recognition model has the capability of converting a defect image into a defect-free image;
converting the converted image and the target image into a color gamut space to obtain a first converted image and a first target image respectively;
carrying out subtraction operation on the first conversion image and the first target image to obtain a difference image;
searching on the difference image to extract defect information on the difference image;
and obtaining the marking information of the defect information according to the defect information.
By using defect-free images and fused images, each combining a defect image with a defect-free image, as training data, the trained model learns the data distribution of defect-free images and gains the capability of converting an input defective target image toward a defect-free image. Defect extraction can then be performed against the defect-free distribution, which highlights the defect information and makes it easier to extract and label. The subsequent color gamut conversion reduces the interference of color on defect extraction for defect images taken under multiple working conditions and with multiple background plates, and the subtraction of the converted images highlights tiny defects, improving the comprehensiveness of defect extraction and hence the labeling quality.
In a possible implementation manner of the first aspect, before the target image is input into the defect identification model and the converted image is obtained, the defect labeling method further includes:
acquiring a plurality of defect images and non-defect images;
fusing the defect image and the non-defect image to obtain a fused image;
and training to obtain a defect recognition model based on the set of the defect-free images and the set of the fusion images.
The model is trained in advance so that it can be reused. The images are fused by an image fusion technique to ensure the high quality of the training samples and to preserve as much of the defect information on the images as possible, which improves the prediction quality of the model. If a small number of defect images prove difficult to extract in use, they can be processed manually and then used to further optimize the model, so that the model learns to recognize such defect images and the detection quality is further improved.
In one possible implementation of the first aspect, fusing the defect image with the non-defect image to obtain a fused image includes:
and fusing the defect image and the non-defect image by adopting a Poisson image fusion method to obtain a fused image.
The Poisson image fusion method uses the gradient field of the source image block as guidance and smoothly diffuses the difference between the target scene and the source image along the fusion boundary into the fused region, so that the fused image block blends seamlessly into the target scene while its hue and illumination remain consistent with it. In short, the defects on the defect image transition naturally and smoothly onto the defect-free image, avoiding fused images that look artificial or whose defect information is obviously inconsistent with the backplane and therefore cannot restore the real scene; this improves the quality of model training and thus the detection quality.
In one possible implementation manner of the first aspect, converting the converted image and the target image into a color gamut space to obtain a first converted image and a first target image, respectively, includes:
and converting the conversion image and the target image into an RGB color gamut space to obtain a first conversion image and a first target image respectively.
The RGB color gamut covers the full range of colors perceptible to the human eye and is the most widely used color standard in the industry. Different colors are obtained by varying and superimposing the red, green and blue channels, with RGB denoting the colors of those three channels. Within this gamut the color of an image can be adjusted quickly to highlight the defect information on the image and improve the effect of model training.
In one possible implementation manner of the first aspect, converting the conversion image and the target image into an RGB color gamut space to obtain a first conversion image and a first target image, respectively, includes:
graying the conversion image and the target image to respectively obtain a first grayscale conversion image and a first grayscale target image;
subtracting the first converted image from the first target image to obtain a difference image, comprising:
and carrying out subtraction operation on the first gray scale conversion image and the first gray scale target image to obtain a difference image.
To reduce the amount of data to be processed, lower the computation required in subsequent steps and improve the efficiency of defect detection and labeling, the converted image is grayed. In the RGB gamut a color with R = G = B is a gray color, and the common value of R = G = B is called the gray value; each pixel of a grayscale image therefore needs only one byte to store its gray value, which ranges from 0 to 255.
In one possible implementation manner of the first aspect, subtracting the first converted image from the first target image to obtain a difference image includes:
respectively obtaining pixel values of corresponding pixel points of the first conversion image and the first target image, and performing subtraction operation;
and taking the absolute value of the operation result as the pixel value of the corresponding point of the difference image to obtain the difference image.
Because the first converted image and the first target image are obtained from the same region of the product, their pixel points correspond one to one. For each pair of corresponding pixel points, the pixel values are subtracted and the absolute value of the difference is taken as the pixel value of the corresponding point of the difference image; once all pixel points have been processed, the difference image is obtained and remains consistent with the images that were subtracted.
In one possible implementation manner of the first aspect, performing a search on the difference image to extract defect information on the difference image includes:
searching on the difference image by adopting a contour searching method to extract defect contour information on the difference image;
obtaining labeling information of the defect information according to the defect information, including:
and obtaining marking information of the defect information according to the defect outline information.
Because the pixel color differences on a binarized or grayed image may not be obvious, a contour search can be used to ensure that defects are extracted quickly and completely. The contours can be obtained quickly with the contour search function of OpenCV: a threshold condition is set to obtain the boundary line of each defect, and the extraction of a defect is complete only once its contour forms a closed loop. From the distribution of the contour lines it can be quickly determined whether defects overlap, and the contour lines can be cut as required to obtain complete defect contours, making the defect extraction more thorough.
In a possible implementation manner of the first aspect, after performing a search on the difference image by using a contour search method to extract defect contour information on the difference image, the defect labeling method further includes:
performing pixel filling on the difference image according to the defect contour information to obtain first defect contour information;
obtaining labeling information of the defect information according to the defect contour information, wherein the labeling information comprises:
and obtaining marking information of the defect information according to the first defect outline information.
After a defect has been delineated by its contour, it is pixel-filled on the difference image according to the defect contour information, which further highlights the defect information and makes the subsequent extraction and labeling of the defect more accurate. Because the contour information bounds the filling, the defect can be colored quickly.
In one possible implementation manner of the first aspect, performing pixel filling on the difference image according to the defect contour information to obtain first defect contour information includes:
filling pixels in the area covered by the defect outline on the difference image, and blackening the area except the area covered by the defect outline on the difference image to obtain a first difference image;
and obtaining first defect contour information according to the first difference image.
To display the defect region even more clearly and make defect extraction more accurate and efficient, after the defect region is pixel-filled the remaining regions are set to black as a backboard that highlights the defect. The filling of the defect region must of course use a color other than black; in a binarized image white can be chosen so that it contrasts clearly with the black backboard, making defect extraction and labeling more accurate and efficient and improving the quality of the defect labeling.
In a possible implementation manner of the first aspect, obtaining annotation information of the defect information according to the defect information includes:
and rewriting the defect information in a labeling data format to obtain labeling information of the defect information.
Rewriting in a labeling data format that is general in the industry avoids having to write a dedicated conversion program: the rewriting of the defect information can be completed by directly calling the rewriting program of the existing output equipment, and the resulting labeling information can be recognized by the equipment directly, which improves the efficiency of defect labeling.
In a second aspect, an embodiment of the present application provides a defect labeling apparatus, including:
the first conversion module is used for inputting the target image into the defect identification model and obtaining a conversion image; the defect recognition model is obtained based on the set of the non-defective images and the set of the fused images through training, and the fused images are obtained based on the fusion of the non-defective images and the defect images, so that the defect recognition model has the capability of converting the defect images into the non-defective images;
the second conversion module is used for converting the converted image and the target image into a color gamut space so as to respectively obtain a first converted image and a first target image;
the difference module is used for carrying out subtraction operation on the first conversion image and the first target image to obtain a difference image;
the searching module is used for searching on the difference image so as to extract the defect information on the difference image;
and the marking module is used for acquiring marking information of the defect information according to the defect information.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is loaded and executed by a processor, the defect labeling method provided in any one of the above first aspects is implemented.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, wherein,
the memory is used for storing a computer program;
the processor is configured to load and execute a computer program to enable the electronic device to perform the defect labeling method provided in any one of the above first aspects.
Compared with the prior art, the beneficial effects of this application are:
the method comprises the steps of inputting a target image into a defect identification model and obtaining a conversion image; the defect recognition model is obtained based on the set of the non-defective images and the set of the fused images through training, and the fused images are obtained based on the fusion of the non-defective images and the defect images, so that the defect recognition model has the capability of converting the defect images into the non-defective images; converting the converted image and the target image into a color gamut space to obtain a first converted image and a first target image respectively; carrying out subtraction operation on the first conversion image and the first target image to obtain a difference image; searching on the difference image to extract defect information on the difference image; and obtaining the marking information of the defect information according to the defect information. The method comprises the steps of firstly inputting a target image to be marked into a defect identification model, learning data distribution of the non-defective image as training data due to the fact that the defect identification model adopts the non-defective image and a fused image fused with the defective image and the non-defective image during training, converting the input target image with defects into the non-defective image to obtain a converted image, then converting the converted image and the target image into a color gamut space to facilitate image subtraction operation, further highlighting defect characteristics on the target image, finally searching and extracting highlighted defect information, and finally obtaining marking information of the defect information to finish marking of the defects on the image. According to the method, automatic identification and extraction are carried out through the defect identification model, and finally, labeling is completed, a large amount of labeling work can be carried out instead of manual work, the efficiency of defect labeling is improved, compared with other artificial intelligence identification means, the method can be used for identifying the defect images of multiple working conditions and multiple background plates, the original defect-free images of the defect images are combined, obvious salient information of all source images is obtained through fusion, fine defects which are difficult to observe by naked eyes are further highlighted in subtraction operation, the comprehensiveness and accuracy of defect identification and labeling are improved, and the quality of defect labeling is improved.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device in a hardware operating environment according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a defect labeling method according to an embodiment of the present application;
FIG. 3 is a block diagram of a defect labeling apparatus according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a defect-free image in the defect labeling method according to the embodiment of the present application;
FIG. 5 is a schematic illustration of a fused image based on the defect-free image shown in FIG. 4;
fig. 6 is a schematic diagram of a target image in a defect labeling method according to an embodiment of the present application;
FIG. 7 is a schematic view of a defect-free image corresponding to the target image shown in FIG. 6;
FIG. 8 is a schematic diagram of a difference image in a defect labeling method according to an embodiment of the present application;
fig. 9 is a schematic diagram of the difference image when performing contour search in a defect labeling method according to an embodiment of the present application;
fig. 10 is a schematic view of an image after the labeling information is obtained in a defect labeling method according to an embodiment of the present application;
the labels in the figure are: 101-processor, 102-communication bus, 103-network interface, 104-user interface, 105-memory.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The main solution of the embodiment of the application is as follows: a defect labeling method, a device, a storage medium, equipment and a computer program product are provided, wherein the method comprises the following steps: inputting the target image into a defect identification model and obtaining a conversion image; the defect recognition model is obtained based on the set of the non-defective images and the set of the fused images through training, and the fused images are obtained based on the fusion of the non-defective images and the defect images, so that the defect recognition model has the capability of converting the defect images into the non-defective images; converting the converted image and the target image into color gamut space to obtain a first converted image and a first target image respectively; carrying out subtraction operation on the first conversion image and the first target image to obtain a difference image; searching on the difference image to extract defect information on the difference image; and obtaining the marking information of the defect information according to the defect information.
In recent years the field of artificial intelligence has developed rapidly, and automated defect detection schemes based on deep-learning object detection and segmentation are gradually being applied in mass production. Deep-learning algorithms place high demands on the quantity and quality of training samples; especially for industrial applications with extremely high accuracy requirements, tens of thousands of training samples are often needed to ensure model accuracy, which increases the difficulty and implementation cost of related projects. To enable a model to learn how to extract defects, the defect information on a large number of training samples must be labeled. The training data, however, are large and complex, and the printed circuit board industry in particular generates huge amounts of image data; some small defects on the images are hard to identify, and under the multi-working-condition and multi-background-plate conditions of printed circuit boards defects are easily confused with the backplane and hard to extract. These factors make the accuracy and efficiency of defect labeling low and the labeling quality unsatisfactory.
The application therefore provides a solution: a target image is input into a defect recognition model to obtain a converted image; the defect recognition model is trained on a set of defect-free images and a set of fused images, each fused image being obtained by fusing a defect-free image with a defect image, so that the model has the capability of converting a defect image into a defect-free image; the converted image and the target image are converted into a color gamut space to obtain a first converted image and a first target image respectively; a subtraction operation on the first converted image and the first target image yields a difference image; a search on the difference image extracts the defect information; and the labeling information of the defect information is obtained according to the defect information. By combining a defect image with its original defect-free image, the fusion captures the salient information of all source images, namely the defect information, and the subtraction operation further highlights fine defects that are difficult to observe with the naked eye. A model trained in this way extracts defects with higher precision and efficiency, can replace a large amount of manual labeling work and improves the efficiency of defect labeling; compared with other artificial intelligence recognition means it can recognize defect images taken under multiple working conditions and with multiple background plates, improving the comprehensiveness and precision of defect identification and labeling and hence the quality of defect labeling.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device in a hardware operating environment according to an embodiment of the present application. The electronic device may include: a processor 101, such as a Central Processing Unit (CPU), a communication bus 102, a user interface 104, a network interface 103, and a memory 105. The communication bus 102 is used for enabling connection and communication between these components. The user interface 104 may comprise a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally may also comprise a standard wired interface or a wireless interface. The network interface 103 may optionally include a standard wired interface or a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 105 may be a storage device independent of the processor 101; it may be a high-speed Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk memory. The processor 101 may be a general-purpose processor including a central processing unit, a network processor, etc., and may also be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 105, which is a storage medium, may include therein an operating system, a data storage module, a network communication module, a user interface module, and an electronic program.
In the electronic device shown in fig. 1, the network interface 103 is mainly used for data communication with a network server; the user interface 104 is mainly used for data interaction with a user; the processor 101 and the memory 105 in the present application may be disposed in an electronic device, and the electronic device calls the defect labeling apparatus stored in the memory 105 through the processor 101 and executes the defect labeling method provided in the embodiment of the present application.
Referring to fig. 2, based on the hardware device of the foregoing embodiment, an embodiment of the present application provides a defect labeling method, including the following steps:
s10: inputting the target image into a defect recognition model and obtaining a conversion image; the defect identification model is obtained based on the set of the non-defective images and the set of the fused images through training, and the fused images are obtained based on the fusion of the non-defective images and the defect images, so that the defect identification model has the capability of converting the defect images into the non-defective images.
In a specific implementation, the target image is the image to be labeled, obtained by photographing the target product, and it contains defects. A defect-free image is an image without defects; it may be obtained by photographing a defect-free product with an image acquisition device, or it may be a standard defect-free image of the product such as a design drawing, as shown in fig. 4 and fig. 7. A defect image is an image with defects; it may be obtained by capturing an image of a defective product, or it may be built from stored defect or illustration material placed onto a product image with image processing software, as shown in fig. 6. The defect image and the defect-free image are fused by an image fusion technique to obtain a fused image, and the defect recognition model is trained on the fused images and the defect-free images. The trained model has the capability of converting an input defective target image toward a defect-free image; the result of the conversion is the converted image, i.e. the target image converted toward a defect-free image so as to highlight the defect information on the image.
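The patent does not prescribe a framework or model format for the defect recognition model. Purely as an illustration of step S10, the sketch below assumes the trained model has been exported as a TorchScript image-to-image generator; the file name and normalization are assumptions, not part of the patent.

```python
# Illustrative sketch only: assumes the trained defect recognition model was
# exported as a TorchScript generator mapping a defective image toward its
# defect-free counterpart (step S10). File name and scaling are assumptions.
import numpy as np
import torch

def get_converted_image(target_bgr: np.ndarray,
                        model_path: str = "defect_model.pt") -> np.ndarray:
    model = torch.jit.load(model_path).eval()          # hypothetical exported model
    x = torch.from_numpy(target_bgr).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        y = model(x)                                   # defect image -> defect-free image
    y = y.squeeze(0).permute(1, 2, 0).clamp(0, 1)
    return (y.numpy() * 255).astype(np.uint8)          # the "converted image"

# Usage (illustrative): converted = get_converted_image(cv2.imread("target.png"))
```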
S20: the converted image and the target image are converted into a color gamut space to obtain a first converted image and a first target image, respectively.
In a specific implementation, a color gamut space is the range of colors that a particular color representation can express, for example when editing images in RGB, CMYK or Lab; the essential difference is that the image is operated on in a different gamut, that is, converted into a different gamut for display. This facilitates the subsequent processing of the images and avoids colors being confused with the backplane, which would hinder defect extraction. The images after the gamut conversion are the first converted image, corresponding to the converted image, and the first target image, corresponding to the target image.
S30: and carrying out subtraction operation on the first conversion image and the first target image to obtain a difference image.
In a specific implementation, the subtraction operation between images refers to subtracting the corresponding pixels of the two images, which yields their difference information. Because the model has the capability of converting a defective image into a defect-free one, the first converted image is based on a defect-free image, so fine defects are highlighted after the subtraction, as shown in fig. 8. The differences between the pixels at corresponding positions determine the new pixel values, from which the difference image is obtained.
S40: a lookup is performed on the difference image to extract defect information on the difference image.
In a specific implementation, because the defects are clearly highlighted on the difference image, the defect information can be extracted quickly and accurately. Extraction means commonly used in the field can be adopted, such as coloring extraction, contour extraction or threshold segmentation. Contour extraction forms a boundary along the outline of the defect, and the region enclosed by the boundary is the defect region. Coloring extraction fills the defect position with pixels to distinguish it from the backplane and then extracts the defect by capturing the filled pixels. Threshold segmentation sets a threshold on the pixel values of the image according to the difference between the defect region and the backplane region and divides the image into two parts accordingly, thereby completing the extraction of the defects on the image.
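As an illustration of the threshold segmentation option mentioned above, the following minimal sketch binarizes a grayscale difference image with OpenCV; the threshold value and file name are assumptions, not values from the patent.

```python
# Minimal threshold-segmentation sketch (one of the extraction means named in S40).
# The threshold value (30) and the file name are illustrative assumptions.
import cv2

diff_gray = cv2.imread("difference.png", cv2.IMREAD_GRAYSCALE)
_, defect_mask = cv2.threshold(diff_gray, 30, 255, cv2.THRESH_BINARY)
# Pixels brighter than the threshold become candidate defect pixels (255);
# everything else is treated as backplane (0).
```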
S50: and obtaining the marking information of the defect information according to the defect information.
In a specific implementation, because automatic identification and labeling are realized by computer technology in practical applications, the extracted defect information exists only at the image level and needs to be converted into a machine-readable form, such as a programming language; alternatively, the defect region can be labeled by drawing as shown in fig. 10, or the information can be rewritten according to a general labeling data format to obtain the labeling information corresponding to the defect information. A specific embodiment is provided, including:
s501: and rewriting the defect information in a labeling data format to obtain labeling information of the defect information.
In a specific implementation, rewriting in a labeling data format that is general in the industry avoids having to write a dedicated conversion program: the rewriting of the defect information can be completed by directly calling the rewriting program of the existing output equipment, and the resulting labeling information can be recognized by the equipment directly, which improves the efficiency of defect labeling.
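The patent does not name a concrete labeling data format. The sketch below is a hedged illustration that writes the extracted defect contours as rectangle records in a simple, roughly LabelMe-like JSON layout; all field names are assumptions.

```python
# Hedged sketch of step S501: the patent only requires a labeling data format that
# is general in the industry. The JSON layout and field names are assumptions.
import json
import cv2

def write_annotations(contours, image_path, out_path="labels.json"):
    shapes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)        # enclosing box of one defect
        shapes.append({
            "label": "defect",
            "shape_type": "rectangle",
            "points": [[int(x), int(y)], [int(x + w), int(y + h)]],
        })
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump({"imagePath": image_path, "shapes": shapes}, f, indent=2)
```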
In this embodiment, defect-free images and fused images, each combining a defect image with a defect-free image, are used as training data, so that the trained model learns the data distribution of defect-free images and gains the capability of converting an input defective target image toward a defect-free image. Image defects can then be extracted against the defect-free distribution, which highlights the defect information and makes it easier to extract and label. The color gamut conversion of the images reduces the interference of color on defect extraction for defect images taken under multiple working conditions and with multiple background plates, and the subtraction of the converted images highlights tiny defects on the image, improving the comprehensiveness of defect extraction and hence the labeling quality of the defects.
In one embodiment, before inputting the target image into the defect recognition model and obtaining the transformed image, the defect labeling method further comprises:
acquiring a plurality of defect images and non-defect images;
in the specific implementation process, the defect image and the non-defect image can be obtained by shooting through a camera, or can be obtained through AOI automatic optical detection equipment, the non-defect image can also be directly obtained on a finished product image such as a product design drawing, the defect image can be a stored historical shot image, or can be obtained by synthesizing materials such as an illustration material and a defect material on the non-defect image, and the method can be used for making false data to expand a training image set under the condition that the data volume is insufficient, so that the data volume of model training is enough to ensure the accuracy of the model.
Fusing the defect image and the non-defect image to obtain a fused image;
In a specific implementation, image fusion refers to processing image data about the same target collected from multiple source channels by means of image processing, computer technology and the like, so as to extract the useful information of each channel to the greatest extent and finally synthesize a high-quality image. This improves the utilization of image information, the accuracy and reliability of computer interpretation, and the spatial and spectral resolution of the original image. In this embodiment the defect image and the defect-free image are fused so that as much defect information as possible is transferred from the defect image onto the defect-free image, ensuring the defect detection capability of the model; fig. 4 shows a defect-free image and fig. 5 the corresponding fused image. Image fusion algorithms can be divided by level into pixel-level, feature-level and decision-level fusion, and various fusion methods are derived from them, such as conventional image fusion, multi-scale image fusion and transform-domain image fusion.
And training to obtain a defect recognition model based on the set of the defect-free images and the set of the fusion images.
In a specific implementation, after the fused images are obtained, a set is constructed from a plurality of them; this set and the set of defect-free images serve as the data set for model training, and the defect recognition model is obtained by training existing networks such as a generative adversarial network or a semantic segmentation network.
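As a hedged illustration of assembling this training data, the sketch below pairs each fused (defective) image with the defect-free image it was generated from, so that an image-to-image network can learn the mapping toward defect-free images; the directory layout and the convention of matching file names are assumptions.

```python
# Illustrative sketch: pair every fused (defective) image with the defect-free
# image it was built from, giving (network input, training target) samples.
# Directory names and matching file names are assumptions, not part of the patent.
from pathlib import Path

def build_training_pairs(fused_dir="data/fused", clean_dir="data/defect_free"):
    pairs = []
    for fused_path in sorted(Path(fused_dir).glob("*.png")):
        clean_path = Path(clean_dir) / fused_path.name   # assumes matching names
        if clean_path.exists():
            pairs.append((fused_path, clean_path))       # (model input, target)
    return pairs
```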
In this embodiment, the model is trained in advance so that it can be reused. The images are fused by an image fusion technique to ensure the high quality of the training samples and to preserve as much of the defect information on the images as possible, which improves the prediction quality of the model. If a small number of defect images prove difficult to extract in use, they can be processed manually and then used to further optimize the model, so that the model learns to recognize such defect images and the detection quality is further improved.
In one embodiment, fusing the defect image with the non-defect image to obtain a fused image comprises:
and fusing the defect image and the non-defect image by adopting a Poisson image fusion method to obtain a fused image.
In a specific implementation, this embodiment provides an image fusion approach, namely the Poisson image fusion method. It uses the gradient field of the source image block as guidance and smoothly diffuses the difference between the target scene and the source image along the fusion boundary into the fused region, so that the fused image block blends seamlessly into the target scene while its hue and illumination remain consistent with it. In short, the defects on the defect image transition naturally and smoothly onto the defect-free image, avoiding fused images that look artificial or whose defect information is obviously inconsistent with the backplane and therefore cannot restore the real scene; this improves the quality of model training and thus the detection quality.
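A minimal sketch of this fusion step using OpenCV's seamless cloning, which implements Poisson blending, is given below; the file names, the full-patch mask and the paste position are illustrative assumptions.

```python
# Poisson fusion sketch via OpenCV seamless cloning. The mask covering the whole
# defect patch and the paste position at the image centre are assumptions.
import cv2
import numpy as np

defect_patch = cv2.imread("defect_patch.png")     # small image block containing a defect
clean_image = cv2.imread("defect_free.png")       # defect-free target scene

mask = 255 * np.ones(defect_patch.shape[:2], dtype=np.uint8)         # blend whole patch
center = (clean_image.shape[1] // 2, clean_image.shape[0] // 2)      # paste location (x, y)

fused = cv2.seamlessClone(defect_patch, clean_image, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("fused.png", fused)
```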
In one embodiment, converting the converted image and the target image into a color gamut space to obtain a first converted image and a first target image, respectively, comprises:
and converting the conversion image and the target image into an RGB color gamut space to obtain a first conversion image and a first target image respectively.
In a specific implementation, the RGB color gamut covers the full range of colors perceptible to the human eye and is the most widely used color standard in the industry. Different colors are obtained by varying and superimposing the red, green and blue channels, with RGB denoting the colors of those three channels, and within this gamut the color of an image can be adjusted quickly to highlight the defect information on the image and improve the effect of model training. The converted image and the target image are converted into the RGB color gamut space, yielding the corresponding images, namely the first converted image and the first target image.
In one embodiment, converting the converted image and the target image into an RGB color gamut space to obtain a first converted image and a first target image, respectively, comprises:
the conversion image and the target image are grayed to obtain a first grayscale conversion image and a first grayscale target image respectively.
In a specific implementation, to reduce the amount of data to be processed, lower the computation required in subsequent steps and improve the efficiency of defect detection and labeling, the converted image is grayed. In the RGB gamut a color with R = G = B is a gray color, and the common value of R = G = B is called the gray value; each pixel of a grayscale image therefore needs only one byte to store its gray value, which ranges from 0 to 255. After the graying, a first grayscale converted image corresponding to the converted image and a first grayscale target image corresponding to the target image are obtained (a brief sketch follows this embodiment).
Performing a subtraction operation on the first converted image and the first target image based on the graying processing to obtain a difference image, comprising:
and carrying out subtraction operation on the first gray scale conversion image and the first gray scale target image to obtain a difference image.
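The following minimal sketch, referenced above, illustrates the color gamut conversion and graying with OpenCV; the file names are assumptions, and OpenCV loads images in BGR order.

```python
# Sketch of the colour-space step: bring both images into the RGB gamut and grey
# them so each pixel stores a single 0-255 grey value (R = G = B).
# File names are illustrative assumptions.
import cv2

converted = cv2.imread("converted.png")           # output of the defect recognition model
target = cv2.imread("target.png")                 # target image to be labeled

converted_rgb = cv2.cvtColor(converted, cv2.COLOR_BGR2RGB)
target_rgb = cv2.cvtColor(target, cv2.COLOR_BGR2RGB)

first_gray_converted = cv2.cvtColor(converted_rgb, cv2.COLOR_RGB2GRAY)
first_gray_target = cv2.cvtColor(target_rgb, cv2.COLOR_RGB2GRAY)
```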
In one embodiment, subtracting the first converted image from the first target image to obtain a difference image comprises:
respectively obtaining pixel values of corresponding pixel points of the first conversion image and the first target image, and performing subtraction operation;
and taking the absolute value of the operation result as the pixel value of the corresponding point of the difference image to obtain the difference image.
In a specific implementation, an embodiment of the image subtraction is provided that uses pixels, the basic building blocks of an image, as the basis of the subtraction, so that the difference image is obtained more accurately. Because the first converted image and the first target image are obtained from the same region of the product, their pixel points correspond one to one. For each corresponding pixel point, the pixel values are subtracted and the absolute value of the difference is taken as the pixel value of the pixel at the corresponding position of the difference image; repeating this for all pixel points yields the difference image, which remains consistent with the images that were subtracted. This improves the efficiency and effect of defect extraction and thus the labeling quality of the target image.
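A minimal sketch of this pixel-wise subtraction with OpenCV and NumPy is shown below; cv2.absdiff computes the per-pixel absolute difference directly, and the explicit form using a wider integer type is included for comparison. File names are assumptions.

```python
# Pixel-wise subtraction with an absolute value, as described above. cv2.absdiff
# returns |a - b| per pixel and avoids uint8 wrap-around. File names are assumptions.
import cv2
import numpy as np

first_gray_converted = cv2.imread("converted_gray.png", cv2.IMREAD_GRAYSCALE)
first_gray_target = cv2.imread("target_gray.png", cv2.IMREAD_GRAYSCALE)

difference = cv2.absdiff(first_gray_converted, first_gray_target)

# Equivalent explicit form: signed difference in a wider type, then absolute value.
manual = np.abs(first_gray_converted.astype(np.int16)
                - first_gray_target.astype(np.int16)).astype(np.uint8)
assert np.array_equal(difference, manual)
```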
In one embodiment, performing a lookup on the difference image to extract defect information on the difference image comprises:
searching on the difference image by adopting a contour searching method to extract defect contour information on the difference image;
in a specific implementation process, a mode of contour search for extracting defects is provided, as shown in fig. 9, since pixel color differences on a binarized and grayed image may not be obvious, and in order to ensure that defects can be extracted quickly and completely, a mode of contour search may be adopted, which can be obtained quickly through the contour search function of OpenCV, a certain threshold condition is set to obtain a contour boundary line of the defects, the defect extraction is completed only after the contour lines form a closed loop, and whether the defects are overlapped or not can be obtained quickly according to the distribution condition of the contour lines, and then the contour lines can be cut as required to obtain the completed defect contours, so that the defect extraction is more sufficient.
Based on the above embodiment, obtaining the label information of the defect information according to the defect information includes:
and obtaining marking information of the defect information according to the defect outline information.
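The sketch below, referenced in the embodiment above, shows the OpenCV contour search; the binarization threshold and the file name are illustrative assumptions.

```python
# Contour-search sketch: binarise the difference image with a threshold condition,
# then extract closed defect contours. Threshold and file name are assumptions.
import cv2

difference = cv2.imread("difference.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(difference, 30, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Each entry of `contours` is one closed defect boundary; its points can be cut
# or merged as needed before labeling.
```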
In one embodiment, after performing a search on the difference image by using a contour search method to extract defect contour information on the difference image, the defect labeling method further includes:
performing pixel filling on the difference image according to the defect contour information to obtain first defect contour information;
in the specific implementation process, after the defect is obtained by defining the contour, in order to more highlight the defect information and facilitate further extraction of the defect so as to enable the extraction and standard of the defect to be more accurate, pixel filling is carried out on the defect on the difference image according to the defect contour information, the coloring of the defect can be rapidly finished due to the limitation that the contour information is used as a filling boundary, and the first defect contour information is obtained after the coloring is finished.
Based on the above embodiment, obtaining the label information of the defect information according to the defect contour information includes:
and obtaining marking information of the defect information according to the first defect outline information.
In one embodiment, pixel filling on the difference image according to the defect contour information to obtain first defect contour information includes:
filling pixels in the region covered by the defect outline on the difference image, and blackening the region except the region covered by the defect outline on the difference image to obtain a first difference image;
and obtaining first defect contour information according to the first difference image.
In a specific implementation, to display the defect region even more clearly and make defect extraction more accurate and efficient, after the defect region is pixel-filled the remaining regions are set to black as a backboard that highlights the defect. The filling of the defect region must of course use a color other than black; in a binarized image white can be chosen so that it contrasts clearly with the black backboard, making defect extraction and labeling more accurate and efficient and improving the quality of the defect labeling.
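As a minimal sketch of this filling step, the function below draws every defect contour filled in white on a black canvas of the same size as the difference image; it assumes the contours come from the contour search described earlier.

```python
# Sketch of the filling step: white defect regions on a black backboard, giving
# the highlighted "first difference image". Inputs assumed from the contour search.
import cv2
import numpy as np

def fill_defects(difference: np.ndarray, contours) -> np.ndarray:
    first_difference = np.zeros_like(difference)              # black backboard
    cv2.drawContours(first_difference, contours, -1, 255,
                     cv2.FILLED)                              # fill each contour in white
    return first_difference
```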
Referring to fig. 3, based on the same inventive concept as the previous embodiment, the embodiment of the present application further provides a defect labeling apparatus, including:
the first conversion module is used for inputting the target image into the defect identification model and obtaining a conversion image; the defect recognition model is obtained based on the set training of the defect-free images and the set training of the fusion images, and the fusion images are obtained based on the fusion of the defect-free images and the defect images, so that the defect recognition model has the capability of converting the defect images into the defect-free images;
the second conversion module is used for converting the converted image and the target image into a color gamut space so as to respectively obtain a first converted image and a first target image;
the difference module is used for carrying out subtraction operation on the first conversion image and the first target image to obtain a difference image;
the searching module is used for searching on the difference image so as to extract the defect information on the difference image;
and the marking module is used for acquiring marking information of the defect information according to the defect information.
It should be understood by those skilled in the art that the division into modules in this embodiment is only a division of logical functions; in practical applications all or part of the modules may be integrated onto one or more actual carriers, and the modules may be implemented as software called by a processing unit, as hardware, or as a combination of software and hardware. It should be noted that the modules of the defect labeling apparatus in this embodiment correspond one to one to the steps of the defect labeling method in the foregoing embodiment; therefore, for the specific implementation of this embodiment, reference may be made to the implementation of the defect labeling method, which is not repeated here.
Based on the same inventive concept as that in the foregoing embodiments, embodiments of the present application further provide a computer-readable storage medium, which stores a computer program, and when the computer program is loaded and executed by a processor, the defect labeling method provided by the embodiments of the present application is implemented.
Based on the same inventive concept as the foregoing embodiments, embodiments of the present application further provide an electronic device, including a processor and a memory, wherein,
the memory is used for storing a computer program;
the processor is used for loading and executing the computer program, so as to enable the electronic device to execute the defect labeling method provided by the embodiment of the application.
Furthermore, based on the same inventive concept as in the previous embodiments, embodiments of the present application also provide a computer program product comprising a computer program for executing the defect labeling method as provided by the embodiments of the present application when the computer program is executed.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories. The computer may be a variety of computing devices including intelligent terminals and servers.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the preferable implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, a magnetic disk, an optical disk) and includes instructions for enabling a multimedia terminal (e.g., a mobile phone, a computer, a television receiver, or a network device) to perform the methods according to the embodiments of the present application.
In summary, the defect labeling method, apparatus, storage medium, device and computer program product provided by the present application input a target image into a defect recognition model to obtain a converted image; the defect recognition model is trained on a set of defect-free images and a set of fused images, each fused image being obtained by fusing a defect-free image with a defect image, so that the model has the capability of converting a defect image into a defect-free image; the converted image and the target image are converted into a color gamut space to obtain a first converted image and a first target image respectively; a subtraction operation on the first converted image and the first target image yields a difference image; a search on the difference image extracts the defect information; and labeling information of the defect information is obtained according to the defect information. Through the defect recognition model the application replaces a large amount of manual labeling work and improves the efficiency of defect labeling. Compared with other artificial intelligence recognition means, it can recognize defect images taken under multiple working conditions and with multiple background plates; by combining a defect image with its original defect-free image, the fusion captures the salient information of all source images, and the subtraction operation further highlights fine defects that are difficult to observe with the naked eye, improving the comprehensiveness and precision of defect identification and labeling and the quality of defect labeling.
The above description is only exemplary of the present application and should not be taken as limiting it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (13)

1. A defect labeling method, comprising the steps of:
inputting a target image into a defect recognition model to obtain a converted image, wherein the defect recognition model is trained based on a set of non-defective images and a set of fused images, and each fused image is obtained by fusing a non-defective image with a defect image, so that the defect recognition model has the capability of converting a defect image into a non-defective image;
converting the converted image and the target image into a color gamut space to obtain a first converted image and a first target image, respectively;
performing a subtraction operation on the first converted image and the first target image to obtain a difference image;
performing a search on the difference image to extract defect information from the difference image;
and obtaining labeling information of the defect information according to the defect information.
2. The method of claim 1, wherein before inputting the target image into the defect recognition model to obtain the converted image, the method further comprises:
acquiring a plurality of defect images and non-defective images;
fusing a defect image with a non-defective image to obtain a fused image;
and training the defect recognition model based on the set of non-defective images and the set of fused images.
3. The method of claim 2, wherein fusing the defect image with the non-defective image to obtain the fused image comprises:
fusing the defect image with the non-defective image by using a Poisson image fusion method to obtain the fused image.
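As an illustrative sketch of this Poisson fusion step, one possible realization is OpenCV's seamlessClone; the function choice, the full-patch mask, and the variable names below are assumptions, not requirements of the claim.

import cv2
import numpy as np

def fuse_defect(defect_patch_bgr, clean_bgr, center_xy):
    # Blend the whole defect patch into the defect-free image; Poisson (gradient-domain)
    # editing makes the seam between the patch and the background visually smooth.
    mask = 255 * np.ones(defect_patch_bgr.shape[:2], dtype=np.uint8)
    return cv2.seamlessClone(defect_patch_bgr, clean_bgr, mask, center_xy, cv2.NORMAL_CLONE)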
4. The method of claim 1, wherein converting the converted image and the target image into a color gamut space to obtain the first converted image and the first target image, respectively, comprises:
converting the converted image and the target image into an RGB color gamut space to obtain the first converted image and the first target image, respectively.
5. The method of claim 4, wherein converting the converted image and the target image into the RGB color gamut space to obtain the first converted image and the first target image, respectively, comprises:
graying the converted image and the target image to obtain a first grayscale converted image and a first grayscale target image, respectively;
and wherein performing the subtraction operation on the first converted image and the first target image to obtain the difference image comprises:
performing a subtraction operation on the first grayscale converted image and the first grayscale target image to obtain the difference image.
6. The method of claim 1, wherein performing the subtraction operation on the first converted image and the first target image to obtain the difference image comprises:
obtaining pixel values of corresponding pixel points of the first converted image and the first target image, respectively, and performing a subtraction operation on them;
and taking the absolute value of the operation result as the pixel value of the corresponding point in the difference image to obtain the difference image.
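A minimal sketch of this absolute-value subtraction, assuming uint8 inputs of equal size; the int16 intermediate (or, equivalently, cv2.absdiff) is an implementation choice, not something the claim prescribes.

import numpy as np

def difference_image(first_converted, first_target):
    # |a - b| per pixel; int16 intermediates avoid uint8 wrap-around on negative results.
    diff = np.abs(first_converted.astype(np.int16) - first_target.astype(np.int16))
    return diff.astype(np.uint8)  # same result as cv2.absdiff(first_converted, first_target)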
7. The method of claim 1, wherein performing the search on the difference image to extract the defect information from the difference image comprises:
searching the difference image by using a contour search method to extract defect contour information from the difference image;
and wherein obtaining the labeling information of the defect information according to the defect information comprises:
obtaining the labeling information of the defect information according to the defect contour information.
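One way the contour search could look in practice, as a sketch assuming OpenCV; the binarization threshold and the retrieval/approximation modes are assumptions not fixed by the claim.

import cv2

def find_defect_contours(diff_gray, thresh=25):
    # Binarize the difference image, then retrieve the outer boundary of each bright region.
    _, binary = cv2.threshold(diff_gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours  # each contour is an (N, 1, 2) array of boundary points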
8. The method of claim 7, wherein after searching the difference image by using the contour search method to extract the defect contour information from the difference image, the method further comprises:
performing pixel filling on the difference image according to the defect contour information to obtain first defect contour information;
and wherein obtaining the labeling information of the defect information according to the defect contour information comprises:
obtaining the labeling information of the defect information according to the first defect contour information.
9. The method of claim 8, wherein performing pixel filling on the difference image according to the defect contour information to obtain the first defect contour information comprises:
filling pixels in the region covered by the defect contour on the difference image, and blackening the region of the difference image outside the region covered by the defect contour, to obtain a first difference image;
and obtaining the first defect contour information according to the first difference image.
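A sketch of this fill-and-blacken step, assuming OpenCV; filling the contour interiors with white on an otherwise black image is one plausible reading of the claim, not the only one.

import cv2
import numpy as np

def first_difference_image(diff_gray, contours):
    # Start from an all-black image, then fill the interior of every defect contour.
    filled = np.zeros_like(diff_gray)
    cv2.drawContours(filled, contours, -1, color=255, thickness=cv2.FILLED)
    return filled  # white inside the defect contours, black everywhere else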
10. The method of claim 1, wherein obtaining the labeling information of the defect information according to the defect information comprises:
rewriting the defect information in a labeling data format to obtain the labeling information of the defect information.
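To illustrate this rewriting step, the sketch below converts extracted contours into a polygon-style JSON record; the LabelMe-like schema is purely an assumed example, since the claim does not fix any particular labeling data format.

import json

def contours_to_annotations(contours, image_path, label="defect"):
    shapes = []
    for contour in contours:
        points = contour.reshape(-1, 2).tolist()  # polygon vertices as [x, y] pairs
        shapes.append({"label": label, "points": points, "shape_type": "polygon"})
    return json.dumps({"imagePath": image_path, "shapes": shapes}, indent=2)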
11. A defect labeling apparatus, comprising:
a first conversion module, configured to input a target image into a defect recognition model to obtain a converted image, wherein the defect recognition model is trained based on a set of non-defective images and a set of fused images, and each fused image is obtained by fusing a non-defective image with a defect image, so that the defect recognition model has the capability of converting a defect image into a non-defective image;
a second conversion module, configured to convert the converted image and the target image into a color gamut space to obtain a first converted image and a first target image, respectively;
a difference module, configured to perform a subtraction operation on the first converted image and the first target image to obtain a difference image;
a search module, configured to perform a search on the difference image to extract defect information from the difference image;
and a labeling module, configured to obtain labeling information of the defect information according to the defect information.
12. A computer-readable storage medium storing a computer program, wherein the computer program, when loaded and executed by a processor, implements the defect labeling method according to any one of claims 1 to 10.
13. An electronic device comprising a processor and a memory, wherein,
the memory is configured to store a computer program;
the processor is configured to load and execute the computer program to cause the electronic device to perform the defect labeling method according to any one of claims 1 to 10.
CN202211142189.9A 2022-09-20 2022-09-20 Defect labeling method, device, storage medium, equipment and computer program product Active CN115222739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211142189.9A CN115222739B (en) 2022-09-20 2022-09-20 Defect labeling method, device, storage medium, equipment and computer program product

Publications (2)

Publication Number Publication Date
CN115222739A CN115222739A (en) 2022-10-21
CN115222739B (en) 2022-12-02

Family

ID=83617583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211142189.9A Active CN115222739B (en) 2022-09-20 2022-09-20 Defect labeling method, device, storage medium, equipment and computer program product

Country Status (1)

Country Link
CN (1) CN115222739B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661156B (en) * 2022-12-28 2023-04-14 成都数联云算科技有限公司 Image generation method, image generation device, storage medium, image generation apparatus, and computer program product
CN115861293A (en) * 2023-02-08 2023-03-28 成都数联云算科技有限公司 Defect contour extraction method, defect contour extraction device, storage medium, defect contour extraction device, and program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378869A (en) * 2019-06-05 2019-10-25 北京交通大学 A kind of rail fastening method for detecting abnormality of sample automatic marking
CN110880169A (en) * 2019-10-16 2020-03-13 平安科技(深圳)有限公司 Method, device, computer system and readable storage medium for marking focus area
CN113344910A (en) * 2021-07-02 2021-09-03 深圳市派科斯科技有限公司 Defect labeling image generation method and device, computer equipment and storage medium
CN113673607A (en) * 2021-08-24 2021-11-19 支付宝(杭州)信息技术有限公司 Method and device for training image annotation model and image annotation
CN113793333A (en) * 2021-11-15 2021-12-14 常州微亿智造科技有限公司 Defect picture generation method and device applied to industrial quality inspection
CN114898096A (en) * 2022-05-20 2022-08-12 赵凯 Segmentation and annotation method and system for figure image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10186026B2 (en) * 2015-11-17 2019-01-22 Kla-Tencor Corp. Single image detection
CN111044522B (en) * 2019-12-14 2022-03-11 中国科学院深圳先进技术研究院 Defect detection method and device and terminal equipment
US11714878B2 (en) * 2020-01-28 2023-08-01 Faro Technologies, Inc. Construction site defect and hazard detection using artificial intelligence
CN113763355A (en) * 2021-09-07 2021-12-07 创新奇智(青岛)科技有限公司 Defect detection method and device, electronic equipment and storage medium
CN114581723A (en) * 2022-05-06 2022-06-03 成都数之联科技股份有限公司 Defect classification method, device, storage medium, equipment and computer program product

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Practice of Smart Sensing System for Buried Mines Detecting based on Active Infrared Thermography Approach; Katsumi Wasaki et al.; Journal of Advance Computational Research; 2016-12-31; Vol. 1, No. 1; pp. 29-37 *
Research on a Deep-Learning-Based Casting Defect Detection Algorithm for Turbochargers; Chen Xiangji; China Master's Theses Full-text Database, Engineering Science and Technology II; 2021-02-15 (No. 02, 2021); C035-722 *
Research and Implementation of Deep-Learning-Based Automatic Weld Seam Defect Detection; Qu Huifan; China Master's Theses Full-text Database, Information Science and Technology; 2019-08-15 (No. 08, 2019); I138-829 *

Also Published As

Publication number Publication date
CN115222739A (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN115222739B (en) Defect labeling method, device, storage medium, equipment and computer program product
CN110274908B (en) Defect inspection apparatus, defect inspection method, and computer-readable recording medium
CN112567229A (en) Defect inspection device, defect inspection method, and program thereof
CN109791688A (en) Expose relevant luminance transformation
CN108463823B (en) Reconstruction method and device of user hair model and terminal
CN114677567B (en) Model training method and device, storage medium and electronic equipment
US20230222829A1 (en) Symbol analysis device and method included in facility floor plan
CN115239734A (en) Model training method, device, storage medium, equipment and computer program product
CN112489143A (en) Color identification method, device, equipment and storage medium
CN115861327A (en) PCB color change defect detection method, device, equipment and medium
CN113392819B (en) Batch academic image automatic segmentation and labeling device and method
US20230005107A1 (en) Multi-task text inpainting of digital images
US20240127404A1 (en) Image content extraction method and apparatus, terminal, and storage medium
CN109657671A (en) Nameplate character recognition method, device, computer equipment and storage medium
CN111144160B (en) Full-automatic material cutting method and device and computer readable storage medium
CN112634314A (en) Target image acquisition method and device, electronic equipment and storage medium
CN112419214A (en) Method and device for generating labeled image, readable storage medium and terminal equipment
CN112927321B (en) Intelligent image design method, device, equipment and storage medium based on neural network
KR101189003B1 (en) Method for converting image file of cartoon contents to image file for mobile
CN115546141A (en) Small sample Mini LED defect detection method and system based on multi-dimensional measurement
CN113034449A (en) Target detection model training method and device and communication equipment
CN115393855A (en) License plate product quality detection method, system and equipment
EP3038059A1 (en) Methods and systems for color processing of digital images
CN115496807B (en) Meter pointer positioning method and device, computer equipment and storage medium
CN112418033B (en) Landslide slope surface segmentation recognition method based on mask rcnn neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant