CN113240642A - Image defect detection method and device, electronic equipment and storage medium - Google Patents

Image defect detection method and device, electronic equipment and storage medium

Info

Publication number
CN113240642A
Authority
CN
China
Prior art keywords: image, detected, sub, feature, network
Prior art date
Legal status
Pending
Application number
CN202110525720.XA
Other languages
Chinese (zh)
Inventor
张发恩
秦树鑫
Current Assignee
Alnnovation Beijing Technology Co ltd
Original Assignee
Alnnovation Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Alnnovation Beijing Technology Co ltd
Priority to CN202110525720.XA
Publication of CN113240642A
Legal status: Pending

Classifications

    • G06T 7/0002: Image analysis; Inspection of images, e.g. flaw detection
    • G06N 3/045: Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
    • G06N 3/084: Neural networks; Learning methods; Backpropagation, e.g. using gradient descent
    • G06T 7/11: Image analysis; Segmentation; Edge detection; Region-based segmentation
    • G06V 10/40: Arrangements for image or video recognition or understanding; Extraction of image or video features
    • G06V 10/751: Image or video pattern matching; Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Abstract

The application provides an image defect detection method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: dividing an image to be detected into a plurality of sub-images to be detected; inputting each sub-image to be detected into a trained feature generation network to obtain the image feature output by the feature generation network for that sub-image; for each sub-image to be detected, judging whether the difference between its image feature and a template image feature reaches a preset difference threshold; and if so, determining that the area of the image to be detected corresponding to that sub-image has a defect. With this scheme, defect detection is achieved with low labeling difficulty and workload, defects of various forms can be handled, and missed detections are avoided.

Description

Image defect detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image defect detection method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Quality inspection takes place at various stages of industrial production. For production objects such as automobile parts, 3C electronic products, textile fabrics, and building materials, quality inspection includes surface defect detection. Traditional image-vision techniques can detect obvious defects on the surface of an object (such as large, clear-edged damage), but their detection performance is poor for small, blurry-edged defects such as scratches, crush marks, stains, and surface unevenness.
With the development of artificial intelligence technology, general classification, object detection, and semantic segmentation methods are gradually being applied to surface defect detection in industrial production. For example, a trained classification model distinguishes surface images with large defect areas; a trained object detection model detects one or more types of defects in the surface image; and a trained semantic segmentation model performs pixel-level segmentation of the surface image to locate its defects.
These general approaches solve part of the defect detection problem, but they rely on large numbers of training samples and a great deal of labeling work. In actual industrial production it is difficult to collect many defective training samples in a short time. Surface defects also vary widely in appearance (a surface scratch, for example, may differ in shape, color, and depth); fine defects in particular are difficult to label, and the labeling scale is hard to keep consistent, which degrades training. The resulting models adapt poorly to defects that differ markedly from the training samples, are prone to missed detections, and deliver low detection accuracy in later use, making it difficult to meet the quality inspection requirements of industrial production.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image defect detection method and apparatus, an electronic device, and a computer-readable storage medium, which are used for implementing image defect detection under the condition of low annotation cost.
In one aspect, an embodiment of the present application provides an image defect detection method, including:
dividing an image to be detected into a plurality of sub images to be detected;
inputting each sub image to be detected into the trained feature generation network, and obtaining the image feature which is output by the feature generation network and corresponds to each sub image to be detected;
for each sub-image to be detected, judging whether the difference between the image feature of the sub-image to be detected and a template image feature reaches a preset difference threshold;
and if so, determining that the corresponding area of the sub-image to be detected in the image to be detected has defects.
In an embodiment, the dividing the image to be detected into a plurality of sub-images to be detected includes:
dividing the image to be detected into a plurality of sub images to be detected with the same scale; wherein, the adjacent subimages to be detected are overlapped.
In one embodiment, the feature generation network comprises a feature extraction network and a feature coding network, and the image features are feature vectors;
the inputting of each sub-image to be detected into the trained feature generation network to obtain the image features output by the feature generation network and corresponding to each sub-image to be detected comprises:
inputting each sub image to be detected into the feature extraction network to obtain a feature map with a plurality of scales corresponding to each sub image to be detected;
and respectively coding the feature maps of a plurality of scales corresponding to each sub-image to be detected through the feature coding network to obtain a feature vector corresponding to each sub-image to be detected.
In one embodiment, the feature generation network is trained by:
inputting a sample image in a sample data set into a convolutional neural network to obtain class prediction information of the sample image output by the convolutional neural network; wherein the class prediction information indicates that the sample image has a defect or has no defect;
adjusting network parameters of the convolutional neural network based on a difference between a class label of the sample image and the class prediction information;
and repeating the process until the convolutional neural network is converged, and taking the convolutional neural network with the classification layer removed as the feature generation network.
In an embodiment, before said inputting a sample image of the sample data set into the convolutional neural network, the method further comprises:
acquiring a plurality of original images and category labels corresponding to the original images;
carrying out conventional enhancement processing on the original image to obtain an enhanced image corresponding to the original image;
adding the category label of the original image to the enhanced image corresponding to the original image, taking the original image and the enhanced image carrying the category label as sample images, and putting the sample images into the sample data set.
In one embodiment, the original image carries defect type information;
before adding the category label of the original image to the enhanced image corresponding to the original image, the method further comprises:
and when the defect type information of the original image indicates that the specified defect exists in the original image, performing specified enhancement processing on the original image to obtain an enhanced image corresponding to the original image.
In an embodiment, before the determining whether the difference between the image feature of the sub-image to be detected and the template image feature reaches a preset difference threshold, the method further includes:
dividing at least one normal image into a plurality of normal sub-images;
inputting each normal sub-image into the feature generation network to obtain image features output by the feature generation network and corresponding to each normal sub-image;
and determining the template image characteristics according to the image characteristics corresponding to the plurality of normal sub-images.
On the other hand, an embodiment of the present application further provides an image defect detection apparatus, including:
the dividing module is used for dividing the image to be detected into a plurality of sub images to be detected;
the generation module is used for inputting each sub image to be detected into the trained feature generation network to obtain the image features which are output by the feature generation network and correspond to each sub image to be detected;
the judging module is used for judging, for each sub-image to be detected, whether the difference between the image feature of the sub-image to be detected and the template image feature reaches a preset difference threshold;
and the determining module is used for determining, if the difference reaches the difference threshold, that the corresponding area of the sub-image to be detected in the image to be detected has a defect.
Further, an embodiment of the present application further provides an electronic device, where the electronic device includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the image defect detection method described above.
In addition, the embodiment of the application also provides a computer readable storage medium, and the storage medium stores a computer program which can be executed by a processor to complete the image defect detection method.
According to this scheme, after the image to be detected is divided into multiple sub-images to be detected, an image feature is obtained for each sub-image; whether a defect exists in each sub-image is judged from the difference between its image feature and the template image feature, and from this it is determined whether a defect exists in each area of the image to be detected. Because the feature generation network that generates the image features does not directly output a defect detection result, the labeling difficulty and workload are reduced, and so is the training difficulty. Whatever form a surface defect takes, the image feature of a defective image differs markedly from the template image feature, so missed detections are avoided and the detection accuracy requirement is easily met.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic view of an application scenario of an image defect detection method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an image defect detection method according to an embodiment of the present application;
FIG. 4 is a diagram illustrating image partitioning according to an embodiment of the present application;
FIGS. 5a-5b are schematic diagrams of image partitioning according to another embodiment of the present application;
fig. 6 is a schematic flowchart of a training method for a feature generation network according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a method for constructing a sample data set according to an embodiment of the present application;
fig. 8 is a flowchart illustrating a method for generating a template image feature according to an embodiment of the present application;
FIG. 9 is a flowchart illustrating an image defect detecting method according to another embodiment of the present application;
fig. 10 is a block diagram of an image defect detecting apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic view of an application scenario of an image defect detection method according to an embodiment of the present application. As shown in fig. 1, the application scenario includes a client 20 and a server 30; the client 20 may be a camera for collecting an image to be detected, and is configured to transmit the image to be detected to the server 30; the server 30 may be a server, a server cluster, or a cloud computing center, and may perform a defect detection task on the to-be-detected image uploaded by the client 20.
As shown in fig. 2, the present embodiment provides an electronic apparatus 1 including: at least one processor 11 and a memory 12, one processor 11 being exemplified in fig. 2. The processor 11 and the memory 12 are connected by a bus 10, and the memory 12 stores instructions executable by the processor 11, and the instructions are executed by the processor 11, so that the electronic device 1 can execute all or part of the flow of the method in the embodiments described below. In an embodiment, the electronic device 1 may be the server 30 described above, and is configured to perform the image defect detection method.
The Memory 12 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk.
The present application also provides a computer-readable storage medium storing a computer program executable by a processor 11 to perform the image defect detection method provided by the present application.
Referring to fig. 3, a flowchart of an image defect detection method according to an embodiment of the present application is shown, and as shown in fig. 3, the method may include the following steps 310 to 340.
Step 310: and dividing the image to be detected into a plurality of sub images to be detected.
Wherein the image to be detected is the image subjected to defect detection. For example, the image to be detected may be an image of the surface of the object undergoing detection of surface defects.
The server side can perform average division on the image to be detected in the horizontal direction and perform average division in the vertical direction, so that a plurality of sub-images to be detected are obtained. For example, the server may divide the image to be detected into m parts in the horizontal direction and divide the image to be detected into n parts in the vertical direction, so as to obtain m × n sub-images to be detected.
Referring to fig. 4, which is a schematic diagram of image division provided in an embodiment of the present application, as shown in fig. 4, an image to be detected is divided into 6 parts in the horizontal direction and 4 parts in the vertical direction, so as to obtain 6 × 4 sub-images to be detected.
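By way of illustration only, the m-by-n division described above can be sketched in Python as follows; the function name divide_image, the use of NumPy arrays, and the assumption that the image dimensions divide evenly are illustrative choices and are not part of the disclosed embodiments.

    import numpy as np

    def divide_image(image: np.ndarray, m: int = 6, n: int = 4) -> list:
        """Divide an H x W (x C) image into m columns and n rows of equal sub-images.
        Sketch only: assumes H is divisible by n and W is divisible by m."""
        h, w = image.shape[:2]
        sub_h, sub_w = h // n, w // m
        sub_images = []
        for row in range(n):            # vertical direction: n parts
            for col in range(m):        # horizontal direction: m parts
                sub_images.append(image[row * sub_h:(row + 1) * sub_h,
                                        col * sub_w:(col + 1) * sub_w])
        return sub_images               # m * n sub-images, e.g. 6 * 4 = 24 as in fig. 4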
Step 320: and inputting each sub image to be detected into the trained feature generation network, and obtaining the image features which are output by the feature generation network and correspond to each sub image to be detected.
The feature generation network is used for generating corresponding image features for the image. Here, the image feature may be a feature map or a feature vector. The feature generation network may be trained from a convolutional neural network.
And after the server side inputs the sub-images to be detected into the feature generation network, calculating each sub-image to be detected through the feature generation network respectively, so as to obtain the image features corresponding to each sub-image to be detected.
Step 330: and judging whether the difference between the image characteristics of the sub-images to be detected and the template image characteristics reaches a preset difference threshold value or not for each sub-image to be detected.
The template image feature may be an image feature extracted from a non-defective image. For example, if the image to be detected is a surface image of an automobile part, the template image feature may be an image feature extracted from the surface image of the automobile part with a defect-free surface; if the image to be detected is the surface image of the textile fabric, the template image features can be image features extracted from the surface image of the textile fabric with a defect-free surface. If the image features are feature maps, the template image features can be the feature maps; if the image features are feature vectors, the template image features may be feature vectors.
The difference threshold may be a preconfigured empirical value for screening image features that differ significantly from the template image features.
The server can respectively calculate the difference value between the image characteristic of each sub-image to be detected and the template image characteristic. Here, the difference value may be expressed by a euclidean distance, a cosine distance, or the like. Preferably, the server may calculate a cosine distance between the image feature of the sub-image to be detected and the template image feature, and the cosine distance represents a difference value between the image feature and the template image feature.
The server can determine whether the difference value reaches the difference threshold. On the one hand, if the threshold is not reached, the image feature of the sub-image to be detected is similar to the template image feature, and no defect exists in the sub-image to be detected. On the other hand, if it is reached, execution may continue with step 340.
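A minimal sketch of this judgment, assuming the image features are NumPy feature vectors and using cosine distance as the difference value; the function names and the form of the threshold comparison are illustrative only.

    import numpy as np

    def cosine_distance(feat: np.ndarray, template: np.ndarray) -> float:
        """Cosine distance (1 - cosine similarity) between two feature vectors."""
        sim = float(np.dot(feat, template) /
                    (np.linalg.norm(feat) * np.linalg.norm(template) + 1e-12))
        return 1.0 - sim

    def is_defective(feat: np.ndarray, template: np.ndarray, threshold: float) -> bool:
        """True if the difference value reaches the preset difference threshold."""
        return cosine_distance(feat, template) >= threshold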
Step 340: and if so, determining that the corresponding area of the sub-image to be detected in the image to be detected has defects.
When it is determined that the difference between the image feature of any sub-image to be detected and the template image feature reaches the difference threshold, the server side can determine that the corresponding area of that sub-image in the image to be detected has a defect. For example, if the image to be detected is divided into 6 × 4 sub-images and the difference between the image feature of the sub-image in the 2nd row and 2nd column and the template image feature reaches the difference threshold, it can be determined that the region where that sub-image is located in the image to be detected has a defect.
The server can output a defect detection result, and the defect detection result can indicate that the image to be detected has defects and indicate the positions of the defects.
With these measures, after the image to be detected is divided, the image feature of the sub-image in each area of the image to be detected is extracted, and whether a defect exists is judged from the difference between that feature and the template image feature. Because the feature generation network does not directly output a defect detection result, the training difficulty is greatly reduced, as are the labeling difficulty and workload. Whatever form a surface defect takes, the image feature of a defective image differs markedly from the template image feature, so missed detections are avoided and the detection accuracy requirement is easily met.
In an embodiment, when the server divides the image to be detected into a plurality of sub images to be detected, the server may divide the image to be detected into a plurality of sub images to be detected with the same scale, and the adjacent sub images to be detected are overlapped.
Referring to fig. 5a-5b, schematic diagrams of image division provided in another embodiment of the present application are shown, as shown in fig. 5a, sub-images to be detected adjacent to each other in the horizontal direction of the image to be detected overlap; as shown in fig. 5b, the vertically adjacent sub-images to be detected of the image to be detected overlap.
The overlap between adjacent sub-images to be detected can be represented by a preset area ratio or by the number of overlapping pixel rows/columns.
With this measure, the division does not lose information when a defect lies on the edge of a sub-image to be detected, so missed detections from that cause are avoided and the defect detection accuracy is markedly improved.
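For illustration, such an overlapping division can be sketched as a sliding window; the tile size and the number of overlapping pixel rows/columns are assumed parameters, and this sketch does not pad the right and bottom edges.

    import numpy as np

    def divide_with_overlap(image: np.ndarray, tile_h: int, tile_w: int,
                            overlap_rows: int, overlap_cols: int):
        """Divide an image into equal-sized tiles whose neighbours overlap by a
        fixed number of pixel rows/columns (illustrative sliding-window sketch)."""
        h, w = image.shape[:2]
        stride_h, stride_w = tile_h - overlap_rows, tile_w - overlap_cols
        tiles, positions = [], []
        for top in range(0, h - tile_h + 1, stride_h):
            for left in range(0, w - tile_w + 1, stride_w):
                tiles.append(image[top:top + tile_h, left:left + tile_w])
                positions.append((top, left))   # where each tile came from
        return tiles, positions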
In one embodiment, the feature generation network may include a feature extraction network and a feature coding network, where the feature extraction network is configured to extract a feature map from an image, and the feature coding network is configured to perform coding processing on the feature map output by the feature extraction network.
The feature extraction network may include a backbone network and a multi-scale extraction network. The backbone network can adopt a general network structure such as ResNet, DarkNet, or DenseNet, or can be a deep neural network built to suit the requirements; in processing the image, such a network gradually reduces the width and height of the feature map while increasing its number of channels.
The multi-scale extraction network can be obtained by modifying a feature pyramid extraction method such as FPN (Feature Pyramid Network) or the pyramid used in DarkNet. One modification is to add the extraction of a shallower feature map whose width and height equal those of the image fed into the feature extraction network, or equal half of them. The shallow feature map better captures the apparent features of small-size defects in the image.
The multi-scale extraction network can output a plurality of feature maps with different scales, and the feature extraction network performs fusion processing on the feature maps with different scales to obtain a fused feature map.
The feature coding network may be a fully-connected network, and includes a plurality of fully-connected network layers, and may encode the feature map output by the feature extraction network into a multidimensional feature vector, and use the feature vector as an image feature of the sub-image to be detected. Illustratively, 256 or 512 elements may be included in the feature vector.
With these measures, the multi-scale feature extraction and multi-granularity feature fusion strategies adapt to defects of various appearances, improve the feature extraction precision, and enhance the generalization capability of the feature generation network.
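The following PyTorch sketch illustrates the general shape of such a feature generation network: a small backbone, a shallow and a deeper feature map fused together, and a fully connected coding layer producing a fixed-length vector. The layer sizes and the 256-element vector length are assumptions for illustration; the sketch does not reproduce the specific backbone or pyramid variants described above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureGenerationNet(nn.Module):
        """Illustrative sketch: backbone stages at two scales, multi-scale fusion,
        and a fully connected feature coding layer."""

        def __init__(self, feat_dim: int = 256):
            super().__init__()
            self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1),
                                        nn.ReLU())             # shallow map, 1/2 resolution
            self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1),
                                        nn.ReLU())             # deeper map, 1/4 resolution
            self.fuse = nn.Conv2d(32 + 64, 64, 1)               # fuse the multi-scale maps
            self.encoder = nn.Sequential(nn.Flatten(),
                                         nn.Linear(64 * 4 * 4, feat_dim))  # feature coding

        def forward(self, x):
            f1 = self.stage1(x)                                  # shallow feature map
            f2 = self.stage2(f1)                                 # deeper feature map
            f2_up = F.interpolate(f2, size=f1.shape[2:], mode="nearest")
            fused = self.fuse(torch.cat([f1, f2_up], dim=1))
            pooled = F.adaptive_avg_pool2d(fused, 4)             # fixed 4 x 4 spatial size
            return self.encoder(pooled)                          # feature vector (feat_dim,)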
In an embodiment, before the image defect detection method is performed, a feature generation network may be trained, referring to fig. 6, which is a schematic flow chart of a training method for a feature generation network provided in an embodiment of the present application, as shown in fig. 6, the method may include the following steps 610 to 630.
Step 610: inputting the sample images in the sample data set into a convolutional neural network to obtain class prediction information of the sample images input by the convolutional neural network; wherein, the category prediction information is that the sample image has defects or does not have defects.
The sample data set includes a number of sample images, which may be images of the surface of an object undergoing surface defect detection. Each sample image may carry a category label indicating the presence or absence of a defect in the sample image.
The convolutional neural network may include the above-described feature extraction network and feature encoding network, and may contain a classification layer (e.g., softmax function). The network parameters of each network layer of the convolutional neural network can take random numbers as initial values, and the backbone network in the convolutional neural network can use pre-trained network parameters of other data sets as initial values.
The server can input the sample image into the convolutional neural network, and the convolutional neural network is used for carrying out feature extraction and feature coding on the sample image and classifying the coded feature vectors to obtain class prediction information.
Step 620: the convolutional neural network is adjusted based on a difference between the class label and the class prediction information of the sample image.
Step 630: and repeating the process until the convolutional neural network is converged, and taking the convolutional neural network without the classification layer as a characteristic generation network.
The server can evaluate the difference between the class label and the class prediction information of the sample image according to a preset loss function, and thereby adjust the network parameters of the convolutional neural network. The server may update the network parameters of the feature extraction network and the feature coding network by back-propagation; the optimization methods for updating the network parameters include, but are not limited to, Adam (Adaptive Moment Estimation), SGD (Stochastic Gradient Descent), Nesterov Accelerated Gradient, RMSProp (Root Mean Square Propagation), and the like, or combinations thereof.
After the server side adjusts the network parameters, the server side recalculates the sample image according to the adjusted convolutional neural network to obtain category prediction information, and evaluates the difference between the category prediction information and the category label based on the loss function again to adjust the network parameters.
After multiple iterations, when the function value of the loss function tends to be stable, the convolutional neural network can be considered to be converged, the server side can keep the network parameters of the convolutional neural network, and the convolutional neural network with the classification layer removed is used as a characteristic generation network.
By the measures, the feature generation network for acquiring the image features from the sub-images to be detected can be obtained through training.
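A minimal training-loop sketch of steps 610 to 630 follows, assuming the illustrative FeatureGenerationNet above (256-dimensional output), a binary defect/no-defect label, and Adam as the optimizer; the names, hyper-parameters, and fixed epoch count are placeholders rather than values from the disclosure.

    import torch
    import torch.nn as nn

    def train_feature_network(feature_net, train_loader, num_classes=2,
                              epochs=10, lr=1e-3, device="cpu"):
        """Train with a classification layer on top, then discard that layer."""
        classifier = nn.Sequential(feature_net, nn.Linear(256, num_classes)).to(device)
        optimizer = torch.optim.Adam(classifier.parameters(), lr=lr)   # or SGD, RMSProp, ...
        loss_fn = nn.CrossEntropyLoss()                                # softmax classification loss

        for _ in range(epochs):                  # in practice: iterate until the loss stabilises
            for images, labels in train_loader:  # labels: 0 = no defect, 1 = defect
                images, labels = images.to(device), labels.to(device)
                loss = loss_fn(classifier(images), labels)
                optimizer.zero_grad()
                loss.backward()                  # back-propagate the class-label difference
                optimizer.step()

        return feature_net                       # classification layer removed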
In an embodiment, a sample data set may be constructed prior to training the convolutional neural network. Referring to fig. 7, a flowchart of a method for constructing a sample data set according to an embodiment of the present application is shown, and as shown in fig. 7, the method may include the following steps 710 to 730.
Step 710: a plurality of original images and category labels corresponding to the original images are acquired.
Wherein the original image may be an image of the surface of the object subject to surface defect detection. The original image is pre-labeled and carries a category label, and the category label indicates that the original image has defects or does not have defects.
Step 720: and carrying out conventional enhancement processing on the original image to obtain an enhanced image corresponding to the original image.
Here, the conventional enhancement processing may include enhancement means for an image such as inversion, rotation, translation, cropping, and the like.
After the server performs conventional enhancement processing on each original image, an enhanced image corresponding to each original image can be obtained.
Step 730: adding a category label of the original image to the enhanced image corresponding to the original image, taking the original image and the enhanced image carrying the category label as sample images, and putting the sample images into a sample data set.
The original image and the corresponding enhanced image have the same property, and if the original image has a defect, the corresponding enhanced image has a defect; and if the original image has no defect, the corresponding enhanced image has no defect.
In this case, the server may add a category label of the original image to the enhanced image corresponding to the original image, and place both the original image and the enhanced image carrying the category label as sample images into the sample data set.
By the measures, the sample diversity can be effectively improved, and the generalization capability of the feature generation network is improved.
In an embodiment, the original image may carry defect type information, which can be labeled when the category label is added manually and which indicates the type of defect in the original image; the same original image may carry more than one piece of defect type information. The defect types may include noise, stains, scratches, crush marks, rust, cracks, and the like.
Before adding the category label to the enhanced image corresponding to the original image, the server may also check the defect type information of the original image and determine whether it indicates that a specified defect exists in the original image. Here, the specified defects may include scratches, crush marks, rust, cracks, and the like.
On the one hand, if no specified defect exists in the original image, the step of adding the category label to the enhanced image can be executed directly. On the other hand, if the original image has a specified defect, the server may perform the specified enhancement processing on the original image to obtain an enhanced image corresponding to the original image. Here, the specified enhancement processing may include enhancement means such as contrast adjustment, noise addition, and image scaling.
After the specified enhancement processing, more enhanced images can be obtained, which further improves the diversity of the sample images and the generalization capability of the feature generation network.
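The construction of the sample data set can be sketched as follows; the enhancement operations shown (flip/rotation as conventional enhancement, contrast adjustment plus noise as specified enhancement) and the set of specified defect types are illustrative assumptions only.

    import random
    import numpy as np

    SPECIFIED_DEFECTS = {"scratch", "crush", "rust", "crack"}   # assumed set of specified defects

    def conventional_enhance(image: np.ndarray) -> np.ndarray:
        """Conventional enhancement: random flip and rotation (translation/cropping omitted)."""
        out = np.flip(image, axis=random.choice([0, 1])).copy()
        return np.rot90(out, k=random.randint(0, 3)).copy()

    def specified_enhance(image: np.ndarray) -> np.ndarray:
        """Specified enhancement: contrast adjustment plus additive noise."""
        contrast = random.uniform(0.8, 1.2)
        noisy = image.astype(np.float32) * contrast + np.random.normal(0, 5, image.shape)
        return np.clip(noisy, 0, 255).astype(image.dtype)

    def build_samples(original: np.ndarray, label: int, defect_types) -> list:
        """Original and enhanced images all inherit the original's category label."""
        samples = [(original, label), (conventional_enhance(original), label)]
        if SPECIFIED_DEFECTS & set(defect_types):                # a specified defect is present
            samples.append((specified_enhance(original), label))
        return samples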
In an embodiment, referring to fig. 8, a flowchart of a method for generating a template image feature provided in an embodiment of the present application is shown, as shown in fig. 8, the method may include the following steps 810 to 830.
Step 810: at least one normal image is divided into a plurality of normal sub-images.
Wherein the normal image may be an image of the surface of the object subject to surface defect detection, and the normal image is free of defects.
The server may perform average division on the normal image in the horizontal direction and perform average division on the normal image in the vertical direction, so as to obtain a plurality of normal sub-images. For example, the server may divide the normal image into m parts in the horizontal direction and divide the normal image into n parts in the vertical direction, so as to obtain m × n normal sub-images.
In an embodiment, when the server divides the normal image into a plurality of normal sub-images, the server may divide the normal image into a plurality of normal sub-images with the same scale, and there is an overlap between adjacent normal sub-images.
The divided normal sub-images can have the same width and height as the sub-images to be detected that are used when the method is applied.
Step 820: and inputting each normal sub-image into the feature generation network to obtain the image features which are output by the feature generation network and correspond to each normal sub-image.
And after the server side inputs the normal subimages into the feature generation network, each normal subimage is respectively calculated through the feature generation network, so that the image feature corresponding to each normal subimage is obtained. Here, the image feature may be a feature map or a feature vector.
Step 830: and determining the template image characteristics according to the image characteristics corresponding to the plurality of normal sub-images.
After obtaining the image features corresponding to the plurality of normal sub-images, the server may perform averaging processing on the plurality of image features, thereby obtaining the template image features. For example, if the image feature is a feature map, a mean value may be calculated for the pixel value of each position of the feature map corresponding to each normal sub-image, so as to obtain the template image feature. If the image features are feature vectors, the mean value of the elements at each position of the feature vectors corresponding to each normal sub-image can be calculated, so as to obtain the template image features.
By the above measures, template image features can be obtained for performing a subsequent image defect detection method.
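A sketch of steps 810 to 830, assuming the feature generation network outputs one feature vector per sub-image; the averaging is the element-wise mean described above, and the function and variable names are illustrative.

    import torch

    @torch.no_grad()
    def build_template_feature(feature_net, normal_sub_images, device="cpu"):
        """Average the feature vectors of all normal sub-images (element-wise mean)."""
        feature_net.eval().to(device)
        feats = []
        for sub in normal_sub_images:                        # each sub: H x W x 3 uint8 array
            x = torch.from_numpy(sub).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            feats.append(feature_net(x.to(device)).squeeze(0).cpu())
        return torch.stack(feats).mean(dim=0)                # template feature vector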
In an embodiment, referring to fig. 9, which is a schematic flow chart of an image defect detection method provided in another embodiment of the present application, after the image to be detected is acquired, it may be partitioned into a plurality of image blocks (sub-images to be detected), and feature extraction and encoding may be performed on each image block to obtain its feature vector. In addition, a large number of defect-free images may be block-partitioned in advance, feature extraction and encoding performed on those blocks to obtain their feature vectors, and the template feature vector obtained by averaging these feature vectors. By calculating the distance between the feature vector of each image block and the template feature vector, whether each image block is a defective image can be judged, and whether the image to be detected has a defect, and the area where the defect is located, can then be determined from the per-block detection results.
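Tying the illustrative helpers above together, the flow of fig. 9 might look as follows; the threshold value 0.2 and the block indexing are placeholders rather than values taken from the disclosure.

    import torch

    def detect_defects(image, feature_net, template_feature, threshold=0.2, m=6, n=4):
        """End-to-end sketch: divide, encode, and compare with the template feature."""
        defective_regions = []
        for idx, sub in enumerate(divide_image(image, m=m, n=n)):
            x = torch.from_numpy(sub).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                feat = feature_net(x).squeeze(0).numpy()
            if is_defective(feat, template_feature.numpy(), threshold):
                defective_regions.append((idx // m, idx % m))  # (row, column) of the block
        return defective_regions                               # empty list: no defect found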
Fig. 10 is an image defect detecting apparatus according to an embodiment of the present invention, and as shown in fig. 10, the apparatus may include:
the dividing module 1010 is configured to divide the image to be detected into a plurality of sub images to be detected;
a generating module 1020, configured to input each to-be-detected subimage into the trained feature generation network, and obtain an image feature output by the feature generation network and corresponding to each to-be-detected subimage;
the judging module 1030 is configured to judge, for each sub-image to be detected, whether the difference between the image feature of the sub-image to be detected and the template image feature reaches a preset difference threshold;
the determining module 1040 is configured to determine, if the difference reaches the difference threshold, that the corresponding region of the sub-image to be detected in the image to be detected has a defect.
The implementation process of the functions and actions of each module in the above-mentioned apparatus is specifically detailed in the implementation process of the corresponding step in the above-mentioned image defect detection method, and is not described herein again.
In the embodiments provided in the present application, the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.

Claims (10)

1. An image defect detection method, comprising:
dividing an image to be detected into a plurality of sub images to be detected;
inputting each sub image to be detected into the trained feature generation network, and obtaining the image feature which is output by the feature generation network and corresponds to each sub image to be detected;
judging, for each sub-image to be detected, whether the difference between the image feature of the sub-image to be detected and a template image feature reaches a preset difference threshold;
and if so, determining that the corresponding area of the sub-image to be detected in the image to be detected has defects.
2. The method according to claim 1, wherein the dividing the image to be detected into a plurality of sub-images to be detected comprises:
dividing the image to be detected into a plurality of sub images to be detected with the same scale; wherein, the adjacent subimages to be detected are overlapped.
3. The method of claim 1, wherein the feature generation network comprises a feature extraction network and a feature encoding network, and the image features are feature vectors;
the inputting of each sub-image to be detected into the trained feature generation network to obtain the image features output by the feature generation network and corresponding to each sub-image to be detected comprises:
inputting each sub image to be detected into the feature extraction network to obtain a feature map with a plurality of scales corresponding to each sub image to be detected;
and respectively coding the feature maps of a plurality of scales corresponding to each sub-image to be detected through the feature coding network to obtain a feature vector corresponding to each sub-image to be detected.
4. The method of claim 3, wherein the feature generation network is trained by:
inputting a sample image in a sample data set into a convolutional neural network to obtain class prediction information of the sample image output by the convolutional neural network; wherein the class prediction information indicates that the sample image has a defect or has no defect;
adjusting network parameters of the convolutional neural network based on a difference between a class label of the sample image and the class prediction information;
and repeating the process until the convolutional neural network is converged, and taking the convolutional neural network with the classification layer removed as the feature generation network.
5. The method of claim 4, wherein prior to said inputting sample images of the sample data set into the convolutional neural network, the method further comprises:
acquiring a plurality of original images and category labels corresponding to the original images;
carrying out conventional enhancement processing on the original image to obtain an enhanced image corresponding to the original image;
adding the category label of the original image to the enhanced image corresponding to the original image, taking the original image and the enhanced image carrying the category label as sample images, and putting the sample images into the sample data set.
6. The method of claim 5, wherein the original image carries defect type information;
before adding the category label of the original image to the enhanced image corresponding to the original image, the method further comprises:
and when the defect type information of the original image indicates that the specified defect exists in the original image, performing specified enhancement processing on the original image to obtain an enhanced image corresponding to the original image.
7. The method according to claim 1, wherein before the determining whether the difference between the image feature of the sub-image to be detected and the template image feature reaches a preset difference threshold, the method further comprises:
dividing at least one normal image into a plurality of normal sub-images;
inputting each normal sub-image into the feature generation network to obtain image features output by the feature generation network and corresponding to each normal sub-image;
and determining the template image characteristics according to the image characteristics corresponding to the plurality of normal sub-images.
8. An image defect detecting apparatus, comprising:
the dividing module is used for dividing the image to be detected into a plurality of sub images to be detected;
the generation module is used for inputting each sub image to be detected into the trained feature generation network to obtain the image features which are output by the feature generation network and correspond to each sub image to be detected;
the judging module is used for judging, for each sub-image to be detected, whether the difference between the image feature of the sub-image to be detected and the template image feature reaches a preset difference threshold;
and the determining module is used for determining, if the difference reaches the difference threshold, that the corresponding area of the sub-image to be detected in the image to be detected has a defect.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the image defect detection method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program executable by a processor to perform the image defect detection method of any one of claims 1 to 7.
CN202110525720.XA 2021-05-13 2021-05-13 Image defect detection method and device, electronic equipment and storage medium Pending CN113240642A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110525720.XA CN113240642A (en) 2021-05-13 2021-05-13 Image defect detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110525720.XA CN113240642A (en) 2021-05-13 2021-05-13 Image defect detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113240642A true CN113240642A (en) 2021-08-10

Family

ID=77134258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110525720.XA Pending CN113240642A (en) 2021-05-13 2021-05-13 Image defect detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113240642A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379742A (en) * 2021-08-12 2021-09-10 浙江华睿科技股份有限公司 Structure detection method and device of device based on artificial intelligence and electronic equipment
CN113379742B (en) * 2021-08-12 2021-11-19 浙江华睿科技股份有限公司 Structure detection method and device of device based on artificial intelligence and electronic equipment
CN114972350A (en) * 2022-08-01 2022-08-30 深圳市信润富联数字科技有限公司 Method, device and equipment for detecting abnormality of mold and storage medium
CN114972350B (en) * 2022-08-01 2022-11-15 深圳市信润富联数字科技有限公司 Method, device and equipment for detecting abnormality of mold and storage medium
CN115937109A (en) * 2022-11-17 2023-04-07 创新奇智(上海)科技有限公司 Silicon wafer defect detection method and device, electronic equipment and storage medium
CN115690101A (en) * 2022-12-29 2023-02-03 摩尔线程智能科技(北京)有限责任公司 Defect detection method, defect detection apparatus, electronic device, storage medium, and program product
CN116797590A (en) * 2023-07-03 2023-09-22 深圳市拓有软件技术有限公司 Mura defect detection method and system based on machine vision
CN116912230A (en) * 2023-08-11 2023-10-20 海格欧义艾姆(天津)电子有限公司 Patch welding quality detection method and device, electronic equipment and storage medium
CN116883416A (en) * 2023-09-08 2023-10-13 腾讯科技(深圳)有限公司 Method, device, equipment and medium for detecting defects of industrial products
CN116883416B (en) * 2023-09-08 2023-11-24 腾讯科技(深圳)有限公司 Method, device, equipment and medium for detecting defects of industrial products
CN117152459A (en) * 2023-10-30 2023-12-01 腾讯科技(深圳)有限公司 Image detection method, device, computer readable medium and electronic equipment
CN117474916A (en) * 2023-12-27 2024-01-30 苏州镁伽科技有限公司 Image detection method, electronic equipment and storage medium
CN117495846A (en) * 2023-12-27 2024-02-02 苏州镁伽科技有限公司 Image detection method, device, electronic equipment and storage medium
CN117495846B (en) * 2023-12-27 2024-04-16 苏州镁伽科技有限公司 Image detection method, device, electronic equipment and storage medium
CN117474916B (en) * 2023-12-27 2024-04-26 苏州镁伽科技有限公司 Image detection method, electronic equipment and storage medium
CN117523324A (en) * 2024-01-04 2024-02-06 苏州镁伽科技有限公司 Image processing method and image sample classification method, device and storage medium
CN117523324B (en) * 2024-01-04 2024-04-16 苏州镁伽科技有限公司 Image processing method and image sample classification method, device and storage medium

Similar Documents

Publication Publication Date Title
CN113240642A (en) Image defect detection method and device, electronic equipment and storage medium
CN112232349B (en) Model training method, image segmentation method and device
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN106920229B (en) Automatic detection method and system for image fuzzy area
CN110148130B (en) Method and device for detecting part defects
KR102150673B1 (en) Inspection method for appearance badness and inspection system for appearance badness
CN108460764A (en) The ultrasonoscopy intelligent scissor method enhanced based on automatic context and data
CN110163213B (en) Remote sensing image segmentation method based on disparity map and multi-scale depth network model
CN111833306A (en) Defect detection method and model training method for defect detection
CN110596120A (en) Glass boundary defect detection method, device, terminal and storage medium
CN113392669B (en) Image information detection method, detection device and storage medium
CN111612747B (en) Rapid detection method and detection system for product surface cracks
CN111768392B (en) Target detection method and device, electronic equipment and storage medium
CN111680690B (en) Character recognition method and device
CN114926407A (en) Steel surface defect detection system based on deep learning
CN113538603A (en) Optical detection method and system based on array product and readable storage medium
CN114926441A (en) Defect detection method and system for machining and molding injection molding part
CN115147418A (en) Compression training method and device for defect detection model
CN113516652A (en) Battery surface defect and adhesive detection method, device, medium and electronic equipment
CN112288726A (en) Method for detecting foreign matters on belt surface of underground belt conveyor
CN110751623A (en) Joint feature-based defect detection method, device, equipment and storage medium
CN115937095A (en) Printing defect detection method and system integrating image processing algorithm and deep learning
CN112529815B (en) Method and system for removing raindrops in real image after rain
García et al. A configuration approach for convolutional neural networks used for defect detection on surfaces
CN114219933A (en) Photographing question searching method

Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination