CN113498528B - Image defect determining method and device, electronic equipment and storage medium


Info

Publication number
CN113498528B
CN113498528B
Authority
CN
China
Prior art keywords
image
defect
detected
pixels
low
Prior art date
Legal status
Active
Application number
CN202080000055.6A
Other languages
Chinese (zh)
Other versions
CN113498528A (en)
Inventor
路元元
李昭月
柴栋
张锁
王洪
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd
Publication of CN113498528A
Application granted
Publication of CN113498528B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The disclosure relates to an image defect determining method and device, an electronic device, and a storage medium, and belongs to the technical field of display panels. The method includes the following steps: performing defect enhancement processing on an acquired image to be detected to obtain a defect-enhanced image; performing downsampling processing on the defect-enhanced image to obtain a low-resolution image; performing binarization processing on the low-resolution image to determine the position of the defect in the low-resolution image; and determining the position of the defect in the image to be detected according to the position of the defect in the low-resolution image and the mapping relationship between the low-resolution image and the image to be detected. The present disclosure can improve the efficiency of image defect determination.

Description

Image defect determining method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of display panels, and in particular, to an image defect determining method, an image defect determining apparatus, an electronic device, and a computer readable storage medium.
Background
The production of a liquid crystal panel includes many process sections, such as the Array, Cell, and Module sections, which differ according to the product type. Each section requires roughly 5-15 layers of processes, and each layer requires 4-8 manufacturing steps. Each manufacturing step has a corresponding inspection procedure: automatic optical inspection equipment collects defect pictures of the panel, and the defect pictures are then inspected manually. The more complex the process, the more complex the resulting defect pictures and the lower the efficiency of manual inspection.
Disclosure of Invention
An object of the present disclosure is to provide an image defect determining method, an image defect determining apparatus, an electronic device, and a computer-readable storage medium.
According to a first aspect of the present disclosure, there is provided an image defect determining method including:
performing defect enhancement processing on the acquired image to be detected to obtain a defect enhanced image;
Performing downsampling processing on the defect enhanced image to obtain a low-resolution image;
Performing binarization processing on the low-resolution image, and determining the position of a defect in the low-resolution image;
And determining the position of the defect in the image to be detected according to the position of the defect in the low-resolution image and the mapping relation between the low-resolution image and the image to be detected.
Optionally, the image defect determining method of the embodiment of the present disclosure further includes:
acquiring an original image, and performing downsampling and gray scale processing on the original image to obtain a gray scale image;
And determining the image to be detected according to the gray level image.
Optionally, the determining the image to be detected according to the gray level image includes:
when the number of pixels in the gray level image is not an integer multiple of the number of pixels of the target image, cutting the gray level image to obtain a plurality of images to be detected, and enabling the number of pixels in the images to be detected to be an integer multiple of the number of pixels of the target image.
Optionally, the cropping the gray scale image includes:
If the numbers of pixels in the horizontal and vertical directions of the gray-scale image are A and C, respectively, and the numbers of pixels in the horizontal and vertical directions of the target image are B and D, respectively, then according to the formulas:
Δx = B - mod(A, B), Δy = D - mod(C, D), the number of pixels cropped in the horizontal direction, Δx, and the number of pixels cropped in the vertical direction, Δy, of the gray-scale image are determined, where mod denotes the remainder function;
and cropping the gray-scale image according to Δx and Δy.
Optionally, the performing defect enhancement processing on the obtained image to be detected to obtain a defect enhanced image includes:
And carrying out defect enhancement processing on the acquired image to be detected through a mask image to obtain a defect enhanced image, wherein the numerical value corresponding to each pixel in the mask image represents the probability of occurrence of the defect.
Optionally, performing defect enhancement processing on the obtained image to be detected through the mask image to obtain a defect enhanced image, including:
performing time-frequency transformation, amplitude normalization and frequency-time transformation on the acquired image to be detected to obtain a transformed image;
Carrying out noise reduction treatment on the transformed image to obtain a noise-reduced image;
processing the noise reduction image through a mask image with the same resolution as the noise reduction image to obtain a mask processed image;
Normalizing the image after mask processing to obtain a defect enhanced image.
Optionally, the gray value corresponding to each pixel in the mask image is inversely related to the distance between the pixel and the center of the mask image.
Optionally, the gray values corresponding to the pixels in the mask image obey normal distribution, and the gray values corresponding to the center of the mask image are the largest.
Optionally, the performing downsampling processing on the defect enhanced image to obtain a low resolution image includes:
And performing multiple downsampling treatment, multiple noise reduction treatment and multiple normalization treatment on the defect enhanced image to obtain a low-resolution image.
Optionally, the binarizing the low resolution image to determine a location of a defect in the low resolution image includes:
and taking the position of the pixel with the corresponding value smaller than the preset threshold value in the low-resolution image as the position of the defect.
Optionally, the image defect determining method of the embodiment of the present disclosure further includes:
Classifying the image to be detected according to a defect classification model to obtain defect types of the image to be detected, wherein the defect classification model is used for identifying one or more defect types.
Optionally, the image defect determining method of the embodiment of the present disclosure further includes:
acquiring an original image, and performing downsampling and gray scale processing on the original image to obtain a gray scale image;
When the number of pixels in the gray level image is not an integer multiple of the number of pixels of the target image, cutting the gray level image to obtain a plurality of images to be detected, and enabling the number of pixels in the images to be detected to be an integer multiple of the number of pixels of the target image;
Classifying the images to be detected according to the defect classification model to obtain defect types of each image to be detected;
And when the defect categories corresponding to the plurality of images to be detected are the same, taking the defect category as the defect category of the original image.
Optionally, the defect classification model is trained by a model integration method.
Optionally, the image defect determining method of the embodiment of the present disclosure further includes:
And training to obtain the defect classification model through a stacking integration method according to a pre-trained average model, binary classification model, and exponential model, and a training set.
Optionally, the defect classification model includes one or more groups of different fully connected layers and normalized layers, each group of fully connected layers and normalized layers corresponding to a different classification task.
Optionally, the average model, the binary classification model and the exponential model are all trained based on an ImageNet pre-trained model.
According to a second aspect of the present disclosure, there is provided an image defect determining apparatus including:
the defect enhancement processor is configured to perform defect enhancement processing on the acquired image to be detected to obtain a defect enhanced image;
a downsampling processor configured to downsample the defect-enhanced image to obtain a low-resolution image;
a binarization processor configured to binarize the low resolution image, and determine a position of a defect in the low resolution image;
an image defect determining processor configured to determine a position of a defect in the image to be detected according to the position of the defect in the low resolution image and a mapping relationship between the low resolution image and the image to be detected.
Optionally, the image defect determining apparatus of the embodiment of the present disclosure further includes:
the preprocessing device comprises a preprocessor, a display device and a display device, wherein the preprocessor is configured to acquire an original image, and perform downsampling processing and gray processing on the original image to obtain a gray image; and determining the image to be detected according to the gray level image.
Optionally, the preprocessor determines the image to be detected according to the gray level image by the following steps:
when the number of pixels in the gray level image is not an integer multiple of the number of pixels of the target image, cutting the gray level image to obtain a plurality of images to be detected, and enabling the number of pixels in the images to be detected to be an integer multiple of the number of pixels of the target image.
Optionally, the preprocessor performs clipping on the gray scale image by:
If the numbers of pixels in the horizontal and vertical directions of the gray-scale image are A and C, respectively, and the numbers of pixels in the horizontal and vertical directions of the target image are B and D, respectively, then according to the formulas:
Δx = B - mod(A, B), Δy = D - mod(C, D), the number of pixels cropped in the horizontal direction, Δx, and the number of pixels cropped in the vertical direction, Δy, of the gray-scale image are determined, where mod denotes the remainder function;
and cropping the gray-scale image according to Δx and Δy.
Optionally, the defect enhancement processor is specifically configured to perform defect enhancement processing on the obtained image to be detected through a mask image to obtain a defect enhanced image, where a value corresponding to each pixel in the mask image represents probability of occurrence of the defect.
Optionally, the defect enhancement processor performs defect enhancement processing on the acquired image to be detected through the mask image to obtain a defect enhanced image by the following steps:
performing time-frequency transformation, amplitude normalization and frequency-time transformation on the acquired image to be detected to obtain a transformed image;
Carrying out noise reduction treatment on the transformed image to obtain a noise-reduced image;
processing the noise reduction image through a mask image with the same resolution as the noise reduction image to obtain a mask processed image;
Normalizing the image after mask processing to obtain a defect enhanced image.
Optionally, the gray value corresponding to each pixel in the mask image is inversely related to the distance between the pixel and the center of the mask image.
Optionally, the gray values corresponding to the pixels in the mask image obey normal distribution, and the gray values corresponding to the center of the mask image are the largest.
Optionally, the downsampling processor is specifically configured to perform multiple downsampling processing, multiple noise reduction processing and multiple normalization processing on the defect enhanced image to obtain a low-resolution image.
Optionally, the binarization processor is specifically configured to take, as the location of the defect, the location of the pixel where the corresponding value is smaller than the preset threshold value in the low resolution image.
Optionally, the image defect determining apparatus of the embodiment of the present disclosure further includes:
and the defect classification processor is configured to classify the image to be detected according to a defect classification model to obtain defect types of the image to be detected, wherein the defect classification model is used for identifying one or more defect types.
Optionally, the defect classification processor is further configured to classify the plurality of images to be detected according to a defect classification model to obtain a defect class of each image to be detected; and when the defect categories corresponding to the plurality of images to be detected are the same, taking the defect category as the defect category of the original image.
Optionally, the defect classification model is trained by a model integration method.
Optionally, the image defect determining apparatus of the embodiment of the present disclosure further includes:
a defect classification model determination processor configured to train to obtain the defect classification model through a stacking integration method based on a pre-trained average model, binary classification model, and exponential model, and a training set.
Optionally, the defect classification model includes one or more groups of different fully connected layers and normalized layers, each group of fully connected layers and normalized layers corresponding to a different classification task.
Optionally, the average model, the binary classification model and the exponential model are all trained based on an ImageNet pre-trained model.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
A processor; and
A memory configured to store executable instructions of the processor;
Wherein the processor is configured to perform the method of any of the above via execution of the executable instructions.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the method of any of the above.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 illustrates a schematic diagram of a system architecture of an exemplary application environment in which an image defect determination method and apparatus of embodiments of the present disclosure may be applied;
FIG. 2 illustrates a flowchart of a method of image defect determination in an embodiment of the present disclosure;
FIG. 3 (a) is a flowchart showing a method for acquiring an image to be detected in an embodiment of the present disclosure;
FIG. 3 (b) shows a schematic representation of a defect image;
FIG. 4 illustrates a schematic diagram of image cropping in an embodiment of the present disclosure;
FIG. 5 illustrates yet another schematic diagram of image cropping in an embodiment of the disclosure;
FIG. 6 illustrates yet another schematic diagram of image cropping in an embodiment of the disclosure;
FIG. 7 illustrates yet another schematic diagram of image cropping in an embodiment of the disclosure;
FIG. 8 illustrates a flow chart of defect enhancement processing in an embodiment of the present disclosure;
FIG. 9 shows a schematic representation of an amplitude image in an embodiment of the present disclosure;
FIG. 10 illustrates a schematic of a phase image in an embodiment of the present disclosure;
FIG. 11 illustrates a schematic of a transformed image in an embodiment of the present disclosure;
FIG. 12 illustrates a schematic diagram of a noise reduction image in an embodiment of the present disclosure;
FIG. 13 illustrates a schematic diagram of a mask image in an embodiment of the present disclosure;
FIG. 14 shows a schematic representation of the image after masking in an embodiment of the present disclosure;
FIG. 15 shows a schematic diagram after binarization processing in an embodiment of the present disclosure;
FIG. 16 illustrates a schematic of a defect location determined in an embodiment of the present disclosure;
FIG. 17 is a schematic diagram of a defect classification model in an embodiment of the disclosure;
FIG. 18 illustrates yet another structural schematic of a defect classification model in an embodiment of the disclosure;
FIG. 19 is a diagram showing a network architecture of a base model in an embodiment of the present disclosure;
FIG. 20 is a diagram showing a network architecture of a defect classification model in an embodiment of the disclosure;
fig. 21 is a schematic diagram showing a configuration of an image defect determining apparatus in an embodiment of the present disclosure;
fig. 22 shows a schematic structural diagram of a computer system for implementing an electronic device of an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In the present disclosure, the terms "comprising," "including," "having," and "disposed in" are intended to be open-ended and mean that there may be additional elements/components/etc. in addition to the listed elements/components/etc.; the terms "first," "second," and the like are used merely as labels and do not limit the number or order of their objects.
At present, defect detection is difficult because there are many kinds of display panel products, the processes are complex, the background film-layer structure of defect images varies greatly, and the automatic optical inspection equipment used to collect the images is not uniform. For defect images acquired by automatic optical inspection equipment during display panel production, the defects in the images are mostly identified manually. However, the efficiency of manually identifying defects in an image is low; in particular, as the complexity of the production process increases, the defect images become more complex, manual identification becomes slower, and the average recognition speed is on the order of seconds.
In order to solve the above problems, the present disclosure provides an image defect determining method and apparatus, an electronic device, and a computer readable storage medium, which can improve the efficiency of image defect determination and reduce the labor cost.
FIG. 1 illustrates a schematic diagram of a system architecture of an exemplary application environment in which an image defect determination method and apparatus of embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include an image acquisition device 101, a network 102, and a server 103. The image acquisition device 101 may include an image sensor, automatic optical inspection equipment, and the like; the network 102 serves as a medium providing a communication link between the image acquisition device 101 and the server 103. The network 102 may include various connection types, such as wired or wireless communication links, or fiber optic cables. The image acquisition device 101 is a device that detects, based on optical principles, common defects encountered in production. It should be understood that the numbers of image acquisition devices, networks, and servers in fig. 1 are merely illustrative; there may be any number of image acquisition devices, networks, and servers, as required by the implementation. For example, the server 103 may be a server cluster formed by a plurality of servers.
The image defect determining method provided by the embodiments of the present disclosure is generally performed by the server 103, and accordingly, the image defect determining apparatus is generally provided in the server 103. For example, the image of the defect may be uploaded to the server 103 by the image acquisition device 101, and the server 103 identifies the defect in the defect image by the image defect determining method provided by the embodiment of the present disclosure, so as to determine the position of the defect.
Referring to fig. 2, fig. 2 shows a flowchart of an image defect determining method in an embodiment of the present disclosure, which may include the steps of:
Step S210, performing defect enhancement processing on the acquired image to be detected to obtain a defect enhanced image.
Step S220, performing downsampling processing on the defect enhanced image to obtain a low-resolution image.
Step S230, binarizing the low-resolution image to determine the position of the defect in the low-resolution image.
Step S240, determining the position of the defect in the image to be detected according to the position of the defect in the low-resolution image and the mapping relation between the low-resolution image and the image to be detected.
According to the image defect determining method, the defect enhancement processing is carried out on the image to obtain the defect enhancement image, so that the defect in the image can be better highlighted. The defect enhanced image is subjected to downsampling processing to reduce the resolution of the defect enhanced image, so that a low-resolution image is obtained, and the position of the defect in the low-resolution image can be more conveniently determined. And then, determining the position of the defect in the image to be detected according to the position of the defect in the low-resolution image and the mapping relation between the low-resolution image and the image to be detected. Therefore, the defect position in the image can be determined without manually identifying the defect, so that the defect detection efficiency can be improved. And, through processes such as image defect enhancement processing and downsampling processing, the accuracy of the determined defect position can be improved.
The image defect determining method of the embodiment of the present disclosure is described in more detail below.
In step S210, defect enhancement processing is performed on the acquired image to be detected, and a defect enhanced image is obtained.
In the embodiment of the disclosure, the image to be detected refers to an image containing a defect, for example, the image may be a defect image obtained directly, or may be obtained by preprocessing a defect image obtained by an image obtaining device. Optionally, the method for acquiring the image to be detected may refer to fig. 3 (a), which includes the following steps:
Step S310, an original image is obtained, and downsampling processing and gray processing are performed on the original image to obtain a gray image.
In the embodiment of the disclosure, the original image may be a defect image acquired by the image acquisition device; referring to fig. 3 (b), fig. 3 (b) shows a schematic diagram of a defect image, and it can be seen that a defect exists at the position marked with an ellipse in the image. Typically, the resolution of the original image is high, for example 1024×1024. The original image can be downsampled to obtain a lower-resolution downsampled image, so as to improve the defect identification efficiency. Downsampling reduces the number of sampling points: for an N×M image with a downsampling coefficient of k, one point out of every k points is taken in each row and each column of the original image to form a new image, and the obtained image is (N/k)×(M/k). For example, for a 1024×1024 image, if the downsampling coefficient is 2, the downsampled image obtained after downsampling is 512×512.
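A minimal sketch of this downsampling step (NumPy is an assumed choice; the patent does not name a library):

```python
import numpy as np

def downsample(image: np.ndarray, k: int) -> np.ndarray:
    """Keep one pixel out of every k in each row and column, so an N x M
    input yields an (N/k) x (M/k) output as described above."""
    return image[::k, ::k]

# Example: a 1024 x 1024 image with downsampling coefficient k = 2 -> 512 x 512.
original = np.random.rand(1024, 1024)
assert downsample(original, 2).shape == (512, 512)
```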
Because the defect forms of the original images differ and the background film layers vary greatly, the images can be gray-scale processed in order to better highlight the defects. For different images, different gray-scale coefficients can be used for image format conversion according to the color distribution. In one exemplary embodiment of the present disclosure, for each pixel in the downsampled image, the gray value of the pixel may be updated to the average of the gray values of its channels, and the average may be normalized. For any pixel with coordinates (x, y), the following formula can be used:
Gray(x, y) = (R(x, y) + G(x, y) + B(x, y)) / 3, where Gray(x, y) is the gray value of the pixel, and R(x, y), G(x, y), and B(x, y) respectively represent the gray values of the R, G, and B channels of the pixel.
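A sketch of this channel-averaging step, assuming an H x W x 3 RGB array with 8-bit values (the value range and NumPy are assumptions):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Gray(x, y) = (R(x, y) + G(x, y) + B(x, y)) / 3, then normalized to [0, 1]."""
    gray = rgb.astype(np.float64).mean(axis=2)  # channel average per pixel
    return gray / 255.0                         # normalization of the average
```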
Step S320, determining an image to be detected according to the gray level image.
In the embodiment of the disclosure, the gray image can be directly used as the image to be detected. And when the number of pixels in the gray level image is not an integer multiple of the number of pixels of the target image, cutting the gray level image to obtain a plurality of images to be detected, so that the number of pixels in the images to be detected is an integer multiple of the number of pixels of the target image.
It should be noted that, the target image may be an image taken from a certain area of the display panel, and the number of image pixels included in the target image of different display panels is different due to different product types of the display panels, and the image acquired by the image acquisition device may be an array arrangement of the target image, as can be seen in fig. 3 (b). In addition, the number of image pixels in the target image may also be varied for the same display panel due to differences in image acquisition devices, or differences in cut-out areas, or the like.
Assume that an image acquired by the image acquisition apparatus contains A×C image pixels, where A and C are the numbers of image pixels in the horizontal and vertical directions of the image, respectively, and that a target image contains B×D image pixels, where B and D are the numbers of image pixels in the horizontal and vertical directions of the target image, respectively. A may not be an integer multiple of B, and C may not be an integer multiple of D. In this case, it may be determined that Δx and Δy image pixels are to be removed in the horizontal and vertical directions, respectively, based on Δx = B - mod(A, B) and Δy = D - mod(C, D), where mod denotes the remainder function. Of course, if the number of target-image pixels contained in the image is an integer multiple, cropping may be omitted.
For example, the target image may include 70×70 image pixels and the image acquired by the image acquisition device may include 150×150 image pixels; then, from 70 - mod(150, 70) = 60, it follows that 60 image pixels may be cropped in each of the horizontal and vertical directions. Either the original image or the gray-scale image may be cropped, and the cropped image is taken as the image to be detected. In this way, edge information can be removed through cropping, so that the cropped image contains an array arrangement of complete target images, and the situation where an edge position is misjudged as a defect position due to incomplete edge information is avoided.
In an exemplary embodiment of the present disclosure, the gray-scale image may be cropped from different directions to obtain a plurality of images; as can be seen in fig. 4 to fig. 7, the blank area and the shaded area together form the gray-scale image, the shaded area is the cropped-away area, and the blank area is the cropped image. It can be seen that for each crop, one direction can be selected in the horizontal direction and one in the vertical direction, so that 4 images are obtained, each of which is an image to be detected (a sketch of this cropping is given below). In the subsequent steps, each image to be detected may be processed in the same manner. Of course, the image cropping manner is not limited to the above 4 types, and may be adjusted according to the number of pixels in the gray-scale image and the number of pixels in the target image.
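A minimal NumPy sketch of the crop-amount formula and the four crop directions; NumPy and the variable names are assumptions, and the sketch assumes cropping is actually needed (A is not a multiple of B):

```python
import numpy as np

def crop_candidates(gray: np.ndarray, b: int, d: int) -> list:
    """Produce the four crops of fig. 4-7: remove dx columns from one horizontal
    side and dy rows from one vertical side, with dx = B - mod(A, B) and
    dy = D - mod(C, D) as in the formula above."""
    c, a = gray.shape                # C rows (vertical), A columns (horizontal)
    dx = b - a % b                   # pixels to crop horizontally
    dy = d - c % d                   # pixels to crop vertically
    return [
        gray[dy:, dx:],              # crop top rows and left columns
        gray[dy:, :a - dx],          # crop top rows and right columns
        gray[:c - dy, dx:],          # crop bottom rows and left columns
        gray[:c - dy, :a - dx],      # crop bottom rows and right columns
    ]

# Example from the text: a 150 x 150 gray image and a 70 x 70 target image give
# dx = dy = 70 - mod(150, 70) = 60, so each of the four crops is 90 x 90.
gray = np.zeros((150, 150))
assert all(img.shape == (90, 90) for img in crop_candidates(gray, 70, 70))
```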
It should be noted that, the image enhancement can purposefully emphasize the whole or partial characteristics of the image, make the original unclear image clear or emphasize some interesting features, enlarge the differences between different object features in the image, inhibit the uninteresting features, improve the image quality, enrich the information content, strengthen the image interpretation and recognition effects, and meet the needs of some special analysis. Here, the defect enhancement processing is one of the processing modes in image enhancement, that is, enhancement processing is performed on defects in an image. Optionally, defect enhancement processing can be performed on the obtained image to be detected through a mask image to obtain a defect enhanced image, wherein a numerical value corresponding to each pixel in the mask image represents the probability of occurrence of the defect. Or the defect enhancement processing can be performed on the image to be detected by other related methods.
Referring to fig. 8, fig. 8 shows a flowchart of defect enhancement processing in an embodiment of the present disclosure, which may include the steps of:
Step S810, performing time-frequency transformation, amplitude normalization and frequency-time transformation on the acquired image to be detected to obtain a transformed image.
The distribution of a signal in the frequency domain can be obtained through time-frequency transformation. An image is also a signal, a discrete one, and its spectrum data can be obtained by applying a time-frequency transformation to it. The time-frequency transformation may be a discrete Fourier transform, a wavelet transform, or the like. The magnitude of the frequency indicates how rapidly the signal changes: the larger the frequency, the more rapidly the signal changes; the smaller the frequency, the flatter the signal. In an image, high-frequency components are often edge signals and noise, while low-frequency components correspond to slowly changing content such as the image contour and background. It should be noted that there is no one-to-one correspondence between points on the spectrogram obtained by the time-frequency transformation and points on the original image.
The absolute value of the frequency-domain data corresponding to the image to be detected is taken and the amplitude is normalized to obtain an amplitude image; see fig. 9, which shows a schematic diagram of the amplitude image in an embodiment of the disclosure. The abscissa and ordinate in the figure represent the numbers of pixels in the horizontal and vertical directions, respectively; it can be seen that both are 512. Dividing the frequency-domain data by the amplitude gives the phase data, and the corresponding phase image can be seen in fig. 10. Then, the frequency-time transformation (for example, an inverse discrete Fourier transform) is performed, and the real part is taken as the transformed image, which can be seen in fig. 11.
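A minimal sketch of step S810 using NumPy's discrete Fourier transform (the patent does not name a library, and the small epsilon guarding against division by zero is an added assumption):

```python
import numpy as np

def phase_only_transform(image: np.ndarray) -> np.ndarray:
    """Time-frequency transform, amplitude normalization, then the inverse
    (frequency-time) transform, keeping the real part as the transformed image."""
    spectrum = np.fft.fft2(image)                # discrete Fourier transform
    amplitude = np.abs(spectrum)                 # amplitude data (fig. 9)
    phase_data = spectrum / (amplitude + 1e-12)  # divide by the amplitude -> phase data (fig. 10)
    return np.real(np.fft.ifft2(phase_data))     # inverse transform, real part (fig. 11)
```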
Step S820, performing noise reduction processing on the transformed image to obtain a noise-reduced image.
In the embodiment of the disclosure, noise reduction processing may also be performed on the image. For example, the transformed image may be gaussian filtered to obtain a smoothed noise-reduced image, which may be seen in fig. 12. The filtering radius of the Gaussian filter can be a value of 1-10, and experiments show that the noise reduction effect is good when the filtering radius is 2.
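For instance, with SciPy (an assumed choice) the noise reduction of step S820 might look like the sketch below; treating the stated filtering radius as the Gaussian sigma is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(transformed: np.ndarray, radius: float = 2.0) -> np.ndarray:
    """Gaussian smoothing of the transformed image; the default of 2 follows the
    filtering radius reported to work well above."""
    return gaussian_filter(transformed, sigma=radius)
```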
In step S830, the noise-reduced image is processed through the mask image with the same resolution as the noise-reduced image, so as to obtain a mask-processed image.
A mask is a region or process that controls image processing by occluding (wholly or partially) the image to be processed with a selected image, graphic, or object. In the embodiment of the disclosure, the noise-reduced image may be processed through the mask image to enhance the defects in the image. In the noise-reduced image, the closer a pixel position is to the center, the higher the probability that a defect occurs there, so the probability of defect occurrence at different positions can be set to obtain the mask image; that is, the value corresponding to each pixel in the mask image represents the probability of occurrence of a defect. The gray value corresponding to each pixel in the mask image is inversely related to the distance between the pixel and the center of the image; that is, the closer a pixel is to the center of the mask image, the larger its gray value. Alternatively, the gray values corresponding to the pixels in the mask image may follow a normal distribution, with the gray value corresponding to the center of the mask image being the largest.
In addition, the probability of occurrence of defects at the edge position is low, so that the value corresponding to the preset number (for example, 5, 6, etc.) of pixels at the edge in the mask image can be set to zero. Referring to fig. 13, fig. 13 shows a schematic diagram of a mask image in an embodiment of the disclosure, and it can be seen that the brightness of the center position is higher, which indicates a larger value; the brightness of the edge position is lower, indicating a smaller value.
In the embodiment of the disclosure, the resolution of the mask image and the noise reduction image is the same, for example, 512×512. The processing procedure of the mask image on the noise reduction image may specifically be that, for each pixel in the noise reduction image, a product of a value corresponding to the pixel and a value corresponding to the pixel in the mask image is taken as a value corresponding to the pixel. It will be appreciated that the enhancement is performed on the central region of the noise reduced image, i.e. on defects in the noise reduced image.
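A sketch of a mask of this kind and its element-wise application; sigma = 128 and the 5-pixel edge band are illustrative values, not taken from the patent:

```python
import numpy as np

def gaussian_mask(size: int = 512, sigma: float = 128.0, edge: int = 5) -> np.ndarray:
    """Mask whose values follow a 2D normal (Gaussian) profile: largest at the
    center, decreasing with distance from the center, with a band of `edge`
    pixels at the border set to zero."""
    y, x = np.mgrid[0:size, 0:size]
    cy = cx = (size - 1) / 2
    mask = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    mask[:edge, :] = mask[-edge:, :] = 0   # zero the preset number of edge rows
    mask[:, :edge] = mask[:, -edge:] = 0   # and edge columns
    return mask

def apply_mask(denoised: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Element-wise product: each pixel value is multiplied by the mask value
    at the same position, enhancing the central region."""
    return denoised * mask
```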
The noise reduction image is an image subjected to time-frequency conversion and frequency-time conversion, and the numerical value corresponding to each pixel is not a pixel value, and has no physical meaning. The masked image can be seen in fig. 14, where the center region is highlighted compared to the edge regions.
Step S840, normalizing the image after mask processing to obtain a defect enhanced image.
In the embodiment of the disclosure, the image after mask processing may also be normalized. For example, the standard deviation of the pixels in the image after mask processing may be calculated first, and then normalized according to the standard deviation, to obtain the defect enhanced image.
In step S220, the defect-enhanced image is downsampled to obtain a low-resolution image.
In the embodiment of the disclosure, the defect enhanced image can be post-processed to accurately determine the defects in the image. Optionally, the defect enhanced image may be subjected to multiple downsampling, multiple denoising and multiple normalization to obtain a low resolution image, which may specifically include the following steps:
First, max-pooling may be performed on the defect-enhanced image with a 4×4 window to obtain a max-pooled image. If the resolution of the defect-enhanced image is 512×512, the resolution of the max-pooled image is 128×128. Of course, an 8×8 window may also be used for the max-pooling, which is not limited here. After that, the upper limit value max_value of the max-pooled image can be obtained; experiments show that when half of the upper limit value is taken as the lower limit value min_value and min_value is used as the threshold for defect position detection, the determined defect positions are more accurate.
Second, the max-pooled image is divided by the lower limit value, and may then be logarithmically processed, normalized, and Gaussian filtered, where the Gaussian filter may be a 3×3 Gaussian filter or another type of filter.
Third, 2×2 average downsampling may be performed to obtain a defect image of smaller resolution (64×64); this may be performed two or more times, resulting in a defect image of 32×32 or smaller resolution.
Finally, Gaussian filtering and normalization are performed on the defect image to obtain the low-resolution image (a sketch of this chain is given below). It can be seen that when the resolution of the original image is 1024×1024, the resolution of the obtained low-resolution image may be 32×32.
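The post-processing chain above might be arranged as follows; where the text leaves details open (the exact form of normalization and the Gaussian sigma), the choices below are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def block_reduce(img: np.ndarray, k: int, op) -> np.ndarray:
    """Apply `op` (np.max or np.mean) over non-overlapping k x k blocks."""
    h, w = img.shape
    return op(img.reshape(h // k, k, w // k, k), axis=(1, 3))

def to_low_resolution(enhanced: np.ndarray) -> tuple:
    """512x512 defect-enhanced image -> 32x32 low-resolution image, plus the
    min_value threshold used later for defect position detection."""
    pooled = block_reduce(enhanced, 4, np.max)             # 4x4 max pooling: 512x512 -> 128x128
    max_value = pooled.max()                               # upper limit value
    min_value = max_value / 2                              # lower limit value
    x = np.log(np.clip(pooled / min_value, 1e-12, None))   # divide by the lower limit, then logarithm
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)        # normalization (min-max form assumed)
    x = gaussian_filter(x, sigma=1)                        # Gaussian filtering (sigma assumed)
    x = block_reduce(x, 2, np.mean)                        # 2x2 average downsampling: 128x128 -> 64x64
    x = block_reduce(x, 2, np.mean)                        # second pass: 64x64 -> 32x32
    x = gaussian_filter(x, sigma=1)                        # final Gaussian filtering
    low_res = (x - x.min()) / (x.max() - x.min() + 1e-12)  # final normalization
    return low_res, min_value
```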
In step S230, binarization processing is performed on the low resolution image to determine the position of the defect in the low resolution image.
In the embodiment of the disclosure, experiments show that the larger the value corresponding to a pixel in a low-resolution image is, the smaller the probability that the position of the pixel is a defect is; the smaller the value corresponding to a pixel in the low resolution image, the greater the probability that the pixel is defective. Alternatively, the position of the pixel having a corresponding value smaller than the preset threshold (for example, the lower limit value min_value) in the low resolution image may be used as the position of the defect.
In addition, 4×4 max filtering may be performed on the low-resolution image to obtain an 8×8 defect image. Then, the defect image is binarized using a preset threshold: specifically, a pixel whose corresponding value is smaller than the preset threshold may be marked as 1, and a pixel whose corresponding value is not smaller than the preset threshold may be marked as 0. The binarized image can be seen in fig. 15; it can be seen that there are two pixels marked 1.
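A sketch of this binarization; the threshold argument stands for the preset threshold, e.g. the lower limit value min_value mentioned above:

```python
import numpy as np

def binarize(low_res: np.ndarray, threshold: float) -> np.ndarray:
    """4x4 max filtering of the 32x32 low-resolution image down to 8x8, then
    mark pixels whose value is below the preset threshold as 1, others as 0."""
    h, w = low_res.shape
    pooled = low_res.reshape(h // 4, 4, w // 4, 4).max(axis=(1, 3))
    return (pooled < threshold).astype(np.uint8)

# The defect positions in the 8x8 image are the coordinates of the 1s:
# rows, cols = np.nonzero(binarize(low_res, min_value))
```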
In step S240, the position of the defect in the image to be detected is determined according to the position of the defect in the low resolution image and the mapping relationship between the low resolution image and the image to be detected.
It should be noted that, after determining the position of the defect in the low resolution image, the position of the defect in the image to be detected may be determined according to the mapping relationship between the low resolution image and the image to be detected. Since the resolution difference between the low resolution image and the image to be detected is large, if mapping is directly performed, the determined defect position error is large, and here, layer-by-layer mapping can be performed.
For example, when the resolution of the low-resolution image is 8×8, the low-resolution image is mapped to an image with a resolution of 32×32. Specifically, the location of the defect in the 32×32 image may be determined according to the location of the defect in the 8×8 image. Then, the values corresponding to the pixels at that location are normalized; the closer the normalized value is to 1, the greater the probability that it represents a defect, so the position of a pixel whose normalized value is larger than a preset normalized threshold can be determined as a defect position, where the threshold may be 0.9, 0.8, or the like, and is not limited here. The normalized values corresponding to the pixels in the 32×32 image can be seen in fig. 16; fig. 16 includes the values corresponding to part of the pixels in the 32×32 image, and the values of the other areas are 0 and are not shown in fig. 16.
After determining the location of the defect in the image with a resolution of 32×32, the location of the defect in the image with a resolution of 128×128 may be determined again by the same method, and so on, and finally the location of the defect in the image with a resolution of 512×512 is determined. Of course, the location of the defect in the original image may also be determined. Experiments show that the accuracy of image defect determination of the embodiment of the disclosure is 99.99%.
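One possible reading of this layer-by-layer mapping, assuming the intermediate image at each resolution is kept and normalization is done per mapped block (both assumptions; the patent leaves these details open):

```python
import numpy as np

def refine_positions(coarse_positions, finer: np.ndarray, scale: int,
                     norm_threshold: float = 0.8) -> list:
    """Map defect positions one level up: each coarse pixel (r, c) corresponds to
    a scale x scale block of the finer image; block values are normalized to
    [0, 1] and positions whose normalized value exceeds the preset threshold
    (e.g. 0.8 or 0.9) are kept as defect positions in the finer image."""
    refined = []
    for r, c in coarse_positions:
        block = finer[r * scale:(r + 1) * scale, c * scale:(c + 1) * scale]
        normalized = (block - block.min()) / (block.max() - block.min() + 1e-12)
        rows, cols = np.nonzero(normalized > norm_threshold)
        refined.extend((r * scale + i, c * scale + j) for i, j in zip(rows, cols))
    return refined

# e.g. 8x8 -> 32x32 positions: refine_positions(defects_8x8, image_32x32, scale=4),
# then repeat level by level up to the 512x512 image.
```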
Therefore, in the image defect determining method of the embodiment of the disclosure, defects in the image can be automatically identified, and manual operation is avoided, so that the efficiency of image defect detection is improved.
In the embodiment of the disclosure, the image to be detected is classified according to a defect classification model to obtain the defect category of the image to be detected, where the defect classification model is used to identify one or more defect categories. After step S240, a target defect image may also be cropped from the image to be detected according to the position of the defect in the image to be detected and the desired crop size. More specifically, the target defect image can be cropped from the image to be detected, taking the center point of the defect as a reference, according to the position of the defect, the defect size, and the image size expected to be cropped. Alternatively, the target defect image may be cropped from the original image. In this way, since the defect occupies a large proportion of the target defect image, the problem of low classification accuracy caused by too much background information being identified during defect classification can be avoided.
The target defect image is then classified according to the defect classification model to obtain the defect category of the target defect image. The defect classification model can be trained by machine learning. Because the training set may be imbalanced, the defect classification model may be trained by a model integration method in order to improve its accuracy.
The model integration method is an algorithm that combines several basic models (commonly called weak learners) into one prediction model, so as to reduce variance or bias, or to improve prediction. A weak learner may be referred to as a level-0 learner, and the prediction model obtained by combining the weak learners may be referred to as a level-1 learner. The stacking integration method trains a model that is used to combine the other models: first, a plurality of different base models are trained, and then another model is trained using the outputs of the base models as its inputs to obtain the final output.
In one exemplary embodiment of the present disclosure, the defect classification model may be trained by the stacking integration method based on a pre-trained average model, binary classification model, and exponential model, as well as a training set. That is, the base models may include an average model, a binary classification model, and an exponential model. Referring to fig. 17, fig. 17 shows a schematic structural diagram of the defect classification model in an embodiment of the disclosure, where the features of the average model, the binary classification model, and the exponential model are concatenated to obtain the defect classification model. Of course, the base models may be other models, and the model integration method is not limited to the stacking integration method.
When the defect classification model is trained, the model parameters of the pre-trained base models can be locked, the number of epochs is set to 120, the batch size is set to 256, the training optimizer may be the Adadelta optimizer or the like, and the learning rate is 0.3, so as to obtain the defect classification model. When a complete dataset passes through the neural network once, forward and backward, the process is called an epoch; the batch size represents the number of samples passed through the neural network at a time. The Adadelta optimizer is an optimization method that adapts the learning rate and converges quickly.
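Purely for illustration, these hyperparameters map onto a Keras-style training call roughly as follows; Keras/TensorFlow is an assumption (the patent does not name a framework), and `defect_model`, `base_models`, `x_train`, `y_train` and the loss are placeholders:

```python
import tensorflow as tf

for base in base_models:
    base.trainable = False                                      # lock the pre-trained base models

defect_model.compile(
    optimizer=tf.keras.optimizers.Adadelta(learning_rate=0.3),  # Adadelta, learning rate 0.3
    loss="categorical_crossentropy",                            # assumed loss; not stated in the text
    metrics=["accuracy"],
)
defect_model.fit(x_train, y_train, epochs=120, batch_size=256)  # 120 epochs, batch size 256
```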
In the embodiment of the disclosure, the training set contains correspondences between images and defect categories, and the defect category of each image may be labeled in advance. There can be many defect categories, and in order to improve the accuracy of the base models, the training set used for training each model can differ according to the model. For example, the number of images corresponding to each defect category in the training set of the average model may be equal or nearly equal; the input of the average model is an image, and the output is the defect category of the image. The two classes in the training set of the binary classification model are the defect category with the highest proportion and all other defect categories; the input of the binary classification model is an image, and the output is one of these two classes. The number of images corresponding to each defect category in the training set of the exponential model can follow an exponential distribution, normalized when the specific proportions are calculated; the input of the exponential model is an image, and the output is the defect category of the image.
When a model is trained with a small amount of training data, a pre-trained model enables fast convergence and better accuracy; with a sufficient amount of data, a pre-trained model still enables faster convergence. Therefore, in order to improve the speed and accuracy of model training, the average model, the binary classification model, and the exponential model can all be trained based on an ImageNet pre-trained model. A pre-trained model has already been trained on a large amount of data to perform a specific task (e.g., image classification). An ImageNet pre-trained model is a model pre-trained on ImageNet, a huge database of millions of pictures spanning a large number of classes, which can be used for image classification.
It should be noted that after the image is cut, the number of the obtained images to be detected may be plural, and then the plural images to be detected may be classified according to the defect classification model to obtain the defect type of each image to be detected; and when the defect categories corresponding to the plurality of images to be detected are the same, taking the defect category as the defect category of the original image. For example, when the target defect images corresponding to the 4 images to be detected are classified by the defect classification model, the target defect images corresponding to the 4 images to be detected can be respectively input into the defect classification model, so that defect types corresponding to the target defect images corresponding to the 4 images to be detected can be obtained. And determining the final defect category by judging the consistency of the 4 defect categories. For example, when the 4 defect categories are the same, it may be determined that the defect category is the final defect category; when the 4 defect categories are different, the classification accuracy can be further improved by manually determining the defect categories.
Optionally, the defect classification model of the embodiment of the present disclosure may also process multiple tasks in parallel, that is, it may identify defect categories from different dimensions, for example including the defect category from the process perspective, the defect category from the repair perspective, and so on. For example, the categories of images may include a first category and a second category, each comprising a plurality of defect categories, and each image may correspond to one defect category in the first category and one defect category in the second category. The defect classification model may determine the defect category in the first category and the defect category in the second category corresponding to an image simultaneously. Correspondingly, during training, the correspondence between an image in the training set and defect categories refers to the correspondence between the image and a defect category in the first category and a defect category in the second category, that is, one image corresponds to two defect categories. It will be appreciated that, depending on how the defect categories are divided, an image may also correspond to more than two defect categories. Thus, the defect classification model may include one or more different groups of fully connected layers and normalization layers, each group corresponding to a different classification task.
Referring to fig. 18, fig. 18 shows another schematic structural diagram of the defect classification model in an embodiment of the disclosure. It can be seen that the defect classification model includes an integrated network, a first fully connected layer, a first normalization layer, a second fully connected layer, and a second normalization layer. The integrated network may include an input layer, convolution layers, pooling layers, and the like. After a defect image passes through the integrated network, the defect category in the first category corresponding to the image can be determined through the first fully connected layer and the first normalization layer, and the defect category in the second category corresponding to the image can be determined through the second fully connected layer and the second normalization layer.
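Read alongside figs. 17-20, the overall shape might be sketched in a Keras functional style as below; the input shape, class counts, and the use of softmax as the "normalization layer" are assumptions, and `base_models` stands for the three pre-trained base models, each producing a feature vector:

```python
from tensorflow.keras import layers, Model

def build_defect_classifier(base_models, n_first: int, n_second: int,
                            input_shape=(224, 224, 3)) -> Model:
    """Stacking-style classifier: concatenate the feature outputs of the (frozen)
    average, binary classification, and exponential base models, then attach one
    fully connected + softmax head per classification task."""
    inputs = layers.Input(shape=input_shape)
    features = layers.Concatenate()([base(inputs) for base in base_models])  # feature concatenation, fig. 17
    head_1 = layers.Dense(n_first, activation="softmax", name="dense_1")(features)   # first-category head
    head_2 = layers.Dense(n_second, activation="softmax", name="dense_2")(features)  # second-category head
    return Model(inputs=inputs, outputs=[head_1, head_2])
```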
Referring to fig. 19, fig. 19 shows a network structure schematic of each basic model in an embodiment of the disclosure, including: input layer, convolution layer, max pooling layer, batch regularization layer, activation layer, max pooling layer, etc. The first column indicates the names of the layers (layers) in the model, the second column indicates the Output of the layers (Output shape), and the third column indicates the number of network parameters (parameters #) in the layers. Referring to fig. 20, fig. 20 shows a network structure schematic diagram of a defect classification model in an embodiment of the disclosure, including: input layer, connection layer, full connection layer, etc. It can be seen that the defect image, after being processed by the input layer, enters the connection layer after passing through three basic models (model_1_1, model_4_1, and model_7_2), respectively. Then, the defect type in the first type and the defect type in the second type corresponding to the defect image can be determined by passing through the first full connection layer (dense_1) and the second full connection layer (dense_2) respectively. Through verification, the accuracy of the defect classification model can be determined to be more than 95%. It can be seen that the method has higher accuracy.
The image defect determining method of the embodiment of the disclosure adopts a defect enhancing processing method, for example, defects in an image can be highlighted through discrete Fourier transform, inverse discrete Fourier transform and mask image. Then, a low-resolution image is obtained through downsampling, so that the position of the defect in the image can be conveniently and accurately determined. Finally, the accuracy of the defect position in the finally determined original image can be improved through a layer-by-layer mapping method. Therefore, the method and the device can automatically identify the position of the defect in the image, do not need manual identification, and can improve the identification efficiency. And the defect classification model is obtained through training by a model integration method, and the target defect image is classified by the defect classification model, so that the classification accuracy can be improved.
It should be noted that although the steps of the methods in the present disclosure are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Corresponding to the above method embodiments, the present disclosure further provides an image defect determining apparatus, referring to fig. 21, fig. 21 is a schematic structural diagram 2100 of the image defect determining apparatus according to the embodiment of the present disclosure, including:
a defect enhancement processor 2110 configured to perform defect enhancement processing on the acquired image to be detected, to obtain a defect enhanced image;
A downsampling processor 2120 configured to downsample the defect-enhanced image to obtain a low-resolution image;
a binarization processor 2130 configured to binarize the low resolution image to determine a location of the defect in the low resolution image;
An image defect determining processor 2140 configured to determine a position of a defect in the image to be detected based on the position of the defect in the low resolution image and a mapping relationship between the low resolution image and the image to be detected.
Optionally, the image defect determining apparatus of the embodiment of the present disclosure further includes:
The preprocessor is configured to acquire an original image, and perform downsampling and gray processing on the original image to obtain a gray image; and determining an image to be detected according to the gray level image.
Optionally, the preprocessor determines the image to be detected from the gray-scale image as follows:
when the number of pixels in the gray-scale image is not an integer multiple of the number of pixels of the target image, the gray-scale image is cropped to obtain a plurality of images to be detected, so that the number of pixels in each image to be detected is an integer multiple of the number of pixels of the target image.
Optionally, the preprocessor crops the gray-scale image as follows:
if the numbers of pixels in the horizontal and vertical directions of the gray-scale image are A and C, respectively, and the numbers of pixels in the horizontal and vertical directions of the target image are B and D, respectively, then according to the formulas
Δx = B - mod(A, B), Δy = D - mod(C, D), the number of pixels Δx to be cropped in the horizontal direction and the number of pixels Δy to be cropped in the vertical direction of the gray-scale image are determined, where mod denotes the remainder function;
the gray-scale image is then cropped according to Δx and Δy.
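A short sketch of this preprocessing and cropping rule, assuming OpenCV, one level of downsampling, and a hypothetical 224 x 224 target tile size; the outer modulo keeps the crop at zero when a dimension is already an integer multiple, and cropping from the bottom-right edge is an arbitrary choice for illustration.

```python
import cv2

def preprocess_and_tile(original_path, tile_w=224, tile_h=224):
    """Downsample, convert to gray scale, crop, and split into target-size tiles."""
    original = cv2.imread(original_path, cv2.IMREAD_COLOR)
    gray = cv2.cvtColor(cv2.pyrDown(original), cv2.COLOR_BGR2GRAY)  # downsampling + gray-scale processing
    height, width = gray.shape                        # C and A in the formulas above
    dx = (tile_w - width % tile_w) % tile_w           # delta-x = B - mod(A, B)
    dy = (tile_h - height % tile_h) % tile_h          # delta-y = D - mod(C, D)
    cropped = gray[: height - dy, : width - dx]       # both dimensions become integer multiples
    return [cropped[r : r + tile_h, c : c + tile_w]   # the plurality of images to be detected
            for r in range(0, cropped.shape[0], tile_h)
            for c in range(0, cropped.shape[1], tile_w)]
```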
Optionally, the defect enhancement processor is specifically configured to perform defect enhancement processing on the acquired image to be detected through a mask image to obtain the defect-enhanced image, where the value corresponding to each pixel in the mask image represents the probability of occurrence of a defect.
Optionally, the defect enhancement processor performs defect enhancement processing on the acquired image to be detected through the mask image to obtain the defect-enhanced image by the following steps:
performing a time-to-frequency transformation, amplitude normalization, and a frequency-to-time (inverse) transformation on the acquired image to be detected to obtain a transformed image;
performing noise reduction on the transformed image to obtain a noise-reduced image;
processing the noise-reduced image through a mask image having the same resolution as the noise-reduced image to obtain a mask-processed image;
normalizing the mask-processed image to obtain the defect-enhanced image.
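The transforms are not spelled out here beyond a discrete Fourier transform pair, so the sketch below assumes a phase-only (amplitude-normalized) Fourier reconstruction and Gaussian filtering for the noise reduction step; the kernel size and the NumPy/OpenCV calls are illustrative choices, not the disclosed implementation.

```python
import cv2
import numpy as np

def enhance_defects(image_to_detect, mask_image):
    # Time-to-frequency transform, amplitude normalization, inverse transform.
    spectrum = np.fft.fft2(image_to_detect.astype(np.float32))
    phase_only = spectrum / (np.abs(spectrum) + 1e-8)      # amplitude normalization
    transformed = np.abs(np.fft.ifft2(phase_only))         # transformed image

    # Noise reduction of the transformed image.
    denoised = cv2.GaussianBlur(transformed.astype(np.float32), (5, 5), 0)

    # Mask processing with a mask of the same resolution as the noise-reduced image.
    masked = denoised * mask_image

    # Normalization to obtain the defect-enhanced image.
    return cv2.normalize(masked, None, 0.0, 1.0, cv2.NORM_MINMAX)
```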
Optionally, the gray value corresponding to each pixel in the mask image is inversely related to the distance between the pixel and the center of the mask image.
Optionally, the gray values corresponding to the pixels in the mask image follow a normal distribution, and the gray value corresponding to the center of the mask image is the largest.
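A minimal sketch of such a mask, assuming a two-dimensional normal distribution centered on the image; the standard deviation here is an arbitrary illustrative value.

```python
import numpy as np

def gaussian_mask(height, width, sigma=0.5):
    """Mask whose values follow a 2-D normal distribution, largest at the center."""
    y = np.linspace(-1.0, 1.0, height)[:, None]
    x = np.linspace(-1.0, 1.0, width)[None, :]
    mask = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return mask / mask.max()        # peak value 1.0 at the center, decreasing with distance
```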
Optionally, the downsampling processor is specifically configured to perform downsampling, noise reduction, and normalization on the defect-enhanced image multiple times to obtain the low-resolution image.
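A sketch of the repeated downsampling, noise reduction, and normalization, assuming OpenCV's image pyramid; the number of levels and the 3 x 3 Gaussian kernel are assumptions for illustration.

```python
import cv2
import numpy as np

def to_low_resolution(defect_enhanced, levels=2):
    low_res = defect_enhanced.astype(np.float32)
    for _ in range(levels):
        low_res = cv2.pyrDown(low_res)                                       # downsampling
        low_res = cv2.GaussianBlur(low_res, (3, 3), 0)                       # noise reduction
        low_res = cv2.normalize(low_res, None, 0.0, 1.0, cv2.NORM_MINMAX)    # normalization
    return low_res
```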
Optionally, the binarization processor is specifically configured to take the positions of pixels in the low-resolution image whose values are smaller than a preset threshold as the positions of defects.
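A sketch of this binarization rule together with the mapping back to the image to be detected; the threshold value and the assumption that each downsampling level halves the resolution are illustrative.

```python
import numpy as np

def defect_positions(low_res, threshold=0.2, levels=2):
    rows, cols = np.where(low_res < threshold)       # keep pixels below the preset threshold
    scale = 2 ** levels                              # each pyramid level halves the resolution
    return [(int(r) * scale, int(c) * scale) for r, c in zip(rows, cols)]
```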
Optionally, the image defect determining apparatus of the embodiment of the present disclosure further includes:
The defect classification processor is configured to classify the image to be detected according to a defect classification model to obtain the defect category of the image to be detected, wherein the defect classification model is used for identifying one or more defect categories.
Optionally, the defect classification processor is further configured to classify the plurality of images to be detected according to the defect classification model to obtain the defect category of each image to be detected, and, when the defect categories corresponding to the plurality of images to be detected are the same, to take that defect category as the defect category of the original image.
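A sketch of this agreement rule, assuming the tiles come from the cropping sketch above and that `classifier` is the hypothetical two-head Keras model sketched earlier; only the first-type head is used here for brevity.

```python
import numpy as np

def classify_original(tiles, classifier):
    # Stack the gray-scale tiles into a batch of shape (n, H, W, 1) and scale to [0, 1].
    batch = np.stack([t[..., None] for t in tiles]).astype("float32") / 255.0
    first_head, _second_head = classifier.predict(batch)
    categories = first_head.argmax(axis=1)           # defect category per image to be detected
    if np.all(categories == categories[0]):          # all tiles agree on the same category
        return int(categories[0])                    # taken as the category of the original image
    return None                                      # otherwise leave the original image undecided
```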
Optionally, the defect classification model is trained by a model integration method.
Optionally, the image defect determining apparatus of the embodiment of the present disclosure further includes:
The defect classification model determining processor is configured to train the defect classification model through a stacking integration (stacking ensemble) method based on a pre-trained average model, classification model, and index model, together with a training set.
Optionally, the defect classification model includes one or more sets of different fully connected layers and normalization layers, each set of fully connected layers and normalization layers corresponding to a different classification task.
Optionally, the average model, the classification model and the index model are all trained based on an ImageNet pre-training model.
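The average, classification, and index models are not defined in detail here, so the sketch below shows a generic stacking ensemble with ordinary scikit-learn estimators standing in as the three pre-trained base models; the cross-validated meta-features and logistic-regression meta-learner are standard stacking choices, not the disclosed training procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def train_stacking(base_models, features, labels):
    # Out-of-fold class probabilities from each base model become meta-features.
    meta_features = np.hstack([
        cross_val_predict(model, features, labels, cv=5, method="predict_proba")
        for model in base_models
    ])
    meta_learner = LogisticRegression(max_iter=1000).fit(meta_features, labels)
    # Refit each base model on the full training set for later inference.
    for model in base_models:
        model.fit(features, labels)
    return base_models, meta_learner
```

At inference time, the base models' predicted probabilities for a new sample are concatenated in the same order and passed to the meta-learner, which outputs the final defect category.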
Each processor in the above apparatus may be a general-purpose processor, such as a central processing unit or a network processor, or may be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processors in the apparatus may be independent processors or may be integrated together.
In an exemplary embodiment of the present disclosure, there is also provided an electronic device, including: a processor; and a memory configured to store instructions executable by the processor; wherein the processor is configured to perform the method of any of the above exemplary embodiments.
Fig. 22 shows a schematic structural diagram of a computer system for implementing an electronic device of an embodiment of the present disclosure. It should be noted that the computer system 2200 of the electronic device shown in fig. 22 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 22, the computer system 2200 includes a central processor 2201, which can perform various appropriate actions and processes according to a program stored in a read-only memory 2202 or a program loaded from a storage portion 2208 into a random access memory 2203. Various programs and data necessary for system operation are also stored in the random access memory 2203. The central processor 2201, the read-only memory 2202, and the random access memory 2203 are connected to one another via a bus 2204. An input/output interface 2205 is also connected to the bus 2204.
The following components are connected to the input/output interface 2205: an input portion 2206 including a keyboard, a mouse, and the like; an output portion 2207 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage portion 2208 including a hard disk and the like; and a communication portion 2209 including a network interface card such as a local area network (LAN) card or a modem. The communication portion 2209 performs communication processing via a network such as the Internet. A drive 2210 is also connected to the input/output interface 2205 as needed. A removable medium 2211, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 2210 as needed, so that a computer program read therefrom is installed into the storage portion 2208 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 2209, and/or installed from the removable medium 2211. The computer programs, when executed by the central processor 2201, perform the various functions defined in the apparatus of the present application.
In an exemplary embodiment of the present disclosure, there is also provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a computer, performs the method of any of the above.
The non-transitory computer-readable storage medium of the present disclosure may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium, other than a computer-readable storage medium, that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to: wireless, wireline, optical fiber cable, radio frequency, or any suitable combination of the foregoing.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (19)

1. An image defect determining method, comprising:
Performing defect enhancement processing on the acquired image to be detected through a mask image to obtain a defect enhanced image, wherein the numerical value corresponding to each pixel in the mask image represents the probability of occurrence of the defect;
Performing downsampling processing on the defect enhanced image to obtain a low-resolution image;
Performing binarization processing on the low-resolution image, and determining the position of a defect in the low-resolution image;
And determining the position of the defect in the image to be detected according to the position of the defect in the low-resolution image and the mapping relation between the low-resolution image and the image to be detected.
2. The method as recited in claim 1, further comprising:
acquiring an original image, and performing downsampling and gray scale processing on the original image to obtain a gray scale image;
And determining the image to be detected according to the gray level image.
3. The method of claim 2, wherein said determining said image to be detected from said gray scale image comprises:
when the number of pixels in the gray level image is not an integer multiple of the number of pixels of the target image, cutting the gray level image to obtain a plurality of images to be detected, and enabling the number of pixels in the images to be detected to be an integer multiple of the number of pixels of the target image.
4. A method according to claim 3, wherein said cropping the grayscale image comprises:
If the numbers of pixels in the horizontal and vertical directions of the gray-scale image are A and C, respectively, and the numbers of pixels in the horizontal and vertical directions of the target image are B and D, respectively, then according to the formulas:
Δx = B - mod(A, B), Δy = D - mod(C, D), determining the number of pixels Δx cropped in the horizontal direction and the number of pixels Δy cropped in the vertical direction of the gray-scale image, where mod represents a remainder function;
cropping the gray-scale image according to Δx and Δy.
5. The method according to claim 1, wherein performing defect enhancement processing on the acquired image to be detected through the mask image to obtain a defect enhanced image comprises:
performing time-frequency transformation, amplitude normalization and frequency transformation on the acquired image to be detected to obtain a transformed image;
Carrying out noise reduction treatment on the transformed image to obtain a noise-reduced image;
processing the noise reduction image through a mask image with the same resolution as the noise reduction image to obtain a mask processed image;
Normalizing the image after mask processing to obtain a defect enhanced image.
6. The method of claim 1, wherein the gray value for each pixel in the mask image is inversely related to the distance between the pixel and the center of the mask image.
7. The method of claim 6, wherein the gray values corresponding to pixels in the mask image follow a normal distribution and the gray value corresponding to a center of the mask image is the largest.
8. The method of claim 1, wherein downsampling the defect-enhanced image to obtain a low resolution image comprises:
And performing multiple downsampling treatment, multiple noise reduction treatment and multiple normalization treatment on the defect enhanced image to obtain a low-resolution image.
9. The method of claim 1, wherein binarizing the low resolution image to determine the location of defects in the low resolution image comprises:
and taking the position of the pixel with the corresponding value smaller than the preset threshold value in the low-resolution image as the position of the defect.
10. The method according to claim 1, wherein the method further comprises:
Classifying the image to be detected according to a defect classification model to obtain defect types of the image to be detected, wherein the defect classification model is used for identifying one or more defect types.
11. The method according to claim 10, wherein the method further comprises:
acquiring an original image, and performing downsampling and gray scale processing on the original image to obtain a gray scale image;
When the number of pixels in the gray level image is not an integer multiple of the number of pixels of the target image, cutting the gray level image to obtain a plurality of images to be detected, and enabling the number of pixels in the images to be detected to be an integer multiple of the number of pixels of the target image;
Classifying the images to be detected according to the defect classification model to obtain defect types of each image to be detected;
And when the defect categories corresponding to the plurality of images to be detected are the same, taking the defect category as the defect category of the original image.
12. The method of claim 10, wherein the defect classification model is trained by a model integration method.
13. The method according to claim 12, wherein the method further comprises:
And training to obtain the defect classification model through stacking integration method according to a pre-trained average model, a classification model, an index model and a training set.
14. The method of claim 10, wherein the defect classification model includes one or more different sets of fully connected layers and normalized layers, each set corresponding to a different classification task.
15. The method of claim 13, wherein the average model, the bi-classification model, and the index model are each trained based on an ImageNet pre-training model.
16. An image defect determining apparatus, comprising:
the defect enhancement processor is configured to perform defect enhancement processing on the acquired image to be detected through a mask image to obtain a defect enhanced image, wherein a numerical value corresponding to each pixel in the mask image represents the probability of occurrence of the defect;
a downsampling processor configured to downsample the defect-enhanced image to obtain a low-resolution image;
a binarization processor configured to binarize the low resolution image, and determine a position of a defect in the low resolution image;
an image defect determining processor configured to determine a position of a defect in the image to be detected according to the position of the defect in the low resolution image and a mapping relationship between the low resolution image and the image to be detected.
17. The apparatus of claim 16, wherein the apparatus further comprises:
and the defect classification processor is configured to classify the image to be detected according to a defect classification model to obtain defect types of the image to be detected, wherein the defect classification model is used for identifying one or more defect types.
18. An electronic device, comprising:
A processor; and
A memory configured to store executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1-15 via execution of the executable instructions.
19. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the method of any of claims 1-15.
CN202080000055.6A 2020-01-21 2020-01-21 Image defect determining method and device, electronic equipment and storage medium Active CN113498528B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073571 WO2021146935A1 (en) 2020-01-21 2020-01-21 Image defect determining method and apparatus, and electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN113498528A CN113498528A (en) 2021-10-12
CN113498528B true CN113498528B (en) 2024-07-23

Family

ID=76992775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080000055.6A Active CN113498528B (en) 2020-01-21 2020-01-21 Image defect determining method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113498528B (en)
WO (1) WO2021146935A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658167B (en) * 2021-08-24 2024-03-26 凌云光技术股份有限公司 Training image generation method and device containing defects
CN113838025A (en) * 2021-09-22 2021-12-24 联想(北京)有限公司 Processing method and processing apparatus
CN113920538B (en) * 2021-10-20 2023-04-14 北京多维视通技术有限公司 Object detection method, device, equipment, storage medium and computer program product
CN114119472A (en) * 2021-10-21 2022-03-01 东方晶源微电子科技(北京)有限公司 Defect classification method and device, equipment and storage medium
CN114549448B (en) * 2022-02-17 2023-08-11 中国空气动力研究与发展中心超高速空气动力研究所 Complex multi-type defect detection evaluation method based on infrared thermal imaging data analysis
CN114299066B (en) * 2022-03-03 2022-05-31 清华大学 Defect detection method and device based on salient feature pre-extraction and image segmentation
CN115100110A (en) * 2022-05-20 2022-09-23 厦门微亚智能科技有限公司 Defect detection method, device and equipment for polarized lens and readable storage medium
CN115049621A (en) * 2022-06-17 2022-09-13 清华大学 Micropipe defect detection method, device, equipment, storage medium and program product
CN115311448B (en) * 2022-09-02 2024-07-12 敬科(深圳)机器人科技有限公司 Method, device and storage medium for positioning net-shaped material
CN115661159B (en) * 2022-12-29 2023-03-07 成都数联云算科技有限公司 Panel defect enhancement detection method, system, device and medium
CN116468726B (en) * 2023-06-13 2023-10-03 厦门福信光电集成有限公司 Online foreign matter line detection method and system
CN116630322B (en) * 2023-07-24 2023-09-19 深圳市中翔达润电子有限公司 Quality detection method of PCBA (printed circuit board assembly) based on machine vision
CN117218097B (en) * 2023-09-23 2024-04-12 宁波江北骏欣密封件有限公司 Method and device for detecting surface defects of shaft sleeve type silk screen gasket part
CN117576105B (en) * 2024-01-17 2024-03-29 高科建材(咸阳)管道科技有限公司 Pipeline production control method and system based on artificial intelligence

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866427A (en) * 2010-07-06 2010-10-20 西安电子科技大学 Method for detecting and classifying fabric defects
CN104616255A (en) * 2015-01-11 2015-05-13 北京工业大学 Adaptive enhancement method based on mammographic image

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4502186B2 (en) * 2004-04-02 2010-07-14 大日本スクリーン製造株式会社 Defect detection apparatus and defect detection method
WO2017172611A1 (en) * 2016-03-28 2017-10-05 General Dynamics Mission Systems, Inc. System and methods for automatic solar panel recognition and defect detection using infrared imaging
US10810721B2 (en) * 2017-03-14 2020-10-20 Adobe Inc. Digital image defect identification and correction
CN110458791B (en) * 2018-05-04 2023-06-06 圆周率科技(常州)有限公司 Quality defect detection method and detection equipment
CN110276750A (en) * 2019-06-17 2019-09-24 浙江大学 A kind of extraction of any inclination angle wafer straight line side length and crystal grain area partition method
CN110672620B (en) * 2019-10-08 2022-08-26 英特尔产品(成都)有限公司 Chip defect detection method and system

Also Published As

Publication number Publication date
WO2021146935A1 (en) 2021-07-29
CN113498528A (en) 2021-10-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant