CN111476750A - Method, device and system for carrying out stain detection on imaging module and storage medium - Google Patents



Publication number
CN111476750A
Authority
CN
China
Legal status
Granted
Application number
CN201910006562.XA
Other languages
Chinese (zh)
Other versions
CN111476750B (en)
Inventor
马江敏
黄宇
吴高德
金壮壮
廖海龙
Current Assignee
Ningbo Sunny Opotech Co Ltd
Original Assignee
Ningbo Sunny Opotech Co Ltd
Application filed by Ningbo Sunny Opotech Co Ltd filed Critical Ningbo Sunny Opotech Co Ltd
Priority to CN201910006562.XA priority Critical patent/CN111476750B/en
Publication of CN111476750A publication Critical patent/CN111476750A/en
Application granted granted Critical
Publication of CN111476750B publication Critical patent/CN111476750B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof


Abstract

The application provides a method, a device, a system, and a storage medium for performing stain detection on an imaging module. The method comprises: obtaining a test image through the imaging module; performing image enhancement processing on the test image to obtain an enhanced test image; performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and grouping adjacent or connected stain pixels to determine the size and position of a stain area of the test image as the detection result, where two connected stain pixels are stain pixels joined through other stain pixels.

Description

Method, device and system for carrying out stain detection on imaging module and storage medium
Technical Field
The application relates to the field of imaging module quality detection, in particular to a method, a device, a system and a storage medium for carrying out stain detection on an imaging module.
Background
With the increasing demand for electronic products equipped with cameras, such as smartphones, the demand for imaging modules is also increasing, as are the quality requirements placed on them. To ensure the quality of an imaging module, inspection is necessary during production, and stain detection is an important inspection. When stain detection is performed, factors such as image noise and ambient brightness can cause the stain test result to be misjudged. To meet detection requirements, a method is needed that can accurately judge stains on an imaging module despite the influence of such factors.
Disclosure of Invention
The invention provides a method for performing stain detection on an imaging module, comprising: obtaining a test image through the imaging module; performing image enhancement processing on the test image to obtain an enhanced test image; performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and grouping adjacent or connected stain pixels to determine the size and position of a stain area of the test image as the detection result, where two connected stain pixels are stain pixels joined through other stain pixels.
In one embodiment, image enhancement processing the test image to obtain an enhanced test image comprises: reducing the minimum pixel value in the test image to a target minimum pixel value; increasing the maximum pixel value in the test image to a target maximum pixel value; adjusting pixel values between the minimum pixel value and the maximum pixel value to obtain an enhanced test image by:
stretched pixel value = stretch coefficient * (pixel value - minimum pixel value) + target minimum pixel value
where the stretch coefficient is the ratio of the difference between the target maximum pixel value and the target minimum pixel value to the difference between the maximum pixel value and the minimum pixel value, and the stretched pixel value is the adjusted pixel value.
In one embodiment, image enhancement processing the test image to obtain an enhanced test image comprises: performing Fourier transform on the test image to obtain a Fourier spectrum; moving a zero frequency point of a Fourier spectrum to a central position; removing predetermined frequencies in the fourier spectrum; moving the zero frequency point of the Fourier spectrum back to the original position; and performing inverse fourier transform on the fourier spectrum, and performing one of real part taking, absolute value taking, and square root taking on a pixel value of each pixel in the image obtained by the inverse fourier transform to obtain an enhanced test image.
In one embodiment, removing the predetermined frequency in the fourier spectrum comprises: the predetermined frequencies in the fourier spectrum are removed by a gaussian low-pass filter function or a gaussian band-pass filter function.
In one embodiment, before performing image enhancement processing on the test image to obtain an enhanced test image, the method further includes: and carrying out dimensionality reduction on the test image by one of a region average dimensionality reduction method, a downsampling dimensionality reduction method and a bilinear dimensionality reduction method.
In one embodiment, the method further comprises: expanding the boundary of the dimension-reduced test image outward by a predetermined number of pixels, wherein the pixel values of the pixels in the expanded region are determined by: determining an optical center of the test image; obtaining the brightness decreasing relation of the imaging module according to the pixel value of the pixel in the test image, the distance from the optical center and the brightness value of the optical center; and determining the pixel value of the pixel in the extended area according to the decreasing brightness relation and the distance between the pixel in the extended area and the optical center.
In one embodiment, the method further comprises: and expanding the boundary of the dimension-reduced test image outwards by a preset number of pixels, wherein the pixel value of the pixel in the expansion area is determined according to the pixel at the boundary of the dimension-reduced test image or the pixel within a preset range at the boundary.
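As a minimal sketch of this simpler extension variant, the boundary can be replicated outward; `numpy.pad` with `edge` mode gives each new pixel the value of the nearest boundary pixel of the reduced image (the extension width `n` is an assumed example, not a value from the patent):

```python
import numpy as np

def extend_boundary(img, n=2):
    """Extend the reduced test image outward by n pixels on each side, giving
    each new pixel the value of the nearest boundary pixel of the original."""
    return np.pad(img, n, mode="edge")
```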
In one embodiment, the method further comprises: adjusting the pixel value of each pixel in the binarized test image according to the pixel values within a predetermined range around that pixel.
In one embodiment, adjusting the pixel value of each pixel in the binarized test image based on pixel values within a predetermined range around each pixel comprises: setting the average pixel value within the predetermined range around each pixel as that pixel's value, or computing a weighted average of the pixel values within the predetermined range around each pixel and determining the pixel's value from the weighted-average result.
In one embodiment, adjusting the pixel value of each pixel in the binarized test image based on pixel values within a predetermined range around each pixel comprises: setting the median of the pixel values within the predetermined range around each pixel as that pixel's value.
In one embodiment, the method further comprises: for each pixel in the dimension-reduced test image, comparing the pixel's value with the average pixel value of the other pixels within a predetermined window around it, and judging from the comparison whether the pixel is a stain pixel; finding stain pixels adjacent to or connected with it to obtain the size and position of the stain area; and outputting the obtained size and position as a brightness-difference detection result.
In one embodiment, the method further comprises: combining the detection result and the brightness-difference detection result into a final detection result.
In one embodiment, comparing the pixel value of a pixel with the average pixel value of the other pixels within a predetermined window around it comprises: summing the pixel values of the pixels within the predetermined window to obtain an intra-window pixel-value sum; and subtracting the pixel's own value from the intra-window sum and dividing by the number of other pixels in the window to obtain the average pixel value of those other pixels. When the pixel's predetermined window overlaps the predetermined window of a previously processed pixel, the pixel's intra-window sum is obtained incrementally: starting from the other pixel's intra-window sum, the pixel-value sum of the region belonging only to the other pixel's window is subtracted, and the pixel-value sum of the region belonging only to the current pixel's window is added.
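To illustrate the idea of reusing sums between overlapping windows, the sketch below computes each intra-window sum from a summed-area table, an alternative with the same effect as the incremental update described above; the window radius `r` is an assumed parameter:

```python
import numpy as np

def window_mean_of_others(img, r):
    """For each pixel, the average of the OTHER pixels in its (2r+1)x(2r+1)
    window, using a summed-area table so overlapping windows share work."""
    pad = np.pad(img.astype(np.float64), ((1, 0), (1, 0)))
    sat = pad.cumsum(0).cumsum(1)  # sat[i, j] = sum of img[:i, :j]
    h, w = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            # Intra-window sum from four table lookups.
            win_sum = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
            n_others = (y1 - y0) * (x1 - x0) - 1
            out[y, x] = (win_sum - img[y, x]) / n_others
    return out
```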
In one embodiment, the method further comprises verifying the detection result and removing stain pixels determined to be falsely detected, wherein verifying the detection result comprises verifying each stain pixel by: determining the pixel in the test image corresponding to the stain pixel; comparing that pixel's value with the average pixel value of the other pixels within a predetermined window around it; and judging from the comparison whether the verified stain pixel is a false detection.
The invention also provides a device for performing stain detection on an imaging module, comprising: an image enhancer for performing image enhancement processing on the test image obtained by the imaging module to obtain an enhanced test image; a binarizer for performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and a stain-area determiner for grouping adjacent or connected stain pixels to determine the size and position of a stain area of the test image as the detection result, where two connected stain pixels are stain pixels joined through other stain pixels.
The invention also provides a system for performing stain detection on an imaging module, comprising: a processor; and a memory coupled to the processor and storing machine-readable instructions executable by the processor to: obtain a test image through the imaging module; perform image enhancement processing on the test image to obtain an enhanced test image; perform binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and group adjacent or connected stain pixels to determine the size and position of a stain area of the test image as the detection result, where two connected stain pixels are stain pixels joined through other stain pixels.
The present invention also provides a non-transitory machine-readable storage medium storing machine-readable instructions executable by a processor to: obtain a test image through an imaging module; perform image enhancement processing on the test image to obtain an enhanced test image; perform binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and group adjacent or connected stain pixels to determine the size and position of a stain area of the test image as the detection result, where two connected stain pixels are stain pixels joined through other stain pixels.
Drawings
Other features, objects and advantages of the present application will become more apparent by describing in detail non-limiting embodiments thereof with reference to the following drawings:
FIG. 1 illustrates a flow chart of a method of stain detection for an imaging module according to an exemplary embodiment of the invention;
FIG. 2 illustrates a flow chart of a method of stain detection for an imaging module according to another exemplary embodiment of the invention;
FIG. 3 shows a schematic diagram illustrating the bilinear interpolation dimension reduction method;
FIG. 4 is a diagram for explaining boundary extension of a test image;
FIG. 5 is a diagram for explaining boundary extension using the decreasing brightness characteristic of the imaging module;
FIG. 6 illustrates a flow chart of a method of stain detection for an imaging module according to yet another exemplary embodiment of the invention;
FIG. 7 shows a diagram illustrating how the sum of pixel values within a predetermined window around a pixel is obtained; and
FIG. 8 shows a schematic structural diagram of a computer system suitable for implementing the terminal device or server of the present application.
Detailed Description
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various exemplary embodiments. It may be evident, however, that the various exemplary embodiments may be practiced without these specific details or with one or more equivalent arrangements. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the various exemplary embodiments.
In the drawings, the size and relative sizes of layers, films, panels, regions, and the like may be exaggerated for clarity and description. Further, like reference numerals refer to like elements.
When an element or layer is referred to as being "on," "connected to," or "coupled to" another element or layer, it can be directly on, connected or coupled to the other element or layer, or intervening elements or layers may be present. However, when an element or layer is referred to as being "directly on," "directly connected to" or "directly coupled to" another element or layer, there are no intervening elements or layers present. For purposes of this disclosure, "at least one of X, Y and Z" and "at least one selected from the group consisting of X, Y and Z" can be construed as any combination of two or more of only X, only Y, only Z, or X, Y and Z (such as, for example, XYZ, XYY, YZ, and ZZ). Like reference numerals refer to like elements throughout. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer, and/or section discussed below could be termed a second element, component, region, layer, and/or section without departing from the teachings of the present disclosure.
Spatially relative terms, such as "below," "lower," "above," "upper," and the like, may be used herein for descriptive purposes and to thereby describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms are intended to encompass different orientations of the device in use, operation, and/or manufacture in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" the other elements or features. Thus, the exemplary term "below" can encompass both an orientation of above and below. Further, the devices may be otherwise oriented (e.g., rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and the like are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Various embodiments are described herein with reference to cross-sectional illustrations that are schematic illustrations of idealized exemplary embodiments and/or intermediate structures. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, example embodiments disclosed herein should not be construed as limited to the particular illustrated shapes of regions but are to include deviations in shapes that result, for example, from manufacturing. For example, an implanted region illustrated as a rectangle will typically have rounded or curved features and/or a gradient of implant concentration at its edges, rather than a binary change from implanted to non-implanted regions. Likewise, a buried region formed by implantation may result in some implantation in the region between the buried region and the surface through which implantation occurs. Thus, the regions illustrated in the figures are schematic in nature and their shapes are not intended to illustrate the actual shape of a region of a device and are not intended to be limiting.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. Unless expressly so defined herein, terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense.
In the invention, when the imaging module is subjected to stain detection, the imaging module is used for shooting a test image and detecting whether stains exist in the image so as to detect whether the imaging module has stains.
FIG. 1 shows a flow chart of a method of stain detection for an imaging module according to an exemplary embodiment of the invention.
Referring to fig. 1, a method 100 of performing stain detection on an imaging module according to an exemplary embodiment of the present invention includes: step S101, obtaining a test image through an imaging module; step S102, carrying out image enhancement processing on the test image; step S103, carrying out binarization processing on the enhanced test image; and step S104, determining a taint area in the test image subjected to the binarization processing.
The above steps will be described in detail below.
In step S101, a test image may be obtained by shooting with the imaging module, and the obtained test image may include brightness information. In some embodiments, a test image including only luminance information, in which the pixel value of each pixel may be a luminance value, may be obtained by extracting the luminance information.
In step S102, image enhancement processing may be performed on the test image to obtain an enhanced test image. Stains differ in degree and kind depending on the production-line environment and their cause; for example, a stain may be deep, light, or ultra-light, and it may lie at an edge position or a center position. To pick out such varied stains in the test image, the foreground stain must be separated effectively from the background. The purpose of enhancing the test image is to make stains stand out against the background for stain detection in the subsequent steps.
The enhancement processing of the test image can be realized by, for example, a linear stretching method or a frequency domain-based enhancement method.
In the linear stretching method, a maximum pixel value and a minimum pixel value in the test image may be calculated first, and then a target maximum pixel value and a target minimum pixel value in the enhanced test image are determined, where the target maximum pixel value may be greater than the maximum pixel value in the test image, and the target minimum pixel value may be less than the minimum pixel value in the test image, and the value ranges of the target maximum pixel value and the target minimum pixel value may be: 0 to 255. After the target maximum pixel value and the target minimum pixel value in the enhanced test image are determined, the linear stretch coefficient may be determined by the following equation (1):
lineCoef = (dstImgMax - dstImgMin) / (imgMax - imgMin)    (1)
where lineCoef denotes the linear stretch coefficient, dstImgMax and dstImgMin denote the target maximum and target minimum pixel values, respectively, and imgMax and imgMin denote the maximum and minimum pixel values in the test image, respectively.
After determining the linear stretch coefficient, the pixel value of each pixel in the enhanced test image may be determined by the following equation (2):
dstValue_k = lineCoef * (srcValue_k - imgMin) + dstImgMin    (2)
where dstValue_k represents the pixel value of the k-th pixel in the enhanced test image and srcValue_k represents the pixel value of the k-th pixel in the test image before enhancement.
After linear stretching, the differences between the pixel values of the pixels in the test image are enlarged, so that pixels with different pixel values are distinguished more clearly, which is advantageous for the subsequent stain determination.
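A minimal numpy sketch of the linear stretch of equations (1) and (2); clipping the result to the 8-bit range 0 to 255 is an assumption about the target pixel format, and a non-constant image is assumed so the denominator is nonzero:

```python
import numpy as np

def linear_stretch(img, dst_min, dst_max):
    """Linearly stretch pixel values to [dst_min, dst_max], per eqs. (1)-(2)."""
    img = img.astype(np.float64)
    img_min, img_max = img.min(), img.max()  # assumes img_max > img_min
    line_coef = (dst_max - dst_min) / (img_max - img_min)  # equation (1)
    stretched = line_coef * (img - img_min) + dst_min      # equation (2)
    return np.clip(stretched, 0, 255).astype(np.uint8)     # assumed 8-bit output
```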
In the frequency domain based enhancement method, the test image may first be fourier transformed to obtain a fourier spectrum F (u, v). In the obtained fourier spectrum, the zero frequency point is not generally located at the center position, and therefore, the zero frequency point can be moved to the center position of the fourier spectrum after the fourier spectrum is obtained. Then, predetermined frequencies in the fourier spectrum may be removed, for example, frequencies below the predetermined frequencies, or frequencies within a certain frequency range.
Removing the predetermined frequencies in the Fourier spectrum may be achieved by Gaussian low-pass filtering. The Gaussian low-pass filter may be designed in the frequency domain by the following equation (3):
H_GL(u, v) = exp(-((u - M/2)^2 + (v - N/2)^2) / (2 * sigma^2))    (3)
where (u, v) represents a coordinate in the Fourier spectrum, M and N represent the size of the Fourier spectrum, (M/2, N/2) is the center of the spectrum, and sigma is the standard deviation of the Gaussian function, whose value may range from 2 to 200.
Removing the predetermined frequencies in the Fourier spectrum may also be achieved by Gaussian band-pass filtering. The Gaussian band-pass filter may be designed in the frequency domain as in the following equation (4):
H_GB(u, v) = H_GL1(u, v) - H_GL2(u, v)    (4)
where H_GL1(u, v) and H_GL2(u, v) are two Gaussian low-pass filter functions whose standard deviations sigma_1 and sigma_2 may each range from 2 to 200, with sigma_1 > sigma_2.
The removal of the predetermined frequencies from the Fourier spectrum may be obtained as G(u, v) by multiplying F(u, v) by H(u, v), where H(u, v) may be H_GL(u, v) or H_GB(u, v).
After obtaining G (u, v), the zero frequency point in G (u, v) may be shifted back to the original position and inverse fourier transform may be performed on G (u, v) to obtain an image G (u, v) in which the pixel values are complex numbers, and thus, one of the processes of taking the real part, taking the absolute value, and taking the square root may be performed on each pixel value to obtain an enhanced test image.
After the image enhancement processing is performed on the test image, binarization processing may be performed on the enhanced test image in step S103 to obtain a binarized test image. In this step, a predetermined stain threshold may be determined first; pixel values above the threshold may be set to the stain pixel value, e.g. 1, and pixel values below it to the non-stain pixel value, e.g. 0. The invention is not limited thereto, however; for example, the stain pixel value may be 0 and the non-stain pixel value 1. After binarization, every pixel value in the enhanced test image is either 0 or 1, i.e., the test image is binarized. Since the binarized test image contains only two values, the distinction between stains and background becomes more apparent.
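The binarization step can be sketched in one line; the convention of 1 for stain pixels and 0 for non-stain pixels follows the example above, and the threshold used in the test is hypothetical:

```python
import numpy as np

def binarize(enhanced, stain_threshold):
    """Pixels whose value exceeds the predetermined stain threshold become
    stain pixels (value 1); all others become non-stain pixels (value 0)."""
    return (enhanced > stain_threshold).astype(np.uint8)
```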
Next, in step S104, a stain area may be detected based on the binarized test image.
Specifically, the stain region may be detected by a region-growing method. First, each pixel in the binarized test image may be traversed to examine its value; for a pixel whose value is the stain pixel value, stain pixels adjacent to or connected with that pixel are searched for starting from it, where an adjacent stain pixel is a directly neighbouring stain pixel, a connected stain pixel is one reachable through other stain pixels, and a region composed of adjacent or connected stain pixels is referred to as a stain region. The size and position of each stain region in the test image determined by this detection serve as the detection result.
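A minimal region-growing sketch over the binarized image, using breadth-first search; 4-connectivity is an assumed interpretation of "adjacent", and returning each region's size plus bounding box is one possible form of the detection result:

```python
from collections import deque

def find_stain_regions(binary):
    """Group adjacent/connected stain pixels (value 1) into regions.
    Returns a list of (size, (min_y, min_x, max_y, max_x)) tuples."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] != 1 or seen[sy][sx]:
                continue
            queue, pixels = deque([(sy, sx)]), []
            seen[sy][sx] = True
            while queue:  # grow the region from the seed pixel
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] == 1 \
                            and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            ys = [p[0] for p in pixels]
            xs = [p[1] for p in pixels]
            regions.append((len(pixels), (min(ys), min(xs), max(ys), max(xs))))
    return regions
```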
Fig. 2 shows a flow chart of a method of stain detection for an imaging module according to another exemplary embodiment of the present invention.
The method shown in fig. 2 differs from the method shown in fig. 1 in that the method in fig. 2 adds the steps of image dimensionality reduction, boundary extension, and image filtering. Steps S201, S204, S205, and S207 in fig. 2 are the same as steps S101 to S104 in fig. 1, respectively, and a description thereof will not be repeated here, and therefore, differences of fig. 2 from fig. 1 will be mainly described hereinafter.
As the pixel counts of imaging modules grow ever higher, the images they produce become larger and the time required to process them increases. To save processing time and computational resources, the test image may be subjected to dimension reduction, i.e., step S202 may be performed.
The dimension reduction process may be implemented by, for example, a region-averaged dimension reduction method, a downsampled dimension reduction method, or a bilinear dimension reduction method.
In the region-averaged dimensionality reduction method, the width reduction multiple and the height reduction multiple of the image can be determined, and then the width and the height of the reduced-dimensionality image are obtained according to the following formulas (5) and (6):
resImgW = round(imgW / zoom_W)    (5)
resImgH = round(imgH / zoom_H)    (6)
wherein resImgW and resImgH represent the width and height of the dimension-reduced image, respectively, imgW and imgH represent the width and height of the test image before dimension reduction, zoom _ W and zoom _ H represent reduction multiples in width and height, respectively, and round () is a rounding function.
Then, for each pixel in the dimension-reduced image, a corresponding region in the test image can be obtained according to the position and the reduction factor, and the region can be determined by the following formulas (7) to (10):
startW = x * zoom_W    (7)
endW = (x + 1) * zoom_W    (8)
startH = y * zoom_H    (9)
endH = (y + 1) * zoom_H    (10)
wherein, x and y represent the coordinates of the pixel in the dimension-reduced image, startW and endW represent the width start position and the width end position of the region in the test image corresponding to the pixel, and startH and endH represent the height start position and the height end position of the region in the test image corresponding to the pixel, respectively. By the width start position, the width end position, the height start position and the height end position, a region in the test image can be determined, wherein the region corresponds to a pixel in the dimension reduction image.
For each pixel in the dimension-reduced image, its pixel value may be an average value of pixels in a region in the test image corresponding thereto.
The dimension-reduced image with a size smaller than that of the original test image can be obtained by reducing the test image and filling pixels in the dimension-reduced image with the average pixel value of the corresponding area in the test image, and the dimension-reduced image can be used as the test image to perform various processing in subsequent processing.
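The region-averaged method above can be sketched as follows; the function and variable names are illustrative, not from the patent. The reduced size follows formulas (5) and (6), and each output pixel averages the region given by formulas (7) to (10):

```python
import numpy as np

def region_average_reduce(img, zoom_w, zoom_h):
    """Reduce an image by averaging each zoom_h x zoom_w region (sketch)."""
    img_h, img_w = img.shape
    res_w = round(img_w / zoom_w)   # formula (5)
    res_h = round(img_h / zoom_h)   # formula (6)
    out = np.empty((res_h, res_w), dtype=np.float64)
    for y in range(res_h):
        for x in range(res_w):
            # formulas (7)-(10): region of the test image for pixel (x, y)
            start_w, end_w = x * zoom_w, (x + 1) * zoom_w
            start_h, end_h = y * zoom_h, (y + 1) * zoom_h
            out[y, x] = img[start_h:end_h, start_w:end_w].mean()
    return out

img = np.arange(16, dtype=np.float64).reshape(4, 4)
small = region_average_reduce(img, 2, 2)
```

Each pixel of `small` is the mean of the corresponding 2×2 block of `img`, matching the filling rule described above.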
In the down-sampling dimension reduction method, a reduction factor of the image, which is an Nth power of 2, may be determined first, and the number of down-sampling passes, N, follows from that factor. The image is then down-sampled: odd (or even) rows and columns are extracted from the original test image, and the dimension-reduced image is composed of these rows and columns. If more than one pass is required, each pass after the first extracts odd (or even) rows and columns from the dimension-reduced image produced by the previous pass. After the determined number of passes, the result is taken as the final dimension-reduced image, which can be used as the test image for subsequent processing.
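The repeated extraction of every other row and column can be sketched with array slicing (names illustrative; even indices are kept here, though the text allows odd or even):

```python
import numpy as np

def downsample_reduce(img, n):
    """Down-sample n times by keeping every other row and column,
    for an overall reduction factor of 2**n (sketch)."""
    out = img
    for _ in range(n):
        out = out[::2, ::2]   # extract even-indexed rows and columns
    return out

img = np.arange(64, dtype=np.float64).reshape(8, 8)
half = downsample_reduce(img, 1)     # one pass: 4x4
quarter = downsample_reduce(img, 2)  # two passes: 2x2
```

After the second pass the slicing operates on the result of the first pass, as the paragraph above describes.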
In the bilinear interpolation dimension reduction method, the width reduction factor and the height reduction factor of the image may be determined first, and then the width and the height of the dimension reduced image may be obtained through equations (5) and (6) as described above. Next, for each pixel in the dimension-reduced image, the position in the original test image corresponding thereto can be determined by the following equations (11) and (12):
i = x * zoom_W    (11)
j = y * zoom_H    (12)
where x and y represent coordinates of pixels in the dimension-reduced image, zoom _ W and zoom _ H represent reduction factors in width and height, respectively, and i and j represent positions in the original test image corresponding to the pixels in the dimension-reduced image.
Then, the four pixels nearest to the determined position are located in the test image, and the pixel value of the corresponding pixel in the dimension-reduced image is determined from the pixel values of those four pixels.
A process of determining a pixel value of a corresponding pixel in the dimension-reduced image from the determined four pixels will be described below with reference to fig. 3.
Fig. 3 schematically shows a part of a test image. As shown in fig. 3, a point P represents a position corresponding to one pixel in the dimension-reduced image determined by the above-described equations (11) and (12), and points I11, I21, I12, and I22 represent the four pixels closest to the point P, respectively.
First, from the pixel values of the pixels I11 and I21, linear interpolation is performed in the X direction to obtain a pixel value at the position R2. Similarly, linear interpolation may be performed in the X direction from the pixel values of the pixels I12 and I22 to obtain a pixel value at the position R1. Then, linear interpolation may be performed in the Y direction from the pixel values at the positions R1 and R2 to obtain a pixel value at the position P, which may be a pixel value of a corresponding pixel in the dimension reduced image. It should be noted that, although it is described in the above-described process that linear interpolation in the X direction is performed first and then linear interpolation in the Y direction is performed, the present invention is not limited to this, and linear interpolation in the Y direction may be performed first, for example, a pixel value at a position between the pixels I11 and I12 and a pixel value at a position between the pixels I21 and I22 are obtained first, respectively, and then linear interpolation in the X direction is performed based on these pixel values to obtain a pixel value at the point P.
The pixel value of each pixel in the dimension reduced image may be determined by the above method. After the pixel values of all the pixels are determined, a final dimension-reduced image is obtained, and various processes can be performed using the dimension-reduced image as a test image.
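The bilinear method above can be sketched as follows (names illustrative; clamping at the image border is an assumption, since the patent does not specify border handling). Each output pixel maps back to (i, j) per formulas (11) and (12), then blends the four nearest input pixels, interpolating first along X and then along Y as in fig. 3:

```python
import numpy as np

def bilinear_reduce(img, zoom_w, zoom_h):
    """Bilinear dimension reduction (sketch)."""
    img_h, img_w = img.shape
    res_w = round(img_w / zoom_w)   # formula (5)
    res_h = round(img_h / zoom_h)   # formula (6)
    out = np.empty((res_h, res_w), dtype=np.float64)
    for y in range(res_h):
        for x in range(res_w):
            i = min(x * zoom_w, img_w - 1)   # formula (11)
            j = min(y * zoom_h, img_h - 1)   # formula (12)
            x0, y0 = int(i), int(j)
            x1 = min(x0 + 1, img_w - 1)
            y1 = min(y0 + 1, img_h - 1)
            fx, fy = i - x0, j - y0
            # linear interpolation in X on both rows, then in Y
            top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
            bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
            out[y, x] = (1 - fy) * top + fy * bot
    return out

img = np.arange(16, dtype=np.float64).reshape(4, 4)
small = bilinear_reduce(img, 2.0, 2.0)
```

As noted in the text, interpolating in Y first and then in X would give the same result.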
After the dimension reduction processing is performed on the test image, the boundary of the dimension-reduced test image may be expanded in step S203.
Fig. 4 shows a schematic diagram of boundary extension of a test image. As shown in fig. 4, area 400 represents the unexpanded test image, and areas 401 to 404 represent the outward extension areas. In the extension process, the number of pixels by which the boundary is extended in the width direction and the height direction may be determined, and then the width and height of the extended test image may be determined according to the following equations (13) and (14):
newImgW = imgW + 2 * edgeNum_W    (13)
newImgH = imgH + 2 * edgeNum_H    (14)
wherein newImgW and newImgH represent the width and height of the expanded test image, respectively, imgW and imgH represent the width and height of the unexpanded test image, respectively, and edgeNum _ W and edgeNum _ H are the numbers of pixels whose boundaries are expanded in the width direction and the height direction, respectively.
The boundary extension can be realized, for example, by a fixed value filling boundary extension method, a duplicate outer boundary value extension method, a mirror image boundary extension method, or a boundary extension method based on the module luminance characteristics.
In the fixed-value filling boundary extension method, the extension regions 401 to 404 may each be filled with a fixed pixel value, which may range from 0 to 255.
In the copy-outer-boundary-value extension method, as shown in fig. 4, the extension region 401 may be filled with the pixel values of column C1 (i.e., the column of pixels at the left edge of the test image), the extension region 402 with the pixel values of column C2 (i.e., the column at the right edge), the extension region 403 with the pixel values of row R1 (i.e., the row at the upper edge), and the extension region 404 with the pixel values of row R2 (i.e., the row at the lower edge).
In the mirror boundary extension method, as shown in fig. 4, column C1-1 (i.e., the column of the extension region 401 closest to the test image) may be filled with the pixel values of column C1, and column C1-2 (i.e., the next column of the extension region 401) with the pixel values of column C1+1, and so on until the entire extension region 401 is filled. In other words, in the mirror boundary extension method, pixel values in the test image are filled symmetrically into the extension regions, taking each of the four edges of the test image as an axis of symmetry. Regions 402 to 404 may be filled in the same way.
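The fixed-value, copy-outer-boundary, and mirror extension methods map directly onto numpy's padding modes; a sketch (variable names assumed, not from the patent):

```python
import numpy as np

img = np.arange(9, dtype=np.float64).reshape(3, 3)
edge_w, edge_h = 1, 2   # edgeNum_W, edgeNum_H: pixels added on each side

# formulas (13)-(14): new size = old size + 2 * number of extension pixels
pad = ((edge_h, edge_h), (edge_w, edge_w))

fixed  = np.pad(img, pad, mode="constant", constant_values=0)  # fixed-value fill
copied = np.pad(img, pad, mode="edge")       # duplicate outer boundary values
mirror = np.pad(img, pad, mode="symmetric")  # mirror about the image edges
```

`mode="symmetric"` reflects about the image edge itself, matching the description of filling column C1-1 with column C1 and column C1-2 with column C1+1.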
The boundary extension method based on the module luminance characteristics will be described below by referring to fig. 5, in which a test image and an extended portion are shown in fig. 5.
First, the optical center O of the test image can be determined; in the schematic of fig. 5, region 500 represents the test image and region 501 the extension region. Then, the luminance fall-off characteristic of the imaging module is determined from the pixel value of a pixel in the test image (e.g., pixel P1), the brightness value of the optical center O, and the distance between that pixel and the optical center O. Taking pixel P1 as an example, the difference between the pixel value of P1 and the brightness value of the optical center O is divided by the distance D1 between P1 and O to obtain the luminance fall-off characteristic. The pixel value to be filled at a position P2 in the extension area can then be determined from the distance D2 between P2 and the optical center O and the luminance fall-off characteristic of the imaging module. Pixel values at other positions in the extension area 501 are determined in the same way as for P2.
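A minimal sketch of the luminance-based fill for one extension-area position; the helper name and the linear fall-off model are assumptions based on the description above:

```python
import math

def fill_by_luminance(center, center_val, slope, pos):
    """Pixel value at an extension-area position: the optical-center brightness
    plus the per-distance luminance fall-off times the distance to the center.
    slope corresponds to (pixel_value(P1) - center_val) / distance(P1, center)."""
    return center_val + slope * math.dist(center, pos)

# Assumed example: brightness drops 2 gray levels per pixel of distance.
val = fill_by_luminance((0.0, 0.0), 200.0, -2.0, (3.0, 4.0))  # distance 5
```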
It should be noted that, although the above describes performing the boundary extension on the dimension-reduced test image, the present invention is not limited thereto. For example, in the method according to the invention, the dimension reduction process can be omitted and the boundary extension can be performed directly on the test image obtained by the imaging module.
Through the boundary extension, pixels originally located at the boundary of the test image become interior pixels, so that stains at the boundary of the original test image can be detected and misjudgment at the boundary reduced.
For the test image subjected to the above-described dimension reduction and/or boundary extension, the enhancement processing and binarization processing described above (i.e., steps S204 and S205) may be performed.
In step S206, the binarization-processed test image may be subjected to a filtering process. In the filtering process, the pixel value of each pixel in the binarized test image may be adjusted according to the pixel values within a predetermined range around each pixel in the binarized test image.
The filtering process may be implemented by a template filtering method or a statistical ordering filtering method.
In the template filtering method, the width and height of a template may first be determined, both being odd numbers. The template may then be designed; for example, an average template or a Gaussian template may be used.
The average template can be represented by the following formula (15):
W(i, j) = 1/(tempW * tempH)    (15)
wherein W represents the template function, and tempW and tempH represent the template width and the template height, respectively. The effect of the average template is to average the pixel values within the template.
The gaussian template can be represented by the following equation (16):
w(i, j) = (1/(2πσ²)) * exp(−(i² + j²)/(2σ²))    (16)
wherein, w (i, j) represents the template function, and (i, j) represents the position in the template, and σ is the standard deviation of the gaussian function, and the value range is: 0.1 to 20. The gaussian template functions to weight average pixel values over the range of the template.
For each pixel in the binarized test image, its pixel value can be determined by the following equation (17):
filt(i, j) = Σ(s,t)∈w  w(s, t) * srcImg(i + s, j + t)    (17)
where filt (i, j) is the pixel value of the pixel with coordinates (i, j) in the binarized test image, and srcImg () represents taking the pixel value in the binarized test image. Equation (17) represents: for a pixel at (i, j) in the binarized test image, its pixel value can be determined by the pixel values within the range of the stencil w. The pixel values of all pixels in the binarized test image can be adjusted by the above equation (17).
In the statistical-ordering filtering method, the width and height of a template may first be determined, both being odd numbers. Then, the pixel value of each pixel in the binarized test image can be determined by the following equation (18):
filtImg(i, j) = med{ srcImg(s, t) : (s, t) ∈ Sxy }    (18)
wherein (i, j) are coordinates in the binarized test image, filtImg is the filtered image, srcImg() takes pixel values from the binarized test image, Sxy is the set of coordinates centered at (i, j) with size tempW × tempH, and med() is the filtering function, which takes the pixel value at the middle position within Sxy, i.e., the median pixel value. Equation (18) sets the pixel value of each pixel in the binarized test image to the median of the pixel values within a predetermined range around it. The pixel values of all pixels in the binarized test image can be adjusted by equation (18).
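Equation (18) can be sketched as a median filter over each window (names illustrative; edge-replicated borders are an assumption):

```python
import numpy as np

def median_filter(img, temp_w, temp_h):
    """Statistical-ordering filter: each pixel becomes the median of the
    temp_h x temp_w window Sxy centered on it (edge-replicated borders)."""
    padded = np.pad(img, ((temp_h // 2,) * 2, (temp_w // 2,) * 2), mode="edge")
    out = np.empty_like(img, dtype=np.float64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(padded[y:y + temp_h, x:x + temp_w])
    return out

binary = np.zeros((5, 5))
binary[2, 2] = 255.0          # an isolated "noise" pixel
cleaned = median_filter(binary, 3, 3)
```

The isolated noise pixel is removed, illustrating how the filtering step suppresses light-source and noise influence.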
Through the filtering processing, the influence of external environments such as light sources and noise on the test image in the production process can be reduced.
After the filtering process, the size and position of the stain area in the binarized test image may be determined as the detection result in step S207.
Although the method shown in fig. 2 adds three steps S202, S203 and S206 to the method shown in fig. 1, the present invention is not limited thereto, and only one or more of the above steps may be added on the basis of the method shown in fig. 1.
Although not shown in fig. 2, the method according to the present disclosure may further include verification of the detection result to prevent false positives or false detections.
The verification may be performed based on the test image without enhancement processing. Specifically, for each taint pixel contained in the detection result, the corresponding pixel in the unenhanced test image is first determined. For each corresponding pixel, its pixel value is compared with the average pixel value of the other pixels within a predetermined window around it, and whether the verified taint pixel was falsely detected is judged from the comparison result. In general, the pixel value of a taint pixel is lower than the pixel values of the surrounding normal pixels; therefore, when the pixel value of a pixel is lower than the average pixel value of the other pixels within the surrounding predetermined window, the pixel can be considered a taint pixel, and otherwise a normal pixel. When the pixel in the unenhanced test image corresponding to a taint pixel in the detection result is judged to be a normal pixel, that taint pixel can be considered a misjudgment or false detection. Through this verification, falsely detected taint pixels can be found and removed from the detection result.
Since the above-described verification process is to verify the result in the enhanced test image by the non-enhanced test image, it is possible to avoid the erroneous judgment of the dirty pixel due to the enhancement processing of the image.
The detection methods described above are generally effective for detecting shallow or very shallow stains. In order to comprehensively detect various types of stains, another detection process may be performed in parallel while performing the above-described method.
Fig. 6 shows a flow chart for performing two detection processes simultaneously. Steps S6011 to S6017 are the same as steps S201 to S207 shown in fig. 2, and their detailed description is not repeated. Fig. 6 differs from fig. 2 in that it includes steps S6021 to S6023 and step S603. Hereinafter, the differences of fig. 6 from fig. 2 will be described in detail.
Referring to fig. 6, in step S6021, the test image obtained by the imaging module may be subjected to a dimension reduction process, which is the same as step S202 in fig. 2, and thus, a detailed description thereof is omitted here.
In step S6022, each pixel in the dimension-reduced test image may be detected by a luminance difference detection method. In the luminance difference detection method, for each pixel, the pixel value of the pixel is compared with the average pixel values of other pixels in its surrounding predetermined window, and if the pixel value of the pixel is lower than the average pixel values of other pixels in its surrounding predetermined window, the pixel can be regarded as a dirty pixel.
In some embodiments, when calculating the pixel average value within a predetermined window, step S6022 may reuse results already calculated for the predetermined windows of neighboring pixels, to increase the calculation speed and save computational resources.
Specifically, when calculating the pixel average value of other pixels in a predetermined window around one pixel, the sum of the pixel values of all pixels in the predetermined window may be calculated, and then the pixel value of the one pixel is subtracted from the sum of the pixel values and divided by the number of other pixels in the predetermined window to obtain the pixel average value in the predetermined window. When the predetermined window of one pixel overlaps with the predetermined window of another pixel, the sum of the pixel values of the pixels in the overlapping region may not be repeatedly calculated. This embodiment will be described below with reference to fig. 7.
As shown in fig. 7, block 701 represents the predetermined window around one pixel, and the sum of pixel values in this window has already been calculated. After the detection of the pixel corresponding to block 701 is completed, the adjacent pixel is detected; the predetermined window of that adjacent pixel (for example, the pixel to the right of the first) is represented by block 702. As can be seen from fig. 7, block 702 partially overlaps block 701, so when calculating the sum of pixel values in block 702, the sum over the overlapping portion can be retained and only the sum over the newly added portion needs to be calculated. For example, the sum of pixel values in block 702 may be calculated by equation (19) below:
Sum702 = Sum701 + (Sumright − Sumleft)    (19)
wherein Sum701 and Sum702 are the sums of the pixel values in blocks 701 and 702, respectively, and Sumleft and Sumright are the sums of the pixel values over the non-overlapping parts of block 701 and of block 702, respectively (i.e., the column leaving the window and the column entering it).
If the sum of pixel values within the box above box 701 is to be calculated, equation (19) above may be modified to equation (20) below:
Sumnew = Sum701 + (Sumupper − Sumlower)    (20)
wherein Sumnew is the sum of pixel values in the new box to be calculated, and Sumlower and Sumupper are the sums of the pixel values over the parts of block 701 and of the box above it, respectively, that do not overlap the other box.
If the frame to be calculated has both left-right and up-down movements relative to the frame 701, the formula for calculating the sum of the pixel values within the frame may be:
Sumnew = Sum701 + (Sumleft − Sumright) + (Sumlower − Sumupper)    (21)
By this method, unnecessary repeated calculation is avoided when computing the pixel value sums within the predetermined windows around pixels, improving calculation efficiency, reducing calculation time, and better meeting current inspection-speed requirements.
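The incremental window-sum idea of formulas (19) to (21) can be sketched for horizontal sliding as follows (names illustrative): the first window in each row is summed in full, and each subsequent window adds the entering column and subtracts the leaving column instead of re-summing the overlap.

```python
import numpy as np

def window_sums(img, win_w, win_h):
    """Sum of pixel values in every win_h x win_w window, updated
    incrementally as the window slides one column right (formula (19))."""
    h, w = img.shape
    out = np.empty((h - win_h + 1, w - win_w + 1))
    for y in range(h - win_h + 1):
        s = img[y:y + win_h, 0:win_w].sum()   # first window in the row
        out[y, 0] = s
        for x in range(1, w - win_w + 1):
            s += img[y:y + win_h, x + win_w - 1].sum()  # entering right column
            s -= img[y:y + win_h, x - 1].sum()          # leaving left column
            out[y, x] = s
    return out

img = np.arange(16, dtype=np.float64).reshape(4, 4)
sums = window_sums(img, 3, 3)
```

The average over the other pixels of a window then follows by subtracting the center pixel and dividing by win_w * win_h − 1, as described in step S6022.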
In step S6023 following step S6022, other taint pixels adjacent to or connected with each detected taint pixel may be found to determine the taint area in the test image, for example its size and position, as the luminance difference detection result.
Next, in step S603, the detection results obtained through steps S6011 to S6017 and the luminance difference detection results obtained through steps S6021 to S6023 may be combined to obtain a final detection result.
By the method shown in fig. 6, not only more obvious stains but also shallower stains can be effectively detected, so that the detection result is more comprehensive.
Although steps S6011 to S6017 shown in fig. 6 are the same as steps S201 to S207 shown in fig. 2, the present invention is not limited thereto, and for example, steps S6011 to S6017 in fig. 6 may be replaced with steps S101 to S104 in fig. 1.
The present application also provides a device for performing stain detection on an imaging module, comprising: an image intensifier for performing image enhancement processing on the test image obtained by the imaging module to obtain an enhanced test image; a binarizer for performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to a taint pixel having a taint pixel value or a non-taint pixel having a non-taint pixel value, respectively, by comparing its pixel value with a predetermined taint threshold; and a taint area determiner for determining taint pixels adjacent to or connected to each taint pixel so as to determine the size and position of a taint area of the test image as a detection result, wherein two connected taint pixels are taint pixels joined through other taint pixels.
In one embodiment, the image intensifier is for: reducing the minimum pixel value in the test image to a target minimum pixel value; increasing the maximum pixel value in the test image to a target maximum pixel value; adjusting pixel values between the minimum pixel value and the maximum pixel value to obtain an enhanced test image by:
stretch pixel value = stretch coefficient × (pixel value − minimum pixel value) + target minimum pixel value
Wherein the stretch factor is a ratio of a difference between the target maximum pixel value and the target minimum pixel value to a difference between the maximum pixel value and the minimum pixel value, and the stretch pixel value represents the adjusted pixel value.
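This linear gray-stretch can be sketched in code (the default targets 0 and 255 and the function name are assumptions):

```python
import numpy as np

def stretch(img, target_min=0.0, target_max=255.0):
    """Linear gray-level stretch: map [min, max] of the test image onto
    [target_min, target_max] using the stretch coefficient defined above."""
    lo, hi = float(img.min()), float(img.max())
    coeff = (target_max - target_min) / (hi - lo)   # stretch coefficient
    return coeff * (img - lo) + target_min

img = np.array([[50.0, 100.0], [150.0, 200.0]])
enhanced = stretch(img)
```

The minimum pixel value 50 maps to 0 and the maximum 200 maps to 255, widening the contrast between shallow stains and background.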
In one embodiment, the image intensifier is for: performing Fourier transform on the test image to obtain a Fourier spectrum; moving a zero frequency point of a Fourier spectrum to a central position; removing predetermined frequencies in the fourier spectrum; moving the zero frequency point of the Fourier spectrum back to the original position; and performing inverse fourier transform on the fourier spectrum, and performing one of real part taking, absolute value taking, and square root taking on a pixel value of each pixel in the image obtained by the inverse fourier transform to obtain an enhanced test image.
In one embodiment, the image intensifier is for: the predetermined frequencies in the fourier spectrum are removed by a gaussian low-pass filter function or a gaussian band-pass filter function.
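The Fourier-domain enhancement of the two embodiments above can be sketched with a Gaussian low-pass filter; the cutoff parameter sigma, the use of numpy's FFT, and taking the absolute value (one of the three post-transform options named above) are assumptions:

```python
import numpy as np

def frequency_filter(img, sigma=10.0):
    """Shift the zero frequency to the center, attenuate high frequencies
    with a Gaussian low-pass, shift back, inverse-transform, and take the
    absolute value of each pixel (sketch)."""
    spec = np.fft.fftshift(np.fft.fft2(img))   # zero-frequency point to center
    h, w = img.shape
    v, u = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    lowpass = np.exp(-(u**2 + v**2) / (2 * sigma**2))  # Gaussian low-pass
    spec *= lowpass                            # remove predetermined frequencies
    filtered = np.fft.ifft2(np.fft.ifftshift(spec))    # zero frequency back
    return np.abs(filtered)                    # absolute value of each pixel

img = np.random.default_rng(0).random((16, 16))
out = frequency_filter(img)
```

A Gaussian band-pass variant would replace `lowpass` with the difference of two such Gaussians.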
In one embodiment, the apparatus further comprises: and the dimensionality reducer is used for carrying out dimensionality reduction on the test image by one of a region average dimensionality reduction method, a downsampling dimensionality reduction method and a bilinear dimensionality reduction method.
In one embodiment, the apparatus further comprises: a boundary extender for extending the boundary of the dimension-reduced test image outward by a predetermined number of pixels, wherein the pixel values of the pixels in the extension area are determined by: determining an optical center of the test image; obtaining the brightness decreasing relation of the imaging module according to the pixel value of the pixel in the test image, the distance from the optical center and the brightness value of the optical center; and determining the pixel value of the pixel in the extended area according to the decreasing brightness relation and the distance between the pixel in the extended area and the optical center.
In one embodiment, the apparatus further comprises: and a boundary expander for expanding the boundary of the dimension-reduced test image outward by a predetermined number of pixels, wherein the pixel values of the pixels in the expansion region are determined according to the pixels at the boundary of the dimension-reduced test image or the pixels within a predetermined range at the boundary.
In one embodiment, the apparatus further comprises: and the image filter is used for adjusting the pixel value of each pixel in the binary test image according to the pixel value in a preset range around each pixel in the binary test image.
In one embodiment, the image filter is for: setting the average pixel value in a preset range around each pixel in the binary test image as the pixel value of each pixel in the binary test image, or carrying out weighted average on the pixel values of all the pixels in the preset range around each pixel in the binary test image, and determining the pixel value of each pixel in the binary test image according to the weighted average result.
In one embodiment, the image filter is for: and setting the median value of the pixel values in a preset range around each pixel in the binary test image as the pixel value of each pixel in the binary test image.
In one embodiment, the apparatus further comprises a parallel detection actuator for: for each pixel in the test image subjected to the dimension reduction processing, comparing the pixel value of the pixel with the average pixel value of other pixels in a preset window around the pixel, and judging whether the pixel is a stain pixel according to the comparison result; finding a taint pixel adjacent to or communicated with the taint pixel to obtain the size and the position of the taint area; and outputting the obtained size and position as a luminance difference detection result.
In one embodiment, the apparatus further comprises a combiner for: and combining the detection result and the brightness difference detection result to be used as a final detection result.
In one embodiment, the parallel detection executor performs the comparison of the pixel value of a pixel with the average pixel value of the other pixels within a predetermined window around it by: summing the pixel values of the pixels within the predetermined window to obtain an intra-window pixel value sum; and subtracting the pixel value of the pixel from the intra-window pixel value sum and dividing by the number of other pixels within the predetermined window, to obtain the average pixel value of the other pixels within the predetermined window. When the predetermined window of the pixel overlaps the predetermined window of another pixel, the intra-window pixel value sum of the pixel is obtained from the intra-window pixel value sum of the other pixel by subtracting the pixel value sum of the region belonging only to the other pixel's window and adding the pixel value sum of the region belonging only to this pixel's window.
In one embodiment, the apparatus further comprises a verifier for verifying the detection result and removing pixel stains determined to be erroneously detected from the detection result, wherein verifying the detection result comprises verifying each pixel stain by: determining a pixel in the test image corresponding to each taint pixel; comparing the determined pixel value of each pixel with the average pixel values of other pixels in a predetermined window around the pixel; and judging whether the verified taint pixel is false detection or not according to the comparison result.
The present application also provides a computer system, which may be a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Referring now to FIG. 8, there is shown a schematic block diagram of a computer system 800 suitable for implementing the terminal device or server of the present application. As shown in fig. 8, the computer system 800 includes one or more processors and a communication portion, for example: one or more central processing units (CPUs) 801 and/or one or more graphics processors (GPUs) 813, which may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 802 or loaded from a storage section 808 into a random access memory (RAM) 803. The communication portion 812 may include, but is not limited to, a network card, such as an IB (InfiniBand) network card.
The processor may communicate with the read-only memory 802 and/or the random access memory 803 to execute the executable instructions, connect with the communication part 812 through the bus 804, and communicate with other target devices through the communication part 812, so as to complete the operations corresponding to any one of the methods provided by the embodiments of the present application, for example: obtaining a test image through an imaging module; performing image enhancement processing on the test image to obtain an enhanced test image; performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to be a taint pixel having a taint pixel value or a non-taint pixel having a non-taint pixel value, respectively, by comparing the pixel value of the enhanced test image with a predetermined taint threshold value; and determining adjacent or communicated taint pixels to determine the size and the position of a taint area of the test image as a detection result, wherein the two communicated taint pixels are taint pixels connected through other taint pixels.
In addition, the RAM 803 can store various programs and data necessary for the operation of the apparatus. The CPU 801, the ROM 802, and the RAM 803 are connected to each other via a bus 804. When the RAM 803 is present, the ROM 802 is an optional module: at runtime the RAM 803 stores the executable instructions, or writes them into the ROM 802, and these instructions cause the processor 801 to perform the operations corresponding to the above-described method. An input/output (I/O) interface 805 is also connected to the bus 804. The communication portion 812 may be integrated, or may be provided with a plurality of sub-modules (e.g., a plurality of IB network cards) connected to the bus link.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read from it is installed into the storage section 808 as needed.
It should be noted that the architecture shown in fig. 8 is only an optional implementation manner, and in a specific practical process, the number and types of the components in fig. 8 may be selected, deleted, added or replaced according to actual needs; in different functional component settings, separate settings or integrated settings may also be used, for example, the GPU and the CPU may be separately set or the GPU may be integrated on the CPU, the communication part may be separately set or integrated on the CPU or the GPU, and so on. These alternative embodiments are all within the scope of the present disclosure.
Further, according to an embodiment of the present application, the processes described above with reference to the flowcharts may be implemented as a computer software program. For example, the present application provides a non-transitory machine-readable storage medium having stored thereon machine-readable instructions executable by a processor to perform instructions corresponding to the method steps provided herein, such as: obtaining a test image through an imaging module; performing image enhancement processing on the test image to obtain an enhanced test image; performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to be a taint pixel having a taint pixel value or a non-taint pixel having a non-taint pixel value, respectively, by comparing the pixel value of the enhanced test image with a predetermined taint threshold value; and determining adjacent or communicated taint pixels to determine the size and the position of a taint area of the test image as a detection result, wherein the two communicated taint pixels are taint pixels connected through other taint pixels.
In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809 and/or installed from the removable medium 811. When executed by the central processing unit (CPU) 801, the computer program performs the above-described functions defined in the methods of the present application.
The methods, apparatuses, and devices of the present application may be implemented in many ways, for example by software, hardware, firmware, or any combination thereof. The order described above for the steps of the methods is for illustration only; unless specifically stated otherwise, the steps of the methods of the present application are not limited to that order. Further, in some embodiments, the present application may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present application. Thus, the present application also covers a recording medium storing a program for executing the method according to the present application.
The description of the present application has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the application to the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiments were chosen and described in order to best explain the principles of the application and its practical use, and to enable others of ordinary skill in the art to understand the application in its various embodiments and with various modifications as are suited to the particular use contemplated.

Claims (30)

1. A method of stain detection for an imaging module, the method comprising:
obtaining a test image through the imaging module;
performing image enhancement processing on the test image to obtain an enhanced test image;
performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and
determining adjacent or connected stain pixels to determine the size and the position of a stain area of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels.
2. The method of claim 1, wherein performing image enhancement processing on the test image to obtain the enhanced test image comprises:
reducing a minimum pixel value in the test image to a target minimum pixel value;
increasing a maximum pixel value in the test image to a target maximum pixel value;
adjusting pixel values between the minimum pixel value and the maximum pixel value to obtain the enhanced test image by:
stretched pixel value = stretch coefficient × (pixel value − minimum pixel value) + target minimum pixel value
wherein the stretch coefficient is the ratio of the difference between the target maximum pixel value and the target minimum pixel value to the difference between the maximum pixel value and the minimum pixel value, and the stretched pixel value represents the adjusted pixel value.
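A minimal sketch of this stretch formula, assuming a target range of [0, 255] (the claim leaves the target values unspecified):

```python
import numpy as np

def stretch(image, target_min=0.0, target_max=255.0):
    """Linear contrast stretch per the claim:
    stretched = coefficient * (pixel - min) + target_min,
    where coefficient = (target_max - target_min) / (max - min)."""
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    coefficient = (target_max - target_min) / (hi - lo)
    return coefficient * (img - lo) + target_min
```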
3. The method of claim 1, wherein performing image enhancement processing on the test image to obtain the enhanced test image comprises:
performing a fourier transform on the test image to obtain a fourier spectrum;
moving a zero frequency point of the Fourier spectrum to a center position;
removing predetermined frequencies in the Fourier spectrum;
moving the zero frequency point of the Fourier spectrum back to the original position; and
performing an inverse Fourier transform on the Fourier spectrum, and taking one of the real part, the absolute value, and the square root of the pixel value of each pixel in the image obtained through the inverse Fourier transform to obtain the enhanced test image.
4. The method of claim 3, wherein removing predetermined frequencies in the Fourier spectrum comprises:
removing predetermined frequencies in the Fourier spectrum by a Gaussian low-pass filter function or a Gaussian band-pass filter function.
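One plausible NumPy reading of claims 3-4, using a Gaussian low-pass mask to remove the predetermined (high) frequencies and taking the real part afterward; the cutoff `sigma` and those two choices are illustrative assumptions, not values from the patent:

```python
import numpy as np

def fourier_enhance(test_image, sigma=3.0):
    """Sketch of claims 3-4: FFT, move zero frequency to the center,
    remove frequencies with a Gaussian low-pass mask, move zero frequency
    back, inverse FFT, and take the real part."""
    spectrum = np.fft.fft2(test_image)
    centered = np.fft.fftshift(spectrum)         # zero-frequency point to center
    h, w = test_image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    gauss_lp = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian low-pass filter mask
    filtered = centered * gauss_lp               # high frequencies removed
    restored = np.fft.ifftshift(filtered)        # zero-frequency point back
    return np.real(np.fft.ifft2(restored))       # take the real part
```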
5. The method of any of claims 1-4, further comprising, prior to performing image enhancement processing on the test image to obtain the enhanced test image:
performing dimensionality reduction on the test image by one of a region-average dimensionality reduction method, a downsampling dimensionality reduction method, and a bilinear dimensionality reduction method.
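The region-average variant of this dimensionality reduction might look like the following sketch; the block size of 2 is an assumption for illustration:

```python
import numpy as np

def area_average_reduce(image, block=2):
    """Region-average dimensionality reduction (one of the three methods
    named in the claim): each block x block tile is replaced by its mean."""
    h, w = image.shape
    h2, w2 = h // block, w // block
    # Trim any remainder so the image tiles evenly, then average each tile.
    trimmed = image[:h2 * block, :w2 * block].astype(np.float64)
    return trimmed.reshape(h2, block, w2, block).mean(axis=(1, 3))
```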
6. The method of claim 5, wherein the method further comprises:
expanding the boundary of the dimension-reduced test image outward by a predetermined number of pixels,
wherein the pixel values of the pixels in the extended area are determined by:
determining an optical center of the test image;
obtaining a luminance falloff relation of the imaging module according to the pixel values of pixels in the test image, their distances from the optical center, and the luminance value at the optical center; and
determining the pixel values of the pixels in the extended area according to the luminance falloff relation and the distances between the pixels in the extended area and the optical center.
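A sketch of this border extension under two illustrative assumptions not fixed by the claim: the optical center is taken to be the image center, and the luminance falloff is modeled as quadratic in the distance from it (brightness = center value + k·r²), with k fitted by least squares:

```python
import numpy as np

def extend_border(image, pad=1):
    """Sketch of claim 6: extend the dimension-reduced image border by
    `pad` pixels, filling the extension from a fitted luminance-falloff
    relation. Assumes an odd-sized image with the optical center at the
    image center and a quadratic falloff model."""
    h, w = image.shape
    cy, cx = (h - 1) // 2, (w - 1) // 2          # assumed optical center
    center_value = image[cy, cx]                 # luminance at the optical center
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((yy - cy) ** 2 + (xx - cx) ** 2).astype(np.float64)
    # Least-squares fit of k in: brightness - center_value = k * r^2.
    k = np.sum(r2 * (image - center_value)) / np.sum(r2 * r2)
    out = np.empty((h + 2 * pad, w + 2 * pad))
    out[pad:pad + h, pad:pad + w] = image
    YY, XX = np.mgrid[0:h + 2 * pad, 0:w + 2 * pad]
    R2 = (YY - (cy + pad)) ** 2 + (XX - (cx + pad)) ** 2
    mask = np.ones(out.shape, dtype=bool)
    mask[pad:pad + h, pad:pad + w] = False       # only fill the extension ring
    out[mask] = center_value + k * R2[mask]
    return out
```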
7. The method of claim 5, wherein the method further comprises:
expanding the boundary of the dimension-reduced test image outward by a predetermined number of pixels,
wherein the pixel values of the pixels in the extended area are determined according to the pixels at the boundary of the dimension-reduced test image or the pixels within a predetermined range of the boundary.
8. The method of any one of claims 1-4, further comprising:
adjusting the pixel value of each pixel in the binarized test image according to the pixel values within a predetermined range around that pixel.
9. The method as claimed in claim 8, wherein adjusting the pixel value of each pixel in the binarized test image based on pixel values within a predetermined range around each pixel in the binarized test image comprises:
setting the average pixel value within a predetermined range around each pixel in the binarized test image as the pixel value of that pixel; or
performing a weighted average of the pixel values within a predetermined range around each pixel in the binarized test image, and determining the pixel value of that pixel according to the result of the weighted average.
10. The method as claimed in claim 8, wherein adjusting the pixel value of each pixel in the binarized test image based on pixel values within a predetermined range around each pixel in the binarized test image comprises:
setting the median of the pixel values within a predetermined range around each pixel in the binarized test image as the pixel value of that pixel.
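The median variant of this filtering step can be sketched as follows; the window radius of 1 (a 3×3 window, clipped at the image border) is an example value:

```python
import numpy as np

def median_filter(binarized, radius=1):
    """Median filtering per the claim: each pixel of the binarized test
    image is replaced by the median of the pixel values within the
    surrounding (2*radius+1) x (2*radius+1) window, which removes
    isolated false stain pixels."""
    h, w = binarized.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            # Clip the window at the image border.
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = np.median(binarized[y0:y1, x0:x1])
    return out
```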
11. The method of claim 5, wherein the method further comprises:
for each pixel in the dimension-reduced test image, comparing the pixel value of the pixel with the average pixel value of other pixels in a predetermined window around the pixel,
judging whether the pixel is a stain pixel according to the comparison result;
finding stain pixels adjacent to or connected with the stain pixel to obtain the size and the position of the stain area; and
the obtained size and position are output as a luminance difference detection result.
12. The method of claim 11, wherein the method further comprises:
combining the detection result and the luminance difference detection result as a final detection result.
13. The method of claim 11, wherein comparing the pixel value of the pixel to an average pixel value of other pixels within a predetermined window around the pixel comprises:
summing pixel values of pixels within a predetermined window around the pixel to obtain a sum of pixel values within the window; and
subtracting the pixel value of the pixel from the sum of the pixel values within the window and dividing by the number of other pixels within a predetermined window around the pixel to obtain an average pixel value of the other pixels within the predetermined window around the pixel,
wherein, when the predetermined window of a pixel overlaps the predetermined window of another pixel, the in-window pixel value sum of the pixel is obtained from the in-window pixel value sum of the other pixel by subtracting the pixel value sum over the region that has left the window and adding the pixel value sum over the region that has newly entered the window.
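The incremental window-sum trick of this claim (subtract what leaves the window, add what enters, instead of re-summing the whole window) can be sketched as follows; the 3×3 window is an example size:

```python
import numpy as np

def sliding_window_sums(image, win=3):
    """Incremental window sums as in the claim: when the window slides one
    column to the right, subtract the column that left the window and add
    the column that entered it."""
    img = image.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for y in range(h - win + 1):
        s = img[y:y + win, 0:win].sum()             # full sum once per row
        out[y, 0] = s
        for x in range(1, w - win + 1):
            s -= img[y:y + win, x - 1].sum()        # column leaving the window
            s += img[y:y + win, x + win - 1].sum()  # column entering the window
            out[y, x] = s
    return out

def window_mean_excluding_center(image, win=3):
    """Average of the OTHER pixels in the window: subtract the center
    pixel from the window sum, divide by the count of the other pixels."""
    sums = sliding_window_sums(image, win)
    r = win // 2
    centers = image.astype(np.float64)[r:r + sums.shape[0], r:r + sums.shape[1]]
    return (sums - centers) / (win * win - 1)
```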
14. The method of any of claims 1-4, further comprising verifying the detection result and removing stain pixels determined to be false detections from the detection result,
wherein verifying the detection result comprises verifying each stain pixel by:
determining the pixel in the test image corresponding to each stain pixel;
comparing the pixel value of each determined pixel with the average pixel value of other pixels within a predetermined window around that pixel; and
judging whether the verified stain pixel is a false detection according to the comparison result.
15. An apparatus for performing stain detection on an imaging module, characterized in that the apparatus comprises:
an image intensifier for performing image enhancement processing on the test image obtained by the imaging module to obtain an enhanced test image;
a binarizer for performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and
a stain area determiner for determining adjacent or connected stain pixels to determine the size and the position of the stain area of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels.
16. The apparatus of claim 15, wherein the image intensifier is to:
reducing a minimum pixel value in the test image to a target minimum pixel value;
increasing a maximum pixel value in the test image to a target maximum pixel value;
adjusting pixel values between the minimum pixel value and the maximum pixel value to obtain the enhanced test image by:
stretched pixel value = stretch coefficient × (pixel value − minimum pixel value) + target minimum pixel value
wherein the stretch coefficient is the ratio of the difference between the target maximum pixel value and the target minimum pixel value to the difference between the maximum pixel value and the minimum pixel value, and the stretched pixel value represents the adjusted pixel value.
17. The apparatus of claim 15, wherein the image intensifier is to:
performing a fourier transform on the test image to obtain a fourier spectrum;
moving a zero frequency point of the Fourier spectrum to a center position;
removing predetermined frequencies in the Fourier spectrum;
moving the zero frequency point of the Fourier spectrum back to the original position; and
performing an inverse Fourier transform on the Fourier spectrum, and taking one of the real part, the absolute value, and the square root of the pixel value of each pixel in the image obtained through the inverse Fourier transform to obtain the enhanced test image.
18. The apparatus of claim 17, wherein the image intensifier is to:
removing predetermined frequencies in the Fourier spectrum by a Gaussian low-pass filter function or a Gaussian band-pass filter function.
19. The apparatus of any one of claims 15-18, wherein the apparatus further comprises:
a dimensionality reducer for performing dimensionality reduction on the test image by one of a region-average dimensionality reduction method, a downsampling dimensionality reduction method, and a bilinear dimensionality reduction method.
20. The apparatus of claim 19, wherein the apparatus further comprises:
a boundary extender for extending a boundary of the dimension-reduced test image outward by a predetermined number of pixels,
wherein the pixel values of the pixels in the extended area are determined by:
determining an optical center of the test image;
obtaining a luminance falloff relation of the imaging module according to the pixel values of pixels in the test image, their distances from the optical center, and the luminance value at the optical center; and
determining the pixel values of the pixels in the extended area according to the luminance falloff relation and the distances between the pixels in the extended area and the optical center.
21. The apparatus of claim 19, wherein the apparatus further comprises:
a boundary extender for extending a boundary of the dimension-reduced test image outward by a predetermined number of pixels,
wherein the pixel values of the pixels in the extended area are determined according to the pixels at the boundary of the dimension-reduced test image or the pixels within a predetermined range of the boundary.
22. The apparatus of any one of claims 15-18, wherein the apparatus further comprises:
an image filter for adjusting the pixel value of each pixel in the binarized test image according to the pixel values within a predetermined range around that pixel.
23. The apparatus of claim 22, wherein the image filter is to:
setting the average pixel value within a predetermined range around each pixel in the binarized test image as the pixel value of that pixel; or
performing a weighted average of the pixel values within a predetermined range around each pixel in the binarized test image, and determining the pixel value of that pixel according to the result of the weighted average.
24. The apparatus of claim 22, wherein the image filter is to:
setting the median of the pixel values within a predetermined range around each pixel in the binarized test image as the pixel value of that pixel.
25. The apparatus of claim 19, further comprising a parallel detection executor to:
for each pixel in the dimension-reduced test image, comparing the pixel value of the pixel with the average pixel value of other pixels in a predetermined window around the pixel,
judging whether the pixel is a stain pixel according to the comparison result;
finding stain pixels adjacent to or connected with the stain pixel to obtain the size and the position of the stain area; and
the obtained size and position are output as a luminance difference detection result.
26. The apparatus of claim 25, wherein the apparatus further comprises a combiner to:
combining the detection result and the luminance difference detection result as a final detection result.
27. The apparatus of claim 25, wherein the parallel detection executor is to compare the pixel value of the pixel to the average pixel value of other pixels within a predetermined window around the pixel by:
summing pixel values of pixels within a predetermined window around the pixel to obtain a sum of pixel values within the window; and
subtracting the pixel value of the pixel from the sum of the pixel values within the window and dividing by the number of other pixels within a predetermined window around the pixel to obtain an average pixel value of the other pixels within the predetermined window around the pixel,
wherein, when the predetermined window of a pixel overlaps the predetermined window of another pixel, the in-window pixel value sum of the pixel is obtained from the in-window pixel value sum of the other pixel by subtracting the pixel value sum over the region that has left the window and adding the pixel value sum over the region that has newly entered the window.
28. The apparatus according to any of claims 15-18, further comprising a verifier for verifying the detection result and removing stain pixels determined to be false detections from the detection result,
wherein verifying the detection result comprises verifying each stain pixel by:
determining the pixel in the test image corresponding to each stain pixel;
comparing the pixel value of each determined pixel with the average pixel value of other pixels within a predetermined window around that pixel; and
judging whether the verified stain pixel is a false detection according to the comparison result.
29. A system for stain detection of an imaging module, the system comprising:
a processor; and
a memory coupled to the processor and storing machine-readable instructions executable by the processor to:
obtaining a test image through the imaging module;
performing image enhancement processing on the test image to obtain an enhanced test image;
performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and
determining adjacent or connected stain pixels to determine the size and the position of the stain area of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels.
30. A non-transitory machine-readable storage medium storing machine-readable instructions executable by a processor to:
obtaining a test image through the imaging module;
performing image enhancement processing on the test image to obtain an enhanced test image;
performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set to either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and
determining adjacent or connected stain pixels to determine the size and the position of the stain area of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels.
CN201910006562.XA 2019-01-04 2019-01-04 Method, device, system and storage medium for detecting stain of imaging module Active CN111476750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910006562.XA CN111476750B (en) 2019-01-04 2019-01-04 Method, device, system and storage medium for detecting stain of imaging module

Publications (2)

Publication Number Publication Date
CN111476750A true CN111476750A (en) 2020-07-31
CN111476750B CN111476750B (en) 2023-09-26

Family

ID=71743159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910006562.XA Active CN111476750B (en) 2019-01-04 2019-01-04 Method, device, system and storage medium for detecting stain of imaging module

Country Status (1)

Country Link
CN (1) CN111476750B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862832A (en) * 2020-12-31 2021-05-28 重庆盛泰光电有限公司 Dirt detection method based on concentric circle segmentation positioning
CN112967208A (en) * 2021-04-23 2021-06-15 北京恒安嘉新安全技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113838003A (en) * 2021-08-30 2021-12-24 歌尔科技有限公司 Speckle detection method, device, medium, and computer program product for image
CN116008294A (en) * 2022-12-13 2023-04-25 无锡微准科技有限公司 Key cap surface particle defect detection method based on machine vision

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0518909A (en) * 1991-07-15 1993-01-26 Fuji Electric Co Ltd Apparatus for inspecting inner surface of circular container
JP2003032490A (en) * 2001-07-11 2003-01-31 Ricoh Co Ltd Image processing apparatus
JP2008170325A (en) * 2007-01-12 2008-07-24 Seiko Epson Corp Stain flaw detection method and stain flaw detection device
JP2008232639A (en) * 2007-03-16 2008-10-02 Seiko Epson Corp Stain defect detection method and device
JP2008241407A (en) * 2007-03-27 2008-10-09 Mitsubishi Electric Corp Defect detecting method and defect detecting device
JP2008292256A (en) * 2007-05-23 2008-12-04 Fuji Xerox Co Ltd Device, method and program for image quality defect detection
JP2010055815A (en) * 2008-08-26 2010-03-11 Sony Corp Fuel cartridge, fuel cell and electronic equipment
US7903864B1 (en) * 2007-01-17 2011-03-08 Matrox Electronic Systems, Ltd. System and methods for the detection of irregularities in objects based on an image of the object
CN104104945A (en) * 2014-07-22 2014-10-15 西北工业大学 Star sky image defective pixel robustness detection method
CN104867159A (en) * 2015-06-05 2015-08-26 北京大恒图像视觉有限公司 Stain detection and classification method and device for sensor of digital camera
KR20160108644A (en) * 2015-03-04 2016-09-20 주식회사 에이치비테크놀러지 Device for detecting defect of device
CN106412573A (en) * 2016-10-26 2017-02-15 歌尔科技有限公司 Method and device for detecting lens stain
CN106815821A (en) * 2017-01-23 2017-06-09 上海兴芯微电子科技有限公司 The denoising method and device of near-infrared image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010055815A1 (en) * 2008-11-13 2010-05-20 株式会社 日立メディコ Medical image-processing device and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
廖苗; 刘毅志; 欧阳军林; 余建勇; 肖文辉; 彭理: "Automatic detection of Mura defects on mobile phone TFT-LCD screens based on adaptive local enhancement", Chinese Journal of Liquid Crystals and Displays, no. 06, pages 121-122 *
胡章芳, Beijing: Beihang University Press *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862832A (en) * 2020-12-31 2021-05-28 重庆盛泰光电有限公司 Dirt detection method based on concentric circle segmentation positioning
CN112967208A (en) * 2021-04-23 2021-06-15 北京恒安嘉新安全技术有限公司 Image processing method and device, electronic equipment and storage medium
CN112967208B (en) * 2021-04-23 2024-05-14 北京恒安嘉新安全技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113838003A (en) * 2021-08-30 2021-12-24 歌尔科技有限公司 Speckle detection method, device, medium, and computer program product for image
CN113838003B (en) * 2021-08-30 2024-04-30 歌尔科技有限公司 Image speckle detection method, apparatus, medium and computer program product
CN116008294A (en) * 2022-12-13 2023-04-25 无锡微准科技有限公司 Key cap surface particle defect detection method based on machine vision
CN116008294B (en) * 2022-12-13 2024-03-08 无锡微准科技有限公司 Key cap surface particle defect detection method based on machine vision

Also Published As

Publication number Publication date
CN111476750B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
CN110766736B (en) Defect detection method, defect detection device, electronic equipment and storage medium
CN111476750B (en) Method, device, system and storage medium for detecting stain of imaging module
CN108805023B (en) Image detection method, device, computer equipment and storage medium
US7783103B2 (en) Defect detecting device, image sensor device, image sensor module, image processing device, digital image quality tester, and defect detecting method
CN108090886B (en) High dynamic range infrared image display and detail enhancement method
CN109191387B (en) Infrared image denoising method based on Butterworth filter
CN112381727B (en) Image denoising method and device, computer equipment and storage medium
CN113781406B (en) Scratch detection method and device for electronic component and computer equipment
CN117152165B (en) Photosensitive chip defect detection method and device, storage medium and electronic equipment
CN107895371B (en) Textile flaw detection method based on peak coverage value and Gabor characteristics
Ma et al. An automatic detection method of Mura defects for liquid crystal display
CN114298985B (en) Defect detection method, device, equipment and storage medium
CN113744142B (en) Image restoration method, electronic device and storage medium
CN111415365B (en) Image detection method and device
Zhang et al. A LCD screen Mura defect detection method based on machine vision
CN113763380B (en) Vector gradient-based reference-free image definition evaluation method
CN115018817A (en) Scratch detection method, scratch detection device, electronic equipment and readable storage medium
Prabha et al. Defect detection of industrial products using image segmentation and saliency
CN115564727A (en) Method and system for detecting abnormal defects of exposure development
CN112541507B (en) Multi-scale convolutional neural network feature extraction method, system, medium and application
CN114998186A (en) Image processing-based method and system for detecting surface scab defect of copper starting sheet
US10679336B2 (en) Detecting method, detecting apparatus, and computer readable storage medium
CN114596210A (en) Noise estimation method, device, terminal equipment and computer readable storage medium
CN113674144A (en) Image processing method, terminal equipment and readable storage medium
CN113962907B (en) Image denoising method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant