CN110288566B - Target defect extraction method - Google Patents

Target defect extraction method

Info

Publication number
CN110288566B
CN110288566B (application CN201910434060.7A)
Authority
CN
China
Prior art keywords
gray
histogram
target image
gray level
intersection point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910434060.7A
Other languages
Chinese (zh)
Other versions
CN110288566A (en)
Inventor
刘畅
易礼燕
周宇华
张玉成
周一青
石晶林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sylincom Technology Co ltd
Original Assignee
Beijing Sylincom Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sylincom Technology Co ltd filed Critical Beijing Sylincom Technology Co ltd
Priority to CN201910434060.7A priority Critical patent/CN110288566B/en
Publication of CN110288566A publication Critical patent/CN110288566A/en
Application granted granted Critical
Publication of CN110288566B publication Critical patent/CN110288566B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/77 — Retouching; Inpainting; Scratch removal
    • G06T7/00 — Image analysis
    • G06T7/0002 — Inspection of images, e.g. flaw detection
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/136 — Segmentation; Edge detection involving thresholding
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20036 — Morphological image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a target defect extraction method, which comprises the following steps: step 1) selecting data from an actual gray level histogram of a target image for fitting to obtain a standard gray level histogram of the target image; step 2) acquiring a first intersection point and a last intersection point of the actual gray level histogram and the standard gray level histogram; step 3) determining two gray values as threshold values according to the first intersection point and the last intersection point; and performing threshold segmentation on the target image according to the threshold to obtain suspected defects. The method can extract the suspected defects more completely, has higher precision and reduces the phenomenon of over-segmentation.

Description

Target defect extraction method
Technical Field
The invention relates to the technical field of image processing, in particular to a method for extracting suspected defects of a target from an image containing the target.
Background
With the development of intelligent, industrialized and mechanized production, bearings have become indispensable components of mechanical equipment. Because bearings are used in almost every area of daily life and industry, a problematic bearing creates a hidden risk of mechanical faults or even mechanical failure, causing losses of time and money and, in serious cases, a chain of accidents; ensuring bearing quality is therefore critically important. To this end, the common practice at present is to detect defects in captured bearing images so that problematic bearings can be rejected and replaced.
At present, defect detection on a target in an image generally comprises two steps: defect extraction and defect identification, where defect extraction segments the suspected defects of the target from the image background. If the suspected defects obtained by defect extraction are incomplete, defects are missed and the overall result of defect detection suffers. A common defect extraction approach is threshold segmentation, which can be further divided into single-threshold and multi-threshold segmentation. Single-threshold methods (such as the OTSU method and the maximum-entropy method) can only extract suspected defects at either low or high gray levels, which may cause defects to be missed; multi-threshold segmentation with more than two thresholds is computationally complex and has poor real-time performance, and too many thresholds may also make the extracted defect information incomplete.
Disclosure of Invention
In order to solve the above problems in the prior art, according to an embodiment of the present invention, there is provided a target defect extraction method, including:
step 1) selecting data from an actual gray level histogram of a target image for fitting to obtain a standard gray level histogram of the target image;
step 2) acquiring a first intersection point and a last intersection point of the actual gray level histogram and the standard gray level histogram;
step 3) determining two gray values as threshold values according to the first intersection point and the last intersection point; and performing threshold segmentation on the target image according to the threshold to obtain suspected defects.
In the above method, in step 3), determining two gray values as thresholds according to the first intersection and the last intersection includes: in the actual gray level histogram, gray levels corresponding to the maximum number of pixels are searched for on the left side of the first intersection point and on the right side of the last intersection point, and the gray levels are respectively used as a minimum threshold and a maximum threshold.
In the above method, step 1) may include: starting from the gray value corresponding to the maximum pixel count of the actual gray level histogram, first extending by one gray level to the right; if the cumulative number of pixels covered by the extension is smaller than a predetermined threshold, extending by one gray level to the left; repeating this alternating process until the cumulative number of pixels covered by the extension is not smaller than the predetermined threshold; and performing Gaussian fitting on the data obtained by the extension to obtain a standard gray level histogram of the target image. The predetermined threshold is 80% of the total number of pixels in the target image.
In the above method, performing Gaussian fitting on the data obtained by the extension includes: converting the Gaussian function into a nonlinear quadratic equation; fitting the nonlinear quadratic equation using a least-squares polynomial curve; and performing inverse derivation on the nonlinear quadratic equation to obtain the Gaussian function.
In the above method, in step 2), the first intersection point is obtained according to the following formula:

gray_low = min{ i ∈ [1, 255] : Hist[i] > Hist_fit[i] and Hist[i+1] < Hist_fit[i+1] }

wherein gray_low represents the first intersection point of the actual gray level histogram and the standard gray level histogram; i represents a gray value, an increasing variable from 1 to 255; Hist[i] represents the number of pixels in the actual gray histogram at gray value i; and Hist_fit[i] represents the number of pixels in the standard gray histogram at gray value i.

The last intersection point is obtained according to the following formula:

gray_high = max{ i ∈ [1, 255] : Hist[i] > Hist_fit[i] and Hist[i−1] < Hist_fit[i−1] }

wherein gray_high represents the last intersection point of the actual gray level histogram and the standard gray level histogram; i represents a gray value, a decreasing variable from 255 to 0; Hist[i] represents the number of pixels in the actual gray histogram at gray value i; and Hist_fit[i] represents the number of pixels in the standard gray histogram at gray value i.
The method further comprises the following steps before the step 1): extracting a target image; and carrying out illumination correction on the target image.
In the above method, extracting the target image includes: segmenting a target image from an image containing the target by using threshold segmentation; and performing morphological operation on the segmented target image.
The above method may further comprise: before the illumination correction is carried out on the target image, whether the area and the roundness of the target image meet the preset standards or not is judged, and if not, the defect extraction is ended.
There is also provided, in accordance with an embodiment of the present invention, electronic apparatus including: one or more processors; a storage device for storing one or more computer programs that, when executed by the one or more processors, cause the electronic device to implement the target defect extraction method described above.
The embodiment of the invention has the following beneficial effects:
according to the method, part of data of the actual gray level histogram of the target image is selected for fitting to obtain the standard gray level histogram, so that the intersection point of the actual gray level histogram and the standard gray level histogram is obtained, and the minimum threshold value and the maximum threshold value are searched in the actual gray level histogram according to the intersection point for carrying out threshold segmentation so as to obtain the suspected defect. The method can extract suspected defects more completely, has higher precision and reduces the phenomenon of over-segmentation; on one hand, the problem of missed extraction caused by single threshold segmentation is avoided, and on the other hand, the problem of incomplete defect information caused by excessive thresholds is prevented. In addition, the invention utilizes the gray level envelope of the histogram, and the calculated amount is smaller; and two thresholds are adopted for threshold segmentation, so that the running time of the method meets the requirement of industrial real-time property, and the method has good practical value.
Drawings
Example embodiments will be described in detail below with reference to the attached drawings, which are intended to depict example embodiments and should not be construed as limiting the intended scope of the claims. The drawings are not to be considered as drawn to scale unless explicitly indicated.
FIG. 1 shows a flow diagram of a target defect extraction method according to one embodiment of the invention;
FIG. 2 illustrates a schematic view of a target image obtained by segmenting a bearing roller from an original image according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of a resulting image after illumination correction of the target image of FIG. 2;
FIG. 4 illustrates an actual grayscale histogram of the target image of FIG. 3;
FIG. 5 illustrates a comparison of an actual grayscale histogram of a target image to a standard grayscale histogram in accordance with one embodiment of the invention;
fig. 6 is a diagram illustrating suspected defects obtained by threshold segmentation using intersection points as thresholds according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The gray histogram of an image describes the statistical characteristics of the image over 256 gray levels: its abscissa represents the gray values of the 256 gray levels, running from 0 to 255 left to right, and its ordinate represents the number of pixels at each gray value or the frequency with which pixels occur (herein, the ordinate represents the number of pixels at each gray value). Although perfectly uniform illumination is difficult to achieve when capturing an image, the gray histogram of a target image (e.g., an image of a bearing roller) tends to be symmetric about a certain gray value, i.e., to follow a Gaussian distribution. When the target has a defect, however, the envelope of the actual gray histogram of the target image deviates from a standard Gaussian distribution. The inventors found through extensive experiments that, compared with the standard Gaussian distribution (the distribution exhibited by the standard gray histogram of a defect-free target image), the gray histogram of a defective target image (hereinafter also called the actual gray histogram) has more pixels at the gray levels of the defect than the standard gray histogram does. Therefore, if a standard gray histogram of the target image is derived and compared with the actual gray histogram, and the thresholds for threshold segmentation are determined from their intersection points, suspected defects can be obtained with higher accuracy. In addition, the inventors found that, in the actual gray histogram of a target image containing only small defects, the gray levels with the largest pixel counts contain almost no defect pixels.
In view of the above, according to an embodiment of the present invention, a method for extracting a target defect is provided. In summary, the target defect extraction method includes: selecting partial data from the actual gray level histogram of the target image to perform Gaussian fitting to obtain a standard gray level histogram of the target image; acquiring a first intersection point and a last intersection point of the actual gray level histogram and the standard gray level histogram; determining two gray values as threshold values according to the first intersection point and the last intersection point; and performing threshold segmentation on the target image according to the threshold to obtain suspected defects.
Fig. 1 schematically shows a target defect extraction method according to an embodiment of the present invention, and now with reference to fig. 1, the steps of the method are described in detail:
and S11, extracting a target image.
An object image is extracted from the captured image containing the object (i.e., the object to be detected for defects, such as a bearing roller, etc.). According to one embodiment of the present invention, the ROI (region of interest) is obtained by segmenting the target image from the original image by a threshold segmentation method.
According to one embodiment of the present invention, image segmentation may be performed using existing threshold segmentation techniques. According to another embodiment of the present invention, morphological operations may further be performed on the obtained target image (a sketch is given below): first, an opening operation is applied to the target image to remove spurious information connected to the target; then, a closing operation is applied to reconnect the target information and avoid losing it as far as possible.
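By way of illustration, the following is a minimal sketch of this preprocessing step, assuming OpenCV, an Otsu threshold for the initial segmentation (the patent does not prescribe a specific thresholding rule) and an example 5×5 structuring element; the function name and kernel size are assumptions of the sketch.

import cv2
import numpy as np

def extract_target_roi(image_gray: np.ndarray) -> np.ndarray:
    """Segment the target (e.g. a bearing roller) from the background and
    clean the resulting mask with an opening followed by a closing."""
    # Initial threshold segmentation; Otsu is used here purely as an example.
    _, mask = cv2.threshold(image_gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))  # assumed size
    # Opening removes small structures attached to the target.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Closing reconnects the target region and fills small gaps.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Keep only the target pixels (the ROI); background pixels become 0.
    return cv2.bitwise_and(image_gray, image_gray, mask=mask)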
Step S12: judging whether the area and roundness of the target image meet the preset standards; if not, outputting a corresponding message and ending the defect extraction; if so, executing step S13.
Step S12 is used to determine whether the target image has a large-area problem. When the target contains a large-area problem, such as large-area rust or a large-area defect, complete target information may not be obtained even after morphological operations are applied to the threshold-segmented target image (see step S11), and accurate suspected defects therefore cannot be obtained. For this reason, the segmented target image is checked against area and roundness features (a sketch follows below). According to an embodiment of the present invention, it is checked whether the area and roundness of the target in the target image fall within predetermined ranges; if they do, defect extraction continues; otherwise, a message indicating that the target itself is problematic is output and defect extraction ends.
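As an illustration of this check, here is a minimal sketch assuming the common circularity measure 4πA/P² (equal to 1 for a perfect circle) computed from OpenCV contours; the acceptance ranges are placeholder values, since the patent does not specify the predetermined standards.

import cv2
import numpy as np

def target_shape_ok(mask: np.ndarray,
                    area_range=(50_000, 500_000),   # assumed example range (pixels)
                    roundness_min=0.85) -> bool:    # assumed example value
    """Check whether the area and roundness of the segmented target are acceptable."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    largest = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(largest)
    perimeter = cv2.arcLength(largest, True)
    if perimeter == 0:
        return False
    roundness = 4.0 * np.pi * area / (perimeter ** 2)  # 1.0 for a perfect circle
    return area_range[0] <= area <= area_range[1] and roundness >= roundness_min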
Step S13: performing illumination correction on the target image.
The target is usually cleaned thoroughly before defect detection, which largely eliminates interference from dust or fibers; the remaining interference in the target image comes mainly from the illumination and the camera, and typically shows up as uneven lighting across the image, as shown in Fig. 2. Such interference can reduce the accuracy of defect extraction. Therefore, according to an embodiment of the present invention, illumination correction is performed on the target image to reduce the influence of illumination: an illumination component is extracted from the target image using large-kernel median filtering (for example, a 21 × 21 median-filter kernel), and this component is then removed from the target image to obtain the illumination-corrected target image, so that the overall illumination of the target image is as uniform as possible. Specifically, the target image is regarded as the product of an illumination component and a reflection component; the illumination component is estimated with the median filter, interference regions in the illumination component are removed, and finally the reflection component is computed from the target image and the illumination component and used as the illumination-corrected target image (see the sketch below). Fig. 3 shows the target image obtained by performing illumination correction on the target image of Fig. 2 (the ROI obtained in step S11).
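A minimal sketch of this correction follows, assuming the illumination × reflection model described above and OpenCV's median filter with the 21 × 21 kernel mentioned in the text; the final rescaling to an 8-bit image is an assumption of the sketch.

import cv2
import numpy as np

def correct_illumination(target: np.ndarray) -> np.ndarray:
    """Estimate the illumination component with a large median filter and
    remove it, keeping the reflection component as the corrected image."""
    # A large median-filter kernel captures the slowly varying illumination.
    illumination = cv2.medianBlur(target, 21).astype(np.float32)
    illumination = np.maximum(illumination, 1.0)     # avoid division by zero

    # target = illumination * reflection  =>  reflection = target / illumination
    reflection = target.astype(np.float32) / illumination

    # Rescale the reflection component back to an 8-bit image (assumed normalization).
    reflection = cv2.normalize(reflection, None, 0, 255, cv2.NORM_MINMAX)
    return reflection.astype(np.uint8)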
Step S14: selecting partial data from the actual gray level histogram of the target image and performing Gaussian fitting to obtain a standard gray level histogram of the target image. According to one embodiment of the present invention, step S14 includes the following sub-steps:
and S141, selecting partial data from the actual gray level histogram of the target image.
After the preceding steps, the target image typically contains at most minor defects, or none at all. In general, when an image is captured with the light source directly above the center of the target and is then illumination-corrected, its gray histogram is approximately Gaussian, as shown in Fig. 4. The inventors found through extensive experiments that, after the above steps S11–S13, at least about 80% of the pixels in the actual gray histogram of the target image do not belong to the defective portion, i.e., about 80% of the pixels are standard and defect-free. Furthermore, from the characteristics of the gray histogram, defects generally occur at the left or right of the histogram (because defects are generally darker or brighter than the rest of the target image), so the data at gray values with larger pixel counts (larger ordinates) can be used as the standard data for constructing the standard gray histogram of the target image. In view of this, a standard gray histogram can be constructed by selecting, from the actual gray histogram of the target image, pixel data accounting for 80% of the total number of pixels in descending order of the ordinate.
According to an embodiment of the present invention, the selection extends from the peak of the actual gray histogram of the target image, i.e., from the gray value with the maximum pixel count, toward both sides until the accumulated pixels reach 80% of all pixels in the target image. Specifically:
Starting from the gray value with the maximum pixel count in the actual gray level histogram of the target image, first extend by one gray level to the right; if the accumulated pixel count is less than 80% of all pixels of the target image, extend by one gray level to the left; and so on, until the accumulated pixel count is not less than 80% of all pixels of the target image. The corresponding algorithm flow is described below.
the idea of the above algorithm is as follows:
after obtaining the actual gray level histogram of the target image, the maximum pixel count and the gray value at which it occurs are read from the histogram. Then, starting from that gray value, move one gray level to the right and add the pixel count of the newly covered gray level to the accumulated count. Judge: if the accumulated count is less than 80% of all pixels of the actual gray histogram, move two gray levels to the left (to the gray level symmetric about the peak); otherwise, exit the loop. If the loop is not exited, add the pixel count of the newly covered gray level to the accumulated count, and judge again: if the accumulated count is less than 80% of all pixels of the actual gray histogram, move three gray levels to the right; otherwise, exit the loop. Continuing in this way finally yields a set of data consisting of gray values and their corresponding pixel counts.
The logic of the above algorithm can be expressed as follows:

Hist_new = Hist[maxgray + (−1)^i · x]
sum_new_pix = sum_new_pix + Hist_new,  while sum_new_pix ≤ sum_pix

wherein sum_pix represents 80% of the total number of pixels of the target image; sum_new_pix represents the accumulated pixel count obtained by the expansion; Hist[j] represents the pixel count at gray value j in the actual gray histogram; maxgray represents the gray value with the maximum pixel count in the actual gray histogram; i represents a variable incremented during execution with initial value 0, i being incremented by 1 at every step; and x represents a variable incremented during execution with initial value 1, x being incremented by 1 every two steps.
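For illustration, here is a minimal Python sketch of this alternating expansion (the original pseudo-code figures are not reproduced above); the function name, the 0.8 ratio parameter and the handling of the histogram boundaries are assumptions of the sketch.

import numpy as np

def select_standard_data(hist: np.ndarray, ratio: float = 0.8):
    """Expand alternately right/left from the histogram peak until the covered
    pixels reach `ratio` of all pixels; return the selected gray values and counts."""
    total = hist.sum()
    maxgray = int(np.argmax(hist))            # gray value with the maximum pixel count
    selected = {maxgray: int(hist[maxgray])}
    covered = int(hist[maxgray])

    i, x = 0, 1                               # i: step counter, x: current offset
    while covered < ratio * total:
        g = maxgray + ((-1) ** i) * x         # right for even i, left for odd i
        if 0 <= g <= 255:
            selected[g] = int(hist[g])
            covered += int(hist[g])
        i += 1
        if i % 2 == 0:                        # the offset grows every two steps
            x += 1

    grays = np.array(sorted(selected))
    counts = np.array([selected[g] for g in grays])
    return grays, counts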
Step S142: performing Gaussian fitting on the selected data to obtain a standard gray level histogram of the target image.
Specifically, after partial data of the actual gradation histogram is obtained, gaussian fitting is performed based on these discrete data. Wherein the gaussian function is represented as follows:
y = a · exp(−(x − b)² / c)    (2)
wherein y represents the number of pixels, x represents the gray scale value, and a, b, and c are three parameters, respectively.
The gaussian function is an exponential function and there are three parameters a, b, c, which make it difficult to fit directly. In this regard, according to one embodiment of the present invention, the gaussian is logarithmically transformed into a nonlinear quadratic equation for fitting, thereby simplifying the calculation. The specific derivation formula is as follows:
ln y = ln a − (x − b)²/c = −(1/c)·x² + (2b/c)·x + (ln a − b²/c)    (3)

Let

Z = ln y,  A2 = −1/c,  A1 = 2b/c,  A0 = ln a − b²/c    (4)
Finally, the following is obtained:
Z = A2·x² + A1·x + A0    (5)
thus, the gaussian function is converted to a general nonlinear quadratic equation.
For this nonlinear quadratic equation, according to one embodiment of the invention, a least-squares polynomial curve fit is used. Specifically:

First, from the partial data selected in step S141, the sum of squared deviations between the log-transformed data ln Hist[x_j] and the fitted values Z(x_j) is computed as follows:

R² = Σ_j ( ln Hist[x_j] − Z(x_j) )² = Σ_j ( ln Hist[x_j] − (A2·x_j² + A1·x_j + A0) )²    (6)

wherein x_j represents a gray value, i.e., x_j = j, and Hist[x_j] represents the number of pixels in the actual gray histogram at gray value j.

Then, in order to minimize R², the partial derivatives of the above equation with respect to A0, A1 and A2 are set to zero, giving:

∂R²/∂A_k = −2 Σ_j ( ln Hist[x_j] − (A2·x_j² + A1·x_j + A0) ) · x_j^k = 0,  k = 0, 1, 2    (7)

Simplifying equation (7) and expressing it in matrix form yields the following equation:

| n       Σ x_j    Σ x_j²  | | A0 |   | Σ Z_j       |
| Σ x_j   Σ x_j²   Σ x_j³  | | A1 | = | Σ x_j·Z_j   |    (8)
| Σ x_j²  Σ x_j³   Σ x_j⁴  | | A2 |   | Σ x_j²·Z_j  |

where Z_j = ln Hist[x_j] and n is the number of selected data points.
The optimal polynomial parameters A0, A1 and A2 are obtained from the above equation; the Gaussian parameters a, b and c are then recovered by inverting the transformation of the nonlinear quadratic equation (the Gaussian function is given in formula (2)), i.e., c = −1/A2, b = A1·c/2 and a = exp(A0 + b²/c). This completes the estimation of a, b and c and yields a fitted Gaussian function, which serves as the distribution function of the standard gray level histogram envelope of the target image.
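As an illustration of this fitting step, here is a minimal sketch assuming NumPy's least-squares polynomial fit (np.polyfit) as the solver for the quadratic of formula (5); excluding zero-count bins before taking the logarithm is an assumption of the sketch.

import numpy as np

def fit_gaussian(grays: np.ndarray, counts: np.ndarray):
    """Fit y = a*exp(-(x - b)**2 / c) to (gray value, pixel count) data by
    fitting Z = A2*x**2 + A1*x + A0 to Z = ln(counts) (formulas (2)-(5))."""
    valid = counts > 0                       # the logarithm needs positive counts
    x = grays[valid].astype(np.float64)
    z = np.log(counts[valid].astype(np.float64))

    A2, A1, A0 = np.polyfit(x, z, 2)         # least-squares quadratic in log space

    # Invert the substitution A2 = -1/c, A1 = 2b/c, A0 = ln(a) - b**2/c.
    c = -1.0 / A2
    b = A1 * c / 2.0
    a = float(np.exp(A0 + b * b / c))
    return a, b, c

def standard_histogram(a: float, b: float, c: float) -> np.ndarray:
    """Evaluate the fitted Gaussian at every gray value 0..255."""
    x = np.arange(256, dtype=np.float64)
    return a * np.exp(-((x - b) ** 2) / c)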
Step S15: acquiring the first and last intersection points of the standard gray level histogram and the actual gray level histogram of the target image.
Specifically, the difference between the actual gray level histogram envelope of the target image and the Gaussian function obtained in step S14 is accumulated as shown in the following formula, giving the cumulative difference C(x).
C(x) = Σ_{i=0}^{x} ( f_real(i) − f_fit(i) )    (9)
where i represents the gray-value summation variable, ranging over [0, x]; x represents a gray value in [0, 255]; f_real(i) represents the actual gray histogram data (number of pixels) at gray value i; and f_fit(i) represents the standard gray histogram data (number of pixels) at gray value i.
When a defect is present, f_real(x) is generally greater than f_fit(x) at the gray values x corresponding to the defect; in addition, the gray values corresponding to defects are usually lower or higher than those of the normal (defect-free) regions, i.e., the gray levels in the central region of the gray histogram generally contain no defects. Thus, according to one embodiment of the invention, the first x value that maximizes C(x) and the last x value that maximizes C(x) are selected to determine the two thresholds used for threshold segmentation in subsequent steps, i.e., the minimum and maximum thresholds. Fig. 5 schematically shows an example of the actual gray histogram and the standard gray histogram of a target image; as can be seen from Fig. 5, the first x value that maximizes C(x) and the last x value that maximizes C(x) correspond, respectively, to the first and last intersection points of the actual and standard gray histograms.
In the gray histogram, a gray value whose pixel count is 0 has no practical physical meaning, so the search range for the intersection points can be limited to gray values with non-zero pixel counts. In addition, since the gray histogram is a series of discrete data, exact intersections hardly ever exist; the discrete intersections are therefore found from the behavior of the curves where they cross, i.e., curve A lies above curve B before the crossing and below curve B after it. The first intersection of the actual gray histogram and the standard gray histogram is accordingly defined as follows:
gray_low = min{ i ∈ [1, 255] : Hist[i] > Hist_fit[i] and Hist[i+1] < Hist_fit[i+1] }    (10)

wherein gray_low represents the first intersection point of the actual gray histogram and the standard gray histogram; i represents an increasing variable (gray value) running from 1 to 255 during execution; Hist[i] represents the number of pixels in the actual gray histogram at gray value i; and Hist_fit[i] represents the number of pixels in the standard gray histogram at gray value i.
The principle of formula (10) is as follows: scanning i upward, when the pixel count of the actual gray histogram first exceeds that of the standard gray histogram at i, and at i+1 the pixel count of the standard gray histogram exceeds that of the actual gray histogram, the first intersection gray_low is set equal to i and the process ends; otherwise the process continues until the first intersection occurs.
The calculation principle of the last intersection point of the actual gray level histogram and the standard gray level histogram is similar to that of the first intersection point, and is expressed as follows:
gray_high = max{ i ∈ [1, 255] : Hist[i] > Hist_fit[i] and Hist[i−1] < Hist_fit[i−1] }    (11)

wherein gray_high represents the last intersection point of the actual gray histogram and the standard gray histogram, and i represents a decreasing variable (gray value) running from 255 to 0 during execution.
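A minimal sketch of this intersection search follows, applying the crossing conditions of formulas (10) and (11) and skipping gray values with zero pixel counts as described above; returning None when no crossing is found is an assumption of the sketch.

import numpy as np

def find_intersections(hist: np.ndarray, hist_fit: np.ndarray):
    """Return (gray_low, gray_high): first and last crossings between the actual
    histogram `hist` and the fitted standard histogram `hist_fit`."""
    gray_low = gray_high = None

    # First intersection: scan gray values upward (formula (10)).
    for i in range(1, 255):
        if hist[i] == 0:
            continue                          # skip physically meaningless empty bins
        if hist[i] > hist_fit[i] and hist[i + 1] < hist_fit[i + 1]:
            gray_low = i
            break

    # Last intersection: scan gray values downward (formula (11)).
    for i in range(255, 0, -1):
        if hist[i] == 0:
            continue
        if hist[i] > hist_fit[i] and hist[i - 1] < hist_fit[i - 1]:
            gray_high = i
            break

    return gray_low, gray_high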
Step S16: determining, in the actual gray level histogram, two gray values as thresholds according to the first intersection point and the last intersection point, the thresholds being used for threshold segmentation of the target image.
For convenience, in one embodiment, the first and last intersections may be referred to as a minimum threshold and a maximum threshold, respectively. However, the threshold thus obtained may not be accurate enough for the following reasons:
curve fitting of discrete points is a method of selecting a suitable expression to approximate discrete data and obtaining a series of approximate values. Although this method can yield a function that describes discrete points, there is some deviation. Since the standard gray level histogram obtained by using the fitting method has a certain deviation from the actual standard gray level histogram, the intersection point of the actual gray level histogram and the standard gray level histogram has a certain deviation from the actual threshold (for threshold segmentation to extract the suspected defect).
In this regard, the inventors conducted a large number of experiments and found that the suspected defects segmented in this way are nearly complete, with no defects missed, but that their total number is large. Fig. 6 shows the results of threshold segmentation of two target images using the intersection points as thresholds: the middle part of Fig. 6 is a completely segmented defect region, whereas the right part contains a large number of false defects. That is, when the intersection points are used directly as thresholds, over-segmentation may occur. Through analysis, the inventors found that the first intersection gray_low is greater than the actual minimum threshold, and the last intersection gray_high is smaller than the actual maximum threshold. The thresholds may therefore be refined further on the basis of the first and last intersection points. Furthermore, the inventors analyzed a number of defect gray histograms determined by the first and last intersections (the region before the first intersection and the region after the last intersection in the actual gray histogram), whose shapes tend to rise and then fall. The point with the largest ordinate in such a defect gray histogram is where the pixel count changes most sharply and where a defect is most likely to occur.
In view of this, according to an embodiment of the present invention, the gray values corresponding to the point with the largest ordinate (i.e. the maximum number of pixels) are searched for on the left side of the first intersection point and on the right side of the last intersection point in the actual gray histogram as the minimum threshold value and the maximum threshold value, respectively. Thus, a more accurate threshold value can be obtained for extracting the suspected defects, and meanwhile, the phenomenon of over-segmentation is reduced.
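For illustration, a minimal sketch of this refinement follows; falling back to the intersection itself when the corresponding side of the histogram is empty is an assumption of the sketch.

import numpy as np

def refine_thresholds(hist: np.ndarray, gray_low: int, gray_high: int):
    """Return (T1, T2): the gray values with the largest pixel counts on the left
    of the first intersection and on the right of the last intersection."""
    left = hist[:gray_low]                    # bins strictly left of gray_low
    right = hist[gray_high + 1:]              # bins strictly right of gray_high

    t1 = int(np.argmax(left)) if left.size and left.max() > 0 else gray_low
    t2 = (gray_high + 1 + int(np.argmax(right))
          if right.size and right.max() > 0 else gray_high)
    return t1, t2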
Step S17: performing threshold segmentation on the target image according to the two obtained thresholds to obtain suspected defects.
In one embodiment, the target image may be binarized to achieve threshold segmentation. Specifically, threshold segmentation is performed on the image according to the two obtained thresholds (the minimum threshold and the maximum threshold) to obtain a defect binary image, and thus the suspected defects, according to the following formula:
g(x, y) = 1, if f(x, y) < T1 or f(x, y) > T2;  g(x, y) = 0, otherwise    (12)

wherein T1 denotes the minimum threshold, T2 denotes the maximum threshold, f is the target image, and g is the suspected-defect image obtained after threshold segmentation.
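A minimal sketch of this final segmentation follows, assuming suspected-defect pixels are marked 255 in the binary output and that the mask is restricted to the target ROI (both assumptions of the sketch):

import numpy as np

def segment_defects(target: np.ndarray, t1: int, t2: int, roi_mask=None) -> np.ndarray:
    """Binarize the target image: pixels darker than T1 or brighter than T2 are
    marked as suspected defects (formula (12))."""
    defects = (target < t1) | (target > t2)
    if roi_mask is not None:
        defects &= roi_mask.astype(bool)      # ignore background outside the ROI
    return defects.astype(np.uint8) * 255

Chaining the sketches above — ROI extraction, illumination correction, histogram expansion, Gaussian fitting, intersection search, threshold refinement and this binarization — reproduces the overall flow of Fig. 1 under the stated assumptions.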
The inventor proves through a large number of experiments that the suspected defects of the target can be extracted more completely and the precision is higher by using the target defect extraction method provided by the invention. On the one hand, the problem of missed extraction is avoided, and on the other hand, the problem of incomplete defect information is prevented. In addition, the invention utilizes the gray level envelope of the histogram, and the calculated amount is smaller; and two thresholds are adopted for threshold segmentation, so that the running time of the method meets the requirement of industrial real-time property, and the method has good practical value.
It should be noted that some exemplary methods are depicted as flowcharts. Although a flowchart may describe the operations as being performed serially, it can be appreciated that many of the operations can be performed in parallel, concurrently, or with synchronization. In addition, the order of the operations may be rearranged. A process may terminate when an operation is completed, but may have additional steps not included in the figure or embodiment.
The above-described methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments that perform the tasks may be stored in a computer-readable medium such as a storage medium, and a processor may perform the tasks.
It should be appreciated that the software-implemented exemplary embodiment is typically encoded on some form of program storage medium or implemented over some type of transmission medium. The program storage medium may be any non-transitory storage medium such as a magnetic disk (e.g., a floppy disk or a hard drive) or an optical disk (e.g., a compact disk read only memory or "CD ROM"), and may be read only or random access. Similarly, the transmission medium may be twisted wire pairs, coaxial cable, optical fiber, or some other suitable transmission medium known to the art.
Although the present invention has been described by way of preferred embodiments, the present invention is not limited to the embodiments described herein, and various changes and modifications may be made without departing from the scope of the present invention.

Claims (9)

1. A target defect extraction method, comprising:
step 1) selecting data from an actual gray level histogram of a target image for fitting to obtain a standard gray level histogram of the target image;
step 2) acquiring a first intersection point and a last intersection point of the actual gray level histogram and the standard gray level histogram;
step 3) determining two gray values as threshold values according to the first intersection point and the last intersection point, including: in the actual gray level histogram, searching gray levels corresponding to the maximum pixel number on the left side of the first intersection point and the right side of the last intersection point, and respectively taking the gray levels as a minimum threshold and a maximum threshold; and
and performing threshold segmentation on the target image according to the threshold to obtain suspected defects.
2. The method of claim 1, wherein step 1) comprises:
starting from the gray value corresponding to the maximum pixel count of the actual gray histogram, first extending by one gray level to the right; if the sum of the pixel counts obtained by the extension is smaller than a preset threshold, then extending by one gray level to the left; and repeating this process until the sum of the pixel counts obtained by the extension is not smaller than the preset threshold;
and performing Gaussian fitting according to the data obtained by expansion to obtain a standard gray level histogram of the target image.
3. The method of claim 2, wherein the predetermined threshold is 80% of the total number of pixels in the target image.
4. The method of claim 2, wherein performing a gaussian fit based on the extended data comprises:
converting the Gaussian function into a nonlinear quadratic equation;
fitting the nonlinear quadratic equation using a least squares polynomial curve;
and performing inverse derivation on the nonlinear quadratic equation to obtain the Gaussian function.
5. The method according to claim 1, wherein in step 2), the first intersection point is obtained according to the following formula:
gray_low = min{ i ∈ [1, 255] : Hist[i] > Hist_fit[i] and Hist[i+1] < Hist_fit[i+1] }

wherein gray_low represents the first intersection of the actual gray level histogram and the standard gray level histogram; i represents a gray value, an increasing variable from 1 to 255; Hist[i] represents the number of pixels in the actual gray histogram at gray value i; Hist_fit[i] represents the number of pixels in the standard gray histogram at gray value i; and
the last intersection point is obtained according to the following formula:
gray_high = max{ i ∈ [1, 255] : Hist[i] > Hist_fit[i] and Hist[i−1] < Hist_fit[i−1] }

wherein gray_high represents the last intersection point of the actual gray level histogram and the standard gray level histogram; i represents a gray value, a decreasing variable from 255 to 1; Hist[i] represents the number of pixels in the actual gray histogram at gray value i; and Hist_fit[i] represents the number of pixels in the standard gray histogram at gray value i.
6. The method of claim 1, further comprising, prior to step 1):
extracting a target image;
and carrying out illumination correction on the target image.
7. The method of claim 6, wherein extracting a target image comprises:
segmenting a target image from an image containing the target by using threshold segmentation;
and performing morphological operation on the segmented target image.
8. The method of claim 6, further comprising:
before the illumination correction is carried out on the target image, whether the area and the roundness of the target image meet the preset standards or not is judged, and if not, the defect extraction is ended.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more computer programs that, when executed by the one or more processors, cause the electronic device to implement the method of any of claims 1-8.
CN201910434060.7A 2019-05-23 2019-05-23 Target defect extraction method Active CN110288566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910434060.7A CN110288566B (en) 2019-05-23 2019-05-23 Target defect extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910434060.7A CN110288566B (en) 2019-05-23 2019-05-23 Target defect extraction method

Publications (2)

Publication Number Publication Date
CN110288566A CN110288566A (en) 2019-09-27
CN110288566B true CN110288566B (en) 2021-12-07

Family

ID=68002443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910434060.7A Active CN110288566B (en) 2019-05-23 2019-05-23 Target defect extraction method

Country Status (1)

Country Link
CN (1) CN110288566B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991082B (en) * 2019-12-19 2023-11-28 信利(仁寿)高端显示科技有限公司 Mura quantification method based on excimer laser annealing
CN111598801B (en) * 2020-05-11 2023-04-28 苏州佳智彩光电科技有限公司 Identification method for weak Mura defect
CN111986195B (en) * 2020-09-07 2024-02-20 凌云光技术股份有限公司 Appearance defect detection method and system
CN112683533A (en) * 2020-12-18 2021-04-20 合肥工业大学 Signal enhancement method and system for bearing fault diagnosis
CN112700414A (en) * 2020-12-30 2021-04-23 广东德诚大数据科技有限公司 Blank answer detection method and system for examination paper marking
CN114581362B (en) * 2021-07-22 2023-11-07 正泰集团研发中心(上海)有限公司 Photovoltaic module defect detection method and device, electronic equipment and readable storage medium
CN116420159A (en) * 2021-11-05 2023-07-11 宁德时代新能源科技股份有限公司 Defect detection method, device and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010085096A (en) * 2008-09-29 2010-04-15 Toshiba Corp Surface inspecting device
CN105069790A (en) * 2015-08-06 2015-11-18 潍坊学院 Rapid imaging detection method for gear appearance defect
CN107610119A (en) * 2017-09-26 2018-01-19 河北工业大学 The accurate detection method of steel strip surface defect decomposed based on histogram
CN109187581A (en) * 2018-07-12 2019-01-11 中国科学院自动化研究所 The bearing finished products plate defects detection method of view-based access control model

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02268388A (en) * 1989-04-10 1990-11-02 Hitachi Ltd Character recognizing method
JP4401590B2 (en) * 2000-06-19 2010-01-20 キヤノン株式会社 Image data processing method and image data processing apparatus
TWI489420B (en) * 2012-02-29 2015-06-21 Chroma Ate Inc Method for detecting surface patterns of a sample
CN109308705B (en) * 2018-09-27 2021-11-05 上海交通大学 Real-time extraction method for image contour of welding pool
CN109740595B (en) * 2018-12-27 2022-12-30 武汉理工大学 Oblique vehicle detection and tracking system and method based on machine vision

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010085096A (en) * 2008-09-29 2010-04-15 Toshiba Corp Surface inspecting device
CN105069790A (en) * 2015-08-06 2015-11-18 潍坊学院 Rapid imaging detection method for gear appearance defect
CN107610119A (en) * 2017-09-26 2018-01-19 河北工业大学 The accurate detection method of steel strip surface defect decomposed based on histogram
CN109187581A (en) * 2018-07-12 2019-01-11 中国科学院自动化研究所 The bearing finished products plate defects detection method of view-based access control model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on bearing roller surface defect extraction based on dual-threshold adaptive segmentation; 易礼燕 et al.; Computer Science and Application (计算机科学与应用); 2019-02-13; Vol. 9, No. 2; pp. 316-319, Figs. 1-2 *

Also Published As

Publication number Publication date
CN110288566A (en) 2019-09-27

Similar Documents

Publication Publication Date Title
CN110288566B (en) Target defect extraction method
CN109242853B (en) PCB defect intelligent detection method based on image processing
CN111260616A (en) Insulator crack detection method based on Canny operator two-dimensional threshold segmentation optimization
US20230014823A1 (en) Defect detection in image space
CN114972326A (en) Defective product identification method for heat-shrinkable tube expanding process
JP2007510993A (en) Object detection in images
CN111583223A (en) Defect detection method, defect detection device, computer equipment and computer readable storage medium
CN115063430B (en) Electric pipeline crack detection method based on image processing
CN111598801B (en) Identification method for weak Mura defect
CN111046862A (en) Character segmentation method and device and computer readable storage medium
CN111429372A (en) Method for enhancing edge detection effect of low-contrast image
CN110688871A (en) Edge detection method based on bar code identification
CN111723634A (en) Image detection method and device, electronic equipment and storage medium
CN113780110A (en) Method and device for detecting weak and small targets in image sequence in real time
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN115587966A (en) Method and system for detecting whether parts are missing or not under condition of uneven illumination
CN116342586A (en) Road surface quality detection method based on machine vision
CN115272362A (en) Method and device for segmenting effective area of digital pathology full-field image
CN113129265B (en) Method and device for detecting surface defects of ceramic tiles and storage medium
CN113888456B (en) Corner detection method based on contour
CN113298775A (en) Self-priming pump double-sided metal impeller appearance defect detection method, system and medium
CN114913112A (en) Method, device and equipment for detecting double edges of wafer
CN117496109A (en) Image comparison and analysis method and device, electronic equipment and storage medium
Tabatabaei et al. A novel method for binarization of badly illuminated document images
CN113538500B (en) Image segmentation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant