CN116228746A - Defect detection method, device, electronic apparatus, storage medium, and program product - Google Patents

Defect detection method, device, electronic apparatus, storage medium, and program product

Info

Publication number
CN116228746A
Authority
CN
China
Prior art keywords: image, image block, difference, determining, candidate frame
Prior art date
Legal status: Pending
Application number
CN202310449479.6A
Other languages
Chinese (zh)
Inventor
Request not to publish name
Current Assignee
Moore Thread Intelligence Technology Shanghai Co ltd
Moore Threads Technology Co Ltd
Original Assignee
Moore Thread Intelligence Technology Shanghai Co ltd
Moore Threads Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Moore Thread Intelligence Technology Shanghai Co ltd and Moore Threads Technology Co Ltd
Priority to CN202310449479.6A
Publication of CN116228746A

Classifications

    • G: PHYSICS; G06: COMPUTING; CALCULATING OR COUNTING
    • G06T (IMAGE DATA PROCESSING OR GENERATION, IN GENERAL): G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection; G06T7/0004 Industrial image inspection; G06T7/001 Industrial image inspection using an image reference approach
    • G06N (COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS): G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks; G06N3/08 Learning methods
    • G06T: G06T5/00 Image enhancement or restoration; G06T5/20 by the use of local operators; G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T: G06T7/10 Segmentation; Edge detection; G06T7/13 Edge detection
    • G06V (IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING): G06V10/20 Image preprocessing; G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V: G06V10/20 Image preprocessing; G06V10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G06V: G06V10/40 Extraction of image or video features; G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; G06V10/443 Local feature extraction by matching or filtering
    • G06V: G06V10/70 Pattern recognition or machine learning; G06V10/74 Image or video pattern matching; G06V10/75 Organisation of the matching processes; G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V: G06V10/70 Pattern recognition or machine learning; G06V10/82 using neural networks
    • G06T: G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/20 Special algorithmic details; G06T2207/20081 Training; Learning
    • G06T: G06T2207/20 Special algorithmic details; G06T2207/20084 Artificial neural networks [ANN]
    • Y02P (CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS): Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation; Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The present disclosure relates to the field of image data processing technologies, and in particular, to a defect detection method, a defect detection apparatus, an electronic device, a storage medium, and a program product. The method comprises the following steps: acquiring an image to be detected and a template image corresponding to the image to be detected; determining a candidate frame of a defect in the image to be detected according to the image to be detected and the template image; and determining a defect detection result corresponding to the candidate frame at least according to difference information of the image block corresponding to the candidate frame before and after morphological transformation. By performing defect detection on the candidate frame using the difference information of its corresponding image block before and after morphological transformation, the method and the device can improve the accuracy of defect detection.

Description

Defect detection method, device, electronic apparatus, storage medium, and program product
The present application is a divisional application of the Chinese patent application filed on December 29, 2022, with application number 202211701346.5 and entitled "defect detection method, apparatus, electronic device, storage medium, and program product".
Technical Field
The present disclosure relates to the field of image data processing technologies, and in particular, to a defect detection method, a defect detection apparatus, an electronic device, a storage medium, and a program product.
Background
Intelligent industrial quality inspection is an important problem in the field of computer vision. How to improve the accuracy of defect detection on products is a technical problem to be solved.
Disclosure of Invention
The present disclosure provides a defect detection technique.
According to an aspect of the present disclosure, there is provided a defect detection method including:
acquiring an image to be detected and a template image corresponding to the image to be detected;
determining candidate frames of defects in the image to be detected according to the image to be detected and the template image;
and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the image block corresponding to the candidate frame before and after morphological transformation.
According to an aspect of the present disclosure, there is provided a training method of a machine learning model for defect detection, including:
acquiring a training image and a template image corresponding to the training image;
determining candidate frames of defects in the training image according to the training image and the template image;
at least inputting difference information of image blocks corresponding to the candidate frames before and after morphological transformation into a machine learning model, and obtaining defect prediction results corresponding to the candidate frames through the machine learning model;
and training the machine learning model according to the labeling information corresponding to the candidate frame and the defect prediction result.
According to an aspect of the present disclosure, there is provided a defect detecting apparatus including:
the first acquisition module is used for acquiring an image to be detected and a template image corresponding to the image to be detected;
the first determining module is used for determining candidate frames of defects in the image to be detected according to the image to be detected and the template image;
and the second determining module is used for determining a defect detection result corresponding to the candidate frame at least according to the difference information of the image block corresponding to the candidate frame before and after morphological transformation.
According to an aspect of the present disclosure, there is provided a training apparatus of a machine learning model for defect detection, including:
the second acquisition module is used for acquiring a training image and a template image corresponding to the training image;
a third determining module, configured to determine a candidate frame of a defect in the training image according to the training image and the template image;
the prediction module is used for inputting at least difference information of the image blocks corresponding to the candidate frames before and after morphological transformation into a machine learning model, and obtaining defect prediction results corresponding to the candidate frames through the machine learning model;
and the training module is used for training the machine learning model according to the labeling information corresponding to the candidate frame and the defect prediction result.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored by the memory to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to an aspect of the present disclosure, there is provided a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, wherein when the code runs in an electronic device, a processor in the electronic device performs the above-described method.
In the embodiment of the disclosure, the candidate frame of the defect in the image to be detected is determined according to the image to be detected and the template image corresponding to the image to be detected, and the defect detection result corresponding to the candidate frame is determined at least according to the difference information of the image block corresponding to the candidate frame before and after morphological transformation, so that the defect detection accuracy of the candidate frame can be improved by utilizing the difference information of the image block corresponding to the candidate frame before and after morphological transformation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flowchart of a defect detection method provided by an embodiment of the present disclosure.
FIG. 2 illustrates a flowchart of a method of training a machine learning model for defect detection provided by an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of template images in a training method of a machine learning model for defect detection provided by an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of training images and labeling data thereof in a training method of a machine learning model for defect detection provided in an embodiment of the present disclosure.
Fig. 5 shows a block diagram of a defect detection apparatus provided by an embodiment of the present disclosure.
Fig. 6 shows a block diagram of a training apparatus of a machine learning model for defect detection provided by an embodiment of the present disclosure.
Fig. 7 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
The embodiments of the disclosure provide a defect detection method, a device, an electronic device, a storage medium, and a program product, in which a candidate frame of a defect in an image to be detected is determined according to the image to be detected and a template image corresponding to the image to be detected, and a defect detection result corresponding to the candidate frame is determined at least according to difference information of the image block corresponding to the candidate frame before and after morphological transformation. Defect detection is thus performed on the candidate frame by utilizing this difference information, which can improve the accuracy of defect detection.
The defect detection method provided by the embodiment of the present disclosure is described in detail below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a defect detection method provided by an embodiment of the present disclosure. In one possible implementation, the execution body of the defect detection method may be a defect detection apparatus; for example, the defect detection method may be performed by a terminal device, a server, or another electronic device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the defect detection method may be implemented by way of a processor invoking computer readable instructions stored in a memory. As shown in Fig. 1, the defect detection method includes steps S11 to S13.
In step S11, an image to be detected and a template image corresponding to the image to be detected are acquired.
In step S12, a candidate frame of the defect in the image to be detected is determined according to the image to be detected and the template image.
In step S13, a defect detection result corresponding to the candidate frame is determined at least according to difference information of the image block corresponding to the candidate frame before and after morphological transformation.
In the embodiment of the disclosure, the image to be detected may represent an image corresponding to a target object to be subjected to defect detection. For example, image acquisition may be performed on the target object to be subjected to defect detection, so as to obtain an image to be detected. The target object may be any object to be subjected to defect detection.
In one possible implementation manner, the image to be detected is an image corresponding to a printed circuit board. In this implementation, the printed circuit board may be a rigid printed circuit board (PCB) or a flexible printed circuit board (FPC), which is not limited herein.
In this implementation manner, the candidate frame of the defect in the image to be detected is determined according to the image to be detected and the template image corresponding to the image to be detected, and the defect detection result corresponding to the candidate frame is determined at least according to the difference information of the image block corresponding to the candidate frame before and after morphological transformation. Defect detection is thus performed on the candidate frame by utilizing this difference information, which can improve the accuracy of defect detection on the printed circuit board.
In the embodiment of the disclosure, the template image corresponding to the image to be detected may represent a defect-free image corresponding to the image to be detected. By comparing the image to be detected with the template image, candidate frames of defects in the image to be detected can be determined. Wherein the candidate box may represent an area in the image to be detected that may be a defect.
In a possible implementation manner, the determining a candidate frame of the defect in the image to be detected according to the image to be detected and the template image includes: obtaining a difference image of the image to be detected and the template image; determining contours in the difference image; and determining candidate frames of the defects in the image to be detected according to the outline.
In this implementation, pixel values of the same pixel positions of the image to be detected and the template image may be compared, and a difference image of the image to be detected and the template image may be determined. In one example, the image to be detected may be represented by img_test, the template image may be represented by img_temp, and the difference image may be represented by img_diff.
In this implementation manner, a contour searching method such as findContours may be used to search the contour of the difference image, so as to obtain the contour in the difference image. After determining the contour in the difference image, a candidate box of the defect in the image to be detected may be determined according to the contour in the difference image.
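As an illustration only (the patent provides no code), a minimal OpenCV sketch of this contour search might look as follows; inverting img_diff is an assumption, since the difference image described below marks differing pixels as black (0) while findContours treats nonzero pixels as foreground:

```python
import cv2

def find_diff_contours(img_diff):
    # Differing pixels are black (0) on a white (255) background, so the image
    # is inverted before the search; findContours treats nonzero as foreground.
    contours, _ = cv2.findContours(255 - img_diff, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```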
In this implementation manner, by obtaining a difference image of the image to be detected and the template image, determining a contour in the difference image, and determining a candidate frame of a defect in the image to be detected according to the contour, the candidate frame of the defect in the image to be detected can be accurately determined by means of conventional image processing.
As an example of this implementation, the obtaining a difference image of the image to be detected and the template image includes: respectively carrying out blurring operation on the image to be detected and the template image to obtain a first blurring image corresponding to the image to be detected and a second blurring image corresponding to the template image; and determining a difference image of the image to be detected and the template image according to the first blurred image and the second blurred image. The first blurred image represents a blurred image corresponding to the image to be detected, and the second blurred image represents a blurred image corresponding to the template image.
In one example, the gaussian blur operation may be performed on the image to be detected and the template image, respectively, to obtain a first blurred image corresponding to the image to be detected and a second blurred image corresponding to the template image. The gaussian kernel of the gaussian blur operation may be 5×5, 3×3, or 7×7, etc., which is not limited herein. In one example, the first blurred image may be represented by img_test_gaussian and the second blurred image may be represented by img_temp_gaussian.
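A minimal sketch of this blurring step, assuming Python with OpenCV and grayscale inputs (the file paths are hypothetical):

```python
import cv2

# Hypothetical paths; the patent does not specify how images are acquired.
img_test = cv2.imread("img_test.png", cv2.IMREAD_GRAYSCALE)  # image to be detected
img_temp = cv2.imread("img_temp.png", cv2.IMREAD_GRAYSCALE)  # template image

# 5x5 Gaussian kernel; per the text, 3x3 or 7x7 would also work.
img_test_gaussian = cv2.GaussianBlur(img_test, (5, 5), 0)
img_temp_gaussian = cv2.GaussianBlur(img_temp, (5, 5), 0)
```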
In this example, the first blurred image corresponding to the image to be detected and the second blurred image corresponding to the template image are obtained by performing blurring operation on the image to be detected and the template image, and the difference image between the image to be detected and the template image is determined according to the first blurred image and the second blurred image, so that a smoother difference image can be obtained.
In one example, the determining a difference image between the image to be detected and the template image according to the first blurred image and the second blurred image includes: respectively carrying out binarization operation on the first blurred image and the second blurred image to obtain a first binarized image corresponding to the first blurred image and a second binarized image corresponding to the second blurred image; and determining a difference image of the image to be detected and the template image according to the first binarized image and the second binarized image. The first binarized image represents a binarized image corresponding to the first blurred image, and the second binarized image represents a binarized image corresponding to the second blurred image. In one example, the first binarized image may be represented by img_test_bina and the second binarized image may be represented by img_temp_bina.
In this example, the first blurred image and the second blurred image may be binarized by the Otsu (OTSU) method or the like, to obtain a first binarized image corresponding to the first blurred image and a second binarized image corresponding to the second blurred image.
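Continuing the sketch above, Otsu binarization in OpenCV might look like this (the threshold value returned by cv2.threshold is ignored):

```python
import cv2

# Otsu thresholding of the single-channel blurred images from the previous step.
_, img_test_bina = cv2.threshold(img_test_gaussian, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, img_temp_bina = cv2.threshold(img_temp_gaussian, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```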
In one example, the determining a difference image between the image to be detected and the template image according to the first binarized image and the second binarized image includes: for any pixel position, if the pixel values of the pixel positions in the first binarized image and the second binarized image are different, the pixel value of the pixel position in the difference image of the image to be detected and the template image is 0; for any pixel position, if the pixel values of the pixel positions in the first binarized image and the second binarized image are the same, the pixel value of the pixel position in the difference image is 255.
For example, for any pixel position, if the pixel values of the pixel positions are different (i.e., one is 0 and the other is 255) in the first binarized image and the second binarized image, the pixel value of the pixel position is 0 (i.e., black) in the difference image; for any pixel position, if the pixel values of the pixel positions in the first binarized image and the second binarized image are the same (i.e., are both 0 or are both 255), then in the difference image, the pixel values of the pixel positions are 255 (i.e., white).
In one example, the following steps may be taken to obtain the difference image img_diff: let img_diff=img_test_bina-img_temp_bina; let the pixel value in img_diff that is not 0 become 255; let the value of each pixel location in img_diff become 255 minus the corresponding pixel value.
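A sketch transcribing these three steps, assuming img_test_bina and img_temp_bina are uint8 arrays with values in {0, 255}; the signed intermediate type avoids uint8 wraparound in the subtraction:

```python
import numpy as np

img_diff = img_test_bina.astype(np.int16) - img_temp_bina.astype(np.int16)
img_diff[img_diff != 0] = 255                  # step 2: any nonzero value becomes 255
img_diff = (255 - img_diff).astype(np.uint8)   # step 3: 255 minus each pixel value
# Result: differing pixels are 0 (black), identical pixels are 255 (white).
```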
In this example, binarization operations are performed on the first blurred image and the second blurred image respectively, so as to obtain a first binarized image corresponding to the first blurred image and a second binarized image corresponding to the second blurred image, and a difference image between the image to be detected and the template image is determined according to the first binarized image and the second binarized image. In this way, interference with defect detection caused by the intensity of the ambient light under which the image to be detected is captured can be reduced.
As another example of this implementation, the obtaining a difference image of the image to be detected and the template image includes: respectively carrying out binarization operation on the image to be detected and the template image to obtain a first binarization image corresponding to the image to be detected and a second binarization image corresponding to the template image; and determining a difference image of the image to be detected and the template image according to the first binarized image and the second binarized image.
As an example of this implementation, the determining the contour in the difference image includes: performing a morphological operation on the difference image to obtain a de-interference image corresponding to the difference image; and finding contours in the de-interference image to serve as the contours in the difference image. The de-interference image may represent the difference image after interference removal.
In this example, the difference image may be subjected to a dilation (dilate) operation and/or an erosion (erode) operation, resulting in a de-interference image corresponding to the difference image. In one example, the difference image may be sequentially subjected to a dilation operation with a kernel size of 3×3, an erosion operation with a kernel size of 7×3, a dilation operation with a kernel size of 7×3, an erosion operation with a kernel size of 3×7, and a dilation operation with a kernel size of 3×7, to obtain the de-interference image corresponding to the difference image.
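A sketch of this de-interference sequence; the orientation of the rectangular kernels (rows by columns) is an assumption, chosen so that the elongated kernels suppress horizontal and vertical interference lines:

```python
import cv2
import numpy as np

def remove_interference(img_diff):
    # The dilate/erode sequence with the kernel sizes given in the example.
    ops = [
        (cv2.dilate, (3, 3)),
        (cv2.erode,  (7, 3)),
        (cv2.dilate, (7, 3)),
        (cv2.erode,  (3, 7)),
        (cv2.dilate, (3, 7)),
    ]
    out = img_diff
    for op, (rows, cols) in ops:
        out = op(out, np.ones((rows, cols), dtype=np.uint8))
    return out
```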
In this example, by performing morphological operations on the difference image, a de-interference image corresponding to the difference image is obtained, and a contour in the de-interference image is searched and used as a contour in the difference image, so that by performing morphological operations on the difference image, transverse and longitudinal interference lines in the difference image can be removed, and the accuracy of the searched contour can be improved.
As an example of this implementation, the determining, according to the contour, a candidate box of a defect in the image to be detected includes: filtering outlines meeting preset conditions in the difference images; and determining candidate frames of the defects in the image to be detected according to the filtered residual outlines in the difference image.
In this example, if any contour in the difference image satisfies a preset condition, it may be determined that the contour belongs to a preset candidate frame that does not belong to a defect, so that the contour may be filtered. The preset conditions may be empirically set, and are not limited herein.
In one example, the preset condition includes at least one of: the area of the area surrounded by the outline is smaller than a first preset area; the area of the area surrounded by the outline is larger than a second preset area, wherein the second preset area is larger than the first preset area; the aspect ratio of the surrounding rectangle of the outline is smaller than a first preset threshold value; the aspect ratio of the surrounding rectangle of the outline is larger than a second preset threshold value, wherein the second preset threshold value is larger than the first preset threshold value; the average pixel value within the bounding rectangle of the outline is greater than the preset pixel value.
The bounding rectangle of any outline may be a bounding rectangle of the outline, and sides of the bounding rectangle of the outline are parallel to a preset coordinate axis (e.g., xy axis). In some application scenarios, bounding rectangles may also be referred to as bounding boxes. For example, the bounding rectangle of the outline may be represented by (x, y, w, h), where (x, y) is the coordinates of the upper left corner of the bounding rectangle, w is the width of the bounding rectangle, and h is the height of the bounding rectangle.
For example, the first preset area area_th_low = 20 and the second preset area area_th_high = 20000. If the area of the region enclosed by any contour is smaller than 20 or larger than 20000, the contour may be filtered.
For another example, the first preset threshold is 0.1 and the second preset threshold is 10. If the aspect ratio of the bounding rectangle of any contour is less than 0.1 or greater than 10, the contour may be filtered.
As another example, the preset pixel value pixel_mean_th = 240. If the average pixel value within the bounding rectangle of any contour is greater than 240, the contour may be filtered.
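A sketch of this filtering with the example thresholds; taking the aspect ratio as width divided by height is an assumption, and the function name is hypothetical:

```python
import cv2

AREA_TH_LOW, AREA_TH_HIGH = 20, 20000   # first and second preset areas
RATIO_TH_LOW, RATIO_TH_HIGH = 0.1, 10   # first and second preset thresholds
PIXEL_MEAN_TH = 240                     # preset pixel value

def keep_contour(contour, img_diff):
    """Return False if the contour meets any preset filtering condition."""
    area = cv2.contourArea(contour)
    if area < AREA_TH_LOW or area > AREA_TH_HIGH:
        return False
    x, y, w, h = cv2.boundingRect(contour)
    if not (RATIO_TH_LOW <= w / h <= RATIO_TH_HIGH):
        return False
    if img_diff[y:y + h, x:x + w].mean() > PIXEL_MEAN_TH:
        return False
    return True
```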
In one example, the determining the candidate frame of the defect in the image to be detected according to the contours remaining after filtering in the difference image includes: for any contour remaining after filtering in the difference image, determining an enlarged rectangle corresponding to the surrounding rectangle of the contour as a candidate frame of the defect in the image to be detected, wherein the geometric center of the enlarged rectangle coincides with that of the surrounding rectangle, the width of the enlarged rectangle is a first preset multiple of the width of the surrounding rectangle, the height of the enlarged rectangle is a second preset multiple of the height of the surrounding rectangle, and the first preset multiple and the second preset multiple are both larger than 1.
The first preset multiple and the second preset multiple may be the same or different. For example, if the first preset multiple and the second preset multiple are both 2 and the surrounding rectangle of the contour is (x, y, w, h), then the corresponding enlarged rectangle is (x - w/2, y - h/2, 2×w, 2×h).
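As a sketch, this enlargement can be written directly (the function name is hypothetical):

```python
def enlarge_rect(x, y, w, h, kw=2, kh=2):
    """Concentric enlargement of (x, y, w, h) by multiples kw and kh (both > 1)."""
    return (x - (kw - 1) * w / 2, y - (kh - 1) * h / 2, kw * w, kh * h)

# With kw = kh = 2 this reproduces (x - w/2, y - h/2, 2*w, 2*h).
```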
In the above example, by filtering the contours in the difference image that meet the preset condition, and determining the candidate frame of the defect in the image to be detected according to the contours remaining after filtering in the difference image, the interference of the abnormal contours in the difference image on defect detection can be reduced.
As another example of this implementation, candidate boxes for defects may be determined from contours in the difference image, respectively. That is, in this example, the found contours may not be filtered.
In another possible implementation manner, the determining a candidate frame of the defect in the image to be detected according to the image to be detected and the template image includes: inputting the image to be detected and the template image into a pre-trained second neural network, and determining candidate frames of defects in the image to be detected through the second neural network. The second neural network is used for determining candidate frames of defects in the image to be detected based on the image to be detected and the template image.
In an embodiment of the present disclosure, the image block corresponding to the candidate frame may include at least one of: the method comprises the steps of selecting a first image block of a candidate frame on the template image, a second image block of the candidate frame on a difference image and a third image block of the candidate frame on the image to be detected. The first image block, the second image block and the third image block can be respectively determined from the template image, the difference image and the image to be detected according to the position of the candidate frame on the image to be detected. That is, the first image block may represent an image block of the candidate frame on the template image, the second image block may represent an image block of the candidate frame on the difference image, and the third image block may represent an image block of the candidate frame on the image to be detected.
Accordingly, the defect detection result corresponding to the candidate frame may be determined according to at least one of the difference information of the first image block before and after the morphological transformation, the difference information of the second image block before and after the morphological transformation, and the difference information of the third image block before and after the morphological transformation.
In one possible implementation manner, the image block corresponding to the candidate frame includes: a first image block of the candidate frame on the template image and a second image block of the candidate frame on a difference image, wherein the difference image represents a difference image of the image to be detected and the template image; the determining a defect detection result corresponding to the candidate frame at least according to difference information of the image block corresponding to the candidate frame before and after morphological transformation comprises: determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation and the difference information of the second image block before and after morphological transformation. In one example, the first image block may be represented by temp_img and the second image block by test_img.
In this example, the defect detection is performed on the candidate frame based on at least the difference information of the first image block before and after the morphological transformation and the difference information of the second image block before and after the morphological transformation, so that the accuracy of the defect detection result corresponding to the candidate frame can be improved.
As an example of this implementation manner, the determining, according to at least the difference information of the first image block before and after the morphological transformation and the difference information of the second image block before and after the morphological transformation, the defect detection result corresponding to the candidate frame includes: obtaining a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block; performing morphological operation on the first binarized image block to obtain a first morphological transformation image block corresponding to the first binarized image block; performing morphological operation on the second binarized image block to obtain a second morphological transformation image block corresponding to the second binarized image block; and determining a defect detection result corresponding to the candidate frame according to a first pixel number with different pixel values between the first binarized image block and the first morphological transformation image block and a second pixel number with different pixel values between the second binarized image block and the second morphological transformation image block.
Wherein the first binarized image block represents a binarized image block corresponding to the first image block, and the second binarized image block represents a binarized image block corresponding to the second image block. In one example, the first binarized image block may be represented by temp_img_bina and the second binarized image block may be represented by test_img_bina.
In this example, an expansion operation and/or a corrosion operation may be performed on the first binarized image block, resulting in a first morphological transformed image block corresponding to the first binarized image block; and performing expansion operation and/or corrosion operation on the second binarized image block to obtain a second morphological transformation image block corresponding to the second binarized image block. The first morphological transformation image block may represent an image block obtained by performing morphological transformation on the first binarized image block, and the second morphological transformation image block may represent an image block obtained by performing morphological transformation on the second binarized image block.
The first binarized image block and the first morphological transformed image block are compared pixel by pixel, and a first pixel number with different pixel values between the first binarized image block and the first morphological transformed image block can be determined, wherein the first pixel number represents the pixel number with different pixel values between the first binarized image block and the first morphological transformed image block. And comparing the second binarized image block with the second morphological transformation image block pixel by pixel, and determining a second pixel number with different pixel values between the second binarized image block and the second morphological transformation image block, wherein the second pixel number represents the pixel number with different pixel values between the second binarized image block and the second morphological transformation image block.
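Counting differing pixels reduces to one NumPy call; a sketch, assuming both blocks are uint8 arrays of equal shape (the function name is hypothetical):

```python
import numpy as np

def diff_pixel_count(block_bina, block_morph):
    """Number of pixels whose values differ between a binarized image block
    and its morphologically transformed version."""
    return int(np.count_nonzero(block_bina != block_morph))
```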
In this example, the defect detection result corresponding to the candidate frame is determined from the first pixel number of the first binarized image block and the first morphologically transformed image block, which are different in pixel value, and the second pixel number of the second binarized image block and the second morphologically transformed image block, which are different in pixel value, whereby the accuracy of the defect detection result corresponding to the candidate frame can be improved.
In one example, the obtaining a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block includes: respectively carrying out blurring operation on a first gray image block corresponding to the first image block and a second gray image block corresponding to the second image block to obtain a first blurring image block corresponding to the first image block and a second blurring image block corresponding to the second image block; and respectively performing binarization operation on the first blurred image block and the second blurred image block to obtain a first binarization image block corresponding to the first image block and a second binarization image block corresponding to the second image block.
In this example, the first image block and the second image block may be respectively converted into gray maps, resulting in a first gray image block corresponding to the first image block and a second gray image block corresponding to the second image block. The first gray image block may represent a gray image corresponding to the first image block, and the second gray image block may represent a gray image corresponding to the second image block. In one example, the first gray scale image block may be represented by temp_img_gray and the second gray scale image block may be represented by test_img_gray.
In this example, the first gray image block and the second gray image block may be respectively subjected to a blurring operation, so as to obtain a first blurred image block corresponding to the first image block and a second blurred image block corresponding to the second image block. The first blurred image block may represent an image obtained by performing a blurring operation on the first image block, and the second blurred image block may represent an image obtained by performing a blurring operation on the second image block. In one example, the first blurred image block may be represented by temp_img_blur and the second blurred image block by test_img_blur. In one example, the first gray scale image block and the second gray scale image block may be subjected to a gaussian blur operation, respectively, to obtain the first blurred image block and the second blurred image block. The gaussian kernel of the gaussian blur operation may be 5×5, 3×3, or 7×7, etc., which is not limited herein.
In one example, an OTSU method may be used to perform binarization operation on the first blurred image block and the second blurred image block, to obtain a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block.
In this example, by performing blurring processing before binarization, smoother processing results can be obtained.
In one example, the performing morphological operations on the first binarized image block to obtain a first morphological transformed image block corresponding to the first binarized image block includes: performing morphological operations on the first binarized image blocks based on kernels of at least two sizes to obtain at least two first morphological transformed image blocks corresponding to the first binarized image blocks; the performing morphological operation on the second binarized image block to obtain a second morphological transformation image block corresponding to the second binarized image block, including: performing morphological operations on the second binarized image blocks based on the kernels of at least two sizes to obtain at least two second morphological transformation image blocks corresponding to the second binarized image blocks; the determining the defect detection result corresponding to the candidate frame according to the first pixel number with different pixel values between the first binarized image block and the first morphological transformed image block and the second pixel number with different pixel values between the second binarized image block and the second morphological transformed image block includes: for the at least two first morphological transformation image blocks, respectively determining the pixel numbers with different pixel values between the first binarization image blocks and the first morphological transformation image blocks to obtain at least two first pixel numbers; for the at least two second morphological transformation image blocks, respectively determining the pixel numbers with different pixel values between the second binarization image blocks and the second morphological transformation image blocks to obtain at least two second pixel numbers; and determining a defect detection result corresponding to the candidate frame according to the at least two first pixel numbers and the at least two second pixel numbers.
For example, the kernel size may include at least two of 3, 5, 7, 9, 11, etc., which is not limited herein. In this example, based on a kernel of any of these sizes, the first binarized image block and the second binarized image block may be subjected to a dilation operation and/or an erosion operation, respectively, resulting in the corresponding first morphological transformed image block and second morphological transformed image block.
In this example, morphological operations are performed on the first binarized image blocks based on kernels of at least two sizes to obtain at least two first morphological transformed image blocks corresponding to the first binarized image blocks, morphological operations are performed on the second binarized image blocks based on kernels of at least two sizes to obtain at least two second morphological transformed image blocks corresponding to the second binarized image blocks, for the at least two first morphological transformed image blocks, the number of pixels with different pixel values between the first binarized image blocks and the first morphological transformed image blocks is respectively determined to obtain at least two first pixel numbers, for the at least two second morphological transformed image blocks, the number of pixels with different pixel values between the second binarized image blocks and the second morphological transformed image blocks is respectively determined to obtain at least two second pixel numbers, and thus the determined morphological difference information can more accurately reflect the defect characteristics in the candidate frame; by determining the defect detection result corresponding to the candidate frame according to the at least two first pixel numbers and the at least two second pixel numbers, more accurate defect detection results can be determined for the candidate frame.
As one example of this implementation, the first morphological transformation image block includes a first dilation image block and a first erosion image block, and the second morphological transformation image block includes a second dilation image block and a second erosion image block; the performing morphological operation on the first binarized image block to obtain a first morphological transformed image block corresponding to the first binarized image block, including: performing expansion operation on the first binarized image block to obtain a first expanded image block corresponding to the first binarized image block; performing corrosion operation on the first binarized image block to obtain a first corrosion image block corresponding to the first binarized image block; the performing morphological operation on the second binarized image block to obtain a second morphological transformation image block corresponding to the second binarized image block, including: performing expansion operation on the second binarized image block to obtain a second expanded image block corresponding to the second binarized image block; and executing corrosion operation on the second binarized image block to obtain a second corrosion image block corresponding to the second binarized image block.
In this example, by performing the expansion operation and the erosion operation on the first binarized image block and the second binarized image block, respectively, and determining the difference information based on the corresponding expanded image block and the eroded image block, respectively, the accuracy of defect detection for the candidate frame can be further improved.
In one example, a kernel size list kernel_size_list = [3, 5, 7, 9, 11] may be set.
Each value in the kernel size list can be used as the kernel size of the expansion operation, and the expansion difference feature v_ki_d under each kernel size can be obtained. For example, when the kernel size is 3, v_ki_d = v_k3_d; when the kernel size is 5, v_ki_d = v_k5_d; and so on. An expansion operation with kernel size i is performed on the first binarized image block to obtain a first expanded image block, and an expansion operation with kernel size i is performed on the second binarized image block to obtain a second expanded image block; the expansion difference feature v_ki_d may then be determined from the first pixel number n1 of pixels differing in value between the first binarized image block and the first expanded image block, and the second pixel number n2 of pixels differing in value between the second binarized image block and the second expanded image block. For example, if n2 is equal to 0, v_ki_d = [n1, n2, 1]; if n2 is not equal to 0, v_ki_d = [n1, n2, n1/n2].
Each value in the kernel size list can likewise be used as the kernel size of the corrosion operation, to obtain the corrosion difference feature v_ki_e under each kernel size. For example, when the kernel size is 3, v_ki_e = v_k3_e; when the kernel size is 5, v_ki_e = v_k5_e; and so on. A corrosion operation with kernel size i is performed on the first binarized image block to obtain a first corroded image block, and a corrosion operation with kernel size i is performed on the second binarized image block to obtain a second corroded image block; the corrosion difference feature v_ki_e may then be determined from the first pixel number n1 of pixels differing in value between the first binarized image block and the first corroded image block, and the second pixel number n2 of pixels differing in value between the second binarized image block and the second corroded image block. For example, if n2 is equal to 0, v_ki_e = [n1, n2, 1]; if n2 is not equal to 0, v_ki_e = [n1, n2, n1/n2].
After determining the expansion difference feature v_dilate = [v_k3_d, v_k5_d, v_k7_d, v_k9_d, v_k11_d] and the corrosion difference feature v_erode = [v_k3_e, v_k5_e, v_k7_e, v_k9_e, v_k11_e], the defect detection result corresponding to the candidate frame may be determined according to the morphological difference feature v5 = [v_dilate, v_erode].
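Putting the example together, a sketch of the morphological difference feature; the function name and flattened feature layout are assumptions consistent with the naming above:

```python
import cv2
import numpy as np

KERNEL_SIZE_LIST = [3, 5, 7, 9, 11]

def morph_diff_feature(temp_img_bina, test_img_bina):
    """v5 = [v_dilate, v_erode], where each v_ki = [n1, n2, n1/n2] and the
    ratio is set to 1 when n2 == 0; n1 counts pixels changed by the operation
    in the first (template) block, n2 those changed in the second (test) block."""
    v_dilate, v_erode = [], []
    for i in KERNEL_SIZE_LIST:
        kernel = np.ones((i, i), dtype=np.uint8)
        for op, feature in ((cv2.dilate, v_dilate), (cv2.erode, v_erode)):
            n1 = int(np.count_nonzero(op(temp_img_bina, kernel) != temp_img_bina))
            n2 = int(np.count_nonzero(op(test_img_bina, kernel) != test_img_bina))
            feature.extend([n1, n2, n1 / n2 if n2 != 0 else 1])
    return v_dilate + v_erode  # v5
```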
In one possible implementation manner, the determining the defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after the morphological transformation and the difference information of the second image block before and after the morphological transformation includes: respectively carrying out gray statistics on the first image block and the second image block to obtain a first gray statistics result corresponding to the first image block and a second gray statistics result corresponding to the second image block; and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the first gray level statistical result and the second gray level statistical result.
In this implementation manner, the first image block and the second image block may be respectively converted into gray maps, so as to obtain a first gray image block corresponding to the first image block and a second gray image block corresponding to the second image block. In one example, the first gray scale image block may be represented by temp_img_gray and the second gray scale image block may be represented by test_img_gray.
In this implementation, the first gray statistic may include the number of pixels of some or all of the gray values in the first image block. For example, the gray histogram of the first gray image block may be counted to obtain the first gray statistics. In one example, the gray histogram of the first gray image block may be represented using temp_hist.
The second gray level statistic may include the number of pixels of some or all gray level values in the second image block. For example, the gray histogram of the second gray image block may be counted to obtain the second gray statistics. In one example, the gray level histogram of the second gray level image block may be represented by test_hist.
In the implementation manner, the accuracy of the defect detection result corresponding to the candidate frame can be improved by combining the first gray level statistical result corresponding to the first image block and the second gray level statistical result corresponding to the second image block to determine the defect detection result corresponding to the candidate frame.
As an example of this implementation, the first gray statistics result includes: the number of pixels with the gray value of 0 and the number of pixels with the gray value of 255 in the first image block; the second gray statistics result includes: the number of pixels with the gray value of 0 and the number of pixels with the gray value of 255 in the second image block.
In one example, the number of pixels with a gray value of 0 in the first image block may be represented by temp_hist[0], the number of pixels with a gray value of 255 in the first image block by temp_hist[255], the number of pixels with a gray value of 0 in the second image block by test_hist[0], and the number of pixels with a gray value of 255 in the second image block by test_hist[255]. The defect detection result corresponding to the candidate frame can be determined in combination with the gray scale feature v1 = [temp_hist[0], temp_hist[255], test_hist[0], test_hist[255]].
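A sketch of this gray scale feature, assuming uint8 gray image blocks (the function name is hypothetical):

```python
import numpy as np

def gray_stat_feature(temp_img_gray, test_img_gray):
    """v1 from the 0 and 255 histogram bins of the two gray image blocks."""
    temp_hist = np.bincount(temp_img_gray.ravel(), minlength=256)
    test_hist = np.bincount(test_img_gray.ravel(), minlength=256)
    return [int(temp_hist[0]), int(temp_hist[255]),
            int(test_hist[0]), int(test_hist[255])]
```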
In this example, the defect detection result corresponding to the candidate frame is determined by combining the number of pixels with the gray value of 0 and the number of pixels with the gray value of 255 in the first image block and the number of pixels with the gray value of 0 and the number of pixels with the gray value of 255 in the second image block, and thereby the defect detection of the candidate frame is assisted by using the number of pixels with the most significant gray values in the first image block and the second image block, and the accuracy of the defect detection result corresponding to the candidate frame can be improved.
In one possible implementation manner, the determining the defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after the morphological transformation and the difference information of the second image block before and after the morphological transformation includes: obtaining first contour information corresponding to the first image block and second contour information corresponding to the second image block; and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the first contour information and the second contour information.
In this implementation, the first contour information may represent information of a contour in the first image block, and the second contour information may represent information of a contour in the second image block.
In this implementation manner, contour searching methods such as findContours may be used to perform contour searching on the first image block and the second image block, and determine a contour in the first image block and a contour in the second image block, so as to obtain first contour information corresponding to the first image block and second contour information corresponding to the second image block.
In one example, the first contour information may be represented by v_c_temp and the second contour information by v_c_test. The defect detection result corresponding to the candidate frame may be determined in combination with the contour feature v2 = [v_c_test, v_c_temp].
In this implementation manner, the defect detection result corresponding to the candidate frame is determined by combining the first contour information corresponding to the first image block and the second contour information corresponding to the second image block, so that the accuracy of the defect detection result corresponding to the candidate frame can be improved.
As an example of this implementation manner, the obtaining the first contour information corresponding to the first image block and the second contour information corresponding to the second image block includes: determining first contour information corresponding to the first image block according to the contour in the first binarized image block corresponding to the first image block; and determining second contour information corresponding to the second image block according to the contour in the second binarized image block corresponding to the second image block.
In one example, a blurring operation may be performed on a first gray image block corresponding to a first image block and a second gray image block corresponding to a second image block, to obtain a first blurred image block corresponding to the first image block and a second blurred image block corresponding to the second image block; and respectively performing binarization operation on the first blurred image block and the second blurred image block to obtain a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block.
In one example, a Gaussian blur operation may be performed on the first gray scale image block and the second gray scale image block, respectively, to obtain the first blurred image block and the second blurred image block. The Gaussian kernel of the Gaussian blur operation may be 5×5, 3×3, or 7×7, etc., which is not limited herein.
In one example, an OTSU method may be used to perform binarization operation on the first blurred image block and the second blurred image block, to obtain a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block.
In one example, contour searching methods such as findContours may be used to perform contour searching on the first binarized image block and the second binarized image block to obtain a contour in the first binarized image block and a contour in the second binarized image block.
In one example, the first blurred image block may be represented by temp_img_blur, the second blurred image block by test_img_blur, the first binarized image block by temp_img_bina, the second binarized image block by test_img_bina, the contours in the first binarized image block by temp_img_contours, and the contours in the second binarized image block by test_img_contours.
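A possible realization of this blur, binarize, and contour-search chain is sketched below with standard OpenCV calls; the exact kernel size and contour-retrieval flags are illustrative assumptions.

```python
# Sketch of the blur -> OTSU binarization -> contour search chain; input
# blocks are assumed to be single-channel uint8 grayscale images.
import cv2

def binarize_and_find_contours(img_gray):
    img_blur = cv2.GaussianBlur(img_gray, (5, 5), 0)  # 5x5 Gaussian kernel
    _, img_bina = cv2.threshold(img_blur, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(img_bina, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return img_bina, contours

# temp_img_bina, temp_img_contours = binarize_and_find_contours(temp_img_gray)
# test_img_bina, test_img_contours = binarize_and_find_contours(test_img_gray)
```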
In this example, by determining the first contour information corresponding to the first image block from the contour in the first binarized image block corresponding to the first image block and determining the second contour information corresponding to the second image block from the contour in the second binarized image block corresponding to the second image block, contour finding can be performed more accurately.
In one example, the first contour information includes: geometric information of the largest N contours in the first binarized image block and the number of contours in the first binarized image block, where N is an integer greater than or equal to 1; the second contour information includes: geometric information of the largest N contours in the second binarized image block and the number of contours in the second binarized image block.
In this example, the largest N contours in the first binarized image block may be determined by sorting the contours in the first binarized image block by the area of the region each encloses; the largest N contours in the second binarized image block may be determined in the same way.
For example, N is equal to 2. Of course, the size of N can be flexibly set by those skilled in the art according to the actual application scene requirement, which is not limited herein.
For example, the first contour information may be v_c_temp = [temp_count, v_c_temp_v1, v_c_temp_v2], where temp_count represents the number of contours in the first binarized image block, v_c_temp_v1 represents the geometric information of the largest contour in the first binarized image block, and v_c_temp_v2 represents the geometric information of the second largest contour in the first binarized image block. Similarly, the second contour information may be v_c_test = [test_count, v_c_test_v1, v_c_test_v2], where test_count represents the number of contours in the second binarized image block, v_c_test_v1 represents the geometric information of the largest contour in the second binarized image block, and v_c_test_v2 represents the geometric information of the second largest contour in the second binarized image block.
In this example, by combining the geometric information of the largest N contours in the first binarized image block, the number of contours in the first binarized image block, the geometric information of the largest N contours in the second binarized image block, and the number of contours in the second binarized image block, more accurate defect detection can be achieved for the candidate frame.
In one example, the geometric information of a contour includes at least one of: the area of the contour, the bounding rectangle of the contour, the central moments of the contour, the position of the geometric center of the contour, the perimeter of the contour, the convexity of the contour, the smallest bounding rectangle of the contour, the smallest bounding circle of the contour, the fitted ellipse of the contour, and the fitted rectangle of the contour. The fitted ellipse of the contour represents an ellipse obtained by performing ellipse fitting on the contour, and the fitted rectangle of the contour represents a rectangle obtained by straight-line fitting of the contour.
Taking the contour with the largest area in the second binarized image block as an example, the area of the contour may be represented by test_c_area1, the bounding rectangle of the contour by (a1_x, a1_y, a1_w, a1_h), the central moments of the contour by M1, the position of the geometric center of the contour by (c1_x, c1_y), the perimeter of the contour by perimeter1, the convexity of the contour by is_convex1, the smallest bounding rectangle of the contour by (a1_xr, a1_yr, a1_wr, a1_hr), the smallest bounding circle of the contour by (cr1_x, cr1_y, cr1_r), the fitted ellipse of the contour by (e11, e12, e13, e14, e15), and the fitted rectangle of the contour by (l11, l12, l13, l14).
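The geometric quantities listed above map naturally onto standard OpenCV contour functions; the sketch below is illustrative, and the helper name and return layout are assumptions.

```python
# Sketch of extracting the geometric information of one contour c
# (as returned by findContours); all calls are standard OpenCV.
import cv2

def contour_geometry(c):
    area = cv2.contourArea(c)                       # area of the contour
    x, y, w, h = cv2.boundingRect(c)                # bounding rectangle
    M = cv2.moments(c)                              # moments (incl. central moments)
    cx = M["m10"] / (M["m00"] + 1e-9)               # geometric center x
    cy = M["m01"] / (M["m00"] + 1e-9)               # geometric center y
    perimeter = cv2.arcLength(c, True)              # perimeter
    is_convex = cv2.isContourConvex(c)              # convexity
    min_rect = cv2.minAreaRect(c)                   # smallest (rotated) bounding rectangle
    (cr_x, cr_y), cr_r = cv2.minEnclosingCircle(c)  # smallest bounding circle
    ellipse = cv2.fitEllipse(c) if len(c) >= 5 else None  # fitted ellipse (needs >= 5 points)
    line = cv2.fitLine(c, cv2.DIST_L2, 0, 0.01, 0.01)     # line fit underlying the fitted rectangle
    return [area, (x, y, w, h), M, (cx, cy), perimeter,
            is_convex, min_rect, (cr_x, cr_y, cr_r), ellipse, line]
```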
In one possible implementation manner, the determining the defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after the morphological transformation and the difference information of the second image block before and after the morphological transformation includes: obtaining the width and the height of the first image block; and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the width and the height.
In one example, the width of the first image block may be represented by wt, the height of the first image block by ht, and the aspect ratio of the first image block by wt/ht. The defect detection result corresponding to the candidate frame may be determined in combination with the size feature v3 = [wt, ht, wt/ht] of the first image block.
In the implementation manner, the defect detection result corresponding to the candidate frame is determined by combining the width and the height of the first image block, so that the accuracy of the defect detection result corresponding to the candidate frame can be improved.
In another possible implementation manner, the determining the defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after the morphological transformation and the difference information of the second image block before and after the morphological transformation includes: obtaining the width and the height of the second image block; and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the width and the height.
In one possible implementation manner, the determining the defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after the morphological transformation and the difference information of the second image block before and after the morphological transformation includes: obtaining gradient information of the first image block and gradient information of the second image block; and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the gradient information of the first image block and the gradient information of the second image block.
In one example, the gradient information of the first image block may be a histogram of oriented gradients (Histogram of Oriented Gradients, HOG) of the first image block, and the gradient information of the second image block may be a histogram of oriented gradients of the second image block. The gradient information of the first image block may be represented by temp_hog_vec and that of the second image block by test_hog_vec. The defect detection result corresponding to the candidate frame may be determined in combination with the gradient feature v4 = [temp_hog_vec, test_hog_vec].
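A sketch of the gradient feature v4 follows; scikit-image's hog function is used purely for illustration (the text does not name a library), and the fixed resize target and HOG parameters are assumptions that keep the two vectors the same length.

```python
# Sketch of the HOG gradient feature v4 for the two image blocks.
import cv2
import numpy as np
from skimage.feature import hog

def hog_feature(temp_img_gray, test_img_gray, size=(64, 64)):
    temp_hog_vec = hog(cv2.resize(temp_img_gray, size), orientations=9,
                       pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    test_hog_vec = hog(cv2.resize(test_img_gray, size), orientations=9,
                       pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([temp_hog_vec, test_hog_vec])  # v4
```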
In this implementation manner, the defect detection result corresponding to the candidate frame is determined by combining the gradient information of the first image block and the gradient information of the second image block, so that the accuracy of the defect detection result corresponding to the candidate frame can be improved.
In one possible implementation manner, the determining the defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after the morphological transformation and the difference information of the second image block before and after the morphological transformation includes: obtaining a difference image block of a first binarization image block corresponding to the first image block and a second binarization image block corresponding to the second image block; obtaining characteristic information of the difference image block; and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation and the characteristic information of the difference image block.
In one example, a blurring operation may be performed on a first gray image block corresponding to a first image block and a second gray image block corresponding to a second image block, to obtain a first blurred image block corresponding to the first image block and a second blurred image block corresponding to the second image block; and respectively performing binarization operation on the first blurred image block and the second blurred image block to obtain a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block.
In one example, a Gaussian blur operation may be performed on the first gray scale image block and the second gray scale image block, respectively, to obtain the first blurred image block and the second blurred image block. The Gaussian kernel of the Gaussian blur operation may be 5×5, 3×3, or 7×7, etc., which is not limited herein.
In one example, the first binarized image block may be represented by temp_img_bina, the second binarized image block may be represented by test_img_bina, the difference image block may be represented by diff_img, and the difference image block may be determined according to diff_img=test_img_bina-temp_img_bina. The difference image block may also be referred to as a difference matrix, and is not limited herein.
In this implementation manner, defect detection is performed on the candidate frame by combining the feature information of the difference image block between the first binarized image block corresponding to the first image block and the second binarized image block corresponding to the second image block, so that the difference information between the two binarized image blocks can be used to improve the accuracy of the defect detection result corresponding to the candidate frame.
As an example of this implementation, the feature information of the difference image block includes: the number of pixels in the difference image block whose pixel value is not 0.
In one example, the pixel values in the difference image block that are not 0 may first be changed to 1, and the pixel values in the difference image block may then be accumulated to determine the number of pixels in the difference image block whose pixel value is not 0. According to this example, the difference information of the first binarized image block and the second binarized image block can be determined quickly and efficiently.
In one example, the feature information of the difference image block includes: a number of pixels in each row of pixels of the difference image block having a pixel value other than 0, and a number of pixels in each column of pixels of the difference image block having a pixel value other than 0.
In this example, the number of pixels whose pixel value is not 0 may be determined separately for each row and for each column of the difference image block. From these counts, a difference statistical feature v6 = [diff_project_y, diff_project_x] can be obtained, where diff_project_y and diff_project_x are vectors: the number of elements in diff_project_y equals the number of rows of the difference image block, and the number of elements in diff_project_x equals the number of columns of the difference image block.
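For illustration, a minimal sketch of these per-row and per-column projections, assuming diff_img is the binary difference block described above (non-zero where the two blocks differ):

```python
# Sketch of the difference statistics v6: counts of non-zero pixels
# per row and per column of the difference image block.
import numpy as np

def diff_projections(diff_img):
    nonzero = diff_img != 0
    diff_project_y = nonzero.sum(axis=1)  # one count per row
    diff_project_x = nonzero.sum(axis=0)  # one count per column
    return diff_project_y, diff_project_x  # together form v6
```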
According to this example, the accuracy of defect detection of the candidate frame can be further improved.
In another example, the number of pixels in the difference image block having a pixel value other than 0 includes: the number of pixels whose pixel value is not 0 in each row of pixels of the difference image block.
In another example, the number of pixels in the difference image block having a pixel value other than 0 includes: the number of pixels whose pixel value is not 0 in each column of pixels of the difference image block.
In one possible implementation manner, the determining the defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after the morphological transformation and the difference information of the second image block before and after the morphological transformation includes: extracting features of the first image block through a pre-trained first neural network to obtain a first depth feature corresponding to the first image block; extracting features of the second image block through the first neural network to obtain a second depth feature corresponding to the second image block; and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the first depth feature and the second depth feature.
In this implementation, the first neural network may be a deep learning model. For example, the first neural network may adopt a network structure such as LeNet5 or AlexNet. The first neural network may be trained in advance using a data set such as MNIST.
In one example, a first gray image block corresponding to a first image block may be input into a first neural network trained in advance, and feature extraction is performed on the first gray image block through the first neural network, so as to obtain a first depth feature corresponding to the first image block; and inputting the second gray image block corresponding to the second image block into the first neural network, and extracting the characteristics of the second gray image block through the first neural network to obtain a second depth characteristic corresponding to the second image block.
For example, the preprocessing operation of the LeNet5 model may be performed on the first grayscale image block temp_img_gray to obtain a first preprocessed feature temp_img_pre, and on the second grayscale image block test_img_gray to obtain a second preprocessed feature test_img_pre. Inference may then be run on temp_img_pre with the LeNet5 model, and either the 84-dimensional feature vector output by the penultimate layer or the 10-dimensional feature vector output by the last layer may be used as the first depth feature. Likewise, inference on test_img_pre yields the second depth feature from the same layer.
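For concreteness, a hedged PyTorch sketch of such feature extraction follows; the layer sizes follow the classic LeNet-5 layout (32×32 single-channel input), and the choice of PyTorch and the class name are assumptions, not the actual model of this disclosure.

```python
# Hedged sketch of LeNet-5 depth-feature extraction.
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2))
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)  # penultimate layer: 84-dim feature
        self.fc3 = nn.Linear(84, 10)   # last layer: 10-dim feature

    def forward(self, x, penultimate=True):
        x = self.features(x).flatten(1)
        feat84 = torch.tanh(self.fc2(torch.tanh(self.fc1(x))))
        return feat84 if penultimate else self.fc3(feat84)

# With temp_img_pre / test_img_pre preprocessed to 1x1x32x32 tensors:
# v_temp_deep = model(temp_img_pre); v_test_deep = model(test_img_pre)
```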
In another example, the first image block may be input into a first neural network trained in advance, and feature extraction is performed on the first image block through the first neural network, so as to obtain a first depth feature corresponding to the first image block; and inputting the second image block into the first neural network, and extracting the characteristics of the second image block through the first neural network to obtain a second depth characteristic corresponding to the second image block.
In one example, the first depth feature may be represented by v_temp_deep and the second depth feature by v_test_deep, and the defect detection result corresponding to the candidate frame may be determined in combination with the depth feature v_deep = [v_test_deep, v_temp_deep].
In the implementation mode, the first image block and the second image block are subjected to feature extraction through the first neural network to obtain the first depth feature corresponding to the first image block and the second depth feature corresponding to the second image block, so that features which are not considered by human priori knowledge can be effectively supplemented, and the accuracy of defect detection can be further improved.
In one possible implementation, the defect detection result corresponding to the candidate frame may be information capable of representing the defect type of the candidate frame. For example, defect types may include open circuit (Open), short circuit (Short), dummy copper (Copper), missing hole (Pin-hole), mouse bite (Mousebite), spur (Spur), and non-defect. Of course, the number of defect types may be more or fewer, which is not limited herein. By converting the defect detection problem into a defect classification problem, the overall logic is kept simple.
In another possible implementation manner, the defect detection result corresponding to the candidate frame may be information capable of indicating whether the candidate frame is a defect.
In one possible implementation manner, the determining, at least according to difference information of the image block corresponding to the candidate frame before and after morphological transformation, a defect detection result corresponding to the candidate frame includes: at least the difference information of the image blocks corresponding to the candidate frames before and after morphological transformation is input into a pre-trained machine learning model, and a defect detection result corresponding to the candidate frames is obtained through the machine learning model. In this implementation manner, the accuracy and speed of performing defect detection on the candidate frame can be improved by performing defect detection on the candidate frame based on difference information of the image block corresponding to the candidate frame before and after morphological transformation by using a machine learning model trained in advance.
In one possible implementation manner, after the determining the defect detection result corresponding to the candidate frame, the method further includes: and outputting the defect type of the candidate frame and the position information of the candidate frame in response to the defect detection result indicating that the candidate frame belongs to a defect.
In another possible implementation manner, after the determining the defect detection result corresponding to the candidate frame, the method further includes: and outputting position information of the candidate frame in response to the defect detection result indicating that the candidate frame belongs to a defect.
The defect detection method provided by the embodiment of the present disclosure is described below through a specific application scenario. In the application scene, an image to be detected corresponding to the PCB and a template image corresponding to the image to be detected are obtained.
In the application scene, a candidate frame of the defect in the image to be detected can be determined according to the image to be detected and the template image.
The manual feature v_traditional = [v1, v2, v3, v4, v5, v6] may be extracted for the candidate frame using conventional image processing methods, where:
v1 is the grayscale feature, v1 = [temp_hist[0], temp_hist[255], test_hist[0], test_hist[255]];
v2 is the contour feature, v2 = [v_c_test, v_c_temp];
v3 is the size feature, v3 = [wt, ht, wt/ht];
v4 is the gradient feature, v4 = [temp_hog_vec, test_hog_vec];
v5 is the morphological difference feature, v5 = [v_dilate, v_error];
v6 is the difference statistical feature, v6 = [diff_project_y, diff_project_x].
The depth feature v_deep = [v_test_deep, v_temp_deep] may be extracted for the candidate frame using the pre-trained first neural network.
The manual feature v_traditional and the depth feature v_deep can be input into a pre-trained machine learning model to obtain a defect detection result corresponding to the candidate frame.
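A hedged sketch of this final scoring step: the manual and depth features are concatenated and passed to the pre-trained classifier. The scikit-learn-style estimator, the joblib file name, and the assumption that v1 through v6 have been flattened to 1-D arrays are all illustrative.

```python
# Sketch of scoring one candidate frame with a pre-trained classifier.
import numpy as np
import joblib

clf = joblib.load("defect_clf.joblib")  # hypothetical pre-trained model file
v_traditional = np.concatenate([v1, v2, v3, v4, v5, v6])  # features from above
v_deep = np.concatenate([v_test_deep, v_temp_deep])
x = np.concatenate([v_traditional, v_deep]).reshape(1, -1)
defect_type = clf.predict(x)[0]  # e.g. open / short / ... / non-defect
```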
FIG. 2 illustrates a flowchart of a method of training a machine learning model for defect detection provided by an embodiment of the present disclosure. In one possible implementation, the execution subject of the training method may be a training apparatus of the machine learning model for defect detection; for example, the training method may be executed by a terminal device, a server, or another electronic device. The terminal device may be a user device, a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant, a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the training method may be implemented by a processor invoking computer readable instructions stored in a memory. As shown in FIG. 2, the training method includes steps S21 to S24.
In step S21, a training image and a template image corresponding to the training image are acquired.
In step S22, a candidate frame of a defect in the training image is determined from the training image and the template image.
In step S23, at least difference information of the image block corresponding to the candidate frame before and after morphological transformation is input into a machine learning model, and a defect prediction result corresponding to the candidate frame is obtained via the machine learning model.
In step S24, the machine learning model is trained according to the labeling information corresponding to the candidate frame and the defect prediction result.
In the embodiment of the disclosure, the training image, the labeling data of the training image, and the template image corresponding to the training image can be obtained from a data set such as DeepPCB. Fig. 3 shows a schematic diagram of a template image in the training method of a machine learning model for defect detection provided by an embodiment of the present disclosure. Fig. 4 is a schematic diagram of a training image and its labeling data in the training method provided by an embodiment of the present disclosure. In the example shown in Fig. 4, the training image includes a plurality of defect boxes whose defect types are open circuit, short circuit, dummy copper, missing hole, mouse bite, or spur. Of course, the machine learning model may be used to detect more or fewer types of defects, which is not limited herein.
In one possible implementation, the data set may be divided into a training set and a test set according to a preset ratio. For example, the preset ratio may be 8:2.
As an example of this implementation, before the dividing the data set into the training set and the test set according to the preset ratio, the method further includes: the data set is randomly shuffled.
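A minimal sketch of the random shuffle and 8:2 split; the `samples` list of (training image, annotation) pairs is an assumed name.

```python
# Shuffle the data set, then split it 8:2 into training and test sets.
import random

random.shuffle(samples)
split = int(0.8 * len(samples))
train_set, test_set = samples[:split], samples[split:]
```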
In the embodiment of the disclosure, the template image corresponding to the training image may represent a defect-free image corresponding to the training image. By comparing the training image with the template image, candidate frames of defects in the training image can be determined. Wherein the candidate box may represent an area in the training image that may be defective.
In one possible implementation manner, the determining a candidate frame of the defect in the training image according to the training image and the template image includes: obtaining a difference image of the training image and the template image; determining contours in the difference image; and determining candidate frames of defects in the training image according to the outline.
In this implementation, pixel values for the same pixel locations of the training image and the template image may be compared to determine a difference image of the training image and the template image. In one example, the training image may be represented by img_test, the template image may be represented by img_temp, and the difference image may be represented by img_diff.
In this implementation manner, a contour searching method such as findContours may be used to search the contour of the difference image, so as to obtain the contour in the difference image. After determining the contours in the difference image, candidate boxes for defects in the training image may be determined from the contours in the difference image.
In this implementation, by obtaining a difference image of the training image and the template image, determining a contour in the difference image, and determining a candidate frame of a defect in the training image according to the contour, the candidate frame of the defect in the training image can be accurately determined by means of conventional image processing.
As an example of this implementation, the obtaining a difference image of the training image and the template image includes: respectively carrying out blurring operation on the training image and the template image to obtain a first blurring image corresponding to the training image and a second blurring image corresponding to the template image; and determining a difference image of the training image and the template image according to the first blurred image and the second blurred image. The first blurred image represents a blurred image corresponding to the training image, and the second blurred image represents a blurred image corresponding to the template image.
In one example, a Gaussian blur operation may be performed on the training image and the template image, respectively, to obtain a first blurred image corresponding to the training image and a second blurred image corresponding to the template image. The Gaussian kernel of the Gaussian blur operation may be 5×5, 3×3, or 7×7, etc., which is not limited herein. In one example, the first blurred image may be represented by img_test_gaussian and the second blurred image by img_temp_gaussian.
In this example, by performing blurring operations on the training image and the template image, respectively, a first blurred image corresponding to the training image and a second blurred image corresponding to the template image are obtained, and a difference image between the training image and the template image is determined from the first blurred image and the second blurred image, thereby enabling a smoother difference image to be obtained.
In one example, the determining a difference image of the training image and the template image from the first blurred image and the second blurred image includes: respectively carrying out binarization operation on the first blurred image and the second blurred image to obtain a first binarized image corresponding to the first blurred image and a second binarized image corresponding to the second blurred image; and determining a difference image of the training image and the template image according to the first binarized image and the second binarized image. The first binarized image represents a binarized image corresponding to the first blurred image, and the second binarized image represents a binarized image corresponding to the second blurred image. In one example, the first binarized image may be represented by img_test_bina and the second binarized image may be represented by img_temp_bina.
In this example, the first blurred image and the second blurred image may be binarized by the OTSU (Otsu) method or the like, to obtain a first binarized image corresponding to the first blurred image and a second binarized image corresponding to the second blurred image.
In one example, the determining a difference image between the training image and the template image based on the first binarized image and the second binarized image includes: for any pixel position, if the pixel values of the pixel positions in the first binarized image and the second binarized image are different, the pixel value of the pixel position in the difference image of the training image and the template image is 0; for any pixel position, if the pixel values of the pixel positions in the first binarized image and the second binarized image are the same, the pixel value of the pixel position in the difference image is 255.
For example, for any pixel position, if the pixel values of the pixel positions are different (i.e., one is 0 and the other is 255) in the first binarized image and the second binarized image, the pixel value of the pixel position is 0 (i.e., black) in the difference image; for any pixel position, if the pixel values of the pixel positions in the first binarized image and the second binarized image are the same (i.e., are both 0 or are both 255), then in the difference image, the pixel values of the pixel positions are 255 (i.e., white).
In one example, the following steps may be taken to obtain the difference image img_diff: let img_diff = img_test_bina − img_temp_bina; change the pixel values in img_diff that are not 0 to 255; then let the value of each pixel location in img_diff become 255 minus the corresponding pixel value.
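The three steps above can be sketched directly in NumPy; img_test_bina and img_temp_bina are the uint8 binarized images (values 0 or 255) from the preceding examples.

```python
# Sketch of the three-step construction of the difference image img_diff.
import numpy as np

diff = img_test_bina.astype(np.int16) - img_temp_bina.astype(np.int16)
diff = np.where(diff != 0, 255, 0)        # non-zero -> 255
img_diff = (255 - diff).astype(np.uint8)  # invert: same -> 255 (white), different -> 0 (black)
```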
In this example, binarization operation is performed on the first blurred image and the second blurred image respectively, so as to obtain a first binarized image corresponding to the first blurred image and a second binarized image corresponding to the second blurred image, and a difference image between the training image and the template image is determined according to the first binarized image and the second binarized image, so that interference of intensity of ambient light for collecting the training image on defect detection can be reduced.
As another example of this implementation, the obtaining a difference image of the training image and the template image includes: respectively carrying out binarization operation on the training image and the template image to obtain a first binarization image corresponding to the training image and a second binarization image corresponding to the template image; and determining a difference image of the training image and the template image according to the first binarized image and the second binarized image.
As an example of this implementation, the determining the contour in the difference image includes: performing morphological operation on the difference image to obtain a de-interference image corresponding to the difference image; and searching the outline in the interference elimination image to be used as the outline in the difference image.
In this example, a dilation (dilate) operation and/or an erosion (erode) operation may be performed on the difference image, resulting in a de-interference image corresponding to the difference image. In one example, the difference image may be sequentially subjected to a dilation operation with a kernel size of 3×3, an erosion operation with a kernel size of 7×3, a dilation operation with a kernel size of 7×3, an erosion operation with a kernel size of 3×7, and a dilation operation with a kernel size of 3×7, to obtain the de-interference image corresponding to the difference image.
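A sketch of this de-interference sequence with OpenCV follows; whether each stated size is rows×columns or width×height is an assumption, as is the helper name.

```python
# Sketch of the sequential dilate/erode de-interference chain above.
import cv2
import numpy as np

def remove_interference(img_diff):
    ops = [(cv2.dilate, (3, 3)), (cv2.erode, (7, 3)), (cv2.dilate, (7, 3)),
           (cv2.erode, (3, 7)), (cv2.dilate, (3, 7))]
    out = img_diff
    for op, ksize in ops:
        out = op(out, np.ones(ksize, np.uint8))  # rectangular structuring element
    return out
```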
In this example, by performing morphological operations on the difference image, a de-interference image corresponding to the difference image is obtained, and a contour in the de-interference image is searched and used as a contour in the difference image, so that by performing morphological operations on the difference image, transverse and longitudinal interference lines in the difference image can be removed, and the accuracy of the searched contour can be improved.
As an example of this implementation, the determining a candidate box for a defect in the training image according to the contour includes: filtering outlines meeting preset conditions in the difference images; and determining candidate frames of the defects in the training image according to the filtered residual outline in the difference image.
In this example, if any contour in the difference image satisfies a preset condition, it may be determined that the contour belongs to a preset candidate frame that does not belong to a defect, so that the contour may be filtered. The preset conditions may be empirically set, and are not limited herein.
In one example, the preset condition includes at least one of: the area of the area surrounded by the outline is smaller than a first preset area; the area of the area surrounded by the outline is larger than a second preset area, wherein the second preset area is larger than the first preset area; the aspect ratio of the surrounding rectangle of the outline is smaller than a first preset threshold value; the aspect ratio of the surrounding rectangle of the outline is larger than a second preset threshold value, wherein the second preset threshold value is larger than the first preset threshold value; the average pixel value within the bounding rectangle of the outline is greater than the preset pixel value.
The bounding rectangle of any outline may be a bounding rectangle of the outline, and sides of the bounding rectangle of the outline are parallel to a preset coordinate axis (e.g., xy axis). In some application scenarios, bounding rectangles may also be referred to as bounding boxes. For example, the bounding rectangle of the outline may be represented by (x, y, w, h), where (x, y) is the coordinates of the upper left corner of the bounding rectangle, w is the width of the bounding rectangle, and h is the height of the bounding rectangle.
For example, the first preset area area_th_low = 20 and the second preset area area_th_high = 20000. If the area of the region enclosed by any contour is smaller than 20 or larger than 20000, the contour can be filtered.
For another example, the first preset threshold is 0.1 and the second preset threshold is 10. If the aspect ratio of the bounding rectangle of any contour is less than 0.1 or greater than 10, the contour may be filtered.
As another example, the preset pixel value pixel_mean_th = 240. If the average pixel value within the bounding rectangle of any contour is greater than 240, the contour may be filtered.
In one example, the determining the candidate frame of the defect in the training image according to the filtered residual contour in the difference image includes: and for any outline remained after filtering in the difference image, determining an enlarged rectangle corresponding to a surrounding rectangle of the outline as a candidate frame of the defect in the training image, wherein the enlarged rectangle coincides with the geometric center of the surrounding rectangle, the length of the enlarged rectangle is a first preset multiple of the length of the surrounding rectangle, and the width of the enlarged rectangle is a second preset multiple of the width of the surrounding rectangle, and the first preset multiple and the second preset multiple are both larger than 1.
The first preset multiple and the second preset multiple may be the same or different. For example, if the first preset multiple and the second preset multiple are both 2 and the bounding rectangle of the contour is (x, y, w, h), then the corresponding enlarged rectangle is (x − w/2, y − h/2, 2×w, 2×h).
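A sketch of the contour filtering and 2× box enlargement follows; the threshold values are the ones stated above, while the function name and the grayscale image used for the mean-pixel test are assumptions.

```python
# Sketch of filtering contours by the preset conditions and producing
# enlarged candidate boxes.
import cv2

AREA_TH_LOW, AREA_TH_HIGH = 20, 20000
RATIO_TH_LOW, RATIO_TH_HIGH = 0.1, 10
PIXEL_MEAN_TH = 240

def candidate_boxes(contours, img_gray):
    boxes = []
    for c in contours:
        area = cv2.contourArea(c)
        x, y, w, h = cv2.boundingRect(c)
        ratio = w / max(h, 1)
        mean_pix = img_gray[y:y + h, x:x + w].mean()
        if (area < AREA_TH_LOW or area > AREA_TH_HIGH or
                ratio < RATIO_TH_LOW or ratio > RATIO_TH_HIGH or
                mean_pix > PIXEL_MEAN_TH):
            continue  # contour filtered out by the preset conditions
        boxes.append((x - w // 2, y - h // 2, 2 * w, 2 * h))  # enlarged 2x about the center
    return boxes
```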
In the above example, by filtering the contours in the difference image that satisfy the preset condition, and determining the candidate frame of the defect in the training image according to the contours remaining after filtering in the difference image, the interference of the abnormal contours in the difference image on the defect detection can be reduced.
As another example of this implementation, candidate boxes for defects may be determined from contours in the difference image, respectively. That is, in this example, the found contours may not be filtered.
In another possible implementation manner, the determining a candidate frame of the defect in the training image according to the training image and the template image includes: the training image and the template image are input into a pre-trained second neural network, via which candidate boxes for defects in the training image are determined. Wherein the second neural network is configured to determine candidate boxes for defects in the training image based on the training image and the template image.
In an embodiment of the present disclosure, the image block corresponding to the candidate frame may include at least one of: the method comprises the steps of selecting a first image block of the candidate frame on the template image, a second image block of the candidate frame on a difference image and a third image block of the candidate frame on the training image. Accordingly, the defect prediction result corresponding to the candidate frame may be determined according to at least one of the difference information of the first image block before and after the morphological transformation, the difference information of the second image block before and after the morphological transformation, and the difference information of the third image block before and after the morphological transformation.
In one possible implementation manner, the image block corresponding to the candidate frame includes: a first image block of the candidate frame on the template image, and a second image block of the candidate frame on a difference image, wherein the difference image represents a difference image of the training image and the template image; inputting at least difference information of the image blocks corresponding to the candidate frame before and after morphological transformation into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model, includes: determining morphological difference characteristics corresponding to the candidate frame according to difference information of the first image block before and after morphological transformation and difference information of the second image block before and after morphological transformation; and inputting at least the morphological difference characteristic into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model. In one example, the first image block may be represented by temp_img and the second image block by test_img.
As an example of this implementation manner, the determining, according to the difference information of the first image block before and after the morphological transformation and the difference information of the second image block before and after the morphological transformation, the morphological difference feature corresponding to the candidate frame includes: obtaining a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block; performing morphological operation on the first binarized image block to obtain a first morphological transformation image block corresponding to the first binarized image block; performing morphological operation on the second binarized image block to obtain a second morphological transformation image block corresponding to the second binarized image block; and determining morphological difference characteristics corresponding to the candidate frame according to a first pixel number with different pixel values between the first binarized image block and the first morphological transformation image block and a second pixel number with different pixel values between the second binarized image block and the second morphological transformation image block.
In one example, the first binarized image block may be represented by temp_img_bina and the second binarized image block may be represented by test_img_bina.
In this example, a dilation operation and/or an erosion operation may be performed on the first binarized image block to obtain the first morphological transformation image block, and on the second binarized image block to obtain the second morphological transformation image block. Comparing the first binarized image block with the first morphological transformation image block pixel by pixel yields the first pixel number, i.e., the number of pixels whose values differ between the two; comparing the second binarized image block with the second morphological transformation image block pixel by pixel likewise yields the second pixel number.
In this example, the morphological difference feature corresponding to the candidate frame is determined from the first pixel number of the first binarized image block and the first morphologically transformed image block, which are different in pixel value, and the second pixel number of the second binarized image block and the second morphologically transformed image block, which are different in pixel value, whereby the accuracy of the defect prediction result corresponding to the candidate frame can be improved.
In one example, the obtaining a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block includes: respectively carrying out blurring operation on a first gray image block corresponding to the first image block and a second gray image block corresponding to the second image block to obtain a first blurring image block corresponding to the first image block and a second blurring image block corresponding to the second image block; and respectively performing binarization operation on the first blurred image block and the second blurred image block to obtain a first binarization image block corresponding to the first image block and a second binarization image block corresponding to the second image block.
In this example, the first image block and the second image block may be respectively converted into gray maps, resulting in a first gray image block corresponding to the first image block and a second gray image block corresponding to the second image block. In one example, the first gray scale image block may be represented by temp_img_gray and the second gray scale image block may be represented by test_img_gray.
In this example, the first gray image block and the second gray image block may be respectively subjected to a blurring operation, so as to obtain a first blurred image block corresponding to the first image block and a second blurred image block corresponding to the second image block. In one example, the first blurred image block may be represented by temp_img_blur and the second blurred image block by test_img_blur. In one example, a Gaussian blur operation may be performed on the first gray scale image block and the second gray scale image block, respectively, to obtain the first blurred image block and the second blurred image block. The Gaussian kernel of the Gaussian blur operation may be 5×5, 3×3, or 7×7, etc., which is not limited herein.
In one example, an OTSU method may be used to perform binarization operation on the first blurred image block and the second blurred image block, to obtain a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block.
In this example, by performing blurring processing before binarization, smoother processing results can be obtained.
In one example, the performing morphological operations on the first binarized image block to obtain a first morphological transformed image block corresponding to the first binarized image block includes: performing morphological operations on the first binarized image blocks based on kernels of at least two sizes to obtain at least two first morphological transformed image blocks corresponding to the first binarized image blocks; the performing morphological operation on the second binarized image block to obtain a second morphological transformation image block corresponding to the second binarized image block, including: performing morphological operations on the second binarized image blocks based on the kernels of at least two sizes to obtain at least two second morphological transformation image blocks corresponding to the second binarized image blocks; the determining the morphological difference feature corresponding to the candidate frame according to the first pixel number with different pixel values between the first binarized image block and the first morphological transformed image block and the second pixel number with different pixel values between the second binarized image block and the second morphological transformed image block comprises: for the at least two first morphological transformation image blocks, respectively determining the pixel numbers with different pixel values between the first binarization image blocks and the first morphological transformation image blocks to obtain at least two first pixel numbers; for the at least two second morphological transformation image blocks, respectively determining the pixel numbers with different pixel values between the second binarization image blocks and the second morphological transformation image blocks to obtain at least two second pixel numbers; and determining morphological difference characteristics corresponding to the candidate frames according to the at least two first pixel numbers and the at least two second pixel numbers.
For example, the kernel sizes may include at least two of 3, 5, 7, 9, 11, etc., which is not limited herein. In this example, based on any one of these sizes, a dilation operation and/or an erosion operation may be performed on the first binarized image block and the second binarized image block, respectively, resulting in the corresponding first morphological transformation image block and second morphological transformation image block.
The morphological difference features determined according to this example can more accurately reflect the defect features in the candidate box, and more accurate defect prediction results can be determined for the candidate box according to this example.
As one example of this implementation, the first morphological transformation image block includes a first dilated image block and a first eroded image block, and the second morphological transformation image block includes a second dilated image block and a second eroded image block. Performing a morphological operation on the first binarized image block to obtain the first morphological transformation image block includes: performing a dilation operation on the first binarized image block to obtain the first dilated image block, and performing an erosion operation on the first binarized image block to obtain the first eroded image block. Performing a morphological operation on the second binarized image block to obtain the second morphological transformation image block includes: performing a dilation operation on the second binarized image block to obtain the second dilated image block, and performing an erosion operation on the second binarized image block to obtain the second eroded image block.
In this example, by performing the dilation operation and the erosion operation on the first binarized image block and the second binarized image block, respectively, and determining difference information based on the corresponding dilation image block and erosion image block, respectively, the accuracy of defect prediction for the candidate frame can be further improved.
In one example, a kernel size list kernel_size_list = [3, 5, 7, 9, 11] may be set.
Each value in the kernel size list can be used as the kernel size of the dilation operation, yielding a dilation difference feature v_ki_d for each kernel size: when the kernel size is 3, v_ki_d = v_k3_d; when the kernel size is 5, v_ki_d = v_k5_d; and so on. A dilation operation with kernel size i is performed on the first binarized image block to obtain a first dilated image block, and on the second binarized image block to obtain a second dilated image block. The dilation difference feature v_ki_d may then be determined from the first pixel number n1, i.e., the number of pixels whose values differ between the first binarized image block and the first dilated image block, and the second pixel number n2, i.e., the number of pixels whose values differ between the second binarized image block and the second dilated image block. For example, if n2 equals 0, v_ki_d = [n1, n2, 1]; otherwise, v_ki_d = [n1, n2, n1/n2].
Each value in the kernel size list can likewise be used as the kernel size of the erosion operation, yielding an erosion difference feature v_ki_e for each kernel size: when the kernel size is 3, v_ki_e = v_k3_e; when the kernel size is 5, v_ki_e = v_k5_e; and so on. An erosion operation with kernel size i is performed on the first binarized image block to obtain a first eroded image block, and on the second binarized image block to obtain a second eroded image block. The erosion difference feature v_ki_e may then be determined from the first pixel number n1, i.e., the number of pixels whose values differ between the first binarized image block and the first eroded image block, and the second pixel number n2, i.e., the number of pixels whose values differ between the second binarized image block and the second eroded image block. For example, if n2 equals 0, v_ki_e = [n1, n2, 1]; otherwise, v_ki_e = [n1, n2, n1/n2].
After determining the dilation difference features v_dilate = [v_k3_d, v_k5_d, v_k7_d, v_k9_d, v_k11_d] and the erosion difference features v_error = [v_k3_e, v_k5_e, v_k7_e, v_k9_e, v_k11_e], the defect prediction result corresponding to the candidate frame may be determined according to the morphological difference feature v5 = [v_dilate, v_error].
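A sketch of the whole morphological difference feature over the kernel size list follows; the function name is an assumption, and the v_error list holds the erosion features under the naming used in the text.

```python
# Sketch of the morphological difference feature v5: for each kernel size,
# count the pixels changed by dilation/erosion in each binarized block and
# combine the counts as [n1, n2, n1/n2] (ratio 1 when n2 == 0).
import cv2
import numpy as np

KERNEL_SIZE_LIST = [3, 5, 7, 9, 11]

def morph_diff_feature(temp_img_bina, test_img_bina):
    v_dilate, v_error = [], []  # v_error: the erosion features, per the text
    for k in KERNEL_SIZE_LIST:
        kernel = np.ones((k, k), np.uint8)
        for op, acc in ((cv2.dilate, v_dilate), (cv2.erode, v_error)):
            n1 = int(np.count_nonzero(op(temp_img_bina, kernel) != temp_img_bina))
            n2 = int(np.count_nonzero(op(test_img_bina, kernel) != test_img_bina))
            acc += [n1, n2, 1 if n2 == 0 else n1 / n2]
    return v_dilate + v_error  # v5
```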
In one possible implementation manner, the inputting at least the morphological difference feature into a machine learning model, and obtaining the defect prediction result corresponding to the candidate frame through the machine learning model includes: respectively carrying out gray statistics on the first image block and the second image block to obtain a first gray statistics result corresponding to the first image block and a second gray statistics result corresponding to the second image block; determining gray features corresponding to the candidate frames according to the first gray statistics result and the second gray statistics result; inputting at least the morphological difference feature and the gray level feature into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model.
In this implementation manner, the first image block and the second image block may be respectively converted into gray maps, so as to obtain a first gray image block corresponding to the first image block and a second gray image block corresponding to the second image block. In one example, the first gray scale image block may be represented by temp_img_gray and the second gray scale image block may be represented by test_img_gray.
In this implementation, the first gray statistic may include the number of pixels of some or all of the gray values in the first image block. For example, the gray histogram of the first gray image block may be counted to obtain the first gray statistics. In one example, the gray histogram of the first gray image block may be represented using temp_hist.
The second gray level statistic may include the number of pixels of some or all gray level values in the second image block. For example, the gray histogram of the second gray image block may be counted to obtain the second gray statistics. In one example, the gray level histogram of the second gray level image block may be represented by test_hist.
As an example of this implementation, the first gray statistics result includes: the number of pixels with the gray value of 0 and the number of pixels with the gray value of 255 in the first image block; the second gray level statistics include: the number of pixels with the gray value of 0 and the number of pixels with the gray value of 255 in the second image block.
In one example, the number of pixels with a gray value of 0 in the first image block may be represented by temp_hist[0], the number of pixels with a gray value of 255 in the first image block may be represented by temp_hist[255], the number of pixels with a gray value of 0 in the second image block may be represented by test_hist[0], and the number of pixels with a gray value of 255 in the second image block may be represented by test_hist[255]. The defect prediction result corresponding to the candidate frame can be determined by combining the gray scale feature v1 = [temp_hist[0], temp_hist[255], test_hist[0], test_hist[255]].
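A minimal Python sketch of the gray feature v1, assuming 8-bit grayscale blocks temp_img_gray and test_img_gray as named in the text; the helper name gray_feature is illustrative only:

```python
import numpy as np

def gray_feature(temp_img_gray, test_img_gray):
    # np.bincount over the flattened block yields the 256-bin gray histogram
    temp_hist = np.bincount(temp_img_gray.ravel(), minlength=256)
    test_hist = np.bincount(test_img_gray.ravel(), minlength=256)
    return [int(temp_hist[0]), int(temp_hist[255]),
            int(test_hist[0]), int(test_hist[255])]  # v1
```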
In this example, the defect prediction result corresponding to the candidate frame is determined by combining the numbers of pixels with gray values of 0 and 255 in the first image block and in the second image block. The defect detection of the candidate frame is thus assisted by the pixel counts of the most significant gray values in the two image blocks, which can improve the accuracy of the defect prediction result corresponding to the candidate frame.
In one possible implementation manner, the inputting at least the morphological difference feature into a machine learning model, and obtaining the defect prediction result corresponding to the candidate frame through the machine learning model includes: obtaining first contour information corresponding to the first image block and second contour information corresponding to the second image block; determining contour features corresponding to the candidate frames according to the first contour information and the second contour information; inputting at least the morphological difference features and the contour features into a machine learning model, and obtaining defect prediction results corresponding to the candidate frames through the machine learning model.
In this implementation manner, contour searching methods such as findContours may be used to perform contour searching on the first image block and the second image block respectively, so as to obtain first contour information corresponding to the first image block and second contour information corresponding to the second image block.
In one example, the first profile information may be represented by v_c_temp and the second profile information may be represented by v_c_test. The defect prediction result corresponding to the candidate frame may be determined in combination with the contour feature v2= [ v_c_test, v_c_temp ].
In this implementation manner, the defect prediction result corresponding to the candidate frame is determined by combining the first contour information corresponding to the first image block and the second contour information corresponding to the second image block, so that the accuracy of the defect prediction result corresponding to the candidate frame can be improved.
As an example of this implementation manner, the obtaining the first contour information corresponding to the first image block and the second contour information corresponding to the second image block includes: determining first contour information corresponding to the first image block according to the contour in the first binarized image block corresponding to the first image block; and determining second contour information corresponding to the second image block according to the contour in the second binarized image block corresponding to the second image block.
In one example, a blurring operation may be performed on a first gray image block corresponding to a first image block and a second gray image block corresponding to a second image block, to obtain a first blurred image block corresponding to the first image block and a second blurred image block corresponding to the second image block; and respectively performing binarization operation on the first blurred image block and the second blurred image block to obtain a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block.
In one example, the first gray scale image block and the second gray scale image block may be subjected to a gaussian blur operation, respectively, to obtain a first blurred image block and a second blurred image block. The gaussian kernel of the gaussian blur operation may be 5×5, 3×3, or 7×7, etc., which is not limited herein.
In one example, an OTSU method may be used to perform binarization operation on the first blurred image block and the second blurred image block, to obtain a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block.
In one example, contour searching methods such as findContours may be used to perform contour searching on the first binarized image block and the second binarized image block to obtain a contour in the first binarized image block and a contour in the second binarized image block.
In one example, the first blurred image block may be represented by temp_img_blur, the second blurred image block by test_img_blur, the first binarized image block by temp_img_bina, the second binarized image block by test_img_bina, the contours in the first binarized image block by temp_img_contours, and the contours in the second binarized image block by test_img_contours.
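The blur, binarization, and contour-search steps described above can be sketched as follows; the 5×5 Gaussian kernel is one of the sizes the text allows, and the contour retrieval mode cv2.RETR_EXTERNAL is an assumption of this sketch:

```python
import cv2

def binarize_and_contours(img_gray):
    # Gaussian blur, then OTSU binarization (threshold value chosen automatically)
    img_blur = cv2.GaussianBlur(img_gray, (5, 5), 0)
    _, img_bina = cv2.threshold(img_blur, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(img_bina, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return img_bina, contours

# temp_img_gray / test_img_gray: grayscale blocks as named in the text
temp_img_bina, temp_img_contours = binarize_and_contours(temp_img_gray)
test_img_bina, test_img_contours = binarize_and_contours(test_img_gray)
```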
In this example, by determining the first contour information corresponding to the first image block from the contour in the first binarized image block corresponding to the first image block and determining the second contour information corresponding to the second image block from the contour in the second binarized image block corresponding to the second image block, contour finding can be performed more accurately.
In one example, the first profile information includes: geometric information of the largest N outlines in the first binarized image block and the number of the outlines in the first binarized image block, wherein N is an integer greater than or equal to 1; the second profile information includes: geometric information of the largest N contours in the second binarized image block, and the number of contours in the second binarized image block.
In this example, the largest N contours in the first binarized image block may be determined by ordering according to the area size of the region enclosed by the contours in the first binarized image block. The largest N outlines in the second binarized image block can be determined by sorting according to the area size of the area surrounded by the outlines in the second binarized image block.
For example, N is equal to 2. Of course, the size of N can be flexibly set by those skilled in the art according to the actual application scene requirement, which is not limited herein.
For example, the first contour information may be v_c_temp = [temp_count, v_c_temp_v1, v_c_temp_v2], where temp_count represents the number of contours in the first binarized image block, v_c_temp_v1 represents the geometric information of the largest contour in the first binarized image block, and v_c_temp_v2 represents the geometric information of the second largest contour in the first binarized image block. Similarly, the second contour information may be v_c_test = [test_count, v_c_test_v1, v_c_test_v2], where test_count represents the number of contours in the second binarized image block, v_c_test_v1 represents the geometric information of the largest contour in the second binarized image block, and v_c_test_v2 represents the geometric information of the second largest contour in the second binarized image block.
In this example, by combining the geometric information of the largest N contours in the first binarized image block, the number of contours in the first binarized image block, the geometric information of the largest N contours in the second binarized image block, and the number of contours in the second binarized image block, more accurate defect detection can be achieved for the candidate frame.
In one example, the geometric information of the contour includes at least one of: the area of the contour, the bounding rectangle of the contour, the center moment of the contour, the position of the geometric center of the contour, the perimeter of the contour, the non-convexity of the contour, the smallest bounding rectangle of the contour, the smallest bounding circle of the contour, the fitting ellipse of the contour, the fitting rectangle of the contour. Wherein, the fitted ellipse of the contour may represent an ellipse obtained by performing ellipse fitting on the contour. The fitted rectangle of the contour may represent a rectangle obtained by straight line fitting of the contour.
Taking the contour with the largest area in the second binarized image block as an example, the area of the contour may be represented by test_c_area1, the bounding rectangle of the contour by (a1_x, a1_y, a1_w, a1_h), the center moment of the contour by M1, the position of the geometric center of the contour by (c1_x, c1_y), the perimeter of the contour by perimeter1, the non-convexity of the contour by is_convex1, the smallest bounding rectangle of the contour by (a1_xr, a1_yr, a1_wr, a1_hr), the smallest bounding circle of the contour by (cr1_x, cr1_y, cr1_r), the fitted ellipse of the contour by (e11, e12, e13, e14, e15), and the fitted rectangle of the contour by (l11, l12, l13, l14).
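One possible way to compute the geometric information of a contour with OpenCV is sketched below. The exact encoding of each descriptor, the zero-padding when fewer than N contours exist or a contour has fewer than 5 points (cv2.fitEllipse requires at least 5), and the helper names are our own choices, not fixed by the text:

```python
import cv2

def contour_geometry(c):
    """Flat list of the geometric descriptors listed above for one contour."""
    area = cv2.contourArea(c)                         # e.g. test_c_area1
    x, y, w, h = cv2.boundingRect(c)                  # (a1_x, a1_y, a1_w, a1_h)
    M = cv2.moments(c)                                # moments, including M1
    cx = M['m10'] / (M['m00'] + 1e-9)                 # geometric center (c1_x, c1_y)
    cy = M['m01'] / (M['m00'] + 1e-9)
    perimeter = cv2.arcLength(c, True)                # perimeter1
    is_convex = float(cv2.isContourConvex(c))         # (non-)convexity flag
    (rx, ry), (rw, rh), r_ang = cv2.minAreaRect(c)    # smallest bounding rectangle
    (ox, oy), radius = cv2.minEnclosingCircle(c)      # smallest bounding circle
    if len(c) >= 5:                                   # fitEllipse needs >= 5 points
        (ex, ey), (ea, eb), e_ang = cv2.fitEllipse(c) # fitted ellipse (e11..e15)
    else:
        ex = ey = ea = eb = e_ang = 0.0
    vx, vy, x0, y0 = cv2.fitLine(c, cv2.DIST_L2, 0, 0.01, 0.01).ravel()  # (l11..l14)
    return [area, x, y, w, h, cx, cy, perimeter, is_convex,
            rx, ry, rw, rh, r_ang, ox, oy, radius,
            ex, ey, ea, eb, e_ang, float(vx), float(vy), float(x0), float(y0)]

def contour_feature(contours, n=2):
    # keep the n contours with the largest enclosed area; zero-pad if fewer exist
    top = sorted(contours, key=cv2.contourArea, reverse=True)[:n]
    geoms = [contour_geometry(c) for c in top]
    while len(geoms) < n:
        geoms.append([0.0] * 26)
    return [len(contours)] + [v for g in geoms for v in g]  # e.g. v_c_test
```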
In one possible implementation manner, the inputting at least the morphological difference feature into a machine learning model, and obtaining the defect prediction result corresponding to the candidate frame through the machine learning model includes: obtaining the width and the height of the first image block; determining the size characteristics corresponding to the candidate frames according to the width and the height; inputting at least the morphological difference feature and the dimensional feature into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model.
In one example, the width of the first image block may be expressed in terms of wt, the height of the first image block may be expressed in terms of ht, and the aspect ratio of the first image block may be expressed in terms of wt/ht. The defect prediction result corresponding to the candidate frame may be determined in combination with the size characteristic v3= [ wt, ht, wt/ht ] of the first image block.
In this implementation manner, the defect prediction result corresponding to the candidate frame is determined by combining the width and the height of the first image block, so that the accuracy of the defect prediction result corresponding to the candidate frame can be improved.
In one possible implementation manner, the inputting at least the morphological difference feature into a machine learning model, and obtaining the defect prediction result corresponding to the candidate frame through the machine learning model includes: obtaining gradient information of the first image block and gradient information of the second image block; determining gradient characteristics corresponding to the candidate frames according to the gradient information of the first image block and the gradient information of the second image block; inputting at least the morphological difference features and the gradient features into a machine learning model, and obtaining defect prediction results corresponding to the candidate frames through the machine learning model.
In one example, the gradient information of the first image block may be a directional gradient histogram (Histogram of Oriented Gradient, HOG) of the first image block, the gradient information of the second image block may be a directional gradient histogram of the second image block, the gradient information of the first image block may be represented by temp_hog_vec, and the gradient information of the second image block may be represented by test_hog_vec. The defect prediction result corresponding to the candidate frame may be determined in combination with the gradient feature v4= [ temp_hog_vec, test_hog_vec ].
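A hedged sketch of the gradient feature v4 follows. The HOG window, block, and cell parameters below are assumptions, since the text does not specify them; both blocks are resized to the HOG window so the descriptor length is constant:

```python
import cv2

HOG = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def gradient_feature(temp_img_gray, test_img_gray):
    # resize each block to the HOG window so descriptors are comparable
    temp_hog_vec = HOG.compute(cv2.resize(temp_img_gray, (64, 64))).ravel()
    test_hog_vec = HOG.compute(cv2.resize(test_img_gray, (64, 64))).ravel()
    return list(temp_hog_vec) + list(test_hog_vec)  # v4
```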
In this implementation manner, the defect prediction result corresponding to the candidate frame is determined by combining the gradient information of the first image block and the gradient information of the second image block, so that the accuracy of the defect prediction result corresponding to the candidate frame can be improved.
In one possible implementation manner, the inputting at least the morphological difference feature into a machine learning model, and obtaining the defect prediction result corresponding to the candidate frame through the machine learning model includes: obtaining a difference image block of a first binarization image block corresponding to the first image block and a second binarization image block corresponding to the second image block; determining a difference statistical feature corresponding to the candidate frame according to the difference image block; inputting at least the morphological difference features and the difference statistical features into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model.
In one example, a blurring operation may be performed on a first gray image block corresponding to a first image block and a second gray image block corresponding to a second image block, to obtain a first blurred image block corresponding to the first image block and a second blurred image block corresponding to the second image block; and respectively performing binarization operation on the first blurred image block and the second blurred image block to obtain a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block.
In one example, the first gray scale image block and the second gray scale image block may be subjected to a gaussian blur operation, respectively, to obtain a first blurred image block and a second blurred image block. The gaussian kernel of the gaussian blur operation may be 5×5, 3×3, or 7×7, etc., which is not limited herein.
In one example, the first binarized image block may be represented by temp_img_bina, the second binarized image block may be represented by test_img_bina, the difference image block may be represented by diff_img, and the difference image block may be determined according to diff_img=test_img_bina-temp_img_bina. The difference image block may also be referred to as a difference matrix, and is not limited herein.
In this implementation manner, the defect detection is performed on the candidate frame by combining the feature information of the difference image block of the first binarized image block corresponding to the first image block and the difference image block of the second binarized image block corresponding to the second image block, so that the accuracy of the defect prediction result corresponding to the candidate frame can be improved by using the difference information between the first binarized image block and the second binarized image block.
As an example of this implementation, the feature information of the difference image block includes: the number of pixels in the difference image block whose pixel value is not 0.
In one example, each pixel value of the difference image block that is not 0 may be changed to 1, and the pixel values in the difference image block may then be accumulated to determine the number of pixels in the difference image block whose pixel value is not 0.
According to this example, the difference information of the first binarized image block and the second binarized image block can be quickly and efficiently determined.
In one example, the number of pixels in the difference image block having a pixel value other than 0 includes: a number of pixels in each row of pixels of the difference image block having a pixel value other than 0, and a number of pixels in each column of pixels of the difference image block having a pixel value other than 0.
In this example, for each row in the difference image block, the number of pixels whose pixel value is not 0 may be determined separately, and for each column in the difference image block, the number of pixels whose pixel value is not 0 may be determined separately. From the per-row and per-column counts, a difference statistical feature v6 = [diff_project_y, diff_project_x] can be obtained, where diff_project_y and diff_project_x are vectors: the number of elements in diff_project_y equals the number of rows of the difference image block, and the number of elements in diff_project_x equals the number of columns of the difference image block.
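A minimal sketch of the difference statistical feature v6; cv2.absdiff is used instead of a plain subtraction so that unsigned wrap-around cannot occur, which is our choice rather than a requirement of the text:

```python
import cv2
import numpy as np

def difference_statistic(temp_img_bina, test_img_bina):
    diff_img = cv2.absdiff(test_img_bina, temp_img_bina)
    diff_project_y = np.count_nonzero(diff_img, axis=1)  # per-row non-zero counts
    diff_project_x = np.count_nonzero(diff_img, axis=0)  # per-column non-zero counts
    return list(diff_project_y) + list(diff_project_x)   # v6
```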
According to this example, the accuracy of defect detection of the candidate frame can be further improved.
In another example, the number of pixels in the difference image block having a pixel value other than 0 includes: the number of pixels whose pixel value is not 0 in each row of pixels of the difference image block.
In another example, the number of pixels in the difference image block having a pixel value other than 0 includes: the number of pixels whose pixel value is not 0 in each column of pixels of the difference image block.
In one possible implementation manner, the inputting at least the morphological difference feature into a machine learning model, and obtaining the defect prediction result corresponding to the candidate frame through the machine learning model includes: extracting features of the first image block through a pre-trained first neural network to obtain a first depth feature corresponding to the first image block; extracting features of the second image block through the first neural network to obtain a second depth feature corresponding to the second image block; inputting at least the morphological difference feature, the first depth feature and the second depth feature into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model.
In this implementation, the first neural network may be a deep learning model. For example, the first neural network may adopt a network structure such as LeNet5 or AlexNet, and may be trained in advance on a data set such as MNIST.
In one example, a first gray image block corresponding to a first image block may be input into a first neural network trained in advance, and feature extraction is performed on the first gray image block through the first neural network, so as to obtain a first depth feature corresponding to the first image block; and inputting the second gray image block corresponding to the second image block into the first neural network, and extracting the characteristics of the second gray image block through the first neural network to obtain a second depth characteristic corresponding to the second image block.
For example, the preprocessing operation of the LeNet5 model may be performed on the first grayscale image block temp_img_gray to obtain a first preprocessing feature temp_img_pre, and on the second grayscale image block test_img_gray to obtain a second preprocessing feature test_img_pre. The first preprocessing feature temp_img_pre may be fed into the LeNet5 model for inference, and the 84-dimensional feature vector output by the penultimate layer of the LeNet5 model may be used as the first depth feature, or the 10-dimensional feature vector output by the last layer may be used as the first depth feature. Likewise, the second preprocessing feature test_img_pre may be fed into the LeNet5 model for inference, and the 84-dimensional feature vector output by the penultimate layer may be used as the second depth feature, or the 10-dimensional feature vector output by the last layer may be used as the second depth feature.
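An illustrative PyTorch sketch of this depth-feature extraction with a LeNet5-style network is given below. The layer sizes follow the classic LeNet5 (84-dimensional penultimate layer, 10-dimensional output), but the weights and the 32×32 preprocessing are assumptions, since the disclosure does not publish them:

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)     # penultimate layer: 84-dim feature
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x, return_penultimate=False):
        x = self.features(x).flatten(1)
        x = torch.tanh(self.fc1(x))
        feat = torch.tanh(self.fc2(x))    # 84-dim depth feature
        return feat if return_penultimate else self.fc3(feat)

net = LeNet5().eval()  # assume weights pretrained in advance, e.g. on MNIST
with torch.no_grad():
    # temp_img_pre / test_img_pre: preprocessed blocks of shape (1, 1, 32, 32)
    v_temp_deep = net(temp_img_pre, return_penultimate=True).squeeze(0)
    v_test_deep = net(test_img_pre, return_penultimate=True).squeeze(0)
```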
In another example, the first image block may be input into a first neural network trained in advance, and feature extraction is performed on the first image block through the first neural network, so as to obtain a first depth feature corresponding to the first image block; and inputting the second image block into the first neural network, and extracting the characteristics of the second image block through the first neural network to obtain a second depth characteristic corresponding to the second image block.
In one example, the first depth feature may be represented by v_temp_deep, the second depth feature may be represented by v_test_deep, and the defect prediction result corresponding to the candidate frame may be determined in combination with the depth feature v_deep= [ v_test_deep, v_temp_deep ].
In the implementation mode, the first image block and the second image block are subjected to feature extraction through the first neural network to obtain the first depth feature corresponding to the first image block and the second depth feature corresponding to the second image block, so that features which are not considered by human priori knowledge can be effectively supplemented, and the accuracy of defect detection can be further improved.
In the embodiment of the disclosure, the defect frame of the training image may be obtained from the annotation data of the training image. For any candidate frame of the training image, if the intersection of the candidate frame and each defect frame of the training image is empty, determining that the labeling information corresponding to the candidate frame is non-defect; if the candidate frame has an intersection with only one defect frame of the training image, determining the defect type corresponding to the defect frame as labeling information corresponding to the candidate frame; if the candidate frame and at least two defect frames of the training image have intersection, determining the defect type corresponding to the defect frame with the largest intersection of the candidate frame in the at least two defect frames as the labeling information corresponding to the candidate frame.
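The labeling rule just described can be sketched as follows, assuming boxes are given as (x1, y1, x2, y2) tuples and defect_boxes is a list of (box, defect_type) pairs taken from the annotation data; the non_defect_label value is a placeholder of this sketch:

```python
def intersection_area(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def label_candidate(candidate, defect_boxes, non_defect_label=0):
    overlaps = [(intersection_area(candidate, box), dtype)
                for box, dtype in defect_boxes]
    overlaps = [(a, t) for a, t in overlaps if a > 0]
    if not overlaps:                     # empty intersection with every defect frame
        return non_defect_label
    # one or more intersecting defect frames: take the type of the largest overlap
    return max(overlaps, key=lambda p: p[0])[1]
```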
In one possible implementation, the machine learning model may be a machine learning model based on ensemble learning. For example, the machine learning model may be based on a random forest algorithm, XGBoost, CatBoost, etc., which is not limited herein. Taking the random forest algorithm as an example, a parameter search may be performed on the training set (comprising the features corresponding to the candidate frames and the labeling information corresponding to the candidate frames) using K-fold cross validation (for example, 5-fold cross validation) to obtain the optimal algorithm parameters para_opt; the algorithm parameters para_opt are then fixed, and the machine learning model is trained on the whole data set (comprising the training set and the test set). For example, para_opt = {'criterion': 'gini', 'min_samples_leaf': 1, 'min_samples_split': 6, 'n_estimators': 100}. In this implementation, by employing a machine learning model based on ensemble learning, the speed of defect detection can be increased.
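A hedged scikit-learn sketch of the 5-fold cross-validated parameter search and final fit; the search grid is our own, while the para_opt keys match the example values quoted above:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def fit_defect_classifier(X_train, y_train, X_all, y_all):
    # 5-fold cross-validated search over a small grid (the grid itself is our choice)
    param_grid = {
        'criterion': ['gini', 'entropy'],
        'min_samples_leaf': [1, 2, 4],
        'min_samples_split': [2, 6, 10],
        'n_estimators': [100, 200],
    }
    search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
    search.fit(X_train, y_train)
    para_opt = search.best_params_   # e.g. the values quoted in the text
    # fix para_opt and retrain on the whole data set (training set + test set)
    return RandomForestClassifier(random_state=0, **para_opt).fit(X_all, y_all)
```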
In another possible implementation, the machine learning model may be a deep learning model.
The training method of the machine learning model for defect detection provided by the embodiment of the present disclosure is described below through a specific application scenario. In the application scene, a training image corresponding to the PCB and a template image corresponding to the training image are acquired.
In the application scenario, candidate frames for defects in the training image may be determined from the training image and the template image.
The handcrafted feature v_traditional = [v1, v2, v3, v4, v5, v6] may be extracted for the candidate frame using conventional image processing methods.
where v1 is the gray scale feature, v1 = [temp_hist[0], temp_hist[255], test_hist[0], test_hist[255]];
v2 is the contour feature, v2 = [v_c_test, v_c_temp];
v3 is the size feature, v3 = [wt, ht, wt/ht];
v4 is the gradient feature, v4 = [temp_hog_vec, test_hog_vec];
v5 is the morphological difference feature, v5 = [v_dilate, v_erode];
v6 is the difference statistical feature, v6 = [diff_project_y, diff_project_x].
The depth feature v_deep = [v_test_deep, v_temp_deep] may be extracted for the candidate frame using the first neural network trained in advance.
The handcrafted feature v_traditional and the depth feature v_deep can be input into the machine learning model to obtain the defect prediction result corresponding to the candidate frame, and the machine learning model can then be trained according to the labeling information corresponding to the candidate frame and the defect prediction result.
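Tying the pieces together, the per-candidate feature assembly can be sketched using the helper functions introduced in the earlier sketches (all of which are illustrative names, not part of the present disclosure):

```python
import numpy as np
import torch

def candidate_feature(temp_gray, test_gray, temp_bina, test_bina,
                      temp_contours, test_contours, temp_pre, test_pre, net):
    v1 = gray_feature(temp_gray, test_gray)
    v2 = contour_feature(test_contours) + contour_feature(temp_contours)
    ht, wt = temp_gray.shape[:2]
    v3 = [wt, ht, wt / ht]
    v4 = gradient_feature(temp_gray, test_gray)
    v5 = morphological_difference(temp_bina, test_bina)
    v6 = difference_statistic(temp_bina, test_bina)
    with torch.no_grad():
        v_deep = (net(test_pre, return_penultimate=True).squeeze(0).tolist()
                  + net(temp_pre, return_penultimate=True).squeeze(0).tolist())
    # v_traditional = [v1, v2, v3, v4, v5, v6], followed by v_deep
    return np.array(v1 + v2 + v3 + v4 + v5 + v6 + v_deep, dtype=np.float32)
```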
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principles and logic, which are not described in detail in the present disclosure for the sake of brevity. It will also be appreciated by those skilled in the art that, in the above-described methods of the embodiments, the particular order of execution of the steps should be determined by their function and possible inherent logic.
In addition, the disclosure further provides a defect detection device, a training device of a machine learning model for defect detection, an electronic device, a computer readable storage medium, and a computer program product, where any one of the defect detection methods or the training method of the machine learning model for defect detection provided in the disclosure may be implemented, and corresponding technical schemes and technical effects may be referred to corresponding descriptions of method parts and are not repeated.
Fig. 5 shows a block diagram of a defect detection apparatus provided by an embodiment of the present disclosure. As shown in fig. 5, the defect detecting apparatus includes:
a first obtaining module 51, configured to obtain an image to be detected and a template image corresponding to the image to be detected;
a first determining module 52, configured to determine a candidate frame of a defect in the image to be detected according to the image to be detected and the template image;
and a second determining module 53, configured to determine a defect detection result corresponding to the candidate frame at least according to difference information of the image block corresponding to the candidate frame before and after morphological transformation.
In one possible implementation,
the image block corresponding to the candidate frame comprises: a first image block of the candidate frame on the template image and a second image block of the candidate frame on a difference image, wherein the difference image represents a difference image of the image to be detected and the template image;
The second determining module 53 is configured to: and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation and the difference information of the second image block before and after morphological transformation.
In one possible implementation, the second determining module 53 is configured to:
obtaining a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block;
performing morphological operation on the first binarized image block to obtain a first morphological transformation image block corresponding to the first binarized image block;
performing morphological operation on the second binarized image block to obtain a second morphological transformation image block corresponding to the second binarized image block;
and determining a defect detection result corresponding to the candidate frame according to a first pixel number with different pixel values between the first binarized image block and the first morphological transformation image block and a second pixel number with different pixel values between the second binarized image block and the second morphological transformation image block.
In one possible implementation, the second determining module 53 is configured to:
Respectively carrying out blurring operation on a first gray image block corresponding to the first image block and a second gray image block corresponding to the second image block to obtain a first blurring image block corresponding to the first image block and a second blurring image block corresponding to the second image block;
and respectively performing binarization operation on the first blurred image block and the second blurred image block to obtain a first binarization image block corresponding to the first image block and a second binarization image block corresponding to the second image block.
In one possible implementation, the second determining module 53 is configured to:
performing morphological operations on the first binarized image blocks based on kernels of at least two sizes to obtain at least two first morphological transformed image blocks corresponding to the first binarized image blocks;
performing morphological operations on the second binarized image blocks based on the kernels of at least two sizes to obtain at least two second morphological transformation image blocks corresponding to the second binarized image blocks;
for the at least two first morphological transformation image blocks, respectively determining the pixel numbers with different pixel values between the first binarization image blocks and the first morphological transformation image blocks to obtain at least two first pixel numbers; for the at least two second morphological transformation image blocks, respectively determining the pixel numbers with different pixel values between the second binarization image blocks and the second morphological transformation image blocks to obtain at least two second pixel numbers; and determining a defect detection result corresponding to the candidate frame according to the at least two first pixel numbers and the at least two second pixel numbers.
In one possible implementation,
the first morphological transformation image block comprises a first expansion image block and a first corrosion image block, and the second morphological transformation image block comprises a second expansion image block and a second corrosion image block;
the second determining module 53 is configured to: performing expansion operation on the first binarized image block to obtain a first expanded image block corresponding to the first binarized image block; performing corrosion operation on the first binarized image block to obtain a first corrosion image block corresponding to the first binarized image block; performing expansion operation on the second binarized image block to obtain a second expanded image block corresponding to the second binarized image block; and executing corrosion operation on the second binarized image block to obtain a second corrosion image block corresponding to the second binarized image block.
In one possible implementation, the second determining module 53 is configured to:
respectively carrying out gray statistics on the first image block and the second image block to obtain a first gray statistics result corresponding to the first image block and a second gray statistics result corresponding to the second image block;
and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the first gray level statistical result and the second gray level statistical result.
In one possible implementation,
the first gray scale statistic includes: the number of pixels with the gray value of 0 and the number of pixels with the gray value of 255 in the first image block;
the second gray level statistics include: the number of pixels with the gray value of 0 and the number of pixels with the gray value of 255 in the second image block.
In one possible implementation, the second determining module 53 is configured to:
obtaining first contour information corresponding to the first image block and second contour information corresponding to the second image block;
and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the first contour information and the second contour information.
In one possible implementation,
the first profile information includes: geometric information of the largest N outlines in the first binarized image block and the number of the outlines in the first binarized image block, wherein N is an integer greater than or equal to 1;
the second profile information includes: geometric information of the largest N contours in the second binarized image block, and the number of contours in the second binarized image block.
In one possible implementation, the second determining module 53 is configured to:
obtaining the width and the height of the first image block;
and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the width and the height.
In one possible implementation, the second determining module 53 is configured to:
obtaining gradient information of the first image block and gradient information of the second image block;
and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the gradient information of the first image block and the gradient information of the second image block.
In one possible implementation, the second determining module 53 is configured to:
obtaining a difference image block of a first binarization image block corresponding to the first image block and a second binarization image block corresponding to the second image block;
obtaining characteristic information of the difference image block;
And determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation and the characteristic information of the difference image block.
In one possible implementation manner, the feature information of the difference image block includes:
a number of pixels in each row of pixels of the difference image block having a pixel value other than 0, and a number of pixels in each column of pixels of the difference image block having a pixel value other than 0.
In one possible implementation, the first determining module 52 is configured to:
obtaining a difference image of the image to be detected and the template image;
determining contours in the difference image;
and determining candidate frames of the defects in the image to be detected according to the outline.
In one possible implementation, the first determining module 52 is configured to:
respectively carrying out blurring operation on the image to be detected and the template image to obtain a first blurring image corresponding to the image to be detected and a second blurring image corresponding to the template image;
and determining a difference image of the image to be detected and the template image according to the first blurred image and the second blurred image.
In one possible implementation, the first determining module 52 is configured to:
respectively carrying out binarization operation on the first blurred image and the second blurred image to obtain a first binarized image corresponding to the first blurred image and a second binarized image corresponding to the second blurred image;
and determining a difference image of the image to be detected and the template image according to the first binarized image and the second binarized image.
In one possible implementation, the first determining module 52 is configured to:
performing morphological operation on the difference image to obtain a de-interference image corresponding to the difference image;
and searching the outline in the interference elimination image to be used as the outline in the difference image.
In one possible implementation, the first determining module 52 is configured to:
filtering outlines meeting preset conditions in the difference images;
and determining candidate frames of the defects in the image to be detected according to the filtered residual outlines in the difference image.
In one possible implementation, the second determining module 53 is configured to:
extracting features of the first image block through a pre-trained first neural network to obtain a first depth feature corresponding to the first image block;
Extracting features of the second image block through the first neural network to obtain a second depth feature corresponding to the second image block;
and determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation, the difference information of the second image block before and after morphological transformation, the first depth feature and the second depth feature.
In one possible implementation, the second determining module 53 is configured to:
at least the difference information of the image blocks corresponding to the candidate frames before and after morphological transformation is input into a pre-trained machine learning model, and a defect detection result corresponding to the candidate frames is obtained through the machine learning model.
In one possible implementation manner, the image to be detected is an image to be detected corresponding to the printed circuit board.
Fig. 6 shows a block diagram of a training apparatus of a machine learning model for defect detection provided by an embodiment of the present disclosure. As shown in fig. 6, the training apparatus for a machine learning model for defect detection includes:
a second obtaining module 61, configured to obtain a training image and a template image corresponding to the training image;
A third determining module 62, configured to determine a candidate frame of a defect in the training image according to the training image and the template image;
a prediction module 63, configured to input, into a machine learning model, at least difference information of image blocks corresponding to the candidate frame before and after morphological transformation, and obtain a defect prediction result corresponding to the candidate frame via the machine learning model;
and the training module 64 is configured to train the machine learning model according to the labeling information corresponding to the candidate frame and the defect prediction result.
In one possible implementation, the third determining module 62 is configured to: obtaining a difference image of the training image and the template image; determining contours in the difference image; and determining candidate frames of defects in the training image according to the outline.
In one possible implementation, the third determining module 62 is configured to: respectively carrying out blurring operation on the training image and the template image to obtain a first blurring image corresponding to the training image and a second blurring image corresponding to the template image; and determining a difference image of the training image and the template image according to the first blurred image and the second blurred image. The first blurred image represents a blurred image corresponding to the training image, and the second blurred image represents a blurred image corresponding to the template image.
In one possible implementation, the third determining module 62 is configured to: respectively carrying out binarization operation on the first blurred image and the second blurred image to obtain a first binarized image corresponding to the first blurred image and a second binarized image corresponding to the second blurred image; and determining a difference image of the training image and the template image according to the first binarized image and the second binarized image.
In one possible implementation, the third determining module 62 is configured to: for any pixel position, if the pixel values of the pixel positions in the first binarized image and the second binarized image are different, the pixel value of the pixel position in the difference image of the training image and the template image is 0; for any pixel position, if the pixel values of the pixel positions in the first binarized image and the second binarized image are the same, the pixel value of the pixel position in the difference image is 255.
In one possible implementation, the third determining module 62 is configured to: respectively carrying out binarization operation on the training image and the template image to obtain a first binarization image corresponding to the training image and a second binarization image corresponding to the template image; and determining a difference image of the training image and the template image according to the first binarized image and the second binarized image.
In one possible implementation, the third determining module 62 is configured to: performing morphological operation on the difference image to obtain a de-interference image corresponding to the difference image; and searching the outline in the interference elimination image to be used as the outline in the difference image.
In one possible implementation, the third determining module 62 is configured to: filtering outlines meeting preset conditions in the difference images; and determining candidate frames of the defects in the training image according to the filtered residual outline in the difference image.
In one possible implementation, the preset condition includes at least one of: the area of the area surrounded by the outline is smaller than a first preset area; the area of the area surrounded by the outline is larger than a second preset area, wherein the second preset area is larger than the first preset area; the aspect ratio of the surrounding rectangle of the outline is smaller than a first preset threshold value; the aspect ratio of the surrounding rectangle of the outline is larger than a second preset threshold value, wherein the second preset threshold value is larger than the first preset threshold value; the average pixel value within the bounding rectangle of the outline is greater than the preset pixel value.
In one possible implementation, the third determining module 62 is configured to: and for any outline remained after filtering in the difference image, determining an enlarged rectangle corresponding to a surrounding rectangle of the outline as a candidate frame of the defect in the training image, wherein the enlarged rectangle coincides with the geometric center of the surrounding rectangle, the length of the enlarged rectangle is a first preset multiple of the length of the surrounding rectangle, and the width of the enlarged rectangle is a second preset multiple of the width of the surrounding rectangle, and the first preset multiple and the second preset multiple are both larger than 1.
In one possible implementation, the third determining module 62 is configured to: the training image and the template image are input into a pre-trained second neural network, via which candidate boxes for defects in the training image are determined. Wherein the second neural network is configured to determine candidate boxes for defects in the training image based on the training image and the template image.
In one possible implementation manner, the image block corresponding to the candidate frame includes: a first image block of the candidate frame on the template image, and a second image block of the candidate frame on a difference image, wherein the difference image represents a difference image of the training image and the template image; the prediction module 63 is configured to: determining morphological difference characteristics corresponding to the candidate frames according to difference information of the first image block before and after morphological transformation and difference information of the second image block before and after morphological transformation; and inputting at least the morphological difference characteristic into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model.
In one possible implementation, the prediction module 63 is configured to: obtaining a first binarized image block corresponding to the first image block and a second binarized image block corresponding to the second image block; performing morphological operation on the first binarized image block to obtain a first morphological transformation image block corresponding to the first binarized image block; performing morphological operation on the second binarized image block to obtain a second morphological transformation image block corresponding to the second binarized image block; and determining morphological difference characteristics corresponding to the candidate frame according to a first pixel number with different pixel values between the first binarized image block and the first morphological transformation image block and a second pixel number with different pixel values between the second binarized image block and the second morphological transformation image block.
In one possible implementation, the prediction module 63 is configured to: respectively carrying out blurring operation on a first gray image block corresponding to the first image block and a second gray image block corresponding to the second image block to obtain a first blurring image block corresponding to the first image block and a second blurring image block corresponding to the second image block; and respectively performing binarization operation on the first blurred image block and the second blurred image block to obtain a first binarization image block corresponding to the first image block and a second binarization image block corresponding to the second image block.
In one possible implementation, the prediction module 63 is configured to: performing morphological operations on the first binarized image blocks based on kernels of at least two sizes to obtain at least two first morphological transformed image blocks corresponding to the first binarized image blocks; performing morphological operations on the second binarized image blocks based on the kernels of at least two sizes to obtain at least two second morphological transformation image blocks corresponding to the second binarized image blocks; for the at least two first morphological transformation image blocks, respectively determining the pixel numbers with different pixel values between the first binarization image blocks and the first morphological transformation image blocks to obtain at least two first pixel numbers; for the at least two second morphological transformation image blocks, respectively determining the pixel numbers with different pixel values between the second binarization image blocks and the second morphological transformation image blocks to obtain at least two second pixel numbers; and determining morphological difference characteristics corresponding to the candidate frames according to the at least two first pixel numbers and the at least two second pixel numbers.
In one possible implementation, the first morphological transformation image block includes a first dilation image block and a first erosion image block, and the second morphological transformation image block includes a second dilation image block and a second erosion image block; the prediction module 63 is configured to: performing expansion operation on the first binarized image block to obtain a first expanded image block corresponding to the first binarized image block; performing corrosion operation on the first binarized image block to obtain a first corrosion image block corresponding to the first binarized image block; performing expansion operation on the second binarized image block to obtain a second expanded image block corresponding to the second binarized image block; and executing corrosion operation on the second binarized image block to obtain a second corrosion image block corresponding to the second binarized image block.
In one possible implementation, the prediction module 63 is configured to: respectively carrying out gray statistics on the first image block and the second image block to obtain a first gray statistics result corresponding to the first image block and a second gray statistics result corresponding to the second image block; determining gray features corresponding to the candidate frames according to the first gray statistics result and the second gray statistics result; inputting at least the morphological difference feature and the gray level feature into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model.
In one possible implementation, the first gray statistics result includes: the number of pixels with the gray value of 0 and the number of pixels with the gray value of 255 in the first image block; the second gray level statistics include: the number of pixels with the gray value of 0 and the number of pixels with the gray value of 255 in the second image block.
In one possible implementation, the prediction module 63 is configured to: obtaining first contour information corresponding to the first image block and second contour information corresponding to the second image block; determining contour features corresponding to the candidate frames according to the first contour information and the second contour information; inputting at least the morphological difference features and the contour features into a machine learning model, and obtaining defect prediction results corresponding to the candidate frames through the machine learning model.
In one possible implementation, the prediction module 63 is configured to: determining first contour information corresponding to the first image block according to the contour in the first binarized image block corresponding to the first image block; and determining second contour information corresponding to the second image block according to the contour in the second binarized image block corresponding to the second image block.
In one possible implementation, the first profile information includes: geometric information of the largest N outlines in the first binarized image block and the number of the outlines in the first binarized image block, wherein N is an integer greater than or equal to 1; the second profile information includes: geometric information of the largest N contours in the second binarized image block, and the number of contours in the second binarized image block.
In one possible implementation, the geometric information of the contour includes at least one of: the area of the contour, the bounding rectangle of the contour, the center moment of the contour, the position of the geometric center of the contour, the perimeter of the contour, the non-convexity of the contour, the smallest bounding rectangle of the contour, the smallest bounding circle of the contour, the fitting ellipse of the contour, the fitting rectangle of the contour. Wherein, the fitted ellipse of the contour may represent an ellipse obtained by performing ellipse fitting on the contour. The fitted rectangle of the contour may represent a rectangle obtained by straight line fitting of the contour.
In one possible implementation, the prediction module 63 is configured to: obtaining the width and the height of the first image block; determining the size characteristics corresponding to the candidate frames according to the width and the height; inputting at least the morphological difference feature and the dimensional feature into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model.
In one possible implementation, the prediction module 63 is configured to: obtaining gradient information of the first image block and gradient information of the second image block; determining gradient characteristics corresponding to the candidate frames according to the gradient information of the first image block and the gradient information of the second image block; inputting at least the morphological difference features and the gradient features into a machine learning model, and obtaining defect prediction results corresponding to the candidate frames through the machine learning model.
In one possible implementation, the prediction module 63 is configured to: obtaining a difference image block of a first binarization image block corresponding to the first image block and a second binarization image block corresponding to the second image block; determining a difference statistical feature corresponding to the candidate frame according to the difference image block; inputting at least the morphological difference features and the difference statistical features into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model.
In one possible implementation manner, the feature information of the difference image block includes: the number of pixels in the difference image block whose pixel value is not 0.
In one possible implementation, the number of pixels in the difference image block with a pixel value other than 0 includes: a number of pixels in each row of pixels of the difference image block having a pixel value other than 0, and a number of pixels in each column of pixels of the difference image block having a pixel value other than 0.
In one possible implementation, the prediction module 63 is configured to: extracting features of the first image block through a pre-trained first neural network to obtain a first depth feature corresponding to the first image block; extracting features of the second image block through the first neural network to obtain a second depth feature corresponding to the second image block; inputting at least the morphological difference feature, the first depth feature and the second depth feature into a machine learning model, and obtaining a defect prediction result corresponding to the candidate frame through the machine learning model.
In some embodiments, the functions or modules of the apparatus provided by the embodiments of the present disclosure may be used to perform the methods described in the foregoing method embodiments; for their specific implementation and technical effects, reference may be made to the descriptions of the foregoing method embodiments, which are not repeated here for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. Wherein the computer readable storage medium may be a non-volatile computer readable storage medium or may be a volatile computer readable storage medium.
The embodiments of the present disclosure also provide a computer program including computer readable code which, when run in an electronic device, causes a processor in the electronic device to perform the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, causes a processor in the electronic device to perform the above method.
The embodiment of the disclosure also provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored by the memory to perform the above-described method.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 7 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server. Referring to FIG. 7, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system developed by Apple Inc. (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing. Computer readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to respective computing/processing devices, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer readable program instructions from the network and forwards them for storage in a computer readable storage medium within the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, the electronic circuitry being capable of executing the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing descriptions of the various embodiments tend to emphasize the differences between the embodiments; for the parts that are the same or similar, the embodiments may be referred to one another, and for brevity these parts are not repeated herein.
If the technical solutions of the embodiments of the present disclosure involve personal information, products applying these technical solutions clearly inform users of the personal information processing rules and obtain the individuals' separate consent before processing the personal information. If the technical solutions of the embodiments of the present disclosure involve sensitive personal information, products applying these technical solutions obtain the individuals' separate consent before processing the sensitive personal information and additionally satisfy the requirement of "explicit consent". For example, a clear and conspicuous sign is placed at a personal information collection device, such as a camera, to inform individuals that they are entering the personal information collection range and that their personal information will be collected; if an individual voluntarily enters the collection range, this is deemed consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, personal authorization is obtained by means such as pop-up messages or requests for the individual to upload personal information, provided that conspicuous signs or notices are used to communicate the personal information processing rules. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information processed.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A defect detection method, comprising:
acquiring an image to be detected and a template image corresponding to the image to be detected;
obtaining a difference image of the image to be detected and the template image;
determining contours in the difference image;
determining candidate frames of defects in the image to be detected according to the contours;
determining a defect detection result corresponding to the candidate frame at least according to difference information of the image block corresponding to the candidate frame before and after morphological transformation;
wherein the obtaining the difference image between the image to be detected and the template image includes: respectively carrying out blurring operation on the image to be detected and the template image to obtain a first blurring image corresponding to the image to be detected and a second blurring image corresponding to the template image; determining a difference image of the image to be detected and the template image according to the first blurred image and the second blurred image;
and/or,
the determining of the contours in the difference image comprises: performing a morphological operation on the difference image to obtain a de-interference image corresponding to the difference image; and searching for contours in the de-interference image as the contours in the difference image;
and/or,
the determining of the candidate frames of defects in the image to be detected according to the contours comprises: filtering out contours in the difference image that meet a preset condition; and determining candidate frames of defects in the image to be detected according to the contours remaining in the difference image after the filtering.
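To make the claimed processing chain concrete, the following is a non-limiting sketch of the candidate-frame stage using OpenCV; the kernel size, binarization threshold, and area-based preset condition are illustrative assumptions, not values fixed by the claims.

```python
import cv2
import numpy as np

def candidate_frames(image: np.ndarray, template: np.ndarray, min_area: float = 20.0) -> list:
    """Blur -> difference image -> morphological de-interference -> contours -> candidate frames.

    image and template are assumed to be aligned, single-channel (grayscale) uint8 arrays.
    """
    first_blur = cv2.GaussianBlur(image, (5, 5), 0)      # first blurred image
    second_blur = cv2.GaussianBlur(template, (5, 5), 0)  # second blurred image
    diff = cv2.absdiff(first_blur, second_blur)          # difference image
    _, diff = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    clean = cv2.morphologyEx(diff, cv2.MORPH_OPEN, kernel)  # de-interference image
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Filter out contours meeting the preset condition (here: area below min_area),
    # then derive candidate frames from the remaining contours.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```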
2. The method according to claim 1, wherein
the image block corresponding to the candidate frame comprises: a first image block of the candidate frame on the template image and a second image block of the candidate frame on a difference image, wherein the difference image represents a difference image of the image to be detected and the template image;
the determining a defect detection result corresponding to the candidate frame at least according to difference information of the image block corresponding to the candidate frame before and after morphological transformation comprises: determining a defect detection result corresponding to the candidate frame at least according to the difference information of the first image block before and after morphological transformation and the difference information of the second image block before and after morphological transformation.
3. The method of claim 1, wherein the determining a difference image of the image to be detected and the template image from the first blurred image and the second blurred image comprises:
respectively carrying out binarization operation on the first blurred image and the second blurred image to obtain a first binarized image corresponding to the first blurred image and a second binarized image corresponding to the second blurred image;
and determining a difference image of the image to be detected and the template image according to the first binarized image and the second binarized image.
4. A method according to any one of claims 1 to 3, wherein determining the defect detection result corresponding to the candidate frame at least according to difference information of the image block corresponding to the candidate frame before and after morphological transformation comprises:
at least the difference information of the image blocks corresponding to the candidate frames before and after morphological transformation is input into a pre-trained machine learning model, and a defect detection result corresponding to the candidate frames is obtained through the machine learning model.
5. A method according to any one of claims 1 to 3, wherein the image to be detected is an image to be detected corresponding to a printed circuit board.
6. A method of training a machine learning model for defect detection, comprising:
acquiring a training image and a template image corresponding to the training image;
obtaining a difference image of the training image and the template image;
determining contours in the difference image;
determining candidate frames of defects in the training image according to the contours;
at least inputting difference information of image blocks corresponding to the candidate frames before and after morphological transformation into a machine learning model, and obtaining defect prediction results corresponding to the candidate frames through the machine learning model;
training the machine learning model according to the labeling information corresponding to the candidate frame and the defect prediction result;
wherein the obtaining a difference image between the training image and the template image includes: respectively carrying out blurring operation on the training image and the template image to obtain a first blurring image corresponding to the training image and a second blurring image corresponding to the template image; determining a difference image of the training image and the template image according to the first blurred image and the second blurred image;
and/or,
the determining of the contours in the difference image comprises: performing a morphological operation on the difference image to obtain a de-interference image corresponding to the difference image; and searching for contours in the de-interference image as the contours in the difference image;
and/or,
the determining of the candidate frames of defects in the training image according to the contours comprises: filtering out contours in the difference image that meet a preset condition; and determining candidate frames of defects in the training image according to the contours remaining in the difference image after the filtering.
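Again for illustration only, the training step of claim 6 could look as follows if the machine learning model were a classical classifier; scikit-learn's GradientBoostingClassifier is an assumption (the disclosure does not fix the model type), and the feature vectors and labeling information are assumed to be prepared upstream (the random arrays are placeholders for the sketch).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical inputs: X holds one feature vector per candidate frame (difference
# information before/after morphological transformation, optionally size, gradient,
# statistical, and depth features); y holds the labeling information
# (1 = real defect, 0 = false alarm).
X = np.random.rand(200, 16)          # placeholder features for the sketch
y = np.random.randint(0, 2, 200)     # placeholder labels for the sketch

model = GradientBoostingClassifier()
model.fit(X, y)                      # training minimizes the loss between predictions and labels

# Defect prediction result for new candidate frames:
# probabilities = model.predict_proba(X_new)[:, 1]
```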
7. A defect detection apparatus, comprising:
the first acquisition module is used for acquiring an image to be detected and a template image corresponding to the image to be detected;
the first determining module is used for obtaining a difference image of the image to be detected and the template image, determining a contour in the difference image and determining a candidate frame of the defect in the image to be detected according to the contour;
the second determining module is used for determining a defect detection result corresponding to the candidate frame at least according to difference information of the image block corresponding to the candidate frame before and after morphological transformation;
the first determining module is specifically configured to:
Respectively carrying out blurring operation on the image to be detected and the template image to obtain a first blurring image corresponding to the image to be detected and a second blurring image corresponding to the template image; determining a difference image of the image to be detected and the template image according to the first blurred image and the second blurred image;
and/or,
performing a morphological operation on the difference image to obtain a de-interference image corresponding to the difference image, and searching for contours in the de-interference image as the contours in the difference image;
and/or,
filtering out contours in the difference image that meet a preset condition, and determining candidate frames of defects in the image to be detected according to the contours remaining in the difference image after the filtering.
8. A training apparatus for a machine learning model for defect detection, comprising:
the second acquisition module is used for acquiring a training image and a template image corresponding to the training image;
a third determining module, configured to obtain a difference image between the training image and the template image, determine a contour in the difference image, and determine a candidate frame of a defect in the training image according to the contour;
The prediction module is used for inputting at least difference information of the image blocks corresponding to the candidate frames before and after morphological transformation into a machine learning model, and obtaining defect prediction results corresponding to the candidate frames through the machine learning model;
the training module is used for training the machine learning model according to the labeling information corresponding to the candidate frame and the defect prediction result;
the third determining module is specifically configured to:
respectively carrying out blurring operation on the training image and the template image to obtain a first blurring image corresponding to the training image and a second blurring image corresponding to the template image; determining a difference image of the training image and the template image according to the first blurred image and the second blurred image;
and/or,
performing a morphological operation on the difference image to obtain a de-interference image corresponding to the difference image, and searching for contours in the de-interference image as the contours in the difference image;
and/or,
filtering out contours in the difference image that meet a preset condition, and determining candidate frames of defects in the training image according to the contours remaining in the difference image after the filtering.
9. An electronic device, comprising:
one or more processors;
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the method of any of claims 1 to 6.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 6.
CN202310449479.6A 2022-12-29 2022-12-29 Defect detection method, device, electronic apparatus, storage medium, and program product Pending CN116228746A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310449479.6A CN116228746A (en) 2022-12-29 2022-12-29 Defect detection method, device, electronic apparatus, storage medium, and program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211701346.5A CN115690102B (en) 2022-12-29 2022-12-29 Defect detection method, defect detection apparatus, electronic device, storage medium, and program product
CN202310449479.6A CN116228746A (en) 2022-12-29 2022-12-29 Defect detection method, device, electronic apparatus, storage medium, and program product

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202211701346.5A Division CN115690102B (en) 2022-12-29 2022-12-29 Defect detection method, defect detection apparatus, electronic device, storage medium, and program product

Publications (1)

Publication Number Publication Date
CN116228746A true CN116228746A (en) 2023-06-06

Family

ID=85055873

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202310450829.0A Pending CN116245876A (en) 2022-12-29 2022-12-29 Defect detection method, device, electronic apparatus, storage medium, and program product
CN202310449479.6A Pending CN116228746A (en) 2022-12-29 2022-12-29 Defect detection method, device, electronic apparatus, storage medium, and program product
CN202211701346.5A Active CN115690102B (en) 2022-12-29 2022-12-29 Defect detection method, defect detection apparatus, electronic device, storage medium, and program product

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310450829.0A Pending CN116245876A (en) 2022-12-29 2022-12-29 Defect detection method, device, electronic apparatus, storage medium, and program product

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202211701346.5A Active CN115690102B (en) 2022-12-29 2022-12-29 Defect detection method, defect detection apparatus, electronic device, storage medium, and program product

Country Status (1)

Country Link
CN (3) CN116245876A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116452621B (en) * 2023-03-10 2023-12-15 广州市易鸿智能装备有限公司 Ideal contour generating algorithm, device and storage medium based on reinforcement learning
CN116048945B (en) * 2023-03-29 2023-06-23 摩尔线程智能科技(北京)有限责任公司 Device performance detection method and device, electronic device and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002140695A (en) * 2000-11-01 2002-05-17 Omron Corp Inspection method and its device
US20050254699A1 (en) * 2004-05-13 2005-11-17 Dainippon Screen Mfg, Co., Ltd. Apparatus and method for detecting defect and apparatus and method for extracting wire area
CN101216438A (en) * 2008-01-16 2008-07-09 中国电子科技集团公司第四十五研究所 Printed circuit boards coarse defect image detection method based on FPGA
US20140126839A1 (en) * 2012-11-08 2014-05-08 Sharp Laboratories Of America, Inc. Defect detection using joint alignment and defect extraction
TW201734442A (en) * 2016-02-19 2017-10-01 斯庫林集團股份有限公司 Defect detection apparatus, defect detection method and program product
KR101781009B1 (en) * 2016-08-31 2017-10-23 노틸러스효성 주식회사 Soiled banknote discrimination method
JP2018040657A (en) * 2016-09-07 2018-03-15 大日本印刷株式会社 Inspection device and inspection method
WO2018086299A1 (en) * 2016-11-11 2018-05-17 广东电网有限责任公司清远供电局 Image processing-based insulator defect detection method and system
CN109447946A (en) * 2018-09-26 2019-03-08 中睿通信规划设计有限公司 A kind of Overhead optical cable method for detecting abnormality
CN111028213A (en) * 2019-12-04 2020-04-17 北大方正集团有限公司 Image defect detection method and device, electronic equipment and storage medium
CN111275697A (en) * 2020-02-10 2020-06-12 西安交通大学 Battery silk-screen quality detection method based on ORB feature matching and LK optical flow method
CN111986178A (en) * 2020-08-21 2020-11-24 北京百度网讯科技有限公司 Product defect detection method and device, electronic equipment and storage medium
CN111986190A (en) * 2020-08-28 2020-11-24 哈尔滨工业大学(深圳) Printed matter defect detection method and device based on artifact elimination
CN112508826A (en) * 2020-11-16 2021-03-16 哈尔滨工业大学(深圳) Printed matter defect detection method based on feature registration and gradient shape matching fusion
CN114066856A (en) * 2021-11-18 2022-02-18 深圳市商汤科技有限公司 Model training method and device, electronic equipment and storage medium
CN115205291A (en) * 2022-09-15 2022-10-18 广州镭晨智能装备科技有限公司 Circuit board detection method, device, equipment and medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6812118B2 (en) * 2016-03-16 2021-01-13 株式会社Screenホールディングス Defect detector, defect detection method and program
CN113646801B (en) * 2020-02-27 2024-04-02 京东方科技集团股份有限公司 Defect detection method, device and computer readable storage medium for defect image
CN113436131A (en) * 2020-03-04 2021-09-24 上海微创卜算子医疗科技有限公司 Defect detection method, defect detection device, electronic equipment and storage medium
CN112837303A (en) * 2021-02-09 2021-05-25 广东拓斯达科技股份有限公司 Defect detection method, device, equipment and medium for mold monitoring
CN114078118A (en) * 2021-11-18 2022-02-22 深圳市商汤科技有限公司 Defect detection method and device, electronic equipment and storage medium
CN114170153A (en) * 2021-11-20 2022-03-11 上海微电子装备(集团)股份有限公司 Wafer defect detection method and device, electronic equipment and storage medium
CN115018797A (en) * 2022-06-13 2022-09-06 歌尔股份有限公司 Screen defect detection method, screen defect detection device and computer-readable storage medium


Also Published As

Publication number Publication date
CN115690102B (en) 2023-04-18
CN115690102A (en) 2023-02-03
CN116245876A (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN110738207B (en) Character detection method for fusing character area edge information in character image
US10817741B2 (en) Word segmentation system, method and device
CN115690102B (en) Defect detection method, defect detection apparatus, electronic device, storage medium, and program product
CN111681273B (en) Image segmentation method and device, electronic equipment and readable storage medium
CN106778705B (en) Pedestrian individual segmentation method and device
CN110135514B (en) Workpiece classification method, device, equipment and medium
CN111079638A (en) Target detection model training method, device and medium based on convolutional neural network
CN112906794A (en) Target detection method, device, storage medium and terminal
CN116542975A (en) Defect classification method, device, equipment and medium for glass panel
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN115690101A (en) Defect detection method, defect detection apparatus, electronic device, storage medium, and program product
CN109726754A (en) A kind of LCD screen defect identification method and device
CN113888477A (en) Network model training method, metal surface defect detection method and electronic equipment
CN116091503B (en) Method, device, equipment and medium for discriminating panel foreign matter defects
CN113822836A (en) Method of marking an image
CN107330470B (en) Method and device for identifying picture
CN115578362A (en) Defect detection method and device for electrode coating, electronic device and medium
CN111709377B (en) Feature extraction method, target re-identification method and device and electronic equipment
CN111259974B (en) Surface defect positioning and classifying method for small-sample flexible IC substrate
CN109583328B (en) Sparse connection embedded deep convolutional neural network character recognition method
CN115719326A (en) PCB defect detection method and device
CN108021918B (en) Character recognition method and device
CN111291767A (en) Fine granularity identification method, terminal equipment and computer readable storage medium
Kavitha et al. Text detection based on text shape feature analysis with intelligent grouping in natural scene images
KR102199572B1 (en) Reverse object detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination