CN112950563A - Detection method and device, detection equipment and storage medium


Info

Publication number: CN112950563A
Application number: CN202110199111.XA
Authority: CN (China)
Prior art keywords: image, detected, defects, training, defect
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 陈鲁, 肖安七, 张嵩
Assignee: Shenzhen Zhongke Feice Technology Co Ltd; Skyverse Ltd
Application filed by Shenzhen Zhongke Feice Technology Co Ltd
Priority to CN202110199111.XA
Publication of CN112950563A

Classifications

    • G06T7/0008 Industrial image inspection checking presence/absence (G Physics > G06 Computing; calculating or counting > G06T Image data processing or generation, in general > G06T7/00 Image analysis > G06T7/0002 Inspection of images, e.g. flaw detection > G06T7/0004 Industrial image inspection)
    • G06T2207/10004 Still image; photographic image (G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/10 Image acquisition modality)

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A detection method, a detection apparatus, detection equipment, and a non-volatile computer-readable storage medium. The detection method comprises: detecting defects in an image of a piece to be detected based on a preset template matching algorithm; if a defect exists, acquiring the image area where the defect is located in the image of the piece to be detected as an image to be detected; and detecting defects in the image to be detected based on a preset target detection model. The target detection model takes the place of quality-testing personnel and works together with the template matching algorithm: all defects of the piece to be tested can be detected, and images to be tested that contain no defect (over-detections produced by the template matching algorithm) are accurately identified. Manpower is not wasted, the poor detection accuracy caused by inexperienced or fatigued quality-testing personnel is avoided, and the detection effect is better.

Description

Detection method and device, detection equipment and storage medium
Technical Field
The present application relates to the field of detection technologies, and in particular, to a detection method, a detection apparatus, a detection device, and a non-volatile computer-readable storage medium.
Background
At present, when an image of a workpiece with defects is inspected by a defect detection algorithm based on template matching, parts of the image that are not actually defects are easily identified as defects, producing a large amount of over-detection. Quality-testing personnel must then check the reported defects one by one, which wastes manpower; moreover, the detection accuracy depends on the experience and fatigue of those personnel, so the detection effect is poor.
Disclosure of Invention
The application provides a detection method, a detection apparatus, detection equipment and a non-volatile computer-readable storage medium.
The detection method comprises: detecting defects of the image of the piece to be detected based on a preset template matching algorithm; if a defect exists, acquiring the image area where the defect is located in the image of the piece to be detected as the image to be detected; and detecting defects of the image to be detected based on a preset target detection model.
The detection device comprises a first detection module, an acquisition module and a second detection module. The first detection module is used for detecting the defects of the image of the piece to be detected based on a preset template matching algorithm; the acquisition module is used for acquiring an image area where the defect is located in the image of the piece to be detected as an image to be detected when the defect exists; the second detection module is used for detecting the defects of the image to be detected based on a preset target detection model.
The detection device of the embodiment of the application comprises a processor. The processor is configured to: detecting the defects of the image of the piece to be detected based on a preset template matching algorithm; when the defects exist, acquiring an image area where the defects are located in the image of the to-be-detected piece to serve as the to-be-detected image; and detecting the defects of the image to be detected based on a preset target detection model.
The non-volatile computer-readable storage medium of the embodiments of the application embodies a computer program which, when executed by one or more processors, causes the processors to perform the detection method. The detection method comprises: detecting defects of the image of the piece to be detected based on a preset template matching algorithm; if a defect exists, acquiring the image area where the defect is located in the image of the piece to be detected as the image to be detected; and detecting defects of the image to be detected based on a preset target detection model.
According to the detection method, the detection apparatus, the detection equipment and the non-volatile computer-readable storage medium, all defects in the image of the piece to be detected are first found by the template matching algorithm, the image area where each defect is located is then cut out as an image to be detected, and each such image is input into the target detection model for high-precision detection to determine the information of the defect it contains. The target detection model takes the place of quality-testing personnel and works together with the template matching algorithm: all defects of the piece to be detected can be found, and images to be detected that contain no defect (over-detections produced by the template matching algorithm) are accurately identified. Manpower is not wasted, the poor detection accuracy caused by inexperienced or fatigued quality-testing personnel is avoided, and the detection effect is good.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow diagram of a detection method according to certain embodiments of the present application;
FIG. 2 is a block schematic diagram of a detection device according to certain embodiments of the present application;
FIG. 3 is a schematic plan view of a detection apparatus according to certain embodiments of the present application;
FIGS. 4-6 are schematic flow charts of detection methods according to certain embodiments of the present disclosure;
FIGS. 7 and 8 are schematic illustrations of the detection method of certain embodiments of the present application;
FIGS. 9-14 are schematic illustrations of the detection method of certain embodiments of the present application;
FIG. 15 is a schematic flow chart of a detection method according to certain embodiments of the present application; and
FIG. 16 is a schematic diagram of a connection between a processor and a computer-readable storage medium according to some embodiments of the present application.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout. In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
Referring to fig. 1 to 3, the detection method according to the embodiment of the present disclosure includes the following steps:
011: detecting the defects of the image of the piece to be detected based on a preset template matching algorithm;
012: if a defect exists, acquiring the image area where the defect is located in the image of the piece to be detected as the image to be detected;
013: and detecting the defects of the image to be detected based on a preset target detection model.
The detection device 10 of the embodiment of the present application includes a first detection module 11, an acquisition module 12, and a second detection module 13. The first detection module 11 is used for detecting the defects of the image of the piece to be detected based on a preset template matching algorithm; the obtaining module 12 is configured to obtain an image area where a defect in an image of a to-be-detected object is located when the defect exists, so as to serve as the to-be-detected image; the second detection module 13 is configured to detect a defect of the image to be detected based on a preset target detection model. That is, step 011 can be performed by the first detection module 11, step 012 can be performed by the acquisition module 12, and step 013 can be performed by the second detection module 13.
The detection apparatus 100 of the present embodiment includes a processor 20. The processor 20 is configured to: detect defects of the image of the piece to be detected based on a preset template matching algorithm; if a defect exists, acquire the image area where the defect is located in the image of the piece to be detected as the image to be detected; and detect defects of the image to be detected based on a preset target detection model. That is, step 011, step 012, and step 013 can be performed by processor 20.
In particular, the detection device 100 may be a measuring machine. It is understood that the specific form of the inspection apparatus 100 is not limited to a measuring machine, but may be any apparatus capable of inspecting the object 200.
The detection apparatus 100 includes a processor 20, a motion platform 30, and a sensor 40. Both the processor 20 and the sensor 40 may be located on the motion platform 30. The motion platform 30 can be used to carry the object 200, and the motion platform 30 moves to drive the sensor 40 to move, so that the sensor 40 collects information of the object 200.
For example, the motion platform 30 includes an XY motion platform 31 and a Z motion platform 32, and the sensor 40 is disposed on the motion platform 30, specifically on the Z motion platform 32. The XY motion platform 31 moves the object 200 to be measured along the horizontal plane to change the relative position of the object 200 and the sensor 40 in that plane, while the Z motion platform 32 moves the sensor 40 in the direction perpendicular to the horizontal plane. The three-dimensional position of the sensor 40 relative to the object 200 (i.e., the relative position in the horizontal plane and in the direction perpendicular to it) is thus set through the cooperation of the XY motion platform 31 and the Z motion platform 32.
It is understood that the motion platform 30 is not limited to the above structure, and only needs to be able to change the three-dimensional position of the sensor 40 relative to the object 200.
There may be one or more sensors 40, and multiple sensors 40 may be of different types; for example, the sensors 40 may include visible light cameras, depth cameras, and the like. In the present embodiment, the sensor 40 is a visible light camera.
When acquiring the image of the object 200, the sensor 40 may be aligned with the object 200 so that the object 200 lies within the field of view of the sensor 40, and the image of the entire object 200 is acquired directly in a single shot. The workpiece 200 to be tested may be any of various workpieces requiring inspection, such as a wafer, a display screen panel, a front or rear cover of a mobile phone, VR glasses, AR glasses, a smart watch cover plate, glass, wood, an iron plate, or the housing of any device (e.g., a mobile phone housing). In the embodiments of the present application, the device under test 200 is taken to be a wafer as an example.
Then, the processor 20 detects defects of the image of the to-be-tested object 200 based on a preset template matching algorithm.
For example, the inspection apparatus 100 prestores template images of a plurality of different defect types, and the preset template matching algorithm may work as follows: the image of the piece 200 to be tested is divided into different image areas, and each image area is compared with all the template images one by one to determine the defects in the image of the piece 200 to be tested. If an image area matches a template image, it can be determined that the defect corresponding to that template image exists in the image area; in this way defects are detected in all image areas of the image of the piece 200 to be tested.
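As an illustration, a minimal Python/OpenCV sketch of this template-comparison step is given below. It is a sketch only: the template library, the use of normalized cross-correlation as the matching score, and the 0.8 threshold are assumptions for illustration and are not specified by the embodiment.

    import cv2
    import numpy as np

    def match_defect_templates(image, templates, score_thresh=0.8):
        # Compare the image of the piece to be tested against every
        # prestored defect template; return candidate defect regions
        # as (x, y, width, height, template_index).
        candidates = []
        for idx, tmpl in enumerate(templates):
            th, tw = tmpl.shape[:2]
            # Normalized cross-correlation score at every position.
            scores = cv2.matchTemplate(image, tmpl, cv2.TM_CCOEFF_NORMED)
            ys, xs = np.where(scores >= score_thresh)
            for x, y in zip(xs, ys):
                candidates.append((int(x), int(y), tw, th, idx))
        # In practice overlapping hits would be merged (e.g., by
        # non-maximum suppression); that is omitted in this sketch.
        return candidates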
It should be noted that the positions of defects detected by the preset template matching algorithm are not precise. Moreover, an image area that contains no defect but has a wafer pattern similar to one may still match a template image, which causes over-inspection.
Therefore, after the image of the workpiece 200 is checked by the preset template matching algorithm, if no defect is found, the image of the workpiece 200 can be determined to be defect-free. If defects are found, further determination is required: the processor 20 cuts out the image area corresponding to each detected defect to be used as an image to be detected.
One or more images to be detected are input into the target detection model, and the processor 20 detects each image to be detected based on the target detection model, thereby detecting the defect of each image to be detected.
A defect detected by the template matching algorithm may carry position information and type information, while a defect detected by the target detection model may carry position information, type information and confidence information. The position information from the template matching algorithm is merely the position of the matched image area, whereas the target detection model locates the specific position of the defect within that area and classifies its type more accurately. Images to be detected that were generated by over-detection of the template matching algorithm are thus eliminated more reliably, and all defects in the image of the piece to be detected 200 are detected accurately and without omission. Therefore, after the target detection model has checked each image to be detected, the processor 20 takes the model's output (i.e., the position information, type information and confidence information of each defect) as the final detection result.
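The two-stage flow described above can be sketched as follows. This is a hedged sketch: detect_candidates and detection_model are hypothetical stand-ins for the template matching algorithm and the trained target detection model, and the dictionary fields are illustrative.

    def detect(image, detect_candidates, detection_model):
        # Stage 1: template matching proposes candidate defect regions.
        # Stage 2: the target detection model re-checks each cropped region.
        results = []
        for (x, y, w, h) in detect_candidates(image):
            crop = image[y:y + h, x:x + w]        # the image to be detected
            for det in detection_model(crop):     # position, type, confidence
                results.append({
                    "type": det["type"],
                    "confidence": det["confidence"],
                    # Map the detected box back to full-image coordinates.
                    "box": (x + det["x"], y + det["y"], det["w"], det["h"]),
                })
        # Candidates for which the model reports nothing are over-detections
        # and are dropped; the model's output is the final result.
        return results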
According to the detection method, the detection device 10 and the detection equipment 100, all defects in the image of the piece to be detected 200 are first found by the template matching algorithm, the image area where each defect is located is then cut out as an image to be detected, and each such image is input into the target detection model for high-precision detection to determine the information of the defect it contains. The target detection model takes the place of quality-testing personnel and works together with the template matching algorithm: all defects of the piece to be detected 200 can be found, images to be detected that contain no defect (over-detections by the template matching algorithm) are accurately identified, manpower is not wasted, the poor detection accuracy caused by inexperienced or fatigued quality-testing personnel is avoided, and the detection effect is better.
Referring to fig. 2, fig. 3 and fig. 4, in some embodiments, step 011 includes:
0111: acquiring a preset template image matched with the image of the piece to be detected 200;
0112: and fusing the preset template image and the image of the piece to be detected 200 to detect the defects.
In some embodiments, the first detection module 11 is further configured to obtain a preset template image matched with the image of the to-be-detected piece 200; and fusing the preset template image and the image of the piece to be detected 200 to detect the defects. That is, step 0111 and step 0112 may be performed by the first detection module.
In some embodiments, the processor 20 is further configured to obtain a preset template image matched with the image of the to-be-tested object 200; and fusing the preset template image and the image of the piece to be detected 200 to detect the defects. That is, step 0111 and step 0112 may be performed by processor 20.
Specifically, when detecting defects in the image of the piece to be detected 200 based on the preset template matching algorithm, the processor 20 may first obtain a preset template image matched with that image. The preset template image may be an image of a defect-free piece of the same model as the piece to be detected 200; for a wafer, it is a defect-free wafer image of the same wafer model, which ensures that the wafer patterns, their shapes, the pattern backgrounds, and so on are the same in both images.
The processor 20 then fuses the preset template image and the image of the piece to be detected 200. Specifically, the two images are divided into the same number of image areas, and the image areas at corresponding positions are compared; if two corresponding image areas differ, it is determined that a defect may exist at that position, and in this way all defective image areas are found. Whether each image area has a defect can thus be determined without matching every image area against template images of all the different defect types, so the amount of calculation is small and no defect is missed.
Referring to fig. 2, 3 and 5, in some embodiments, step 0112 includes:
01121: performing subtraction (difference) processing on the preset template image and the image of the piece to be detected 200 to obtain a difference image; and
01122: a connected component of the difference image is computed to detect defects.
In some embodiments, the first detecting module 11 is further configured to perform subtraction processing on the preset template image and the image of the to-be-detected piece 200 to obtain a difference image; a connected component of the difference image is computed to detect defects. That is, step 01121 and step 01122 may be performed by the first detection module 11.
In some embodiments, the processor 20 is further configured to perform a difference processing on the preset template image and the image of the to-be-detected object 200 to obtain a difference image; a connected component of the difference image is computed to detect defects. That is, step 01121 and step 01122 may be performed by processor 20.
Specifically, when the processor 20 fuses the preset template image and the image of the piece to be detected 200, it may perform subtraction processing on the two images: the pixel values of pixels at corresponding positions are subtracted, and the differences are used as the pixel values of a new image, thereby obtaining the difference image.
Generally, the preset template image and the image of the piece to be detected 200 are both captured by sensors 40 of the same model, so the two images have essentially the same pixels and the piece occupies essentially the same position in both. The differing parts of the difference image obtained by subtraction are therefore caused by defects; defects stand out prominently in the difference image, which improves the accuracy of defect detection.
The processor 20 may identify connected domains of the difference image, a connected domain being an image region composed of a plurality of mutually connected pixels whose pixel values are all greater than a predetermined pixel value (e.g., 10, 20, 30, etc.). For example, the predetermined pixel value may be the average pixel value over all pixels of the difference image. It can be understood that the larger the predetermined pixel value, the higher the probability that a detected connected domain is a defect, which improves detection accuracy; the smaller the predetermined pixel value, the lower that probability, but missed detections are prevented.
After identifying the plurality of connected domains of the difference image, it may be determined that each connected domain corresponds to a defect, and the processor 20 may use an image region corresponding to the connected domain in the image of the to-be-detected object 200 as an image to be detected, so as to input the image to the target detection model for subsequent defect detection.
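A minimal sketch of steps 01121 and 01122 using OpenCV follows. Assumptions: both images are grayscale and of equal size, and the mean of the difference image is used as the predetermined pixel value (one of the options mentioned above).

    import cv2
    import numpy as np

    def difference_defects(template_img, test_img, min_pixel=None):
        # Pixel-wise absolute difference between the defect-free template
        # and the image of the piece to be detected.
        diff = cv2.absdiff(template_img, test_img)
        if min_pixel is None:
            min_pixel = float(diff.mean())   # predetermined pixel value
        _, mask = cv2.threshold(diff, min_pixel, 255, cv2.THRESH_BINARY)
        # Connected domains of above-threshold pixels.
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask.astype(np.uint8))
        regions = []
        for i in range(1, n):                # label 0 is the background
            x, y, w, h, area = stats[i]
            regions.append((x, y, w, h, area))
        return regions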
Referring to fig. 2, 3 and 6, in some embodiments, step 01122 includes the following steps:
01123: identifying a plurality of light spots in the difference image and numbering each light spot;
01124: when the distance between two adjacent light spots is smaller than a preset distance threshold value, modifying the serial numbers of the two adjacent light spots into the same serial number;
01125: connecting the light spots with the same number as a connected domain; and
01126: and when the area of the connected domain is larger than a preset area threshold value, determining that the connected domain is a defect.
In some embodiments, the first detection module 11 is further configured to identify a plurality of light spots in the difference image and number each light spot; when the distance between two adjacent light spots is smaller than a preset distance threshold value, modifying the serial numbers of the two adjacent light spots into the same serial number; connecting the light spots with the same number as a connected domain; and determining the connected domain as a defect when the area of the connected domain is larger than a preset area threshold. That is, step 01123, step 01124, step 01125, and step 01126 may be performed by the first detection module 11.
In some embodiments, processor 20 is further configured to identify a plurality of spots in the difference image and number each spot; when the distance between two adjacent light spots is smaller than a preset distance threshold value, modifying the serial numbers of the two adjacent light spots into the same serial number; connecting the light spots with the same number as a connected domain; and determining the connected domain as a defect when the area of the connected domain is larger than a preset area threshold. That is, step 01123, step 01124, step 01125, and step 01126 may be implemented by processor 20.
Of course, owing to the influence of shooting time, shooting environment, and the like, the preset template image and the image of the piece to be detected 200 may differ in ways not caused by defects, and such differences are also highlighted in the difference image; likewise, a defect that is actually one whole may be split into several small, closely spaced parts, that is, a connected domain may be broken into several discontinuous portions.
Therefore, when determining the connected component, the processor 20 first identifies all the light spots in the difference image according to the predetermined pixel value, and sequentially numbers the light spots, where the light spots may be a part of the connected component, that is, the light spots are also an image area composed of a plurality of interconnected pixels larger than the predetermined pixel value.
When the distance between two adjacent light spots is smaller than a predetermined distance threshold, the numbers of the two spots are modified to be the same number. As shown in fig. 7, there are 5 light spots (spot 1, spot 2, spot 3, spot 4, and spot 5); two spots may be spaced apart or adjacent, e.g., spot 1 and spot 5, and spot 5 and spot 4, are spaced apart, while spot 2 and spot 3 are adjacent. When the distance between two spots is smaller than the predetermined distance threshold (e.g., 1 pixel, 2 pixels, 3 pixels, etc.; take 2 pixels as an example), the two spots are determined to be connected, and the processor 20 modifies their numbers to the same number: for example, spot 1, spot 4 and spot 5 are all numbered 1, and spot 2 and spot 3 are both numbered 2. The distance between two spots is the minimum distance between them, such as the distance between the two closest pixels of spot 1 and spot 5 (one located in each spot).
The processor 20 then connects the spots having the same number into one connected domain; as shown in fig. 8, spot 1, spot 4 and spot 5 together form connected domain a, and spot 2 and spot 3 together form connected domain b. In this way, the several light spots that originally correspond to one defect are connected together, preventing a missed detection in which a too-small connected region would be treated as noise.
For each type of piece to be detected 200 there is an empirically known range of defect types and defect sizes. For example, a wafer generally exhibits defects such as foreign objects, residual glue, oxidation, bubbles, wrinkles, and cracks, and the size (e.g., area) of such defects is larger than a predetermined area threshold.
Therefore, when the area of the connected domain is larger than the preset area threshold, the processor 20 may determine that the connected domain is a defect, thereby eliminating the connected domain with a smaller area, which is a noise, and improving the accuracy of defect detection.
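A sketch of the spot-merging rule of steps 01123 to 01126 is given below. It assumes that dilating the binary difference mask by the predetermined distance threshold joins spots lying within that distance of each other, which is equivalent to renumbering them with the same number; the distance threshold of 2 pixels is the example value from the text, and the area threshold is an illustrative assumption.

    import cv2
    import numpy as np

    def merge_spots(mask, dist_thresh=2, area_thresh=20):
        # Spots closer than dist_thresh pixels become one connected domain.
        k = 2 * dist_thresh + 1
        joined = cv2.dilate(mask, np.ones((k, k), np.uint8))
        n, labels = cv2.connectedComponents(joined)
        defects = []
        for i in range(1, n):
            # Area is counted on the original spots, not the dilated mask.
            area = int(np.count_nonzero((labels == i) & (mask > 0)))
            if area > area_thresh:       # smaller domains are noise
                defects.append(i)
        return labels, defects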
Referring to fig. 2, 3 and 9, in some embodiments, step 013 includes:
0131: acquiring a plurality of training images of a plurality of workpieces with defects;
0132: marking defects in the training image;
0133: inputting the training images before labeling and the training images after labeling as training sets into a target detection model for training so as to obtain a target detection model trained to be convergent; and
0134: and detecting the defects of the image to be detected according to the converged target detection model.
In some embodiments, the second detection module 13 is further configured to acquire a plurality of training images of a plurality of workpieces having defects; marking defects in the training image; inputting the training images before labeling and the training images after labeling as training sets into a target detection model for training so as to obtain a target detection model trained to be convergent; and detecting the defects of the image to be detected according to the converged target detection model. That is, steps 0131 to 0134 may be performed by the second detection module 13.
In certain embodiments, the processor 20 is further configured to acquire a plurality of training images of a plurality of workpieces having defects; marking defects in the training image; inputting the training images before labeling and the training images after labeling as training sets into a target detection model for training so as to obtain a target detection model trained to be convergent; and detecting the defects of the image to be detected according to the converged target detection model. That is, steps 0131 to 0134 may be performed by processor 20.
Specifically, in acquiring a training image of a workpiece having a defect, the workpiece having the defect, which has been detected in advance, may be placed on the motion stage 30, and the processor 20 controls the motion stage 30 to move so that the sensor 40 captures an original image of the workpiece as the training image.
When the original image is shot, the processor 20 can adjust the distance between the sensor 40 and the workpiece according to the field range of the sensor 40, so that the workpiece is located in the field range, and the original image of the whole workpiece can be obtained by shooting the image once; alternatively, the sensor 40 may be configured such that each time it captures an image, the field of view covers only a partial region of the workpiece, different regions of the workpiece are captured by moving to obtain a plurality of original images, and then the plurality of original images are combined to obtain an original image of the entire workpiece.
When the workpieces for shooting the original images are selected, they may all be of the same type, so that the target detection model obtained after training is dedicated to that type of workpiece, improving its detection accuracy. Of course, the selected workpieces may also include different types, so that the trained target detection model can detect defects of several types of workpieces at once, giving it wide applicability. The present embodiment is described taking the wafer as an example; the defects of a wafer generally include foreign objects, residual glue, oxidation, bubbles, wrinkles, cracks, and the like.
To improve the training effect, when wafers are selected, a plurality of wafers with different wafer patterns or different pattern backgrounds can be chosen, so that training images with a variety of image backgrounds are obtained. This improves the diversity of the training images and, while improving the training effect, reduces the influence of the image background on the trained target detection model, so that the model can detect defects accurately even against different image backgrounds.
In addition, when selecting a wafer, a wafer having at least some of the different types of defects may also be selected. For example, if wafer A, wafer B, and wafer C are selected, the defects of wafer A, wafer B, and wafer C are at least partially different, such as wafer A having defects of foreign objects, adhesive residue, and oxidation, wafer B having defects of adhesive residue, oxidation, and bubbles, and wafer C having defects of oxidation, bubbles, wrinkles, and cracks. Therefore, the defects of the training images have a certain difference, which improves the diversity of the training images and the training effect.
It is understood that the regions where the probability of occurrence of defects is the greatest are different for different types of workpieces. Therefore, when the training image is obtained, the part of the predetermined region in the original image can be intercepted to be used as the training image, and the predetermined region is the region with the maximum defect probability of the current workpiece, so that the training image has enough defects to carry out subsequent training while the training image is ensured to be small in size to reduce the calculation amount.
In one example, the workpiece is a wafer, and the predetermined area is generally a central area of the wafer, such as a circular area with a radius of a predetermined radius around the center of the wafer, and the predetermined radius can be determined according to the radius of the wafer, such as 60%, 70%, 75%, 80%, 90%, etc. of the radius of the wafer. Therefore, after the original image of the wafer is shot and acquired, the image corresponding to the central area in the original image can be intercepted, so that the training image is obtained.
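A sketch of intercepting the predetermined central region of a wafer's original image as a training image follows; the 75% radius ratio is one of the example values given above, not a fixed value from the patent.

    import numpy as np

    def crop_center_region(original, radius_ratio=0.75):
        h, w = original.shape[:2]
        cy, cx = h // 2, w // 2
        r = int(min(cx, cy) * radius_ratio)       # predetermined radius
        ys, xs = np.ogrid[:h, :w]
        inside = (ys - cy) ** 2 + (xs - cx) ** 2 <= r * r
        region = np.zeros_like(original)
        region[inside] = original[inside]         # keep the central circle
        # A tight bounding crop keeps the training image small.
        return region[cy - r:cy + r, cx - r:cx + r]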
After the training images are obtained, the defects in them can be labeled in advance. For example, quality-testing personnel label the defects empirically, marking the type of each defect in the training image and then framing it with a defect frame (e.g., a rectangular or circular frame) to give its position. Alternatively, the processor 20 first detects defects in the training image with the preset template matching algorithm, and quality-testing personnel then confirm the defects, frame their positions and mark their types; this speeds up defect labeling, reduces the workload of the quality-testing personnel, and lowers the probability of misjudgment due to fatigue.
The processor 20 may obtain the plurality of training images after the labeling, and then the processor 20 inputs the plurality of training images before the labeling and the plurality of training images after the labeling as a training set into the target detection model for training until the target detection model converges.
The target detection model may be, but is not limited to, a two-stage detection algorithm (e.g., Faster R-CNN and its variants), a one-stage detection algorithm (e.g., YOLOv3 and its variants), an anchor-free detection algorithm (e.g., CenterNet and its variants), or the like.
When, after training on the training set, the adjusted target detection model can accurately detect the defects of the current type of workpiece, the target detection model can be considered converged.
Finally, after the sensor 40 captures the image of the object 200, the processor 20 detects that image according to the converged target detection model, so as to identify the defects in the image of the object 200.
In this way, after the defects in the defective training images are labeled and the images are fed into the target detection model for training, a model trained to convergence is obtained. Detecting defects in the image of the piece to be detected with the trained target detection model accurately distinguishes defects from background-image noise, so the image background has little influence on defect detection, over-detection is unlikely, and the detection effect is improved.
In some embodiments, processor 20 is further configured to perform an amplification process on the plurality of training images, the amplification process including at least one of mirroring, translation, rotation, shearing, and deformation.
Specifically, to further increase the number and diversity of training images, processor 20 may perform an augmentation process on training images derived from the original images.
Referring to fig. 10, for example, processor 20 mirrors each training image P1 to obtain a mirrored image P2 of each training image P1 as a new training image P1. The mirrored image P2 is mirror-symmetrical to the training image P1, and the axis of symmetry may be arbitrary: mirroring may be performed about any side of the training image P1 (in fig. 10, about its rightmost side), about a diagonal of the training image P1, or about the line connecting the midpoints of any two sides. A plurality of new training images are thus obtained by mirroring.
Referring to fig. 11, for another example, processor 20 translates each training image P1 to obtain a translated image P3 as a new training image P1. Specifically, a predetermined image region (i.e., the region occupied by the training image P1) is determined, then the training image P1 is translated, for example leftward, rightward, or up-left (rightward in fig. 11), and the image of the predetermined image region (i.e., the translated image P3) is used as a new training image P1. The position of the defect in the image changes after translation, so a plurality of new training images P1 are obtained.
Referring to fig. 12, for another example, processor 20 rotates each training image P1 to obtain a rotated image P4 as a new training image P1. Specifically, a predetermined image region is determined, then the training image P1 is rotated, for example clockwise or counterclockwise by 10 degrees, 30 degrees, 60 degrees, 90 degrees, 140 degrees, etc. (in fig. 12, 30 degrees counterclockwise), and the image of the predetermined image region (i.e., the rotated image P4) is used as a new training image P1. The position of the defect in the image changes after rotation, so a plurality of new training images P1 are obtained.
Referring to fig. 13, for another example, the processor 20 crops each training image P1 to obtain a cropped image P5 as a new training image P1. Specifically, a predetermined image region is determined, then part of the training image P1 is cut away, for example 1/4, 1/3 or 1/2 of it (in fig. 13, 1/2 of the training image is cut away), and the image of the predetermined image region (i.e., the cropped image P5) is used as a new training image P1, so as to obtain a plurality of new training images P1.
Referring to fig. 14, for another example, processor 20 deforms each training image P1 to obtain a deformed image P6 as a new training image P1. Specifically, a predetermined image region is determined, then the training image P1 is deformed, for example compressed in the transverse direction so that the originally rectangular training image P1 becomes a notched rectangle, and the image of the predetermined image region (i.e., the deformed image P6) is used as a new training image P1. The position and shape of the defect change after deformation, so a plurality of new training images P1 are obtained.
Of course, the processor 20 may also apply several processes to a training image at once: translation and rotation; translation, rotation and mirroring; translation, rotation, mirroring and shearing; and so on. Each process may also be applied multiple times with different distances, angles, or symmetry axes; the combinations are not listed exhaustively here.
By carrying out amplification processing on the training images, a large number of training images can be obtained without obtaining more original images, the diversity of the training images is better, and the training effect on the target detection model can be improved.
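The amplification transforms can be sketched as follows. The shift, angle, and crop fraction are illustrative values from the examples above; defect labels would have to be transformed identically, which is omitted here.

    import cv2
    import numpy as np

    def amplify(img):
        h, w = img.shape[:2]
        out = [cv2.flip(img, 1)]                         # mirror about a vertical axis
        m = np.float32([[1, 0, 0.2 * w], [0, 1, 0]])
        out.append(cv2.warpAffine(img, m, (w, h)))       # translate right by 20%
        m = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1.0)
        out.append(cv2.warpAffine(img, m, (w, h)))       # rotate 30 degrees CCW
        half = np.zeros_like(img)
        half[:, : w // 2] = img[:, : w // 2]             # shear off the right half
        out.append(half)
        return out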
Referring to fig. 2, 3 and 15, in some embodiments, step 0133 includes:
01331: inputting the training image before labeling to a target detection model to output a detection result;
01332: comparing the detection result with the marked training image to determine a first loss value; and
01333: and adjusting the target detection model according to the first loss value so that the target detection model converges.
In some embodiments, the second detection module 13 is further configured to input the pre-labeling training image to the target detection model to output a detection result; comparing the detection result with the marked training image to determine a first loss value; and adjusting the target detection model according to the first loss value so as to make the target detection model converge. That is, steps 01331 through 01333 may be performed by the second detection module 13.
In some embodiments, the processor 20 is further configured to input the pre-labeling training image to the target detection model to output a detection result; comparing the detection result with the marked training image to determine a first loss value; and adjusting the target detection model according to the first loss value so as to make the target detection model converge. That is, steps 01331 through 01333 may be performed by processor 20.
Specifically, during training, a training image before labeling is input to the target detection model, which outputs a detection result comprising the type and position of each defect. The detection result is compared with the labeled training image: whether the defect types at corresponding positions are the same, and how far the detected positions deviate, together determine a first loss value. The processor 20 adjusts the target detection model according to the first loss value so that the model converges. For example, the type detection parameters are adjusted according to whether the defect types at corresponding positions match, and the position detection parameters are adjusted according to the positional deviation of the corresponding defects; by detecting and adjusting over a training set containing a large number of training images before and after labeling, the target detection model is made to converge, ensuring its detection effect.
In some embodiments, the processor 20 is further configured to compare the type of the defect in the detection result with the type of the corresponding defect in the labeled training image to determine a type loss value; comparing the positions of the defects in the detection result with the positions of the corresponding defects in the marked training image to determine a position loss value; a first penalty value is determined based on the type penalty value and the location penalty value.
Specifically, when determining the first loss value, the type of a defect in the detection result may be compared with the type of the corresponding defect (e.g., the defect at the corresponding position) in the labeled training image to determine the type loss value: if the two types are the same, the type loss value is 0; if they differ, the type loss value is 1.
Then the position of the defect in the detection result can be compared with the position of the corresponding defect (e.g., the defect at the corresponding position) in the labeled training image to determine the position loss value. If the defect in the detection result is marked by a first defect frame, the corresponding defect in the labeled training image is marked by a second defect frame, and both frames are rectangular, the difference of their position coordinates (such as the difference of the coordinates of their centers) can be calculated, and the distance between the defect in the detection result and the corresponding labeled defect determined from it; the position loss value is then calculated from this distance, and the larger the distance, the larger the position loss value.
Since correctly determining the defect type is more important, the type loss value may be given the larger weight when the first loss value is determined from the type loss value and the position loss value: for example, first loss value = a × type loss value + b × position loss value, where a is greater than b. This ensures the accuracy of type detection after the processor 20 adjusts the target detection model according to the first loss value.
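A minimal sketch of the first loss value follows. The weights 0.7 and 0.3 are illustrative assumptions satisfying a > b, and the position loss here is taken as the center-to-center distance between the first and second defect frames.

    import math

    def first_loss(pred, label, a=0.7, b=0.3):
        # Type loss: 0 when the predicted type matches the label, else 1.
        type_loss = 0.0 if pred["type"] == label["type"] else 1.0
        # Position loss: grows with the distance between frame centers.
        position_loss = math.hypot(pred["cx"] - label["cx"],
                                   pred["cy"] - label["cy"])
        return a * type_loss + b * position_loss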
In some embodiments, the processor 20 is further configured to transform the training set to obtain a validation set; inputting the verification set to the adjusted target detection model to output a second loss value; when the second loss value is smaller than a preset threshold value, determining that the target detection model converges; and when the second loss value is larger than the preset threshold value, taking the verification set as a training set, and training the target detection model again until the target detection model converges.
Specifically, after the target detection model is adjusted according to the first loss value, it must be determined whether the model has converged. To this end, the training set may be transformed to obtain the verification set; the transformation may apply at least one of translation, rotation, mirroring, shearing and deformation to each training image (for the specific process, refer to the amplification processing described above, which is not repeated here). Transforming each training image yields new training images, which together form the verification set. The labeled image corresponding to each training image undergoes the same transformation, so that the labels still correspond to the transformed training images in the verification set. Because the training images in the verification set differ from those in the training set, the verification set can accurately verify whether the target detection model has converged.
After the verification set is input to the target detection model, the target detection model outputs a second loss value, and at this time, the processor 20 determines whether the second loss value is smaller than a preset threshold value. If the second loss value is smaller than or equal to the preset threshold value, the detection loss is small, the detection accuracy meets the requirement, and the target detection model can be determined to be converged.
If the second loss value is greater than the preset threshold value, the detection loss is too large and the detection accuracy does not yet meet the requirement; it can be determined that the target detection model has not converged and training must continue. The verification set is then taken as the training set and amplified again to increase the number and diversity of its training images, and a second round of training is performed on the target detection model. After that training, the training set is transformed again to obtain a new verification set, and convergence is verified once more; if the model still has not converged, the verification set again becomes the training set for a third round of training, and so on until the trained target detection model converges.
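The train-then-verify loop reads as below. This is a hedged sketch: train_one_round, transform, and validation_loss are hypothetical helpers standing in for the training, transformation, and loss-evaluation steps of this embodiment.

    def train_until_converged(model, training_set, loss_thresh,
                              train_one_round, transform, validation_loss):
        while True:
            train_one_round(model, training_set)
            # Transform the training set to obtain a distinct validation set.
            validation_set = transform(training_set)
            if validation_loss(model, validation_set) <= loss_thresh:
                return model                 # second loss value small enough
            # Not converged: the validation set becomes the next training set.
            training_set = validation_set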
In some embodiments, the processor 20 is further configured to input a preset validation set to the target detection model to output a third loss value, the validation set being different from the images of the training set; when the third loss value is smaller than a preset threshold value, determining that the target detection model converges; and when the third loss value is larger than the preset threshold value, carrying out transformation processing on the training set, and training the target detection model again according to the training set after the transformation processing until the target detection model is converged.
Specifically, after the target detection model is adjusted according to the first loss value, it is necessary to determine whether the target detection model converges. At this time, the processor 20 may first obtain a preset verification set, where images in the verification set are different from training images in the training set, so that the verification set can accurately verify whether the target detection model converges.
Then, after the processor 20 inputs the preset verification set to the target detection model, the target detection model outputs a third loss value, and at this time, the processor 20 determines whether the third loss value is greater than a preset threshold value. If the third loss value is smaller than the preset threshold value, the detection loss is small, the detection accuracy meets the requirement, and the target detection model can be determined to be converged.
If the third loss value is greater than the preset threshold value, the detection loss is too large and the detection accuracy does not yet meet the requirement; it can be determined that the target detection model has not converged and training must continue. The training set is then amplified again to increase the number and diversity of its training images, and a second round of training is performed on the target detection model. After that training, the preset verification set is used to verify convergence again; if the model has not converged, the training set is amplified once more for a third round of training, and so on until the trained target detection model converges.
In some embodiments, the processor 20 is further configured to detect the image of the dut 200 according to the converged target detection model to determine the type, location, and confidence level of the defect; and outputting the type, the position and the confidence coefficient of the defect when the confidence coefficient is greater than the confidence coefficient threshold corresponding to the type of the defect.
Specifically, after the training of the target detection model is completed, the detection device 100 acquires the image of the to-be-detected piece 200 through the sensor 40, and then the processor 20 detects the image of the to-be-detected piece 200 according to the target detection model to determine the type, the position and the confidence of the defect. And when the confidence coefficient is larger than the confidence coefficient threshold value corresponding to the type of the current defect, determining that the current defect is accurately detected, and outputting the type, the position and the confidence coefficient of the current defect as a detection result.
The confidence threshold corresponds to the type of defect, and different defect types have different confidence thresholds, so detection accuracy is improved for each type in a targeted way. Moreover, the target detection model is an end-to-end model using a single model and a single objective function. In a multi-module model, slight differences between the modules' training targets make the overall training effect hard to optimize, and errors in different modules affect one another, degrading the final detection accuracy. By contrast, the end-to-end model is simple to implement and maintain, the trained model can achieve the optimal effect, the detection effect is good, and the engineering complexity is low.
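A sketch of the per-type confidence gate is given below. The threshold table is an illustrative assumption; the embodiment only specifies that each defect type has its own confidence threshold.

    CONF_THRESH = {"foreign object": 0.6, "residual glue": 0.5,
                   "oxidation": 0.55, "bubble": 0.5,
                   "wrinkle": 0.45, "crack": 0.4}

    def report(detections, default_thresh=0.5):
        # Output type, position, and confidence only when the confidence
        # exceeds the threshold for that defect type.
        return [d for d in detections
                if d["confidence"] > CONF_THRESH.get(d["type"], default_thresh)]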
Referring to fig. 16, the one or more non-volatile computer-readable storage media 300 of the embodiments of the present application contain a computer program 302; when the computer program 302 is executed by one or more processors 20, the processors 20 perform the detection method of any of the embodiments described above.
For example, referring to fig. 1-3, the computer program 302, when executed by the one or more processors 20, causes the processors 20 to perform the steps of:
011: detecting the defects of the image of the piece to be detected based on a preset template matching algorithm;
012: if a defect exists, acquiring the image area where the defect is located in the image of the piece to be detected as the image to be detected;
013: and detecting the defects of the image to be detected based on a preset target detection model.
As another example, referring to fig. 2, 3 and 4 in conjunction, when the computer program 302 is executed by the one or more processors 20, the processors 20 may further perform the steps of:
0111: acquiring a preset template image matched with the image of the piece to be detected 200;
0112: fusing the preset template image and the image of the piece to be detected 200 to detect defects (a sketch of this fusion follows).
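Steps 0111-0112, together with the difference-image and connected-domain processing spelled out in claims 3 and 4 below, might look like the following sketch, assuming OpenCV (cv2) is available. Dilation is used here as a stand-in for the patent's renumber-and-merge of nearby light spots, and all threshold values are illustrative.

import cv2
import numpy as np

def detect_by_template(image, template,
                       gray_threshold=30, distance_threshold=5, area_threshold=20):
    # 0112 / claim 3: difference (subtraction) processing yields the difference image
    diff = cv2.absdiff(image, template)
    _, spots = cv2.threshold(diff, gray_threshold, 255, cv2.THRESH_BINARY)

    # claim 4: light spots closer than the distance threshold merge into one
    # connected domain (dilation approximates the renumbering step)
    kernel = np.ones((distance_threshold, distance_threshold), np.uint8)
    merged = cv2.dilate(spots, kernel)

    # number the connected domains and keep those above the area threshold
    count, labels, stats, _ = cv2.connectedComponentsWithStats(merged)
    defects = []
    for i in range(1, count):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] > area_threshold:
            x, y, w, h = stats[i, :4]
            defects.append((int(x), int(y), int(w), int(h)))
    return defects

The preset template image of step 0111 is simply the template argument here; how it is acquired and matched to the image of the piece to be detected 200 is left out of the sketch.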
In the description herein, references to the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," "some examples," or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in one or more embodiments or examples, and the various embodiments or examples described in this specification, as well as their features, can be combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions are executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art to which the embodiments of the present application pertain.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A method of detection, comprising:
detecting defects in an image of a piece to be detected based on a preset template matching algorithm;
if a defect exists, acquiring an image area where the defect is located in the image of the piece to be detected to serve as an image to be detected; and
detecting defects in the image to be detected based on a preset target detection model.
2. The detection method according to claim 1, wherein the detecting defects in the image of the piece to be detected based on the preset template matching algorithm comprises:
acquiring a preset template image matched with the image of the piece to be detected; and
fusing the preset template image and the image of the piece to be detected to detect the defects.
3. The detection method according to claim 2, wherein the fusing of the preset template image and the image of the piece to be detected to detect the defects comprises:
performing difference (subtraction) processing on the preset template image and the image of the piece to be detected to obtain a difference image; and
calculating a connected domain of the difference image to detect the defect.
4. The detection method according to claim 3, wherein the calculating of a connected domain of the difference image to detect the defect comprises:
identifying a plurality of light spots in the difference image and numbering each light spot;
when the distance between two adjacent light spots is smaller than a preset distance threshold, modifying the numbers of the two adjacent light spots to the same number;
connecting the light spots having the same number into a connected domain; and
determining that the connected domain is the defect when the area of the connected domain is larger than a preset area threshold.
5. The detection method according to claim 1, further comprising:
if no defect exists, determining that the image of the piece to be detected is defect-free.
6. The detection method according to claim 1, wherein the detecting defects in the image to be detected based on the preset target detection model comprises:
acquiring a plurality of training images of a plurality of workpieces with defects;
marking defects in the training image;
inputting a plurality of the training images before labeling and a plurality of the training images after labeling into a target detection model as a training set for training, so as to obtain the target detection model trained to convergence; and
detecting defects in the image to be detected according to the converged target detection model.
7. The detection method according to claim 6, wherein the inputting of the plurality of training images before labeling and the plurality of training images after labeling into the target detection model as a training set for training, to obtain the target detection model trained to convergence, comprises:
inputting the training image before labeling to the target detection model to output a detection result;
comparing the detection result with the labeled training image to determine a first loss value; and
adjusting the target detection model according to the first loss value so that the target detection model converges.
8. A detection device, comprising:
a first detection module, configured to detect defects in an image of a piece to be detected based on a preset template matching algorithm;
an acquisition module, configured to acquire, when a defect exists, the image area where the defect is located in the image of the piece to be detected as an image to be detected; and
a second detection module, configured to detect defects in the image to be detected based on a preset target detection model.
9. Detection equipment, comprising a processor, the processor being configured to:
detect defects in an image of a piece to be detected based on a preset template matching algorithm;
acquire, when a defect exists, the image area where the defect is located in the image of the piece to be detected as an image to be detected; and
detect defects in the image to be detected based on a preset target detection model.
10. A non-transitory computer-readable storage medium containing a computer program which, when executed by a processor, causes the processor to perform the detection method of any one of claims 1-7.
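Finally, the training step recited in claims 6 and 7 (output a detection result, compare it with the labeled training image to get a first loss value, adjust the model) can be sketched as below, using PyTorch as a stand-in framework; the tiny network and the mean-squared-error loss are placeholders, not the patent's model.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 1))           # placeholder detector head
criterion = nn.MSELoss()                            # placeholder "first loss"
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

def train_step(image_before_labeling, labeled_target):
    optimizer.zero_grad()
    detection = model(image_before_labeling)    # output a detection result
    loss = criterion(detection, labeled_target) # compare with the labeled image
    loss.backward()                             # adjust the model by the first loss value
    optimizer.step()
    return loss.item()

# one step on a dummy 3-channel 64x64 image and a dummy target map
loss_value = train_step(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))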
CN202110199111.XA 2021-02-22 2021-02-22 Detection method and device, detection equipment and storage medium Pending CN112950563A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110199111.XA CN112950563A (en) 2021-02-22 2021-02-22 Detection method and device, detection equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110199111.XA CN112950563A (en) 2021-02-22 2021-02-22 Detection method and device, detection equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112950563A true CN112950563A (en) 2021-06-11

Family

ID=76245383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110199111.XA Pending CN112950563A (en) 2021-02-22 2021-02-22 Detection method and device, detection equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112950563A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107543828A (en) * 2017-08-25 2018-01-05 广东工业大学 A kind of Surface Flaw Detection method and system
WO2020006961A1 (en) * 2018-07-03 2020-01-09 北京字节跳动网络技术有限公司 Image extraction method and device
US20200134800A1 (en) * 2018-10-29 2020-04-30 International Business Machines Corporation Precision defect detection based on image difference with respect to templates
US20200226731A1 (en) * 2019-01-15 2020-07-16 International Business Machines Corporation Product defect detection
US20200357106A1 (en) * 2019-05-09 2020-11-12 Hon Hai Precision Industry Co., Ltd. Method for detecting defects, electronic device, and computer readable medium
CN111879777A (en) * 2020-06-19 2020-11-03 巨轮(广州)智能装备有限公司 Soft material fitting defect detection method, device, equipment and storage medium
CN111814867A (en) * 2020-07-03 2020-10-23 浙江大华技术股份有限公司 Defect detection model training method, defect detection method and related device
CN111862195A (en) * 2020-08-26 2020-10-30 Oppo广东移动通信有限公司 Light spot detection method and device, terminal and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
未耒智能: "Visual inspection of rivet defects (铆钉缺陷的视觉检测)", pages 1-8, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/51162391> *
XIONG, BANGSHU; MEI, MENGLI; MO, YAN: "Defect localization and recognition of printed circuit boards based on background connected domains", Semiconductor Optoelectronics (半导体光电), no. 05, page 85 *

Similar Documents

Publication Publication Date Title
CN110595999B (en) Image acquisition system
CN112884743B (en) Detection method and device, detection equipment and storage medium
CN111982921B (en) Method and device for detecting hole defects, conveying platform and storage medium
JP6519265B2 (en) Image processing method
TWI444613B (en) Photograph inspecting device and photograph inspecting method
CN111025701B (en) Curved surface liquid crystal screen detection method
CN110501347A (en) A kind of rapid automatized Systems for optical inspection and method
JP2016130663A (en) Inspection device and control method of inspection device
JP7135418B2 (en) FLATNESS DETECTION METHOD, FLATNESS DETECTION APPARATUS AND FLATNESS DETECTION PROGRAM
CN110889823A (en) SiC defect detection method and system
CN113269762A (en) Screen defect detection method, system and computer storage medium
CN113888510A (en) Detection method, detection device, detection equipment and computer readable storage medium
CN109712115B (en) Automatic PCB detection method and system
CN115375610A (en) Detection method and device, detection equipment and storage medium
CN112884744A (en) Detection method and device, detection equipment and storage medium
CN117589770A (en) PCB patch board detection method, device, equipment and medium
CN115375608A (en) Detection method and device, detection equipment and storage medium
CN114689604A (en) Image processing method for optical detection of object to be detected with smooth surface and detection system thereof
CN112950563A (en) Detection method and device, detection equipment and storage medium
CN116930187A (en) Visual detection method and visual detection system for vehicle body paint surface defects
JP4333349B2 (en) Mounting appearance inspection method and mounting appearance inspection apparatus
JP2005283267A (en) Through hole measuring device, method, and program for through hole measurement
CN113066069A (en) Adjusting method and device, adjusting equipment and storage medium
TW201522949A (en) Inspection method for image data
CN112926439A (en) Detection method and device, detection equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination