WO2022170706A1 - Defect detection method, apparatus, device and medium for mold monitoring - Google Patents

Defect detection method, apparatus, device and medium for mold monitoring Download PDF

Info

Publication number
WO2022170706A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
points
pixel
region
feature
Prior art date
Application number
PCT/CN2021/098423
Other languages
English (en)
French (fr)
Inventor
张翔
程鑫
吴俊耦
孙仲旭
王升
吴丰礼
Original Assignee
广东拓斯达科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东拓斯达科技股份有限公司
Publication of WO2022170706A1 publication Critical patent/WO2022170706A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Definitions

  • The present application relates to the technical field of digital image processing, and for example to a defect detection method, apparatus, device and medium for mold monitoring.
  • Defect detection methods for mold monitoring can be roughly divided into three categories: matching-based methods, image-understanding-based search methods, and feature-location-based search methods.
  • Matching-based methods come in two kinds: the grayscale-based matching method and the shape-based matching method. The grayscale-based matching method measures similarity using the pixel points of the target image to be tested and the original image, while the shape-based matching method measures similarity using the target to be tested and a template, after which defect detection is performed. The image-understanding-based search method relies on artificial intelligence (AI) and similar means to summarize the target features before performing defect detection. The feature-location-based search method first converts the analysis of the entire image into the analysis of image features and then performs defect detection.
  • However, the grayscale-based matching method in the above schemes is slow and is not suitable for cases where the target rotates or deforms before and after mold opening; the shape-based matching method places high demands on on-site working conditions, while the on-site environment of mold monitoring applications is complex, so it is not suitable for mold monitoring; the image-understanding-based search method has low matching accuracy and requires a large amount of training data for every scene, making generalization difficult; and the feature-location-based search method places high demands on feature selection and is time-consuming.
  • The present application provides a defect detection method, apparatus, device and medium for mold monitoring that can realize efficient and accurate defect detection and helps improve the quality of mold production.
  • A defect detection method for mold monitoring is provided, the method comprising:
  • acquiring a first pixel point set corresponding to a first region of interest of a template image and a second pixel point set corresponding to a second region of interest of an image to be detected, wherein the template image is obtained by photographing a mold, and the image to be detected is obtained by photographing a product produced using the mold;
  • determining a first feature point set and a second feature point set according to the similarity between a first target pixel point in the first pixel point set and the second target pixel point in the second pixel point set corresponding to the first target pixel point;
  • determining an image transformation rule according to the position information of a preset number of first feature points in the first feature point set and of the same number of corresponding second feature points in the second feature point set, and correcting the position information of all pixel points in the second region of interest according to the image transformation rule to obtain corrected pixel points;
  • determining a difference image according to the corrected pixel points and the pixel points in the first region of interest corresponding to the corrected pixel points, and performing defect detection on the difference image according to a preset defect judgment method.
  • A defect detection device for mold monitoring is also provided, the device comprising:
  • an acquisition module configured to acquire a first pixel point set corresponding to a first region of interest of a template image and a second pixel point set corresponding to a second region of interest of an image to be detected, wherein the template image is obtained by photographing a mold, and the image to be detected is obtained by photographing a product produced using the mold;
  • a determination module configured to determine a first feature point set and a second feature point set according to the similarity between a first target pixel point in the first pixel point set and the second target pixel point in the second pixel point set corresponding to the first target pixel point;
  • a correction module configured to determine an image transformation rule according to the position information of a preset number of first feature points in the first feature point set and of the same number of corresponding second feature points in the second feature point set, and to correct the position information of all pixel points in the second region of interest according to the image transformation rule to obtain corrected pixel points;
  • a detection module configured to determine a difference image according to the corrected pixel points and the pixel points in the first region of interest corresponding to the corrected pixel points, and to perform defect detection on the difference image according to a preset defect judgment method.
  • Also provided is a computer device comprising:
  • one or more processors;
  • a storage device arranged to store one or more programs;
  • wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the defect detection method for mold monitoring described in any embodiment of the present application.
  • Also provided is a computer-readable storage medium storing a computer program which, when executed by a processor, implements the defect detection method for mold monitoring described in any embodiment of the present application.
  • FIG. 1 is a flowchart of a defect detection method for mold monitoring provided in Embodiment 1 of the present application;
  • FIG. 2 is a flowchart of a defect detection method for mold monitoring provided in Embodiment 2 of the present application;
  • FIG. 3 is a schematic structural diagram of a defect detection device for mold monitoring provided in Embodiment 3 of the present application;
  • FIG. 4 is a schematic structural diagram of a computer device according to Embodiment 4 of the present application.
  • FIG. 1 is a flowchart of a defect detection method for mold monitoring provided in Embodiment 1 of the present application.
  • This embodiment is applicable to the case of performing defect detection, during mold monitoring, on an image to be detected obtained by photographing a product produced using a mold.
  • The defect detection method for mold monitoring provided in this embodiment can be executed by the defect detection device for mold monitoring provided in the embodiments of the present application; the device can be implemented in software and/or hardware and integrated in the computer device that executes this method.
  • Referring to FIG. 1, the method of this embodiment includes, but is not limited to, the following steps.
  • S110 Acquire a first set of pixels corresponding to a first region of interest of the template image and a second set of pixels corresponding to a second region of interest of the image to be detected.
  • the template image is obtained by photographing the mold
  • the image to be detected is obtained by photographing the product produced by using the mold
  • the template image and the image to be detected may be obtained by performing corresponding photographing operations after receiving the photographing signal.
  • The first pixel point set corresponding to the first region of interest (ROI) can be understood in two ways: one is the first pixel point set within the first ROI itself, and the other is the first pixel point set within the expanded first region obtained by expanding the boundary of the first ROI.
  • Similarly, the second pixel point set corresponding to the second ROI can be understood in two ways: one is the second pixel point set within the second ROI itself, and the other is the second pixel point set within the expanded second region obtained by expanding the boundary of the second ROI. The pixel points in the first pixel point set and the second pixel point set correspond one to one.
  • By acquiring the first pixel point set corresponding to the first ROI of the template image and the second pixel point set corresponding to the second ROI of the image to be detected, and then operating only on the corresponding pixel points in these two sets, time can be saved and the detection speed increased.
  • S120: Determine a first feature point set and a second feature point set according to the similarity between a first target pixel point in the first pixel point set and the second target pixel point in the second pixel point set corresponding to the first target pixel point.
  • The first target pixel points and the second target pixel points are all points that meet the target point screening conditions; the screening conditions may be preset or determined according to the actual situation.
  • After the first and second pixel point sets are obtained, the first target pixel points in the first pixel point set and the corresponding second target pixel points in the second pixel point set can be determined according to the preset screening conditions. The similarity between each first target pixel point and the corresponding second target pixel point is then determined using any one of the Euclidean distance, the Hamming distance or other similarity measures; according to how high the similarity is, it can be decided whether the first target pixel point and the corresponding second target pixel point are feature points, and the first feature point set and the second feature point set are determined accordingly.
  • S130: Determine an image transformation rule according to the position information of a preset number of first feature points in the first feature point set and of the same number of corresponding second feature points in the second feature point set, and correct the position information of all pixel points in the second region of interest according to the image transformation rule to obtain corrected pixel points.
  • The preset number may be determined according to the actual situation or set in advance; this embodiment does not limit it.
  • After the first and second feature point sets are determined, the position information of the preset number of first feature points and of the corresponding second feature points determines the image transformation rule, i.e. the image transformation matrix. The position information of all pixel points in the second ROI is corrected according to this matrix to obtain corrected pixel points, so that every pixel point in the second ROI is located at the same position as the corresponding pixel point in the first ROI. This facilitates the subsequent determination of a difference image from the corrected pixel points and the corresponding pixel points in the first ROI, and the defect detection performed on the difference image according to a preset defect judgment method.
  • S140 Determine a difference image according to the corrected pixel points and the pixel points in the first region of interest corresponding to the corrected pixel points, and perform defect detection on the difference image according to a preset defect judgment method.
  • The difference image can be understood as the difference between the corrected pixel points and the corresponding pixel points in the first region of interest; it displays the information that differs between the ROI regions of the two images.
  • The difference image between the template image and the image to be detected can be obtained from the corrected pixel points and the corresponding pixel points in the first ROI. After the difference image is determined, defect detection is performed on it according to the preset defect judgment method, for example by judging the magnitude of the gray value differences of the difference image, or by judging the difference image against a preset sensitivity threshold to determine the regions to be detected and then judging the areas of those regions. In this way, the defect detection result of the image to be detected is obtained.
  • In the technical solution of this embodiment, the first pixel point set corresponding to the first region of interest of the template image and the second pixel point set corresponding to the second region of interest of the image to be detected are acquired; the first feature point set and the second feature point set are determined according to the similarity between the first target pixel points in the first pixel point set and the corresponding second target pixel points in the second pixel point set; an image transformation rule is determined according to the position information of a preset number of first feature points in the first feature point set and of the same number of corresponding second feature points in the second feature point set, and the position information of all pixel points in the second region of interest is corrected according to the image transformation rule to obtain corrected pixel points; a difference image is determined according to the corrected pixel points and the corresponding pixel points in the first region of interest, and defect detection is performed on the difference image according to a preset defect judgment method. The selection of regions of interest effectively reduces the workload and saves time, so that efficient and accurate defect detection is achieved, which helps improve the quality of mold production.
  • In some embodiments, acquiring the first pixel point set corresponding to the first region of interest of the template image and the second pixel point set corresponding to the second region of interest of the image to be detected may include: determining the first region of interest of the template image and the second region of interest of the image to be detected; performing noise reduction on the pixel points corresponding to the first region of interest to obtain the first pixel point set, and performing noise reduction on the pixel points corresponding to the second region of interest to obtain the second pixel point set.
  • The first ROI of the template image and the second ROI of the image to be detected are determined first; for example, a region of any of several shapes, such as a circular, rectangular or polygonal region, can be selected in the whole template image as the first ROI, and a region of any of several shapes can likewise be selected in the whole image to be detected as the second ROI.
  • The shapes of the first ROI and the second ROI should be consistent to ensure the accuracy of the acquired first pixel point set and the corresponding second pixel point set.
  • After the two ROIs are obtained, the pixel points corresponding to the first ROI are denoised using a noise reduction method such as a mean filter, an adaptive median filter or a morphological noise filter to obtain the first pixel point set, and the pixel points corresponding to the second ROI are denoised in the same way to obtain the second pixel point set.
  • By determining the two ROIs first and then denoising the corresponding pixel points, the interference of noise on the images can be reduced, which helps improve the accuracy of the subsequent defect detection results.
  • In some embodiments, performing noise reduction on the pixel points corresponding to the first region of interest to obtain the first pixel point set, and performing noise reduction on the pixel points corresponding to the second region of interest to obtain the second pixel point set, may include: performing boundary expansion on the first region of interest to obtain an expanded first region, and performing boundary expansion on the second region of interest to obtain an expanded second region; performing noise reduction on the pixel points in the first region with an adaptive median filter to obtain the first pixel point set, and performing noise reduction on the pixel points in the second region with an adaptive median filter to obtain the second pixel point set.
  • the adaptive median filter is a method of image noise reduction, which can not only filter out the salt and pepper noise with high probability, but also better protect the image details.
  • Pepper noise refers to a small gray value, and the effect is small black dots; salt noise refers to a large gray value, and the effect is small white dots.
  • In order to better capture the edge features of the template image and the image to be detected, the boundary of the first ROI is expanded, for example by padding it with a set fixed value, to obtain the expanded first region; the boundary of the second ROI is expanded in the same way to obtain the expanded second region.
  • After the first and second regions are obtained, all pixel points in the first region are denoised with an adaptive median filter to obtain the first pixel point set, and all pixel points in the second region are denoised in the same way to obtain the second pixel point set, so that the first feature point set and the second feature point set can subsequently be determined from the similarity between the first target pixel points in the first pixel point set and the corresponding second target pixel points in the second pixel point set.
  • the window size of the median filter can be dynamically changed by the adaptive median filter according to preset conditions, so as to achieve the effect of removing noise and protecting image details.
  • the output result of the adaptive median filter is a gray value, which is used to replace the gray value at the point (x, y) at the center of the filter window.
  • the gray value ranges from 0 to 255.
  • The adaptive median filter can be divided into the following two steps, step A and step B.
  • Step A: let A1 = Z_med − Z_min and A2 = Z_med − Z_max; if A1 > 0 and A2 < 0, go to step B; otherwise, increase the window size; if the increased window size is less than or equal to S_max, repeat step A, otherwise output Z_med. The purpose of this step is to determine whether the median Z_med obtained in the current window is noise.
  • Step B: let B1 = Z_xy − Z_min and B2 = Z_xy − Z_max; if B1 > 0 and B2 < 0, output Z_xy, otherwise output Z_med.
  • Here S_xy denotes the action area of the adaptive median filter, i.e. the area covered by the filter window, whose center is the pixel in row y and column x of the image; Z_min denotes the smallest gray value in S_xy, Z_max the largest gray value in S_xy, and Z_med the median of all gray values in S_xy; Z_xy denotes the gray value of the pixel in row y and column x of the image; and S_max denotes the maximum window size allowed for S_xy.
  • From these two steps it can be seen that the adaptive median filter can quickly deal with noise that occurs with low probability; when the probability of noise points is high, it can still cope by increasing its window size.
  • After the pixel points corresponding to the first ROI and the second ROI have been denoised to obtain the first and second pixel point sets, the sharpness of the denoised images can be evaluated using a multi-factor image sharpness evaluation strategy.
  • Since image sharpness directly affects image quality, a high-sharpness image is more favorable for subsequent defect detection in mold monitoring applications. The multi-factor image sharpness evaluation strategy can consist of a bidirectional gradient evaluation strategy and an image variance evaluation strategy; both operate in the spatial domain, and the main idea is to evaluate image sharpness from the gradient differences of the grayscale features between adjacent pixels.
  • For the bidirectional gradient evaluation strategy, if A denotes the original image, G_x denotes the convolution for horizontal edge detection, i.e. the gradient value in the horizontal direction, and G_y denotes the convolution for vertical edge detection, i.e. the gradient value in the vertical direction. Taking a 3×3 matrix as an example, f(x, y) denotes the gray value of the pixel at the middle position of image A, where x is the row index and y is the column index; substituting the matrix A into the two convolution formulas gives G_x and G_y.
  • After G_x and G_y are obtained, the value G of the evaluation function can be used to evaluate image sharpness: the larger the value of G, the higher the sharpness of the corresponding image.
  • Variance is a measure used in probability theory to examine the degree of dispersion between a set of discrete data and the expectation of that set. The larger the variance, the larger the deviations between the data in the group and the more unevenly the data are distributed within the group; the smaller the variance, the more evenly the data are distributed.
  • For the image variance evaluation strategy, if the image sharpness is high, the grayscale differences between the image data increase, i.e. the variance of the image data should be larger, so image sharpness can be measured by the variance of the image grayscale data: the larger the variance, the better the sharpness.
  • In the image variance formula, M*N denotes the resolution of the image, where M is the image width and N is the image height; p(i, j) denotes the gray value of the pixel in row i and column j; and μ denotes the mean gray value.
  • In some embodiments, determining the first feature point set and the second feature point set according to the similarity between the first target pixel points in the first pixel point set and the corresponding second target pixel points in the second pixel point set may include: determining a first set of corner points to be confirmed according to the gradient of each pixel point in the first pixel point set, and determining the first target pixel point set according to the first corner response function of each corner point to be confirmed in that set; determining a second set of corner points to be confirmed according to the gradient of each pixel point in the second pixel point set, and determining the second target pixel point set according to the second corner response function of each corner point to be confirmed in that set; and, for each first target pixel point in the first target pixel point set, calculating the Hamming distance between the current first target pixel point and the second target pixel point whose position in the second target pixel point set corresponds to it, determining from the Hamming distance whether the current first target pixel point and the second target pixel point are feature points, and, if so, storing the first target pixel point in the first feature point set and the second target pixel point in the second feature point set.
  • The first target pixel point set can be determined as follows. Assuming the template image is I, the horizontal gradient value I_x and the vertical gradient value I_y of each pixel point in the first pixel point set corresponding to I are computed with the gradient formula; if I_x is greater than or equal to the set horizontal gradient threshold and I_y is greater than or equal to the set vertical gradient threshold, the pixel point is determined to be a first corner point to be confirmed and is stored in the first set of corner points to be confirmed. A corner response function is then computed for each corner point to be confirmed, and the points whose response exceeds the threshold after non-maximum suppression form the first target pixel point set.
  • The second target pixel point set is determined by applying the same computation to every pixel point in the second pixel point set of the image to be detected. The pixel points in the first target pixel point set and the second target pixel point set correspond one to one.
  • After the two target pixel point sets are obtained, for each first target pixel point, the Hamming distance between the current first target pixel point and the second target pixel point whose position corresponds to it is calculated; if the value of this Hamming distance is greater than the preset distance threshold, the current first target pixel point and the second target pixel point are determined to be feature points, the current first target pixel point is stored in the first feature point set, and the second target pixel point is stored in the second feature point set.
  • For example, if x is 00000000 00000000 01000000 11101111 and y is 00000000 10000000 11000000 01101111, then the Hamming distance d(x, y) = 3.
  • The horizontal gradient threshold, the vertical gradient threshold and the threshold T may all be determined according to the actual situation or set in advance; this embodiment does not limit them.
  • In the complex working conditions of mold monitoring, the similarity between a first target pixel point and the corresponding second target pixel point is judged from the magnitude of the Hamming distance so as to determine whether the two points are feature points; the first feature point set and second feature point set obtained in this way are more accurate.
  • FIG. 2 is a flowchart of a defect detection method for mold monitoring according to Embodiment 2 of the present application.
  • the embodiments of the present application are described on the basis of the above-mentioned embodiments.
  • this embodiment explains the process of performing defect detection on a differential image according to a preset defect judgment method.
  • the method of this embodiment includes but is not limited to the following steps.
  • S210 Acquire a first set of pixels corresponding to the first region of interest of the template image and a second set of pixels corresponding to the second region of interest of the image to be detected.
  • S220 Determine the first feature point set and the second feature point according to the similarity between the first target pixel point in the first pixel point set and the second target pixel point corresponding to the first target pixel point in the second pixel point set point collection.
  • S230: Determine an image transformation rule according to the position information of a preset number of first feature points in the first feature point set and of the same number of corresponding second feature points in the second feature point set, and correct the position information of all pixel points in the second region of interest according to the image transformation rule to obtain corrected pixel points. In some embodiments, this may include: extracting a preset number of first feature points from the first feature point set and extracting the same number of corresponding second feature points from the second feature point set to form a preset number of groups of feature points, where the preset number equals the number of groups; determining the image transformation matrix according to the position information of each group of feature points; and correcting the position information of all pixel points in the second region of interest according to the image transformation matrix to obtain the corrected pixel points.
  • The preset number may be designed in advance or determined according to the actual situation, as long as it meets the requirements for determining the image transformation matrix.
  • For the working conditions of mold monitoring, the position information of at least four groups of feature points is required for the calculation; however, considering that feature points with inaccurate position information produced during feature point screening affect the transformation accuracy, more groups of feature points are used. In theory, the more feature points, the more accurate the transformation matrix; in practice, too many feature points increase the amount of computation. Therefore, a preset number of first feature points are extracted from the first feature point set, and the same number of corresponding second feature points are extracted from the second feature point set, forming the preset number of groups of feature points; the image transformation matrix is determined from the position information of each group.
  • The transformation is as follows: let [x, y, w] be the position information of a feature point in the second feature point set of the image to be detected (the position information before transformation), and let [x', y', w'] be the position information of the corresponding feature point in the first feature point set of the template image (the position information after transformation). The corresponding image transformation rule is [x', y', w']^T = A · [x, y, w]^T, where A is the corresponding 3×3 image transformation matrix, a11, a12, …, a33 are the elements of A, and w is the z-axis value of the feature point in the second feature point set.
  • After the image transformation matrix is obtained, matrix transformation, i.e. position-information correction, is performed on all pixel points in the second ROI, and the corrected pixel points are obtained.
  • By first obtaining the preset number of groups of feature points, then determining the image transformation matrix from their position information, and finally correcting the position information of all pixel points in the second ROI according to this matrix, the determined image transformation matrix is more accurate, so the corrected pixel points are more accurate, errors are reduced, and both accuracy and speed are taken into account.
  • For example, taking four groups of feature points, the image transformation matrix is calculated as follows. A two-dimensional image is transformed into another plane image by a perspective transformation in which x denotes the horizontal position of a feature point in the second feature point set, y its vertical position, u the horizontal position of the corresponding feature point in the first feature point set, v its vertical position, and a, b, c, d, e, f, k, l, m and n are the coefficients to be determined.
  • After the corrected pixel points are obtained, since their position information is consistent with that of the corresponding pixel points in the first ROI, a difference image can be obtained by subtracting the gray value of the corresponding pixel point in the first region of interest from the gray value of each corrected pixel point; the difference image is represented by a matrix.
  • the preset sensitivity threshold may be preset, or may be determined according to the actual situation, which is not limited in this embodiment.
  • After the difference image is obtained, each gray value difference in it is compared with the preset sensitivity threshold. The judgment rule can be: if a gray value difference is less than or equal to the preset sensitivity threshold, the region corresponding to that difference in the second ROI of the image to be detected is filtered out (i.e. not determined to be a region to be detected of the difference image); if a gray value difference is greater than the preset sensitivity threshold, the region corresponding to that difference in the second ROI of the image to be detected is determined to be a region to be detected of the difference image.
  • In the formula expressing this rule, M(x, y) denotes the gray value of a pixel point among the corrected pixel points, N(x, y) denotes the gray value of the pixel point in the first ROI corresponding to M(x, y), and T denotes the preset sensitivity threshold.
  • a preset defect judgment method can be used to perform defect detection on the to-be-detected area to obtain a defect detection result of the to-be-detected image.
  • In some embodiments, performing defect detection on the regions to be detected according to the preset defect judgment method may include: for each region to be detected, if the area of the current region to be detected is greater than or equal to a first area threshold, the defect detection result of the current region is a fail; when the areas of all regions to be detected are smaller than the first area threshold, the total area of all regions to be detected is obtained, and if the total area is greater than or equal to a second area threshold, the defect detection result of the regions to be detected is a fail.
  • Both the first area threshold and the second area threshold may be preset or determined according to the actual situation; this embodiment does not limit them.
  • Judging the individual areas first and then the total area of all regions to be detected makes the judgment process more efficient and reliable, and the final defect detection results more accurate.
  • The evaluation results of the two filtering methods under the multi-factor image sharpness evaluation strategy are shown in Table 1; the results show that the adaptive median filter preserves image details better.
  • The comparison of the time required to determine feature points by the two methods is shown in Table 2: the method of the present application determines feature points about 100 ms faster than the Oriented FAST and Rotated BRIEF (ORB) method.
  • Table 3 shows the defect detection results; it can be seen that, even when the mold-opening position is unstable, the defect detection method of the present application judges the working conditions accurately, which helps improve the quality and precision of mold production.
  • In the technical solution of this embodiment, the gray value of each corrected pixel point and the gray value of the corresponding pixel point in the first region of interest are subtracted to obtain a difference image; the difference image is judged against a preset sensitivity threshold to determine the regions to be detected, and defect detection is performed on those regions according to the preset defect judgment method, so that the defect detection result is more accurate and closer to the actual situation, efficient and accurate defect detection is realized, and the quality of mold production is improved.
  • FIG. 3 is a schematic structural diagram of a defect detection device for mold monitoring provided in Embodiment 3 of the present application. As shown in FIG. 3 , the device may include the following modules.
  • the acquisition module 310 is configured to acquire a first pixel point set corresponding to the first region of interest of the template image and a second pixel point set corresponding to the second region of interest of the image to be detected, wherein the template image is obtained by photographing the mold, and the image to be detected is obtained by photographing a product produced using the mold;
  • the determination module 320 is configured to determine the first feature point set and the second feature point set according to the similarity between the first target pixel point in the first pixel point set and the second target pixel point in the second pixel point set corresponding to the first target pixel point;
  • the correction module 330 is configured to determine an image transformation rule according to the position information of a preset number of first feature points in the first feature point set and of the same number of corresponding second feature points in the second feature point set, and to correct the position information of all pixel points in the second region of interest according to the image transformation rule to obtain corrected pixel points;
  • the detection module 340 is configured to determine a differential image according to the corrected pixel points and the pixel points corresponding to the corrected pixel points in the first region of interest, and perform defect detection on the differential image according to a preset defect judgment method.
  • In the technical solution of this embodiment, the first pixel point set corresponding to the first region of interest of the template image and the second pixel point set corresponding to the second region of interest of the image to be detected are acquired; the first and second feature point sets are determined according to the similarity between the first target pixel points in the first pixel point set and the corresponding second target pixel points in the second pixel point set; an image transformation rule is determined according to the position information of a preset number of first feature points and of the same number of corresponding second feature points, and the position information of all pixel points in the second region of interest is corrected according to it to obtain corrected pixel points; a difference image is determined according to the corrected pixel points and the corresponding pixel points in the first region of interest, and defect detection is performed on the difference image according to a preset defect judgment method. This realizes efficient and accurate defect detection and helps improve the quality of mold production.
  • The above acquisition module 310 may include: a region determination unit configured to determine the first region of interest of the template image and the second region of interest of the image to be detected; and a set determination unit configured to perform noise reduction on the pixel points corresponding to the first region of interest to obtain the first pixel point set, and to perform noise reduction on the pixel points corresponding to the second region of interest to obtain the second pixel point set.
  • The above set determination unit may be configured to: perform boundary expansion on the first region of interest to obtain an expanded first region, and perform boundary expansion on the second region of interest to obtain an expanded second region; perform noise reduction on the pixel points in the first region with an adaptive median filter to obtain the first pixel point set, and perform noise reduction on the pixel points in the second region with an adaptive median filter to obtain the second pixel point set.
  • The above determination module 320 may be configured to: determine the first set of corner points to be confirmed according to the gradient of each pixel point in the first pixel point set, and determine the first target pixel point set according to the first corner response function of each corner point to be confirmed in that set; determine the second set of corner points to be confirmed according to the gradient of each pixel point in the second pixel point set, and determine the second target pixel point set according to the second corner response function of each corner point to be confirmed in that set; and, for each first target pixel point in the first target pixel point set, calculate the Hamming distance between the current first target pixel point and the second target pixel point whose position in the second target pixel point set corresponds to it, determine from the Hamming distance whether the current first target pixel point and the second target pixel point are feature points, and, if so, store the current first target pixel point in the first feature point set and the second target pixel point in the second feature point set.
  • The above correction module 330 may be configured to: extract a preset number of first feature points from the first feature point set and extract the same number of corresponding second feature points from the second feature point set to form a preset number of groups of feature points, where the preset number equals the number of groups; determine the image transformation matrix according to the position information of each group of feature points; and correct the position information of all pixel points in the second region of interest according to the image transformation matrix to obtain the corrected pixel points.
  • The above detection module 340 may include: a difference image determination unit configured to subtract the gray value of each corrected pixel point and the gray value of the corresponding pixel point in the first region of interest to obtain a difference image; a region-to-be-detected determination unit configured to judge the difference image against a preset sensitivity threshold to determine the regions to be detected of the difference image; and a defect detection unit configured to perform defect detection on the regions to be detected according to the preset defect judgment method.
  • The above defect detection unit may be configured to: for each region to be detected, if the area of the current region to be detected is greater than or equal to the first area threshold, set the defect detection result of the current region to a fail; when the areas of all regions to be detected are smaller than the first area threshold, obtain the total area of all regions to be detected, and if the total area is greater than or equal to the second area threshold, set the defect detection result of the regions to be detected to a fail.
  • the defect detection device for mold monitoring provided in this embodiment can be applied to the defect detection method for mold monitoring provided in any of the foregoing embodiments, and has corresponding functions and effects.
  • FIG. 4 is a schematic structural diagram of a computer device according to Embodiment 4 of the present application.
  • The computer device includes a processor 410, a storage device 420 and a communication device 430; the number of processors 410 in the computer device may be one or more, and one processor 410 is taken as an example in FIG. 4. The processor 410, the storage device 420 and the communication device 430 in the computer device may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 4.
  • As a computer-readable storage medium, the storage device 420 can be used to store software programs, computer-executable programs and modules, such as the modules corresponding to the defect detection method for mold monitoring in the embodiments of the present application (for example, the modules of the defect detection device for mold monitoring).
  • the processor 410 executes various functional applications and data processing of the computer equipment by running the software programs, instructions and modules stored in the storage device 420, ie, implements the above-mentioned defect detection method for mold monitoring.
  • the storage device 420 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the computer equipment, and the like. Additionally, storage device 420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, storage device 420 may include memory located remotely from processor 410, which may be connected to the computer device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the communication device 430 is configured to realize network connection or mobile data connection between servers.
  • a computer device provided in this embodiment can be used to execute the defect detection method for mold monitoring provided by any of the above embodiments, and has corresponding functions and effects.
  • The fifth embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the defect detection method for mold monitoring of any embodiment of the present application. The method includes: acquiring a first pixel point set corresponding to the first region of interest of the template image and a second pixel point set corresponding to the second region of interest of the image to be detected; determining the first and second feature point sets according to the similarity between the first target pixel points and the corresponding second target pixel points; determining an image transformation rule from the position information of a preset number of first feature points and of the corresponding second feature points, and correcting the position information of all pixel points in the second region of interest according to the image transformation rule to obtain corrected pixel points; and determining a difference image according to the corrected pixel points and the pixel points in the first region of interest corresponding to them, and performing defect detection on the difference image according to a preset defect judgment method.
  • In the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the method operations described above and can also perform related operations in the defect detection method for mold monitoring provided by any embodiment of the present application.
  • the present application can be implemented by software and necessary general-purpose hardware, and can also be implemented by hardware.
  • the technical solution of the present application can be embodied in the form of a software product, and the computer software product can be stored in a computer-readable storage medium, such as a floppy disk of a computer, a read-only memory (Read-Only Memory, ROM), a random access memory ( Random Access Memory, RAM), flash memory (FLASH), hard disk or optical disk, etc., including multiple instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the various embodiments of the present application. method.
  • The multiple units and modules included in the above device are only divided according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the names of the multiple functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A defect detection method, apparatus, device and medium for mold monitoring. The method includes: acquiring a first pixel point set corresponding to a template image and a second pixel point set corresponding to an image to be detected (S110); determining a first feature point set and a second feature point set according to the similarity between a first target pixel point and the second target pixel point corresponding to the first target pixel point (S120); determining an image transformation rule according to the position information of a preset number of first feature points in the first feature point set and the position information of the second feature points in the second feature point set corresponding to the preset number of first feature points, and correcting the position information of all pixel points in the second region of interest according to the image transformation rule to obtain corrected pixel points (S130); and determining a difference image according to the corrected pixel points and the pixel points in the first region of interest corresponding to the corrected pixel points, and performing defect detection on the difference image (S140).

Description

Defect detection method, apparatus, device and medium for mold monitoring
This application claims priority to Chinese patent application No. 202110181174.2 filed with the Chinese Patent Office on February 9, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of digital image processing, and for example to a defect detection method, apparatus, device and medium for mold monitoring.
Background
In the manufacturing industry, owing to the particularity and irregularity of different products and molds, checking the production state of products is time-consuming and laborious, and the mold-opening state of a mold cannot be detected accurately and quickly, which leads to problems such as mold damage and reduced production efficiency. Taking the injection-molding industry as an example, the quality of the mold is directly related to the quality of the product; therefore, how to effectively monitor the state of the mold and the product during the injection-molding process so as to guarantee mold production quality is a key concern of the injection-molding industry.
Defect detection methods for mold monitoring can be roughly divided into three categories: matching-based methods, image-understanding-based search methods and feature-location-based search methods. Matching-based methods come in two kinds: the grayscale-based matching method and the shape-based matching method. The grayscale-based matching method measures similarity using the pixel points of the target image to be tested and the original image, while the shape-based matching method measures similarity using the target to be tested and a template, after which defect detection is performed. The image-understanding-based search method relies on artificial intelligence (AI) and similar means to summarize the target features before performing defect detection. The feature-location-based search method first converts the analysis of the entire image into the analysis of image features and then performs defect detection.
However, in the above schemes the grayscale-based matching method is slow and is not suitable for cases where the target rotates or deforms before and after mold opening; the shape-based matching method places high demands on on-site working conditions, while the on-site environment of mold monitoring applications is complex, so it is not suitable for mold monitoring; the image-understanding-based search method has low matching accuracy and requires a large amount of training data for every scene, making generalization difficult; and the feature-location-based search method places high demands on feature selection and is time-consuming. There is not yet an efficient defect detection method for mold monitoring.
Summary
The present application provides a defect detection method, apparatus, device and medium for mold monitoring, which can realize efficient and accurate defect detection and help improve the quality of mold production.
A defect detection method for mold monitoring is provided, the method including:
acquiring a first pixel point set corresponding to a first region of interest of a template image and a second pixel point set corresponding to a second region of interest of an image to be detected, wherein the template image is obtained by photographing a mold, and the image to be detected is obtained by photographing a product produced using the mold;
determining a first feature point set and a second feature point set according to the similarity between a first target pixel point in the first pixel point set and a second target pixel point in the second pixel point set corresponding to the first target pixel point;
determining an image transformation rule according to the position information of a preset number of first feature points in the first feature point set and the position information of the same number of second feature points in the second feature point set corresponding to the preset number of first feature points, and correcting the position information of all pixel points in the second region of interest according to the image transformation rule to obtain corrected pixel points; and
determining a difference image according to the corrected pixel points and the pixel points in the first region of interest corresponding to the corrected pixel points, and performing defect detection on the difference image according to a preset defect judgment method.
A defect detection device for mold monitoring is also provided, the device including:
an acquisition module configured to acquire a first pixel point set corresponding to a first region of interest of a template image and a second pixel point set corresponding to a second region of interest of an image to be detected, wherein the template image is obtained by photographing a mold, and the image to be detected is obtained by photographing a product produced using the mold;
a determination module configured to determine a first feature point set and a second feature point set according to the similarity between a first target pixel point in the first pixel point set and a second target pixel point in the second pixel point set corresponding to the first target pixel point;
a correction module configured to determine an image transformation rule according to the position information of a preset number of first feature points in the first feature point set and the position information of the same number of second feature points in the second feature point set corresponding to the preset number of first feature points, and to correct the position information of all pixel points in the second region of interest according to the image transformation rule to obtain corrected pixel points; and
a detection module configured to determine a difference image according to the corrected pixel points and the pixel points in the first region of interest corresponding to the corrected pixel points, and to perform defect detection on the difference image according to a preset defect judgment method.
A computer device is also provided, including:
one or more processors; and
a storage device configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the defect detection method for mold monitoring described in any embodiment of the present application.
A computer-readable storage medium is also provided, which stores a computer program that, when executed by a processor, implements the defect detection method for mold monitoring described in any embodiment of the present application.
Brief Description of the Drawings
FIG. 1 is a flowchart of a defect detection method for mold monitoring provided in Embodiment 1 of the present application;
FIG. 2 is a flowchart of a defect detection method for mold monitoring provided in Embodiment 2 of the present application;
FIG. 3 is a schematic structural diagram of a defect detection device for mold monitoring provided in Embodiment 3 of the present application;
FIG. 4 is a schematic structural diagram of a computer device provided in Embodiment 4 of the present application.
Detailed Description
The present application is described below with reference to the drawings and embodiments. The embodiments described here are only intended to explain the present application and do not limit it. For ease of description, only the parts related to the present application, rather than the entire structure, are shown in the drawings.
Embodiment 1
FIG. 1 is a flowchart of a defect detection method for mold monitoring provided in Embodiment 1 of the present application. This embodiment is applicable to the case of performing defect detection, during mold monitoring, on an image to be detected obtained by photographing a product produced using a mold. The defect detection method for mold monitoring provided in this embodiment can be executed by the defect detection device for mold monitoring provided in the embodiments of the present application; the device can be implemented in software and/or hardware and integrated in the computer device that executes the method.
Referring to FIG. 1, the method of this embodiment includes, but is not limited to, the following steps.
S110: Acquire a first pixel point set corresponding to a first region of interest of a template image and a second pixel point set corresponding to a second region of interest of an image to be detected.
The template image is obtained by photographing the mold, and the image to be detected is obtained by photographing a product produced using the mold; both may be obtained by performing the corresponding photographing operation after a photographing signal is received. The first pixel point set corresponding to the first region of interest (ROI) can be understood in two ways: one is the first pixel point set within the first ROI, and the other is the first pixel point set within the expanded first region obtained by expanding the boundary of the first ROI. Similarly, the second pixel point set corresponding to the second ROI can also be understood in two ways: one is the second pixel point set within the second ROI, and the other is the second pixel point set within the expanded second region obtained by expanding the boundary of the second ROI. The pixel points in the first pixel point set and the second pixel point set correspond one to one.
To detect defects in the image to be detected, note that after the template image and the image to be detected are obtained, both images contain a large number of pixel points; processing all of them would be time-consuming and would waste resources unnecessarily. By acquiring the first pixel point set corresponding to the first ROI of the template image and the second pixel point set corresponding to the second ROI of the image to be detected, and then operating only on the pixel points in the first pixel point set and the corresponding pixel points in the second pixel point set, time can be saved and the detection speed increased.
S120: Determine a first feature point set and a second feature point set according to the similarity between a first target pixel point in the first pixel point set and a second target pixel point in the second pixel point set corresponding to the first target pixel point.
The first target pixel points and the second target pixel points are all points that meet the target point screening conditions; the screening conditions may be preset or determined according to the actual situation.
After the first pixel point set and the second pixel point set are obtained, the first target pixel points in the first pixel point set and the corresponding second target pixel points in the second pixel point set can be determined according to the preset target point screening conditions. The similarity between each first target pixel point and the corresponding second target pixel point is then determined using any one of the Euclidean distance, the Hamming distance or other similarity measures; according to how high the similarity is, it can be decided whether the first target pixel point and the corresponding second target pixel point are feature points, and the first feature point set and the second feature point set are determined accordingly.
S130: Determine an image transformation rule according to the position information of a preset number of first feature points in the first feature point set and the position information of the same number of second feature points in the second feature point set corresponding to the preset number of first feature points, and correct the position information of all pixel points in the second region of interest according to the image transformation rule to obtain corrected pixel points.
The preset number may be determined according to the actual situation or set in advance; this embodiment does not limit it.
After the first feature point set and the second feature point set are determined, the position information of the preset number of first feature points and of the same number of corresponding second feature points determines the image transformation rule, i.e. the image transformation matrix. The position information of all pixel points in the second ROI is corrected according to this matrix to obtain corrected pixel points, so that every pixel point in the second ROI is located at the same position as the corresponding pixel point in the first ROI. This facilitates the subsequent determination of the difference image from the corrected pixel points and the corresponding pixel points in the first ROI and the defect detection performed on the difference image according to the preset defect judgment method.
S140: Determine a difference image according to the corrected pixel points and the pixel points in the first region of interest corresponding to the corrected pixel points, and perform defect detection on the difference image according to a preset defect judgment method.
The difference image can be understood as the difference between the corrected pixel points and the corresponding pixel points in the first region of interest.
After all pixel points in the second ROI of the image to be detected have been corrected, the difference image obtained by comparing the two images must be determined in order to capture the differences between the template image and the image to be detected; the difference image displays the information that differs between the ROI regions of the two images. The difference image of the template image and the image to be detected can be obtained from the corrected pixel points and the corresponding pixel points in the first ROI. After the difference image is determined, the defect detection result of the image to be detected can be obtained according to the preset defect judgment method, for example by judging the magnitude of the gray value differences of the difference image, or by judging the difference image against a preset sensitivity threshold to determine the regions to be detected of the difference image and then judging the areas of those regions.
In the technical solution provided by this embodiment, the first pixel point set corresponding to the first region of interest of the template image and the second pixel point set corresponding to the second region of interest of the image to be detected are acquired; the first feature point set and the second feature point set are determined according to the similarity between the first target pixel points in the first pixel point set and the corresponding second target pixel points in the second pixel point set; an image transformation rule is determined according to the position information of a preset number of first feature points in the first feature point set and the position information of the same number of corresponding second feature points in the second feature point set, and the position information of all pixel points in the second region of interest is corrected according to the image transformation rule to obtain corrected pixel points; a difference image is determined according to the corrected pixel points and the corresponding pixel points in the first region of interest, and defect detection is performed on the difference image according to a preset defect judgment method. The selection of regions of interest effectively reduces the workload and saves time, so that efficient and accurate defect detection is achieved, which helps improve the quality of mold production.
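To make the S110–S140 flow concrete, the following is a minimal end-to-end sketch in Python. It substitutes off-the-shelf OpenCV components (a plain median blur, ORB features with brute-force Hamming matching, and RANSAC homography estimation) for the application's own adaptive median filter, gradient/corner-response screening and four-point transform calculation, so it illustrates the structure of the pipeline rather than the patented method itself; the ROI coordinates and all thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_defect(template, test, roi, sensitivity=30,
                  area_thresh=500, total_area_thresh=800):
    """End-to-end sketch of S110-S140 using off-the-shelf OpenCV pieces.

    template/test: grayscale images; roi: (x, y, w, h); the threshold
    values are illustrative, not values taken from the application.
    """
    x, y, w, h = roi
    # S110: crop the two ROIs and denoise them (plain median blur here,
    # standing in for the adaptive median filter described below).
    first = cv2.medianBlur(template[y:y + h, x:x + w], 3)
    second = cv2.medianBlur(test[y:y + h, x:x + w], 3)

    # S120: detect and match feature points.  ORB + Hamming-distance
    # brute-force matching stands in for the gradient/corner-response
    # screening described in the text.
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(first, None)
    kp2, des2 = orb.detectAndCompute(second, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des2, des1)
    matches = sorted(matches, key=lambda m: m.distance)[:50]
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # S130: estimate the 3x3 transformation matrix A from >= 4 point
    # pairs and correct the position of every pixel in the second ROI.
    A, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    corrected = cv2.warpPerspective(second, A, (first.shape[1], first.shape[0]))

    # S140: difference image, sensitivity threshold, then the area-based
    # defect judgment.
    diff = cv2.absdiff(corrected, first)
    _, mask = cv2.threshold(diff, sensitivity, 255, cv2.THRESH_BINARY)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]        # skip the background label
    if np.any(areas >= area_thresh) or areas.sum() >= total_area_thresh:
        return "fail"
    return "pass"
```

The individual stages that the application actually describes (adaptive median filtering, corner screening, Hamming matching, the four-point transformation matrix and the area-based judgment) are sketched separately in the sections below.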
In some embodiments, acquiring the first pixel point set corresponding to the first region of interest of the template image and the second pixel point set corresponding to the second region of interest of the image to be detected may include: determining the first region of interest of the template image and the second region of interest of the image to be detected; and performing noise reduction on the pixel points corresponding to the first region of interest to obtain the first pixel point set, and performing noise reduction on the pixel points corresponding to the second region of interest to obtain the second pixel point set.
In one embodiment, the first ROI of the template image and the second ROI of the image to be detected are determined first. For example, a region of any of several shapes, such as a circular, rectangular or polygonal region, can be selected in the whole template image as the first ROI, and a region of any of several shapes, such as a circular, rectangular or polygonal region, can be selected in the whole image to be detected as the second ROI. The shapes of the first ROI and the second ROI should be consistent to ensure the accuracy of the acquired first pixel point set and the corresponding second pixel point set. After the first ROI and the second ROI are obtained, the pixel points corresponding to the first ROI are denoised using a noise reduction method such as a mean filter, an adaptive median filter or a morphological noise filter to obtain the first pixel point set, and the pixel points corresponding to the second ROI are likewise denoised using a mean filter, an adaptive median filter, a morphological noise filter or another noise reduction method to obtain the second pixel point set.
In the embodiments of the present application, by first determining the first ROI and the second ROI and then denoising the pixel points corresponding to each of them, the interference of noise on the images can be reduced, which helps improve the accuracy of the subsequent defect detection results.
In some embodiments, performing noise reduction on the pixel points corresponding to the first region of interest to obtain the first pixel point set, and performing noise reduction on the pixel points corresponding to the second region of interest to obtain the second pixel point set, may include: performing boundary expansion on the first region of interest to obtain an expanded first region, and performing boundary expansion on the second region of interest to obtain an expanded second region; and performing noise reduction on the pixel points in the first region with an adaptive median filter to obtain the first pixel point set, and performing noise reduction on the pixel points in the second region with an adaptive median filter to obtain the second pixel point set.
The adaptive median filter is an image noise reduction method that can filter out salt-and-pepper noise, which occurs with relatively high probability, while better preserving image details. Pepper noise refers to small gray values, which appear as small black dots; salt noise refers to large gray values, which appear as small white dots.
In one embodiment, in order to better capture the edge features of the template image and the image to be detected, the boundary of the first ROI is expanded, for example by padding it with a set fixed value, to obtain the expanded first region. Likewise, the boundary of the second ROI is expanded in the same way to obtain the expanded second region. After the first region and the second region are obtained, all pixel points in the first region are denoised with the adaptive median filter to obtain the first pixel point set, and all pixel points in the second region are denoised with the adaptive median filter to obtain the second pixel point set, so that the first feature point set and the second feature point set can subsequently be determined from the similarity between the first target pixel points in the first pixel point set and the corresponding second target pixel points in the second pixel point set.
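As a small illustration of the boundary expansion described above, the sketch below (Python with OpenCV) crops an ROI, pads its border with a set fixed value and then denoises it; the padding width, the fill value and the use of a plain 3×3 median blur in place of the adaptive median filter are assumptions made for the example.

```python
import cv2

def expand_and_denoise(image, roi, pad=8, fill_value=0):
    """Crop an ROI, pad its boundary with a fixed value, then denoise.

    A plain 3x3 median blur stands in for the adaptive median filter
    sketched further below.
    """
    x, y, w, h = roi
    patch = image[y:y + h, x:x + w]
    # Boundary expansion: fill a border of `pad` pixels with a set value.
    expanded = cv2.copyMakeBorder(patch, pad, pad, pad, pad,
                                  borderType=cv2.BORDER_CONSTANT,
                                  value=fill_value)
    return cv2.medianBlur(expanded, 3)
```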
When the probability of noise is relatively high, the adaptive median filter dynamically changes the window size of the median filter according to preset conditions, achieving the effect of removing noise while protecting image details. The output of the adaptive median filter is a gray value that replaces the gray value at the point (x, y) at the center of the filter window; gray values range from 0 to 255. The adaptive median filter can be divided into the following two steps, step A and step B.
Step A: let A1 = Z_med − Z_min and A2 = Z_med − Z_max. If A1 > 0 and A2 < 0, go to step B; otherwise, increase the window size. If the increased window size is less than or equal to S_max, repeat step A; otherwise, output Z_med.
The purpose of this step is to determine whether the median Z_med obtained in the current window is noise.
Step B: let B1 = Z_xy − Z_min and B2 = Z_xy − Z_max. If B1 > 0 and B2 < 0, output Z_xy; otherwise, output Z_med.
Here S_xy denotes the action area of the adaptive median filter, i.e. the area covered by the filter window, whose center is the pixel in row y and column x of the image; Z_min denotes the smallest gray value in S_xy, Z_max the largest gray value in S_xy, and Z_med the median of all gray values in S_xy; Z_xy denotes the gray value of the pixel in row y and column x of the image; and S_max denotes the maximum window size allowed for S_xy.
From these two steps it can be seen that the adaptive median filter can quickly handle noise that occurs with low probability, and, when the probability of noise points is high, it can still cope by increasing its window size.
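A direct, if unoptimized, implementation of steps A and B is sketched below in Python; the starting window size of 3 and the default S_max are assumptions, and handling borders by edge replication is an implementation choice the text does not specify.

```python
import numpy as np

def adaptive_median_filter(image, s_max=7):
    """Adaptive median filter following steps A and B described above.

    image: 2-D uint8 array; s_max: maximum allowed window size S_max.
    """
    padded = np.pad(image, s_max // 2, mode="edge")
    out = np.empty_like(image)
    rows, cols = image.shape
    for y in range(rows):
        for x in range(cols):
            size = 3
            z_xy = int(image[y, x])
            while True:
                half = size // 2
                cy, cx = y + s_max // 2, x + s_max // 2
                window = padded[cy - half:cy + half + 1, cx - half:cx + half + 1]
                z_min, z_max = int(window.min()), int(window.max())
                z_med = int(np.median(window))
                # Step A: is the window median Z_med itself noise?
                if z_med - z_min > 0 and z_med - z_max < 0:
                    # Step B: keep Z_xy unless it is noise, else output Z_med.
                    out[y, x] = z_xy if (z_xy - z_min > 0 and z_xy - z_max < 0) else z_med
                    break
                # Otherwise enlarge the window until S_max is exceeded.
                size += 2
                if size > s_max:
                    out[y, x] = z_med
                    break
    return out
```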
In some embodiments, after the pixel points corresponding to the first ROI are denoised to obtain the first pixel point set and the pixel points corresponding to the second ROI are denoised to obtain the second pixel point set, the sharpness of the denoised images can be evaluated, for example with a multi-factor image sharpness evaluation strategy.
Since image sharpness directly affects image quality, a high-sharpness image is more favorable for subsequent defect detection in mold monitoring applications. The multi-factor image sharpness evaluation strategy can consist of a bidirectional gradient evaluation strategy and an image variance evaluation strategy. Both operate in the spatial domain, and the main idea is to evaluate image sharpness from the gradient differences of the grayscale features between adjacent pixels.
For the bidirectional gradient evaluation strategy, if A denotes the original image, then G_x denotes the convolution for horizontal edge detection, i.e. the gradient value in the horizontal direction, and G_y denotes the convolution for vertical edge detection, i.e. the gradient value in the vertical direction. Taking a 3×3 matrix as an example, the corresponding formulas for G_x and G_y are as follows [equations given as images in the original].
Here f(x, y) denotes the gray value of the pixel at the middle position of image A, x denotes the row index and y denotes the column index. Substituting the matrix A into the above two formulas gives the corresponding values of G_x and G_y [equations given as images in the original].
After G_x and G_y are obtained, the value of the evaluation function G [equation given as an image in the original] can be used to evaluate image sharpness: the larger the value of G, the higher the sharpness of the corresponding image.
Variance is a measure used in probability theory to examine the degree of dispersion between a set of discrete data and the expectation of that set of data. A larger variance indicates larger deviations between the data in the group and an uneven distribution of the data within the group; a smaller variance indicates that the data in the group are more evenly distributed. For the image variance evaluation strategy, if the image sharpness is high, the grayscale differences between the image data increase, i.e. the variance of the image data should be larger, so the sharpness of the image can be measured by the variance of its grayscale data: the larger the variance, the better the sharpness. By computing the variance of the image and thereby evaluating the differences between image pixels, it can be judged whether the image sharpness is acceptable. The image variance formula is:
variance = (1 / (M·N)) · Σ_{i=1..M} Σ_{j=1..N} (p(i, j) − μ)²
Here M*N denotes the resolution of the image, M denotes the width of the image, N denotes the height of the image, p(i, j) denotes the gray value of the pixel in row i and column j, and μ denotes the mean gray value.
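The two factors of the sharpness evaluation strategy can be sketched as follows. The application does not state which edge-detection kernels it convolves with, so the Sobel operator is assumed here, and the gradient score is aggregated with a mean; only the variance score follows directly from the formula above.

```python
import cv2
import numpy as np

def bidirectional_gradient_score(image):
    """Bidirectional gradient sharpness score.

    Sobel kernels are one common choice of horizontal/vertical
    edge-detection convolutions; treat them as an assumption here.
    """
    gx = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=3)
    # A larger G indicates sharper transitions between adjacent pixels.
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

def variance_score(image):
    """Image variance score: mean squared deviation of the gray values
    p(i, j) from the image mean over the M x N pixels."""
    mu = float(image.mean())
    return float(np.mean((image.astype(np.float64) - mu) ** 2))
```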
In some embodiments, determining the first feature point set and the second feature point set according to the similarity between the first target pixel points in the first pixel point set and the second target pixel points in the second pixel point set corresponding to the first target pixel points may include: determining a first set of corner points to be confirmed according to the gradient of each pixel point in the first pixel point set, and determining the first target pixel point set according to the first corner response function of each corner point to be confirmed in the first set of corner points to be confirmed; determining a second set of corner points to be confirmed according to the gradient of each pixel point in the second pixel point set, and determining the second target pixel point set according to the second corner response function of each corner point to be confirmed in the second set of corner points to be confirmed; and, for each first target pixel point in the first target pixel point set, calculating the Hamming distance between the current first target pixel point and the second target pixel point whose position in the second target pixel point set corresponds to the current first target pixel point, determining from the Hamming distance whether the current first target pixel point and the second target pixel point are feature points, and, if so, storing the current first target pixel point in the first feature point set and the second target pixel point in the second feature point set.
In one embodiment, the first target pixel point set is determined as follows:
1) Assume the template image is I. The horizontal gradient value I_x and the vertical gradient value I_y of each pixel point in the first pixel point set corresponding to the template image I are computed with the gradient calculation formula. If I_x is greater than or equal to the set horizontal gradient threshold and I_y is greater than or equal to the set vertical gradient threshold, the pixel point is determined to be a first corner point to be confirmed and is stored in the first set of corner points to be confirmed.
2) For each first corner point to be confirmed in the first set of corner points to be confirmed, compute I_x², I_y² and I_x·I_y.
3) Apply Gaussian filtering to the computed I_x², I_y² and I_x·I_y to eliminate Gaussian noise.
4) For each first corner point to be confirmed, construct the correlation matrix M, built from the filtered I_x², I_y² and I_x·I_y, and the first corner response function R [both given as equation images in the original], where |M| denotes the determinant of M and t_r(M) denotes the trace of M, i.e. the sum of the elements on the diagonal of the matrix M.
5) Apply non-maximum suppression to R; if R is greater than the threshold T, determine the corresponding first corner point to be confirmed as a first target pixel point and store it in the first target pixel point set.
Similarly, by performing the same calculation on each pixel point in the second pixel point set corresponding to the image to be detected, the second target pixel point set can be determined. The pixel points in the first target pixel point set and the second target pixel point set correspond one to one. After the first target pixel point set and the second target pixel point set are obtained, for each first target pixel point in the first target pixel point set, the Hamming distance between the current first target pixel point and the second target pixel point whose position in the second target pixel point set corresponds to the current first target pixel point is calculated; if the value corresponding to this Hamming distance is greater than the preset distance threshold, the current first target pixel point and the second target pixel point are determined to be feature points, the current first target pixel point is stored in the first feature point set, and the second target pixel point is stored in the second feature point set.
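The screening in steps 1)–5) can be sketched as below. The application does not give the exact corner response function R (only that it is built from |M| and t_r(M)), so the common Harris combination R = |M| − k·t_r(M)² is assumed, and the gradient threshold, k, the response threshold and the non-maximum-suppression window are illustrative values.

```python
import cv2
import numpy as np

def target_pixel_points(image, grad_thresh=20.0, k=0.04, r_thresh=1e6, nms=3):
    """Screen target pixel points roughly following steps 1)-5) above."""
    img = image.astype(np.float64)
    ix = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    iy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    # Step 1): keep pixels whose horizontal and vertical gradients both
    # reach their thresholds.
    candidate = (np.abs(ix) >= grad_thresh) & (np.abs(iy) >= grad_thresh)
    # Steps 2)-3): Ix^2, Iy^2, Ix*Iy smoothed with a Gaussian filter.
    ixx = cv2.GaussianBlur(ix * ix, (5, 5), 1.0)
    iyy = cv2.GaussianBlur(iy * iy, (5, 5), 1.0)
    ixy = cv2.GaussianBlur(ix * iy, (5, 5), 1.0)
    # Step 4): response built from det(M) and trace(M) of the
    # correlation matrix M = [[Ixx, Ixy], [Ixy, Iyy]] (Harris form assumed).
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    r = det - k * trace * trace
    # Step 5): non-maximum suppression and thresholding of R.
    r_max = cv2.dilate(r, np.ones((nms, nms)))
    corners = candidate & (r >= r_thresh) & (r >= r_max)
    return np.argwhere(corners)        # array of (row, col) target points
```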
The Hamming distance is the number of positions at which the corresponding bits of two binary sequences (of equal length) differ, i.e. the distance d(x, y) between a sequence x = (x_1, x_2, …, x_k, …, x_n) and a sequence y = (y_1, y_2, …, y_k, …, y_n). The Hamming distance is computed as:
$$d(x, y) = \sum_{k=1}^{n} x_k \oplus y_k$$
where ⊕ denotes the modulo-2 operation (exclusive OR), x_k ∈ {0, 1}, y_k ∈ {0, 1}, and k is the sequence index.
By way of example, to compute the Hamming distance between the current first target pixel and the second target pixel at the corresponding position in the second target pixel point set, the gray value of the first target pixel is first converted into a binary sequence x and the second target pixel at the corresponding position in the second target pixel point set is converted into a binary sequence y; the Hamming distance between the current first target pixel and the corresponding second target pixel is then obtained from the formula for d(x, y). For example, suppose
x is: 00000000 00000000 01000000 11101111
y is: 00000000 10000000 11000000 01101111
then d(x, y) = 3.
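A small sketch of this computation is given below. Representing each value as a 32-bit sequence follows the worked example above and is otherwise an assumption; gray values in the range 0–255 would also fit in 8 bits.

```python
def hamming_distance(x: int, y: int, bits: int = 32) -> int:
    """Number of differing bits between two `bits`-wide binary sequences."""
    diff = (x ^ y) & ((1 << bits) - 1)   # modulo-2 (XOR) of corresponding bits
    return bin(diff).count("1")

# The worked example above:
x = 0b00000000_00000000_01000000_11101111
y = 0b00000000_10000000_11000000_01101111
assert hamming_distance(x, y) == 3
```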
The horizontal gradient threshold, the vertical gradient threshold and the threshold T may all be chosen according to the actual situation or set in advance; this embodiment places no restriction on them.
In the embodiments of the present application, under the complex working conditions of mold monitoring, the similarity between a first target pixel and the corresponding second target pixel is judged from the magnitude of their Hamming distance, and from this it is determined whether the two pixels are feature points, so the resulting first feature point set and second feature point set are more accurate. The smaller the Hamming distance, the higher the similarity between the two pixels, which indicates that both points are feature points with good uniqueness that can form a one-to-one correspondence.
Embodiment 2
Fig. 2 is a flowchart of a defect detection method for mold monitoring provided in Embodiment 2 of the present application. This embodiment is described on the basis of the embodiments above. Optionally, this embodiment explains the process of performing defect detection on the difference image according to a preset defect judgment method.
Referring to Fig. 2, the method of this embodiment includes, but is not limited to, the following steps.
S210: acquire a first pixel point set corresponding to a first region of interest of a template image and a second pixel point set corresponding to a second region of interest of an image to be detected.
S220: determine a first feature point set and a second feature point set from the similarity between a first target pixel in the first pixel point set and the second target pixel in the second pixel point set that corresponds to the first target pixel.
S230: determine an image transformation rule from the position information of a preset number of first feature points in the first feature point set and the position information of the same number of second feature points in the second feature point set that correspond to the preset number of first feature points, and correct the position information of all pixels in the second region of interest according to the image transformation rule to obtain corrected pixels.
Optionally, determining the image transformation rule from the position information of the preset number of first feature points in the first feature point set and the position information of the same number of corresponding second feature points in the second feature point set, and correcting the position information of all pixels in the second region of interest according to the image transformation rule to obtain the corrected pixels, may include: extracting the preset number of first feature points from the first feature point set, and extracting the same number of second feature points corresponding to the first feature points from the second feature point set, to form a preset number of groups of feature points, where the preset number and the preset number of groups are equal; determining an image transformation matrix from the position information of each group of feature points among the preset groups; and correcting the position information of all pixels in the second region of interest according to the image transformation matrix to obtain the corrected pixels.
The preset number may be designed in advance or chosen according to the actual situation, as long as it is sufficient for determining the image transformation matrix.
In an embodiment, for mold-monitoring working conditions, the position information of at least four groups of feature points is needed for the computation; however, since feature points with inaccurate position information may pass the screening and degrade the transformation accuracy, more groups of feature points are needed for the computation. In theory, the more feature points, the more accurate the transformation matrix; in practice, too many feature points increase the amount of computation. Therefore, the preset number of first feature points is extracted from the first feature point set and the same number of corresponding second feature points is extracted from the second feature point set, forming the preset number of groups of feature points. The image transformation matrix can be determined from the position information of each of these groups. The transformation is as follows: let [x, y, w] be the position information of a feature point in the second feature point set of the image to be detected (i.e. the position information of a feature point before transformation) and [x', y', w'] the position information of the corresponding feature point in the first feature point set of the template image (i.e. the position information of the feature point after transformation); the image transformation rule is then:
$$\begin{bmatrix} x' \\ y' \\ w' \end{bmatrix} = A \begin{bmatrix} x \\ y \\ w \end{bmatrix}$$
where
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
A is the corresponding image transformation matrix, a_11, a_12, …, a_33 are the corresponding elements of matrix A, and w is the z-axis value of the feature point in the second feature point set.
After the image transformation matrix is obtained, all pixels in the second ROI are transformed by the image transformation matrix, i.e. their position information is corrected, and the corrected pixels are obtained.
In the embodiments of the present application, the preset groups of feature points are acquired first, the image transformation matrix is then determined from the position information of these groups, and finally the position information of all pixels in the second ROI is corrected with the image transformation matrix. This makes the determined transformation matrix, and hence the corrected pixels, more accurate, reduces error, and balances accuracy and speed.
By way of example, the computation of the image transformation matrix is described below using four groups of feature points.
A two-dimensional image is transformed into another plane image by a perspective transformation as follows:
$$u = \frac{a x + b y + c}{m x + k y + 1} \qquad (8)$$

$$v = \frac{d x + e y + f}{n x + l y + 1} \qquad (9)$$
where x is the horizontal position information of a feature point in the second feature point set, y is its vertical position information, u is the horizontal position information of the corresponding feature point in the first feature point set, v is its vertical position information, and a, b, c, d, e, f, k, l, m and n are coefficients to be determined.
Transforming equations (8) and (9) gives:
$$u = \frac{a x + b y + c}{g x + h y + 1}, \qquad v = \frac{d x + e y + f}{g x + h y + 1} \qquad (10)$$
where g and h are also coefficients to be determined, with n = m = g and l = k = h.
Suppose (x1, y1), (x2, y2), (x3, y3), (x4, y4) are four feature points of the second feature point set (i.e. the four feature points before transformation) and (u1, v1), (u2, v2), (u3, v3), (u4, v4) are the four feature points of the first feature point set corresponding to them (i.e. the four feature points after transformation), forming four groups of feature points in total. Substituting these four groups of feature points into equation (10) yields the image transformation matrix:
$$A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{bmatrix}$$
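Assuming the standard perspective-transform model reconstructed above, the matrix A can be estimated from four pairs of corresponding feature points and then applied to the second ROI, for example with OpenCV as in the sketch below. The coordinates shown are arbitrary illustrative values, the placeholder image is random data, and the use of getPerspectiveTransform/warpPerspective is an implementation choice rather than something prescribed by this application; with more than four pairs, cv2.findHomography could be used instead to fit the eight unknowns a–h in a least-squares sense.

```python
import cv2
import numpy as np

# Illustrative coordinates only: four feature points of the second set
# (before correction) and the four corresponding points of the first set.
src = np.float32([[12, 15], [205, 18], [210, 190], [10, 185]])
dst = np.float32([[10, 10], [200, 10], [200, 180], [10, 180]])

A = cv2.getPerspectiveTransform(src, dst)      # 3x3 image transformation matrix

# Correct the position information of every pixel in the second ROI
# (placeholder image used here purely so the sketch runs).
second_roi = np.random.randint(0, 256, (200, 220), dtype=np.uint8)
corrected = cv2.warpPerspective(second_roi, A, (220, 200))
```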
S240: subtract, from the gray value of every corrected pixel, the gray value of the pixel in the first region of interest that corresponds to that pixel, to obtain a difference image.
After the corrected pixels are obtained, their position information coincides with that of the corresponding pixels in the first ROI. Subtracting the gray value of the corresponding pixel in the first region of interest from the gray value of each corrected pixel therefore yields the difference image, which is represented as a matrix.
S250: evaluate the difference image against a preset sensitivity threshold to determine the regions of the difference image to be inspected.
The preset sensitivity threshold may be set in advance or chosen according to the actual situation; this embodiment places no restriction on it.
After the difference image is obtained, every gray-value difference in it is compared with the preset sensitivity threshold. The judgment rule may be: if a gray-value difference is less than or equal to the preset sensitivity threshold, the region of the second ROI of the image to be detected corresponding to that difference is filtered out (i.e. it is not determined to be a region of the difference image to be inspected); if a gray-value difference is greater than the preset sensitivity threshold, the region of the second ROI of the image to be detected corresponding to that difference is determined to be a region of the difference image to be inspected. The judgment rule can be expressed as:
$$D(x, y) = \begin{cases} 1, & \bigl|M(x, y) - N(x, y)\bigr| > T \\ 0, & \bigl|M(x, y) - N(x, y)\bigr| \le T \end{cases}$$
where M(x, y) is the gray value of one of the corrected pixels, N(x, y) is the gray value of the pixel in the first ROI corresponding to M(x, y), T is the preset sensitivity threshold and D(x, y) is the judgment result for the region to be inspected: D(x, y) = 1 means that the region corresponding to the pixel is determined to be a region of the difference image to be inspected, and D(x, y) = 0 means that the region corresponding to the pixel is filtered out.
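The subtraction of S240 and the sensitivity thresholding of S250 can be sketched as follows. Using the absolute gray-level difference matches the reconstruction of D(x, y) above, and the default threshold value is only an assumption.

```python
import cv2
import numpy as np

def regions_to_inspect(corrected: np.ndarray, template_roi: np.ndarray,
                       sensitivity: int = 30) -> np.ndarray:
    """Return D(x, y): 1 where the gray-level difference exceeds the threshold."""
    diff = cv2.absdiff(corrected, template_roi)      # difference image
    return (diff > sensitivity).astype(np.uint8)     # 1 = region to be inspected
```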
S260: perform defect detection on the regions to be inspected according to the preset defect judgment method.
Once the regions of the difference image to be inspected have been determined, defect detection can be performed on them according to the preset defect judgment method to obtain the defect detection result of the image to be detected.
Optionally, performing defect detection on the regions to be inspected according to the preset defect judgment method may include: for each region to be inspected, if the area of the current region to be inspected is greater than or equal to a first area threshold, the defect detection result of the current region is a failure; when the areas of all regions to be inspected are smaller than the first area threshold, the total area of all regions to be inspected is obtained, and if the total area is greater than or equal to a second area threshold, the defect detection result of the regions to be inspected is a failure.
Both the first area threshold and the second area threshold may be set in advance or chosen according to the actual situation; this embodiment places no restriction on them.
In the embodiments of the present application, judging the area of each region to be inspected and, when the areas of all regions to be inspected are smaller than the first area threshold, additionally judging their total area makes the judgment process more reliable and the final defect detection result more accurate.
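The two-level area check can be sketched as follows. Extracting the individual regions as connected components of the D(x, y) mask and the two example thresholds are assumptions made for illustration; the application itself only specifies the two area comparisons.

```python
import cv2
import numpy as np

def defect_result(mask: np.ndarray, area_thresh1: int = 500,
                  area_thresh2: int = 1500) -> str:
    """Return 'fail' if any region exceeds the first area threshold, or if the
    total area of all regions exceeds the second threshold; otherwise 'pass'."""
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]       # skip the background component
    if areas.size and np.any(areas >= area_thresh1):
        return "fail"
    if areas.sum() >= area_thresh2:
        return "fail"
    return "pass"
```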
By way of example, Table 1 below shows the evaluation results of the two filtering methods. As can be seen from Table 1, the results of the multi-factor image sharpness evaluation strategy show that the adaptive median filter preserves image detail better.
Table 1
By way of example, Table 2 below compares the time the two methods take to determine feature points. As can be seen from Table 2, the method of the present application determines feature points roughly 100 ms faster than the Oriented FAST and Rotated BRIEF (ORB) method.
Table 2
By way of example, Table 3 below shows defect detection results. As can be seen from Table 3, even when the mold-opening position is unstable, the defect detection method of the present application judges the working condition accurately, which helps improve the quality and precision of mold production.
Table 3
In the technical solution provided by this embodiment, after the corrected pixels are obtained, the gray value of the corresponding pixel in the first region of interest is subtracted from the gray value of each corrected pixel to obtain the difference image; the difference image is evaluated against the preset sensitivity threshold to determine its regions to be inspected, and defect detection is performed on those regions according to the preset defect judgment method. This makes the defect detection result more accurate and closer to the actual situation and achieves efficient and accurate defect detection, which helps improve the quality of mold production.
Embodiment 3
Fig. 3 is a schematic structural diagram of a defect detection apparatus for mold monitoring provided in Embodiment 3 of the present application. As shown in Fig. 3, the apparatus may include the following modules.
An acquisition module 310, configured to acquire a first pixel point set corresponding to a first region of interest of a template image and a second pixel point set corresponding to a second region of interest of an image to be detected, where the template image is obtained by photographing a mold and the image to be detected is obtained by photographing a product produced with the mold;
a determination module 320, configured to determine a first feature point set and a second feature point set from the similarity between a first target pixel in the first pixel point set and the second target pixel in the second pixel point set that corresponds to the first target pixel;
a correction module 330, configured to determine an image transformation rule from the position information of a preset number of first feature points in the first feature point set and the position information of the same number of second feature points in the second feature point set that correspond to the preset number of first feature points, and to correct the position information of all pixels in the second region of interest according to the image transformation rule to obtain corrected pixels;
a detection module 340, configured to determine a difference image from the corrected pixels and the pixels in the first region of interest that correspond to the corrected pixels, and to perform defect detection on the difference image according to a preset defect judgment method.
In the technical solution provided by this embodiment, the first pixel point set corresponding to the first region of interest of the template image and the second pixel point set corresponding to the second region of interest of the image to be detected are acquired; the first feature point set and the second feature point set are determined from the similarity between first target pixels in the first pixel point set and the corresponding second target pixels in the second pixel point set; the image transformation rule is determined from the position information of a preset number of first feature points in the first feature point set and the position information of the same number of corresponding second feature points in the second feature point set, and the position information of all pixels in the second region of interest is corrected according to the image transformation rule to obtain corrected pixels; the difference image is determined from the corrected pixels and the corresponding pixels in the first region of interest, and defect detection is performed on it according to the preset defect judgment method. Efficient and accurate defect detection can thus be achieved, which helps improve the quality of mold production.
The acquisition module 310 may include: a region determination unit, configured to determine the first region of interest of the template image and the second region of interest of the image to be detected; and a set determination unit, configured to denoise the pixels corresponding to the first region of interest to obtain the first pixel point set, and to denoise the pixels corresponding to the second region of interest to obtain the second pixel point set.
The set determination unit may be configured to: expand the boundary of the first region of interest to obtain an expanded first region, and expand the boundary of the second region of interest to obtain an expanded second region; and denoise the pixels in the first region with an adaptive median filter to obtain the first pixel point set, and denoise the pixels in the second region with an adaptive median filter to obtain the second pixel point set.
The determination module 320 may be configured to: determine a first set of candidate corner points from the gradient of every pixel in the first pixel point set, and determine a first target pixel point set from the first corner response function of every candidate corner point in the first set of candidate corner points; determine a second set of candidate corner points from the gradient of every pixel in the second pixel point set, and determine a second target pixel point set from the second corner response function of every candidate corner point in the second set of candidate corner points; and, for every first target pixel in the first target pixel point set, compute the Hamming distance between the current first target pixel and the second target pixel at the position in the second target pixel point set corresponding to the current first target pixel, determine from the Hamming distance whether the current first target pixel and that second target pixel are feature points, and if so store the current first target pixel into the first feature point set and the second target pixel into the second feature point set.
The correction module 330 may be configured to: extract the preset number of first feature points from the first feature point set, and extract the same number of second feature points corresponding to the first feature points from the second feature point set, to form a preset number of groups of feature points, where the preset number and the preset number of groups are equal; determine an image transformation matrix from the position information of each group of feature points; and correct the position information of all pixels in the second region of interest according to the image transformation matrix to obtain the corrected pixels.
The detection module 340 may include: a difference image determination unit, configured to subtract, from the gray value of every corrected pixel, the gray value of the pixel in the first region of interest corresponding to that pixel, to obtain the difference image; a to-be-inspected region determination unit, configured to evaluate the difference image against a preset sensitivity threshold to determine the regions of the difference image to be inspected; and a defect detection unit, configured to perform defect detection on the regions to be inspected according to the preset defect judgment method.
The defect detection unit may be configured to: for each region to be inspected, if the area of the current region to be inspected is greater than or equal to a first area threshold, determine the defect detection result of the current region to be a failure; and, when the areas of all regions to be inspected are smaller than the first area threshold, obtain the total area of all regions to be inspected and, if the total area is greater than or equal to a second area threshold, determine the defect detection result of the regions to be inspected to be a failure.
The defect detection apparatus for mold monitoring provided in this embodiment is applicable to the defect detection method for mold monitoring provided in any of the embodiments above and has the corresponding functions and effects.
Embodiment 4
Fig. 4 is a schematic structural diagram of a computer device provided in Embodiment 4 of the present application. As shown in Fig. 4, the computer device includes a processor 410, a storage apparatus 420 and a communication apparatus 430. There may be one or more processors 410 in the computer device; one processor 410 is taken as an example in Fig. 4. The processor 410, storage apparatus 420 and communication apparatus 430 of the computer device may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 4.
As a computer-readable storage medium, the storage apparatus 420 may be used to store software programs, computer-executable programs and modules, such as the modules corresponding to the defect detection method for mold monitoring in the embodiments of the present application (for example, the acquisition module 310, determination module 320, correction module 330 and detection module 340 of the defect detection apparatus for mold monitoring). The processor 410 runs the software programs, instructions and modules stored in the storage apparatus 420 to execute the various functional applications and data processing of the computer device, i.e. to implement the defect detection method for mold monitoring described above.
The storage apparatus 420 may include a program storage area and a data storage area. The program storage area may store an operating system and the application programs required by at least one function; the data storage area may store data created through the use of the computer device, and the like. In addition, the storage apparatus 420 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device or another non-volatile solid-state storage device. In some examples, the storage apparatus 420 may include memory arranged remotely from the processor 410, and such remote memory may be connected to the computer device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The communication apparatus 430 is configured to establish network connections or mobile data connections between servers.
The computer device provided in this embodiment can be used to execute the defect detection method for mold monitoring provided in any of the embodiments above and has the corresponding functions and effects.
Embodiment 5
Embodiment 5 of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the defect detection method for mold monitoring in any embodiment of the present application. The method includes:
acquiring a first pixel point set corresponding to a first region of interest of a template image and a second pixel point set corresponding to a second region of interest of an image to be detected, where the template image is obtained by photographing a mold and the image to be detected is obtained by photographing a product produced with the mold;
determining a first feature point set and a second feature point set from the similarity between a first target pixel in the first pixel point set and the second target pixel in the second pixel point set that corresponds to the first target pixel;
determining an image transformation rule from the position information of a preset number of first feature points in the first feature point set and the position information of the same number of second feature points in the second feature point set that correspond to the preset number of first feature points, and correcting the position information of all pixels in the second region of interest according to the image transformation rule to obtain corrected pixels; and
determining a difference image from the corrected pixels and the pixels in the first region of interest that correspond to the corrected pixels, and performing defect detection on the difference image according to a preset defect judgment method.
In the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the method operations described above and may also perform related operations in the defect detection method for mold monitoring provided by any embodiment of the present application.
From the above description of the implementations, the present application may be implemented by means of software together with the necessary general-purpose hardware, or by hardware. The technical solution of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a computer floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk, and which includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute the methods described in the embodiments of the present application.
In the above embodiments of the defect detection apparatus for mold monitoring, the units and modules included are divided only according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; in addition, the names of the functional units are only for ease of distinguishing them from one another and are not intended to limit the scope of protection of the present application.

Claims (10)

  1. A defect detection method for mold monitoring, comprising:
    acquiring a first pixel point set corresponding to a first region of interest of a template image and a second pixel point set corresponding to a second region of interest of an image to be detected, wherein the template image is obtained by photographing a mold and the image to be detected is obtained by photographing a product produced with the mold;
    determining a first feature point set and a second feature point set according to a similarity between a first target pixel in the first pixel point set and a second target pixel in the second pixel point set that corresponds to the first target pixel;
    determining an image transformation rule according to position information of a preset number of first feature points in the first feature point set and position information of the same number of second feature points in the second feature point set that correspond to the preset number of first feature points, and correcting position information of all pixels in the second region of interest according to the image transformation rule to obtain corrected pixels; and
    determining a difference image according to the corrected pixels and pixels in the first region of interest that correspond to the corrected pixels, and performing defect detection on the difference image according to a preset defect judgment method.
  2. The method according to claim 1, wherein acquiring the first pixel point set corresponding to the first region of interest of the template image and the second pixel point set corresponding to the second region of interest of the image to be detected comprises:
    determining the first region of interest of the template image and the second region of interest of the image to be detected; and
    denoising pixels corresponding to the first region of interest to obtain the first pixel point set, and denoising pixels corresponding to the second region of interest to obtain the second pixel point set.
  3. The method according to claim 2, wherein denoising the pixels corresponding to the first region of interest to obtain the first pixel point set and denoising the pixels corresponding to the second region of interest to obtain the second pixel point set comprises:
    expanding a boundary of the first region of interest to obtain an expanded first region, and expanding a boundary of the second region of interest to obtain an expanded second region; and
    denoising pixels in the first region with an adaptive median filter to obtain the first pixel point set, and denoising pixels in the second region with an adaptive median filter to obtain the second pixel point set.
  4. The method according to claim 1, wherein determining the first feature point set and the second feature point set according to the similarity between the first target pixel in the first pixel point set and the second target pixel in the second pixel point set that corresponds to the first target pixel comprises:
    determining a first set of candidate corner points according to a gradient of every pixel in the first pixel point set, and determining a first target pixel point set according to a first corner response function of every candidate corner point in the first set of candidate corner points;
    determining a second set of candidate corner points according to a gradient of every pixel in the second pixel point set, and determining a second target pixel point set according to a second corner response function of every candidate corner point in the second set of candidate corner points; and
    for every first target pixel in the first target pixel point set, computing a Hamming distance between a current first target pixel and a second target pixel in the second target pixel point set whose position corresponds to the current first target pixel, determining according to the Hamming distance whether the current first target pixel and the second target pixel are feature points, and in a case where the current first target pixel and the second target pixel are determined to be feature points, storing the current first target pixel into the first feature point set and storing the second target pixel into the second feature point set.
  5. The method according to claim 1, wherein determining the image transformation rule according to the position information of the preset number of first feature points in the first feature point set and the position information of the same number of second feature points in the second feature point set that correspond to the preset number of first feature points, and correcting the position information of all pixels in the second region of interest according to the image transformation rule to obtain the corrected pixels, comprises:
    extracting the preset number of first feature points from the first feature point set, and extracting the same number of second feature points corresponding to the first feature points from the second feature point set, to form a preset number of groups of feature points, wherein the preset number and the preset number of groups are equal;
    determining an image transformation matrix according to the position information of each group among the preset number of groups of feature points; and
    correcting the position information of all pixels in the second region of interest according to the image transformation matrix to obtain the corrected pixels.
  6. The method according to claim 1, wherein determining the difference image according to the corrected pixels and the pixels in the first region of interest that correspond to the corrected pixels, and performing defect detection on the difference image according to the preset defect judgment method, comprises:
    subtracting, from a gray value of every corrected pixel, a gray value of the pixel in the first region of interest that corresponds to the corrected pixel, to obtain the difference image;
    evaluating the difference image against a preset sensitivity threshold to determine regions of the difference image to be inspected; and
    performing defect detection on the regions to be inspected according to the preset defect judgment method.
  7. The method according to claim 6, wherein performing defect detection on the regions to be inspected according to the preset defect judgment method comprises:
    for each of the regions to be inspected, in a case where an area of a current region to be inspected is greater than or equal to a first area threshold, a defect detection result of the current region to be inspected is a failure; and
    in a case where the areas of all regions to be inspected are smaller than the first area threshold, obtaining a total area of all the regions to be inspected, and in a case where the total area is greater than or equal to a second area threshold, a defect detection result of the regions to be inspected is a failure.
  8. A defect detection apparatus for mold monitoring, comprising:
    an acquisition module, configured to acquire a first pixel point set corresponding to a first region of interest of a template image and a second pixel point set corresponding to a second region of interest of an image to be detected, wherein the template image is obtained by photographing a mold and the image to be detected is obtained by photographing a product produced with the mold;
    a determination module, configured to determine a first feature point set and a second feature point set according to a similarity between a first target pixel in the first pixel point set and a second target pixel in the second pixel point set that corresponds to the first target pixel;
    a correction module, configured to determine an image transformation rule according to position information of a preset number of first feature points in the first feature point set and position information of the same number of second feature points in the second feature point set that correspond to the preset number of first feature points, and to correct position information of all pixels in the second region of interest according to the image transformation rule to obtain corrected pixels; and
    a detection module, configured to determine a difference image according to the corrected pixels and pixels in the first region of interest that correspond to the corrected pixels, and to perform defect detection on the difference image according to a preset defect judgment method.
  9. A computer device, comprising:
    one or more processors; and
    a storage apparatus, configured to store one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the defect detection method for mold monitoring according to any one of claims 1-7.
  10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the defect detection method for mold monitoring according to any one of claims 1-7.
PCT/CN2021/098423 2021-02-09 2021-06-04 用于模具监视的缺陷检测方法、装置、设备及介质 WO2022170706A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110181174.2 2021-02-09
CN202110181174.2A CN112837303A (zh) 2021-02-09 2021-02-09 一种用于模具监视的缺陷检测方法、装置、设备及介质

Publications (1)

Publication Number Publication Date
WO2022170706A1 true WO2022170706A1 (zh) 2022-08-18

Family

ID=75933294

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/098423 WO2022170706A1 (zh) 2021-02-09 2021-06-04 用于模具监视的缺陷检测方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN112837303A (zh)
WO (1) WO2022170706A1 (zh)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294410A (zh) * 2022-10-08 2022-11-04 加乐新材料(南通)有限公司 一种基于图形识别的塑料制品成型控制方法
CN115496762A (zh) * 2022-11-21 2022-12-20 深圳市富安娜家居用品股份有限公司 一种基于纺织工艺的染色不良缺陷识别方法
CN115880302A (zh) * 2023-03-08 2023-03-31 杭州智源电子有限公司 基于图像分析的仪表盘焊接质量检测方法
CN116245876A (zh) * 2022-12-29 2023-06-09 摩尔线程智能科技(北京)有限责任公司 缺陷检测方法、装置、电子设备、存储介质和程序产品
CN116363136A (zh) * 2023-06-01 2023-06-30 山东创元智能设备制造有限责任公司 一种机动车部件自动化生产在线筛选方法及系统
CN116402815A (zh) * 2023-06-08 2023-07-07 岑科科技(深圳)集团有限公司 基于人工智能的电感线圈封装异常检测方法
CN116453029A (zh) * 2023-06-16 2023-07-18 济南东庆软件技术有限公司 基于图像数据的楼宇火灾环境检测方法
CN116682107A (zh) * 2023-08-03 2023-09-01 山东国宏生物科技有限公司 基于图像处理的大豆视觉检测方法
CN117095003A (zh) * 2023-10-20 2023-11-21 山东亿盟源新材料科技有限公司 一种双金属复合板材碳钢原材料清洁度检测方法及装置
CN117745724A (zh) * 2024-02-20 2024-03-22 高唐县瑞景精密机械有限公司 基于视觉分析的石材打磨加工缺陷区域分割方法
CN117764992A (zh) * 2024-02-22 2024-03-26 山东乔泰管业科技有限公司 基于图像处理的塑料管材质量检测方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837303A (zh) * 2021-02-09 2021-05-25 广东拓斯达科技股份有限公司 一种用于模具监视的缺陷检测方法、装置、设备及介质
CN114155367B (zh) * 2022-02-09 2022-05-13 北京阿丘科技有限公司 印制电路板缺陷检测方法、装置、设备及存储介质
CN115063613B (zh) * 2022-08-09 2023-07-14 海纳云物联科技有限公司 一种验证商品标签的方法及装置
CN117058141B (zh) * 2023-10-11 2024-03-01 福建钜鸿百纳科技有限公司 一种玻璃磨边缺陷的检测方法及终端


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803244A (zh) * 2016-11-24 2017-06-06 深圳市华汉伟业科技有限公司 缺陷识别方法及系统
CN108288274A (zh) * 2018-02-24 2018-07-17 北京理工大学 模具检测方法、装置以及电子设备
CN110503633A (zh) * 2019-07-29 2019-11-26 西安理工大学 一种基于图像差分的贴花陶瓷盘表面缺陷检测方法
CN111028213A (zh) * 2019-12-04 2020-04-17 北大方正集团有限公司 图像缺陷检测方法、装置、电子设备及存储介质
CN111583211A (zh) * 2020-04-29 2020-08-25 广东利元亨智能装备股份有限公司 缺陷检测方法、装置及电子设备
CN111986190A (zh) * 2020-08-28 2020-11-24 哈尔滨工业大学(深圳) 一种基于伪影剔除的印刷品缺陷检测方法及装置
CN112837303A (zh) * 2021-02-09 2021-05-25 广东拓斯达科技股份有限公司 一种用于模具监视的缺陷检测方法、装置、设备及介质

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294410A (zh) * 2022-10-08 2022-11-04 加乐新材料(南通)有限公司 一种基于图形识别的塑料制品成型控制方法
CN115294410B (zh) * 2022-10-08 2023-10-17 加乐新材料(南通)有限公司 一种基于图形识别的塑料制品成型控制方法
CN115496762A (zh) * 2022-11-21 2022-12-20 深圳市富安娜家居用品股份有限公司 一种基于纺织工艺的染色不良缺陷识别方法
CN115496762B (zh) * 2022-11-21 2023-01-24 深圳市富安娜家居用品股份有限公司 一种基于纺织工艺的染色不良缺陷识别方法
CN116245876A (zh) * 2022-12-29 2023-06-09 摩尔线程智能科技(北京)有限责任公司 缺陷检测方法、装置、电子设备、存储介质和程序产品
CN115880302A (zh) * 2023-03-08 2023-03-31 杭州智源电子有限公司 基于图像分析的仪表盘焊接质量检测方法
CN116363136A (zh) * 2023-06-01 2023-06-30 山东创元智能设备制造有限责任公司 一种机动车部件自动化生产在线筛选方法及系统
CN116363136B (zh) * 2023-06-01 2023-08-11 山东创元智能设备制造有限责任公司 一种机动车部件自动化生产在线筛选方法及系统
CN116402815A (zh) * 2023-06-08 2023-07-07 岑科科技(深圳)集团有限公司 基于人工智能的电感线圈封装异常检测方法
CN116402815B (zh) * 2023-06-08 2023-08-22 岑科科技(深圳)集团有限公司 基于人工智能的电感线圈封装异常检测方法
CN116453029B (zh) * 2023-06-16 2023-08-29 济南东庆软件技术有限公司 基于图像数据的楼宇火灾环境检测方法
CN116453029A (zh) * 2023-06-16 2023-07-18 济南东庆软件技术有限公司 基于图像数据的楼宇火灾环境检测方法
CN116682107A (zh) * 2023-08-03 2023-09-01 山东国宏生物科技有限公司 基于图像处理的大豆视觉检测方法
CN116682107B (zh) * 2023-08-03 2023-10-10 山东国宏生物科技有限公司 基于图像处理的大豆视觉检测方法
CN117095003A (zh) * 2023-10-20 2023-11-21 山东亿盟源新材料科技有限公司 一种双金属复合板材碳钢原材料清洁度检测方法及装置
CN117095003B (zh) * 2023-10-20 2024-01-26 山东亿盟源新材料科技有限公司 一种双金属复合板材碳钢原材料清洁度检测方法及装置
CN117745724A (zh) * 2024-02-20 2024-03-22 高唐县瑞景精密机械有限公司 基于视觉分析的石材打磨加工缺陷区域分割方法
CN117745724B (zh) * 2024-02-20 2024-04-26 高唐县瑞景精密机械有限公司 基于视觉分析的石材打磨加工缺陷区域分割方法
CN117764992A (zh) * 2024-02-22 2024-03-26 山东乔泰管业科技有限公司 基于图像处理的塑料管材质量检测方法
CN117764992B (zh) * 2024-02-22 2024-04-30 山东乔泰管业科技有限公司 基于图像处理的塑料管材质量检测方法

Also Published As

Publication number Publication date
CN112837303A (zh) 2021-05-25

Similar Documents

Publication Publication Date Title
WO2022170706A1 (zh) 用于模具监视的缺陷检测方法、装置、设备及介质
CN110264416B (zh) 稀疏点云分割方法及装置
JP4772839B2 (ja) 画像識別方法および撮像装置
CN109241985B (zh) 一种图像识别方法及装置
CN105160654A (zh) 基于特征点提取的毛巾标签缺陷检测方法
CN108229475B (zh) 车辆跟踪方法、系统、计算机设备及可读存储介质
CN112419297A (zh) 一种螺栓松动检测方法、装置、设备及存储介质
CN108986152B (zh) 一种基于差分图像的异物检测方法及装置
JP2012032370A (ja) 欠陥検出方法、欠陥検出装置、学習方法、プログラム、及び記録媒体
CN109242959B (zh) 三维场景重建方法及系统
CN115908415B (zh) 基于边缘的缺陷检测方法、装置、设备及存储介质
JP2017511674A (ja) Jpeg圧縮画像に関連付けられる写真カメラモデルを特定するためのシステム、ならびに関連付けられる方法、使用およびアプリケーション
CN116433666A (zh) 板卡线路缺陷在线识别方法、系统、电子设备及存储介质
WO2017113692A1 (zh) 一种图像匹配方法及装置
CN114283132A (zh) 一种缺陷检测方法、装置、设备以及存储介质
CN114972339A (zh) 用于推土机结构件生产异常检测的数据增强系统
US9286217B2 (en) Systems and methods for memory utilization for object detection
CN111340765B (zh) 一种基于背景分离的热红外图像倒影检测方法
CN117372487A (zh) 图像配准方法、装置、计算机设备和存储介质
CN115690747B (zh) 车辆盲区检测模型测试方法、装置、电子设备及存储介质
CN116403200A (zh) 基于硬件加速的车牌实时识别系统
CN112819823A (zh) 一种面向家具板材的圆孔检测方法、系统及装置
CN112308061A (zh) 一种车牌字符排序方法、识别方法及装置
Tao et al. Measurement algorithm of notch length of plastic parts based on video
CN111681229B (zh) 深度学习模型训练方法、可穿戴衣服瑕疵识别方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21925360

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE