CN117541832B - Abnormality detection method, abnormality detection system, electronic device, and storage medium - Google Patents


Info

Publication number
CN117541832B
Authority
CN
China
Prior art keywords
pixel
image
feature
images
model
Prior art date
Legal status
Active
Application number
CN202410011307.5A
Other languages
Chinese (zh)
Other versions
CN117541832A (en)
Inventor
徐海俊
韩晓
Current Assignee
Suzhou Mega Technology Co Ltd
Original Assignee
Suzhou Mega Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Mega Technology Co Ltd
Priority to CN202410011307.5A
Publication of CN117541832A
Application granted
Publication of CN117541832B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V10/7515Shifting the patterns to accommodate for positional errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Abstract

An embodiment of the present application provides an abnormality detection method, an abnormality detection system, an electronic device, and a storage medium. The method comprises the following steps: extracting features from each of a plurality of acquired first images to obtain a plurality of first feature maps, wherein the first images are images of defect-free objects; fitting, for each pixel position, a probability distribution model according to the pixel values of the pixels at that same position across the plurality of first feature maps; extracting features from an image to be detected to obtain a second feature map, wherein each first pixel in the second feature map corresponds to a region in the image to be detected; for each first pixel, matching the pixel value of the first pixel with the probability distribution model corresponding to its position to obtain a matching result; and determining, according to the matching result, whether the region of the image to be detected corresponding to the first pixel contains a defect. The method achieves accurate and fast defect detection without any defective (negative) samples.

Description

Abnormality detection method, abnormality detection system, electronic device, and storage medium
Technical Field
The present application relates to the field of abnormality detection technology, and more particularly, to an abnormality detection method, an abnormality detection system, an electronic device, and a storage medium.
Background
In the industrial field, and especially in high-precision, cutting-edge subfields such as wafer manufacturing, product yield requirements are extremely high. At present, anomaly detection for wafer products usually relies on defect data sets: a large amount of defect data is collected and used to iteratively train a detection model, which then performs the wafer inspection.
However, when wafer output is low or the product is not yet in mass production, defective products are scarce and a defect data set is extremely difficult to collect. The detection model therefore cannot be trained effectively, detection accuracy is very poor, and defective products may even be impossible to inspect automatically.
Disclosure of Invention
The present application has been made in view of the above-described problems. According to an aspect of the present application, there is provided an abnormality detection method including:
Extracting features from each of a plurality of acquired first images respectively to obtain a plurality of first feature maps, wherein the first images are images of defect-free objects;
fitting a probability distribution model corresponding to each pixel position according to the pixel values of each pixel corresponding to the same pixel position in the plurality of first feature maps;
Extracting features of the image to be detected to obtain a second feature map, wherein each first pixel in the second feature map corresponds to an area in the image to be detected;
For each of the first pixels,
Matching the pixel value of the first pixel with a probability distribution model corresponding to the position of the first pixel, and obtaining a matching result;
and determining whether the region corresponding to the first pixel in the image to be detected has a defect according to the matching result.
Illustratively, feature extraction is performed on each of the acquired plurality of first images, including:
Inputting a plurality of first images into a first model respectively, and outputting a first feature map of each first image by the first model;
extracting features of an image to be detected, including:
inputting the image to be detected into a first model, and outputting a second feature map by the first model;
wherein the first model is a deep learning model trained using the image classification data set.
Illustratively, the method further comprises:
Inputting a plurality of first images into an initial model pre-trained by using an image classification dataset, and outputting a plurality of initial feature maps by the initial model;
Generating a calibration feature map according to pixel values of second pixels corresponding to the same pixel position in the initial feature maps, wherein the pixel value of each pixel in the calibration feature map is determined according to the pixel value of the second pixel corresponding to the pixel position;
And taking the plurality of first images as training samples, taking the calibration feature images as training targets, and performing secondary training on the initial model to obtain a first model.
Illustratively, generating the calibration feature map from pixel values of respective second pixels corresponding to the same pixel location in the plurality of initial feature maps includes:
For a plurality of second pixels corresponding to any pixel location in the plurality of initial feature maps,
Dividing the plurality of second pixels into a plurality of clusters by using a clustering algorithm according to the pixel values of the plurality of second pixels; and
Taking the pixel value of the second pixel at the center of the dominant cluster among the plurality of clusters as the pixel value of the pixel at the corresponding pixel position in the calibration feature map.
Illustratively, the probability distribution model is a normal distribution model, fitting the probability distribution model corresponding to each pixel position according to the pixel values of each pixel corresponding to the same pixel position in the plurality of first feature maps, including:
For a plurality of third pixels corresponding to any one of the pixel positions in the plurality of first feature maps,
Calculating an average value and a variance of pixel values of the plurality of third pixels; and
A normal distribution model describing the distribution of the pixel values of the plurality of third pixels is fitted based on the determined mean and variance.
Illustratively, matching the pixel value of the first pixel with a probability distribution model corresponding to the location of the first pixel includes:
Substituting the pixel value of the first pixel into a probability density function of a normal distribution model corresponding to the position of the first pixel, and obtaining a probability density value corresponding to the pixel value of the first pixel;
determining whether a region corresponding to the first pixel in the image to be detected has a defect comprises:
judging whether a probability density value corresponding to a pixel value of the first pixel meets a preset condition or not; and
If yes, determining that a defect exists in the region corresponding to the first pixel in the image to be detected.
Illustratively, determining whether the probability density value corresponding to the pixel value of the first pixel meets a preset condition includes:
Judging whether the probability density value corresponding to the pixel value of the first pixel is smaller than the probability density threshold value corresponding to the first pixel; and
If yes, determining that the probability density value corresponding to the pixel value of the first pixel meets the preset condition.
Illustratively, the probability density thresholds for the different first pixels are different.
Illustratively, the plurality of first feature maps, the second feature maps, and the plurality of first images are all equal in size.
According to another aspect of the present application, there is also provided an abnormality detection system including:
The first extraction module is used for extracting the characteristics of the acquired first images respectively to obtain a plurality of first characteristic images, wherein the first images are images of non-defective objects;
The fitting module is used for fitting a probability distribution model corresponding to each pixel position according to the pixel values of each pixel corresponding to the same pixel position in the plurality of first feature maps;
The second extraction module is used for carrying out feature extraction on the image to be detected to obtain a second feature image, wherein each first pixel in the second feature image corresponds to one region in the image to be detected;
the matching module is used for matching the pixel value of each first pixel with the probability distribution model corresponding to the position of the first pixel, and obtaining a matching result;
and the determining module is used for determining whether the region corresponding to the first pixel in the image to be detected has a defect or not according to the matching result.
According to another aspect of the present application, there is also provided an electronic device including a processor and a memory, the memory storing computer program instructions which, when executed by the processor, are adapted to carry out the above-described anomaly detection method.
According to another aspect of the present application, there is also provided a storage medium having stored thereon program instructions for executing the above-described abnormality detection method at run-time.
As described above, defect data for some products are difficult to collect in the related art, which makes it difficult to detect defects effectively for products with little defect data. In the above scheme of the present application, a probability distribution model is fitted for each pixel position according to the pixel values of the pixels at that position across the feature maps of a large number of positive-sample images. The pixel value of each pixel in the feature map of the image to be detected is then matched against the corresponding probability distribution model, and defective regions in the image to be detected can be detected quickly and accurately from the matching results. Because the method requires no defective (negative) samples, it enables accurate and fast defect detection in a wide range of scenarios. Moreover, its decision logic is simple and its computational cost is small, so it is easier to implement and more efficient than detection methods that rely on a model trained on defect data, offering better real-time performance, greater convenience, and a better user experience.
The foregoing is only an overview of the technical solution of the present application. To make the technical means of the present application more clearly understood, and to make the above and other objects, features, and advantages of the present application more readily apparent, specific embodiments of the present application are described below so that it can be implemented in accordance with the contents of the specification.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following more detailed description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and constitute a part of this specification; they illustrate the application together with its embodiments and do not limit the application. In the drawings, like reference numerals generally denote like parts or steps.
FIG. 1 shows a schematic flow chart of an anomaly detection method according to an embodiment of the present application;
FIG. 2 shows a flow chart of an anomaly detection method according to another embodiment of the present application;
FIG. 3 shows a schematic diagram of an anomaly detection system in accordance with an embodiment of the present application;
Fig. 4 shows a schematic diagram of an electronic device according to an embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application, and it should be understood that the present application is not limited by the exemplary embodiments described herein. Based on the embodiments of the application described herein, all other embodiments obtained by a person skilled in the art without inventive effort shall fall within the scope of protection of the application.
In order to at least partially solve the above technical problems, according to one aspect of the present application, an abnormality detection method is provided. The method requires only positive samples, greatly reducing dependence on data from the current product; since no defect data need to be collected, the project cycle is shortened and benefits are brought to the production line of an enterprise more quickly.
Fig. 1 shows a schematic flow chart of an anomaly detection method 100 according to one embodiment of the present application. As shown in fig. 1, the method 100 includes step S110, step S120, step S130, step S140, and step S150.
Step S110: and respectively extracting the characteristics of the acquired first images to obtain a plurality of first characteristic diagrams. Wherein the first image is an image of a defect-free object.
In the embodiments of the present application, the object may be any object or product to be detected; the present application is not limited in this respect. The abnormality detection method according to the embodiments of the present application will be described below taking a wafer as an example.
The plurality of first images may be normal sample images of defect-free wafers. It can be appreciated that, because wafer products have a high yield, positive samples can be collected in large quantities and in a short time: approximately 100,000 images can be collected in about one hour, so a sufficient positive-sample image set is quickly obtained.
The first image may be a gray scale image or a color image. It may be an image of any suitable size and resolution, or an image satisfying preset requirements. For example, the first image may be a color RGB image meeting the resolution requirement. The first image may be an original image of the wafer directly acquired by the image acquisition device, or may be an image obtained after preprocessing the original image. The preprocessing operation may include all operations for facilitating image detection, such as improving the visual effect of an image, improving the sharpness of an image, and the like. For example, the preprocessing operation may include a denoising operation such as filtering, or may include an image adjustment operation such as a gray scale, contrast, brightness, or the like on the entire image without affecting feature extraction.
In this step, any of a variety of suitable feature extraction methods may be used to perform feature extraction on the first image to obtain the first feature map. For example, various suitable feature extraction models may be employed to extract features from the first image and output the first feature map, including but not limited to convolutional neural networks (CNN), recurrent neural networks (RNN), pre-trained deep learning models, image pyramid models, scale-invariant feature transform models, histogram-of-oriented-gradients models, and the like. The pre-trained deep learning model may be a model pre-trained on a large-scale image dataset, such as VGGNet, ResNet, or Inception.
The first feature map and the first image may be the same size or different sizes. Optionally, the first feature map and the first image differ in size, with the first feature map smaller than the first image. Taking a first image of 512 × 512 pixels as an example, the first feature map after feature extraction may be 64 × 64 pixels. Alternatively, the first feature map and the first image are the same size. For example, a first image may first be input into the feature extraction model to obtain an initial feature map, and the initial feature map may then be upsampled to obtain a first feature map of the same size as the first image, e.g. both 512 × 512 pixels.
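By way of illustration, the resizing described above can be done with standard upsampling. The following Python sketch assumes PyTorch; the tensor names and sizes are illustrative only and follow the 512 × 512 / 64 × 64 example in the text.

```python
import torch
import torch.nn.functional as F

# initial_map: feature map of shape (N, C, 64, 64) produced by a feature
# extractor for a batch of 512 x 512 input images.
initial_map = torch.randn(1, 512, 64, 64)

# Upsample back to the input resolution so that feature-map positions align
# one-to-one with image pixels (bilinear interpolation is one of the options
# mentioned above; nearest-neighbor or bicubic would work the same way).
first_feature_map = F.interpolate(
    initial_map, size=(512, 512), mode="bilinear", align_corners=False
)
print(first_feature_map.shape)  # torch.Size([1, 512, 512, 512])
```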
Step S120: and fitting a probability distribution model corresponding to each pixel position according to the pixel values of each pixel corresponding to the same pixel position in the plurality of first feature maps.
Each first feature map has the same size, so the pixel positions in the plurality of first feature maps correspond to one another. Pixels corresponding to the same position may be pixels having the same coordinates in the respective images. For example, if there are 1000 first feature maps, the 1000 pixels at pixel coordinates (10, 10) of those 1000 first feature maps are the pixels corresponding to the same position.
Illustratively, the probability distribution model may be any suitable continuous probability distribution model, for example a normal distribution model. Various suitable methods may be used to fit the probability distribution model for each pixel position. Taking the normal distribution model fitted at pixel coordinates (10, 10) as an example: first, the pixel values of the 1000 pixels at coordinates (10, 10) in the 1000 first feature maps may be extracted. The data may then be cleaned to remove outliers and missing values, ensuring completeness and accuracy. Descriptive statistics of the data may be computed, including the mean and standard deviation, and a normality test may be performed. After the test passes, parameter estimation may be carried out: the calculated mean corresponds to the expectation of the distribution, and the standard deviation corresponds to the standard deviation of the distribution. A normal distribution model is then constructed from the estimated parameters and fitted, for example by maximum likelihood estimation or another method. After fitting, various evaluation metrics (e.g., residual analysis) can be used to assess the quality of the fit. Illustratively, if the mean of the pixel values of the 1000 pixels at coordinates (10, 10) is μ and the standard deviation is σ, a normal distribution model is obtained. The normal distribution model corresponding to each pixel position represents the distribution of the 1000 pixel values at that position. Illustratively, the probability density function (PDF) corresponding to the normal distribution of each position may also be obtained. It will be appreciated that the probability density function f(x) describes the probability density of a pixel value x within the distribution of the 1000 pixel values at that position.
Taking a first feature map of 64 × 64 pixels as an example, 64 × 64 probability distribution models can be obtained as described above.
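As a minimal sketch of the per-position fitting described in this step (assuming NumPy, and assuming each feature-map position has already been reduced to a single scalar value, as in the examples above):

```python
import numpy as np

def fit_pixelwise_gaussians(feature_maps):
    """feature_maps: array of shape (N, H, W) holding the scalar value of each
    of the N first feature maps at every pixel position (e.g. N=1000, H=W=64).
    Returns the per-position mean and standard deviation, i.e. one normal
    distribution model per pixel position."""
    mu = feature_maps.mean(axis=0)            # (H, W) per-position mean
    sigma = feature_maps.std(axis=0) + 1e-8   # (H, W) per-position std (eps avoids division by zero)
    return mu, sigma

# Example with random stand-in data for 1000 feature maps of size 64 x 64.
mu, sigma = fit_pixelwise_gaussians(np.random.rand(1000, 64, 64))
```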
Step S130: and extracting the characteristics of the image to be detected to obtain a second characteristic diagram. Wherein each first pixel in the second feature map corresponds to an area in the image to be detected.
The second feature map may be an original feature map obtained by performing feature extraction on the image to be detected with any suitable feature extraction model. In that case, the second feature map may be smaller than the image to be detected, and each first pixel corresponds to a plurality of pixels in the image to be detected; the region corresponding to the first pixel is the region composed of those pixels. Taking an image to be detected of 512 × 512 pixels as an example, the second feature map after feature extraction is 64 × 64 pixels, so each pixel of the second feature map corresponds to an 8 × 8 block of pixels in the image to be detected, and those pixels form one region of the image to be detected.
In another embodiment, the image to be detected and the second feature map may be of equal size. For example, the original feature map obtained after feature extraction of the image to be detected may be upsampled to an image of the same size as the image to be detected: an image to be detected of 512 × 512 pixels is input into the trained feature extraction model, which outputs an original feature map of 64 × 64 pixels; the original feature map may then be upsampled, using nearest-neighbor, bilinear, or bicubic interpolation, for example, to obtain a second feature map of 512 × 512 pixels. The size of the second feature map may be the same as that of the first feature maps. For example, the second feature map and the first feature maps may be the original feature maps obtained by applying the same feature extraction model to the image to be detected and the first images respectively, or both may be obtained by applying the same image upsampling method to their respective original feature maps.
Step S140: and for each first pixel, matching the pixel value of the first pixel with a probability distribution model corresponding to the position of the first pixel, and obtaining a matching result.
According to an embodiment of the present application, the first pixel may be a pixel at any position in the second feature map. Take the normal distribution model as the probability distribution model, and suppose the first pixel has pixel coordinates (10, 10) and a pixel value of 100. In step S120, the mean μ and standard deviation σ of the pixel values of the 1000 pixels at coordinates (10, 10) were obtained, and with them the probability density function f(x) of the normal distribution model. In this step, x = 100 may be substituted into f(x) to obtain the corresponding probability density value.
Taking a second feature map of 64 × 64 pixels as an example, the pixel values of the 64 × 64 first pixels may each be substituted into the probability density function of the corresponding normal distribution model to obtain the corresponding probability density values. In this example, the matching result is the probability density value.
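A possible vectorized form of this matching step, assuming NumPy/SciPy and the `mu`/`sigma` arrays fitted per position as above (names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def match_feature_map(second_map, mu, sigma):
    """second_map, mu, sigma: arrays of shape (H, W). Returns, for every first
    pixel, the probability density of its value under the normal distribution
    model fitted at that position (the matching result of this step)."""
    return norm.pdf(second_map, loc=mu, scale=sigma)

# Example: densities for a 64 x 64 second feature map with stand-in parameters.
prob_map = match_feature_map(np.random.rand(64, 64), np.zeros((64, 64)), np.ones((64, 64)))
```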
Step S150: and determining whether the region corresponding to the first pixel in the image to be detected has a defect according to the matching result.
In this step, a variety of suitable decision logic may be employed to determine whether the region of the image to be detected corresponding to each first pixel is defective. By way of example and not limitation, in the above example in which the probability density value of each first pixel is used as the matching result, that probability density value may be compared with a preset probability density threshold, and whether the corresponding region of the image to be detected is defective is determined from the comparison result. Examples of this are described below and are not repeated here.
As described above, defect data for some products are difficult to collect in the related art, which makes it difficult to detect defects effectively for products with little defect data. In the above scheme of the present application, a probability distribution model is fitted for each pixel position according to the pixel values of the pixels at that position across the feature maps of a large number of positive-sample images. The pixel value of each pixel in the feature map of the image to be detected is then matched against the corresponding probability distribution model, and defective regions in the image to be detected can be detected quickly and accurately from the matching results. Because the method requires no defective (negative) samples, it enables accurate and fast defect detection in a wide range of scenarios. Moreover, its decision logic is simple and its computational cost is small, so it is easier to implement and more efficient than detection methods that rely on a model trained on defect data, offering better real-time performance, greater convenience, and a better user experience.
In some embodiments, the step S110 of extracting features from the acquired plurality of first images respectively further includes: inputting a plurality of first images into a first model respectively, and outputting a first feature map of each first image by the first model; step S130 performs feature extraction on the image to be detected, including: inputting the image to be detected into a first model, and outputting a second feature map by the first model; wherein the first model is a deep learning model trained using the image classification data set.
Illustratively, the first model may be a Wide ResNet-50-2 model pre-trained on the ImageNet image classification dataset. This model is a convolutional neural network and a variant of the Wide ResNet family: an improved version of the residual network (ResNet) architecture whose expressive power and performance are enhanced by increasing the width of the network. Wide ResNet-50-2 is often used in the computer vision field for tasks such as image classification, object detection and semantic segmentation, and has good performance and generalization capability. The image classification dataset may be any standard dataset used for training and evaluation, including but not limited to the ImageNet, MNIST, or CIFAR-100 datasets; the present application is not limited in this respect.
It can be appreciated that deep convolutional neural networks pre-trained on large-scale image data such as ImageNet have the advantages of good generalization capability, high accuracy, faster model training speed, strong customizability, and the like.
In this scheme, a deep learning model trained on an image classification dataset is used for feature extraction to obtain the first feature maps and the second feature map. This guarantees accurate and efficient feature extraction, which in turn guarantees the accuracy of the subsequent model matching and improves the efficiency and precision of anomaly detection.
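For illustration, the pre-trained backbone and an intermediate feature map can be obtained as in the sketch below. It assumes torchvision ≥ 0.13 and hooks `layer2`, which yields a 64 × 64 map for a 512 × 512 input as in the examples above; averaging over channels to obtain one scalar per position is an illustrative simplification, not something specified by the present disclosure.

```python
import torch
from torchvision.models import wide_resnet50_2

# Load a Wide ResNet-50-2 pre-trained on ImageNet (torchvision >= 0.13 weight API).
model = wide_resnet50_2(weights="DEFAULT").eval()

# Grab the intermediate feature map via a forward hook; layer2 has total stride 8,
# so a 512 x 512 input produces a (1, 512, 64, 64) feature tensor.
features = {}
model.layer2.register_forward_hook(
    lambda module, inp, out: features.update(layer2=out.detach())
)

with torch.no_grad():
    image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed RGB image
    model(image)

feat = features["layer2"]        # (1, 512, 64, 64)
scalar_map = feat.mean(dim=1)    # (1, 64, 64): one scalar per position (illustrative reduction)
```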
In some embodiments, the anomaly detection method 100 further comprises: inputting a plurality of first images into an initial model pre-trained by using an image classification dataset, and outputting a plurality of initial feature maps by the initial model; and generating a calibration feature map according to the pixel values of the second pixels corresponding to the same pixel positions in the initial feature maps. Wherein the pixel value of each pixel in the calibration feature map is determined from the pixel values of the respective second pixels corresponding to the pixel location; and taking the plurality of first images as training samples, taking the calibration feature images as training targets, and performing secondary training on the initial model to obtain a first model.
As described above, the first model may be a Wide ResNet-50-2 model pre-trained on the ImageNet image classification dataset. In the embodiment of the present application, a feature extraction model is first pre-trained with the image classification dataset to obtain the initial model. Each first image is then input into the pre-trained initial model for feature extraction, yielding a plurality of initial feature maps. The pixel values of the pixels at the same pixel position across the plurality of initial feature maps may be analyzed statistically, and a calibration feature map may be generated from the results. For example, the pixel value of each pixel in the calibration feature map may be equal to the average of the pixel values at the corresponding position in the plurality of initial feature maps. As another example, it may be any value that represents the average level of those pixel values, such as a representative pixel value selected from among them, or a new pixel value fitted from them. In other words, the pixel value at each position of the calibration feature map represents the average level of the pixel values at that position across the initial feature maps. The calibration feature map can therefore be used as the training target of the initial model, and the initial model can be trained again with the plurality of first images to obtain the first model.
That is, the plurality of second pixels at the same position in the initial feature maps jointly determine the pixel value at that position of the calibration feature map, and the pixel positions of the initial feature maps are the same as those of the calibration feature map. Training the initial model again with the calibration feature map as the training target and the plurality of first images as training samples then yields the first model.
In the above embodiment, the calibration feature map is generated from the plurality of first images and used as the training target to train the initial model again. In this way, the initial model is adapted to the needs of the task, acquires the ability to suppress noise and stabilize positive-sample features, and becomes a calibrated feature extraction model once training is complete.
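A schematic second-stage training loop is sketched below. It assumes PyTorch; the MSE loss, the `extract` helper that maps a batch of images to a (B, H, W) scalar feature map, and the hyper-parameters are illustrative assumptions, since the disclosure does not specify them.

```python
import torch
import torch.nn.functional as F

def calibrate(model, extract, loader, calib_map, epochs=10, lr=1e-4):
    """Retrain `model` so that the feature map it produces for every positive
    sample approaches the fixed calibration feature map `calib_map` (H, W)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    target = calib_map.unsqueeze(0)              # (1, H, W), broadcast over the batch
    for _ in range(epochs):
        for images in loader:                    # batches of first (positive-sample) images
            pred = extract(model, images)        # (B, H, W) predicted feature map
            loss = F.mse_loss(pred, target.expand_as(pred))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model                                 # the calibrated first model
```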
In some embodiments, generating the calibration feature map from pixel values of respective second pixels corresponding to the same pixel location in the plurality of initial feature maps includes:
For a plurality of second pixels corresponding to any pixel location in the plurality of initial feature maps,
Dividing the plurality of second pixels into a plurality of clusters by using a clustering algorithm according to the pixel values of the plurality of second pixels; and
Taking the pixel value of the second pixel at the center of the dominant cluster among the plurality of clusters as the pixel value of the pixel at the corresponding pixel position in the calibration feature map.
The clustering algorithm finds the optimal cluster allocation result of a plurality of sample points in an iterative optimization mode. The resulting cluster center represents the average position of each cluster, and each sample point is assigned to the cluster corresponding to its nearest cluster center. The clustering algorithm may divide the sample points in the dataset into different clusters such that the sample points within each cluster are as similar as possible, while the sample points between different clusters are as dissimilar as possible. This helps to group the data and understand the inherent structure of the data. Clustering algorithms may also help identify outliers in a dataset because outliers tend to be far from other points and may be assigned to a single cluster.
Illustratively, the clustering algorithm may be the k-means algorithm. Suppose, for example, that there are 100 initial feature maps, each of size 64 × 64 pixels. Taking pixel coordinates (10, 10) as an example, the k-means algorithm may be used to divide the 100 pixels at coordinates (10, 10) of the 100 initial feature maps into 5 clusters and to compute the proportion of points in each cluster; the pixel value of the pixel at the center of the cluster with the largest proportion is then selected as the pixel value at coordinates (10, 10) in the calibration feature map. Suppose the k-means algorithm yields 5 clusters containing 20, 25, 40, 5 and 10 points respectively; the pixel value of the pixel at the center of the cluster of 40 points is taken as the pixel value at coordinates (10, 10) in the calibration feature map. By traversing every pixel position and performing the same calculation, a 64 × 64 calibration feature map is obtained.
It will be appreciated that the clustering algorithm is simple, efficient, and that the pixel values of the pixels in the cluster center may accurately represent the average level of the pixel values of the pixels in the same pixel location. The generated calibration feature map is used as a model training target to effectively calibrate the feature extraction model, so that the first feature map and the second feature map extracted by the feature extraction model can more accurately express the image features of the first image and the image to be detected, and the accuracy and the high efficiency of anomaly detection are improved.
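A per-position clustering sketch consistent with the example above, assuming NumPy and scikit-learn (5 clusters; the loop over positions is written for clarity rather than speed):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_calibration_map(initial_maps, n_clusters=5):
    """initial_maps: array of shape (N, H, W) with the scalar value of each
    initial feature map at every pixel position. For each position, cluster the
    N values into `n_clusters` clusters and keep the center of the largest one."""
    n, h, w = initial_maps.shape
    calib = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            values = initial_maps[:, i, j].reshape(-1, 1)
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(values)
            dominant = np.bincount(km.labels_).argmax()       # largest cluster
            calib[i, j] = km.cluster_centers_[dominant, 0]    # its center value
    return calib
```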
In some embodiments, the probability distribution model is a normal distribution model, fitting the probability distribution model for each pixel location in the plurality of first feature maps based on pixel values of each pixel for the same pixel location, comprising: calculating the average value and variance of the pixel values of a plurality of third pixels corresponding to any pixel position in the plurality of first feature maps; and fitting a normal distribution model describing a distribution of pixel values of the plurality of third pixels according to the determined mean and variance.
In the above embodiment, whether each pixel value in the second feature map meets the requirement is determined by using the normal distribution model, and further whether the second feature map generated by the image to be detected is abnormal is determined, so that the accuracy of the abnormality detection method can be further improved.
In some embodiments, matching the pixel value of the first pixel with a probability distribution model corresponding to the location of the first pixel includes: substituting the pixel value of the first pixel into a probability density function of a normal distribution model corresponding to the position of the first pixel, and obtaining a probability density value corresponding to the pixel value of the first pixel; determining whether a region corresponding to the first pixel in the image to be detected has a defect comprises: judging whether a probability density value corresponding to a pixel value of the first pixel meets a preset condition or not; if yes, determining that a defect exists in the region corresponding to the first pixel in the image to be detected.
The probability density function of the normal distribution model can be expressed as:

f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²))

where μ denotes the mean of the pixel values of the plurality of third pixels at the same position and σ denotes their standard deviation (σ² being the variance).
Then, the pixel value of each first pixel in the second feature map is taken as the value x and substituted into the probability density function of the normal distribution model corresponding to that first pixel's position, giving a probability density value for each first pixel. When the probability density value of a first pixel satisfies the preset condition, the region corresponding to that first pixel is determined to be abnormal.
In the above embodiment, the probability density value of each first pixel is determined respectively, so that the abnormal region corresponding to the first pixel in the image to be detected can be accurately determined, and an accurate basis is provided for determining the abnormal region.
In some embodiments, determining whether the probability density value corresponding to the pixel value of the first pixel meets a preset condition includes: judging whether the probability density value corresponding to the pixel value of the first pixel is smaller than the probability density threshold value corresponding to the first pixel; and if so, determining that the probability density value corresponding to the pixel value of the first pixel meets the preset condition.
For example, with a probability density threshold of 0.5, the pixel value of each first pixel is substituted into the corresponding normal distribution model for calculation; when the probability density value of some first pixel is smaller than 0.5, the feature at that position is judged to be abnormal, and the corresponding region is therefore determined to be abnormal.
Alternatively, different probability density thresholds may be set for first pixels at different positions. For example, the probability density threshold for pixels located in the central region of the image may be set relatively high, while the threshold for pixels located in the edge region of the image may be set relatively low.
The probability density threshold may be preset according to the actual detection requirements, and different thresholds may be set for different objects to be detected. The threshold may be set in various ways. For example, some negative samples and some positive samples may be collected in advance as a test set, an F1 score computed on it, and the probability density threshold determined according to that F1 score. Alternatively, the threshold may be set according to the distribution of pixel values at each pixel position in the first feature maps; for example, the probability density values of the pixel values μ − 4σ and μ + 4σ may be computed in advance for each normal distribution model, and either of these two probability density values taken as the probability density threshold.
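The μ ± 4σ rule mentioned above can be expressed as in the following sketch (SciPy assumed); by symmetry of the normal density, the value at μ − 4σ equals the value at μ + 4σ, so either can serve as the per-position threshold.

```python
import numpy as np
from scipy.stats import norm

def pixelwise_thresholds(mu, sigma):
    """mu, sigma: per-position mean and standard deviation, shape (H, W).
    Returns the probability density of a value 4 sigma away from the mean,
    used here as the per-position probability density threshold."""
    return norm.pdf(mu + 4.0 * sigma, loc=mu, scale=sigma)
```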
In the above embodiment, whether the region corresponding to a first pixel is abnormal is determined by judging whether the probability density value of that pixel's value falls below the probability density threshold corresponding to the first pixel. This determination involves little computation and can further improve the accuracy, efficiency and reliability of anomaly detection.
In some embodiments, the probability density thresholds for the different first pixels are different.
In the above embodiment, different probability density thresholds are set for the regions corresponding to different first pixels, so the detection better matches the requirements placed on the image and adapts to the characteristics of different image regions, improving the effectiveness and accuracy of the anomaly detection method.
In some embodiments, the plurality of first feature maps, the second feature maps, and the plurality of first images are all equal in size. As previously described, the first and second feature maps may be made the same size as the first image by means of upsampling. In this way, each pixel in the first feature map and the second feature map corresponds to the position of each pixel in the first image and the image to be detected one by one. Therefore, whether the pixel at the corresponding pixel position in the image to be detected has a defect can be directly determined according to the matching result of each pixel in the image to be detected, and other conversion calculation is not needed. For example, it may be determined whether a pixel at each pixel location in the image to be detected has a defect directly from the calculated probability density value and the corresponding probability density threshold. For example, a mask map having the same size as the image to be detected may be directly output, and the pixel value of the pixel position in the mask map where the probability density value is smaller than the corresponding probability density threshold value may be 1, and the pixel value of the pixel at the other position may be 0. In this way, the user can intuitively see the position of the abnormal pixel, so that the defect can be accurately located.
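The mask map described above can be produced directly from the probability map and the thresholds, for example as follows (NumPy assumed; `prob_map` and `thresholds` stand for the per-position density values and thresholds, upsampled to the image size when the feature maps are upsampled):

```python
import numpy as np

def defect_mask(prob_map, thresholds):
    """Returns a mask of the same size as prob_map in which 1 marks positions
    whose probability density falls below the corresponding threshold
    (suspected defects) and 0 marks normal positions."""
    return (prob_map < thresholds).astype(np.uint8)
```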
In the above embodiment, the first feature maps, the second feature map and the first images are set to the same size, so there is no need to determine a correspondence with regions of the image to be detected. The abnormal regions in the image to be detected can therefore be determined rapidly from the first pixels, effectively improving the efficiency and convenience of the anomaly detection method.
Fig. 2 shows a flowchart of an abnormality detection method according to another embodiment of the present application. As shown, a large amount of positive-sample data is first collected. Taking wafers as an example, the characteristics of wafer products allow positive samples to be collected in large quantities and in a short time: approximately 100,000 images can be gathered in about one hour, quickly yielding a sufficient positive-sample dataset.
A Wide ResNet-50-2 model may then be pre-trained using ImageNet data, and the pre-trained model used to extract the features of each positive-sample image in the dataset, producing a plurality of initial feature maps. Cluster analysis is then performed on the pixel values at each pixel position across the initial feature maps, and a calibration feature map is obtained from the clustering result for each position. For example, when the initial feature maps are 64 × 64, the feature values at each grid position are clustered into 5 clusters with the k-means algorithm, the proportion of each cluster is computed, and the pixel value at the center of the largest cluster is taken as the feature value of the current grid position, yielding a 64 × 64 calibration feature map.
The feature extraction network Wide ResNet-50-2 is then retrained using the positive-sample dataset and the calibration feature map: each image in the positive-sample dataset serves as a training sample, and the generated calibration feature map serves as the training target. In this way, the model acquires the ability to suppress noise and stabilize positive-sample features, and a calibrated feature extraction network is obtained once training is complete.
Next, all features of each image in the positive-sample dataset are extracted with the calibrated network Wide ResNet-50-2 to obtain a plurality of first feature maps (e.g., of size 64 × 64). Statistics are gathered position by position over the 64 × 64 feature grid of the first feature maps: the mean and variance of the feature values (i.e., the pixel values of the feature maps) at each position are computed, and a normal (Gaussian) distribution model is fitted for each feature pixel from that mean and variance. Indexed by position, a total of 64 × 64 Gaussian distribution models is obtained.
The detection flow for an image to be detected can then be performed. First, the image features of the image to be detected are extracted with the calibrated network Wide ResNet-50-2, giving a second feature map. Then, for the 64 × 64 second feature map, the probability density value of the pixel value at each position is computed position by position using the probability density function of the corresponding normal distribution model.
Specifically, the pixel value of each first pixel in the second feature map is substituted into the probability density function of the normal distribution model corresponding to that pixel's position, giving the probability density value of that first pixel. A 64 × 64 probability map is generated from these probability density values. The probability map may then be thresholded with a uniform threshold of 0.5: for each pixel position, a probability density value below 0.5 indicates that the feature at that position is abnormal, and the region of the image to be detected corresponding to the abnormal feature can be marked as a defect region.
The above abnormality detection method achieves anomaly detection with positive samples alone, without manual threshold selection or the collection of negative samples, and allows a product changeover to be completed quickly (in only 1-2 hours). Detection efficiency and changeover efficiency are therefore greatly improved.
According to another aspect of the present application, there is also provided an abnormality detection system. FIG. 3 shows a schematic block diagram of an anomaly detection system 300 in accordance with one embodiment of the present application. As shown in fig. 3, the anomaly detection system includes a first extraction module 310, a fitting module 320, a second extraction module 330, a matching module 340, and a determination module 350.
The first extraction module 310 is configured to perform feature extraction on the acquired plurality of first images, respectively, to obtain a plurality of first feature maps, where the first images are images of non-defective objects.
A fitting module 320, configured to fit a probability distribution model corresponding to each pixel position according to the pixel values of each pixel corresponding to the same pixel position in the plurality of first feature maps.
The second extraction module 330 is configured to perform feature extraction on the image to be detected to obtain a second feature image, where each first pixel in the second feature image corresponds to a region in the image to be detected.
And the matching module 340 is configured to match, for each first pixel, a pixel value of the first pixel with a probability distribution model corresponding to a position where the first pixel is located, and obtain a matching result.
The determining module 350 is configured to determine whether a region corresponding to the first pixel in the image to be detected has a defect according to the matching result.
According to another aspect of the invention, an electronic device is also provided. Fig. 4 shows a schematic block diagram of an electronic device 400 according to an embodiment of the invention. As shown in fig. 4, electronic device 400 includes a processor 410 and a memory 420. Wherein the memory 420 has stored therein computer program instructions which, when executed by the processor 410, are adapted to carry out the above-described anomaly detection method.
Furthermore, according to still another aspect of the present invention, there is also provided a storage medium. Program instructions are stored on a storage medium. The program instructions, when executed by a computer or processor, cause the computer or processor to perform the respective steps of the above-described abnormality detection method 100 of an embodiment of the invention and to implement the respective modules of the above-described abnormality detection system or the respective modules in the above-described electronic device according to an embodiment of the invention. The storage medium may include, for example, a storage component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable read-only memory (CD-ROM), USB memory, or any combination of the foregoing storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
Those skilled in the art can understand the specific implementation schemes of the abnormality detection system, the electronic device and the storage medium by reading the above description about the abnormality detection method, and for brevity, the description is omitted here.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that, in order to streamline the application and aid in understanding one or more of the various inventive aspects, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the application. However, this method of disclosure should not be construed as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules in an anomaly detection system according to an embodiment of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application can also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
The foregoing description is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto; any variation or substitution that a person skilled in the art could readily conceive within the scope disclosed herein shall fall within the protection scope of the present application. The protection scope of the present application is subject to the protection scope of the claims.

Claims (12)

1. An abnormality detection method, comprising:
performing feature extraction on each of a plurality of acquired first images to obtain a plurality of first feature maps, wherein the first images are images of non-defective objects;
fitting a probability distribution model corresponding to each pixel position according to the pixel values of each pixel corresponding to the same pixel position in the plurality of first feature maps, wherein the probability distribution model is a normal distribution model, and the normal distribution model corresponding to each pixel position represents the distribution condition of the pixel values of each pixel corresponding to the pixel position;
extracting features of an image to be detected to obtain a second feature map, wherein each first pixel in the second feature map corresponds to a region in the image to be detected;
for each of the first pixels,
matching the pixel value of the first pixel with a probability distribution model corresponding to the position of the first pixel, and obtaining a matching result;
determining whether a region corresponding to the first pixel in the image to be detected has a defect or not according to the matching result;
the matching the pixel value of the first pixel with the probability distribution model corresponding to the position of the first pixel includes: substituting the pixel value of the first pixel into a probability density function of a normal distribution model corresponding to the position of the first pixel, and obtaining a probability density value corresponding to the pixel value of the first pixel.
2. The abnormality detection method according to claim 1, characterized in that the feature extraction of each of the plurality of acquired first images includes:
inputting the plurality of first images into a first model respectively, and outputting a first feature map of each first image by the first model;
The feature extraction of the image to be detected comprises the following steps:
inputting the image to be detected into the first model, and outputting the second feature map by the first model;
wherein the first model is a deep learning model trained using an image classification dataset.
3. The abnormality detection method according to claim 2, characterized in that the method further comprises:
inputting the plurality of first images into an initial model pre-trained by using the image classification dataset, and outputting a plurality of initial feature maps by the initial model;
generating a calibration feature map according to pixel values of second pixels corresponding to the same pixel position in the plurality of initial feature maps, wherein the pixel value of each pixel in the calibration feature map is determined according to the pixel values of the second pixels corresponding to the pixel position;
taking the plurality of first images as training samples and the calibration feature map as a training target, and performing secondary training on the initial model to obtain the first model.
4. The anomaly detection method of claim 3, wherein generating a calibration feature map from pixel values of respective second pixels corresponding to the same pixel location in the plurality of initial feature maps comprises:
for a plurality of second pixels corresponding to any pixel location in the plurality of initial feature maps,
dividing the plurality of second pixels into a plurality of clusters by using a clustering algorithm according to the pixel values of the plurality of second pixels; and
taking the pixel value of the second pixel at the center of the main cluster among the plurality of clusters as the pixel value of the pixel corresponding to the pixel position in the calibration feature map.
5. The anomaly detection method according to any one of claims 1 to 4, wherein fitting a probability distribution model corresponding to each pixel position based on pixel values of each pixel corresponding to the same pixel position in the plurality of first feature maps comprises:
for a plurality of third pixels corresponding to any pixel position in the plurality of first feature maps,
calculating a mean and a variance of the pixel values of the plurality of third pixels; and
fitting, according to the calculated mean and variance, a normal distribution model describing the distribution of the pixel values of the plurality of third pixels.
6. The abnormality detection method according to any one of claims 1 to 4,
wherein the determining whether the region corresponding to the first pixel in the image to be detected has a defect includes:
judging whether the probability density value corresponding to the pixel value of the first pixel satisfies a predetermined condition; and
if yes, determining that a defect exists in the region corresponding to the first pixel in the image to be detected.
7. The anomaly detection method of claim 6, wherein determining whether the probability density value corresponding to the pixel value of the first pixel satisfies a predetermined condition comprises:
judging whether the probability density value corresponding to the pixel value of the first pixel is smaller than a probability density threshold corresponding to the first pixel; and
if yes, determining that the probability density value corresponding to the pixel value of the first pixel satisfies the predetermined condition.
8. The anomaly detection method of claim 7, wherein probability density thresholds corresponding to different first pixels are different.
9. The abnormality detection method according to any one of claims 1 to 4, characterized in that the plurality of first feature maps, the second feature map, and the plurality of first images are all equal in size.
10. An abnormality detection system, characterized by comprising:
The first extraction module is used for respectively carrying out feature extraction on the acquired plurality of first images to obtain a plurality of first feature maps, wherein the first images are images of non-defective objects;
The fitting module is used for fitting a probability distribution model corresponding to each pixel position according to the pixel values of each pixel corresponding to the same pixel position in the plurality of first feature maps, wherein the probability distribution model is a normal distribution model, and the normal distribution model corresponding to each pixel position represents the distribution condition of the pixel values of each pixel corresponding to the pixel position;
The second extraction module is used for extracting features of the image to be detected to obtain a second feature map, wherein each first pixel in the second feature map corresponds to one region in the image to be detected;
The matching module is used for matching, for each first pixel, the pixel value of the first pixel with the probability distribution model corresponding to the position of the first pixel, and obtaining a matching result;
The determining module is used for determining, according to the matching result, whether the region corresponding to the first pixel in the image to be detected has a defect;
The matching module is specifically configured to substitute the pixel value of the first pixel into a probability density function of a normal distribution model corresponding to the position of the first pixel, and obtain a probability density value corresponding to the pixel value of the first pixel.
11. An electronic device comprising a processor and a memory, wherein the memory has stored therein computer program instructions which, when executed by the processor, are adapted to carry out the abnormality detection method of any one of claims 1 to 9.
12. A storage medium having stored thereon program instructions which, when run, perform the abnormality detection method of any one of claims 1 to 9.
CN202410011307.5A 2024-01-04 2024-01-04 Abnormality detection method, abnormality detection system, electronic device, and storage medium Active CN117541832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410011307.5A CN117541832B (en) 2024-01-04 2024-01-04 Abnormality detection method, abnormality detection system, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN117541832A (en) 2024-02-09
CN117541832B (en) 2024-04-16

Family

ID=89792269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410011307.5A Active CN117541832B (en) 2024-01-04 2024-01-04 Abnormality detection method, abnormality detection system, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN117541832B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025657A (en) * 2016-01-31 2017-08-08 天津新天星熠测控技术有限公司 A kind of vehicle action trail detection method based on video image
CN110119687A (en) * 2019-04-17 2019-08-13 浙江工业大学 Detection method based on the road surface slight crack defect that image procossing and convolutional neural networks combine
CN111127383A (en) * 2019-03-15 2020-05-08 杭州电子科技大学 Digital printing online defect detection system and implementation method thereof
CN114299066A (en) * 2022-03-03 2022-04-08 清华大学 Defect detection method and device based on salient feature pre-extraction and image segmentation
CN116523922A (en) * 2023-07-05 2023-08-01 上海圣德曼铸造海安有限公司 Bearing surface defect identification method

Also Published As

Publication number Publication date
CN117541832A (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN110148130B (en) Method and device for detecting part defects
CN110021425B (en) Comparison detector, construction method thereof and cervical cancer cell detection method
CN113324864B (en) Pantograph carbon slide plate abrasion detection method based on deep learning target detection
CN113469951B (en) Hub defect detection method based on cascade region convolutional neural network
US20230288345A1 (en) Automatic optimization of an examination recipe
CN116664559B (en) Machine vision-based memory bank damage rapid detection method
CN111768392B (en) Target detection method and device, electronic equipment and storage medium
CN107590512A (en) The adaptive approach and system of parameter in a kind of template matches
JP2020085546A (en) System for supporting inspection and repair of structure
US20200292463A1 (en) Apparatus for optimizing inspection of exterior of target object and method thereof
CN116703251B (en) Rubber ring production quality detection method based on artificial intelligence
WO2024021461A1 (en) Defect detection method and apparatus, device, and storage medium
KR20220014805A (en) Generating training data usable for examination of a semiconductor specimen
CN111814825B (en) Apple detection grading method and system based on genetic algorithm optimization support vector machine
CN116664565A (en) Hidden crack detection method and system for photovoltaic solar cell
CN116740728A (en) Dynamic acquisition method and system for wafer code reader
CN116823725A (en) Aeroengine blade surface defect detection method based on support vector machine
CN117541832B (en) Abnormality detection method, abnormality detection system, electronic device, and storage medium
CN114742849B (en) Leveling instrument distance measuring method based on image enhancement
Shaw et al. Automation of digital historical map analyses
CN115294377A (en) System and method for identifying road cracks
Li et al. Crack Detection and Recognition Model of Parts Based on Machine Vision.
CN115485740A (en) Abnormal wafer image classification
CN117474915B (en) Abnormality detection method, electronic equipment and storage medium
CN111860616A (en) General acquisition method for weak contrast collimation image target center of comprehensive diagnosis system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant