WO2021191908A1 - Deep learning-based anomaly detection in images - Google Patents

Deep learning-based anomaly detection in images

Info

Publication number
WO2021191908A1
Authority
WO
WIPO (PCT)
Prior art keywords
target image
images
target
feature
layers
Prior art date
Application number
PCT/IL2021/050339
Other languages
French (fr)
Inventor
Yedid Hoshen
Liron BERGMAN
Niv Cohen
Tal REISS
Original Assignee
Yissum Research Development Company Of The Hebrew University Of Jerusalem Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yissum Research Development Company Of The Hebrew University Of Jerusalem Ltd. filed Critical Yissum Research Development Company Of The Hebrew University Of Jerusalem Ltd.
Priority to US17/913,905 priority Critical patent/US20230281959A1/en
Publication of WO2021191908A1 publication Critical patent/WO2021191908A1/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/088 - Non-supervised learning, e.g. competitive learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 - Distances to closest patterns, e.g. nearest neighbour classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/243 - Classification techniques relating to the number of classes
    • G06F18/2433 - Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • the invention relates to the field of machine learning.
  • Agents interacting with the world are constantly exposed to a continuous stream of data. Agents can benefit from classifying particular data as anomalous, i.e., particularly interesting or unexpected. Such discrimination is helpful in allocating attention to the observations that warrant particular scrutiny. Anomaly detection by artificial intelligence has many important applications, such as fraud detection, cyber intrusion detection, and predictive maintenance of critical industrial equipment.
  • the task of anomaly detection consists of learning a classifier that can label a data point as normal or anomalous.
  • supervised classification methods attempt to perform well on normal data, whereas anomalous data is considered noise.
  • the goal of anomaly detection methods is to specifically detect extreme cases, which are highly variable and hard to predict. This makes the task of anomaly detection challenging (and often poorly specified).
  • a system comprising at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: receive, as input, training images, wherein at least a majority of the training images represent normal data instances, receive, as input, a target image, extract (i) a set of feature representations from a plurality of image locations within each of the training images, and (ii) target feature representations from a plurality of target image locations within the target image, calculate, with respect to a target image location of the plurality of target image locations in the target image, a distance between (iii) the target feature representation of the target image location, and (iv) a subset from the set of feature representations comprising the k nearest feature representations to the target feature representation, and determine that the target image location is anomalous, when the calculated distance exceeds a predetermined threshold.
  • a computer-implemented method comprising: receiving, as input, training images, wherein at least a majority of the training images represent normal data instances; receiving, as input, a target image; extracting (i) a set of feature representations from a plurality of image locations within each of the training images, and (ii) target feature representations from a plurality of target image locations within the target image; calculating, with respect to a target image location of the plurality of target image locations in the target image, a distance between (iii) the target feature representation of the target image location, and (iv) a subset from the set of feature representations comprising the k nearest feature representations to the target feature representation; and determining that the target image location is anomalous, when the calculated distance exceeds a predetermined threshold.
  • a computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to: receive, as input, training images, wherein at least a majority of the training images represent normal data instances; receive, as input, a target image; extract (i) a set of feature representations from a plurality of image locations within each of the training images, and (ii) target feature representations from a plurality of target image locations within the target image; calculate, with respect to a target image location of the plurality of target image locations in the target image, a distance between (iii) the target feature representation of the target image location, and (iv) a subset from the set of feature representations comprising the k nearest feature representations to the target feature representation; and determine that the target image location is anomalous, when the calculated distance exceeds a predetermined threshold.
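  • By way of non-limiting illustration only, the following Python (PyTorch) sketch shows one possible realization of the per-location kNN scoring recited above; the function name, the value of k, the tensor shapes, and the threshold value are assumptions made for the example and are not prescribed by the disclosure:

```python
import torch

def per_location_knn_scores(target_feats: torch.Tensor,
                            train_feats: torch.Tensor,
                            k: int = 2) -> torch.Tensor:
    """For each target-image location, return the mean distance to its k
    nearest feature representations gathered from the training images."""
    dists = torch.cdist(target_feats, train_feats)     # (L, M) pairwise distances
    knn = dists.topk(k, dim=1, largest=False).values   # (L, k) k smallest per row
    return knn.mean(dim=1)                             # (L,) per-location score

# Toy usage: 196 target locations, 10,000 training-location features, 512-d each.
target_feats = torch.randn(196, 512)
train_feats = torch.randn(10_000, 512)
scores = per_location_knn_scores(target_feats, train_feats, k=2)
anomalous = scores > 1.0   # per-location decision against a preset (illustrative) threshold
```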
  • the program instructions are further executable to perform, and the method further comprises performing, the calculating and the determining with respect to all of the plurality of target image locations.
  • the program instructions are further executable to designate, and the method further comprises designating, a segment of the target image as comprising anomalous target image locations, based, at least in part, on the determining.
  • the program instructions are further executable to apply, and the method further comprises applying, a clustering algorithm to the set of feature representations, to obtain clusters of the feature representations, wherein the calculating comprises calculating, with respect to a target image location of the plurality of target image locations, a distance between (i) the target feature representation of the target image location, and (ii) the k nearest means of the clusters to the target feature representation.
  • the extracting is performed by applying a trained machine learning model to the training images and the target image, wherein the machine learning model is trained on a provided dataset of images.
  • the trained machine learning model undergoes additional training using the training images.
  • the trained machine learning model comprises a deep-learning neural network architecture comprising a plurality of layers, and wherein the extracting comprises concatenating features from two or more layers of the plurality of layers.
  • the extracting comprises extracting the feature representations separately from each of two or more layers of the machine learning model; the calculating comprises calculating a distance separately with respect to the feature representations extracted from each of the two or more layers; and the determining is based on a summation of all of the distance calculations.
  • the two or more layers include the uppermost M layers of the plurality of layers.
  • the extracting is performed by applying a trained machine learning model to the training images and the target image, wherein the trained machine learning model comprises a self-attention architecture comprising vision transformers.
  • the calculating comprises: selecting, from the training images, a specified number n of nearest images to the target image; and calculating, with respect to a target image location of the plurality of target image locations in the target image, a distance between (a) the target feature representation of the target image location, and (b) the feature representations from all of the image locations in the n nearest images; and determining that the target image location is anomalous, when the calculated distance exceeds a predetermined threshold.
  • the feature representation encodes high spatial resolution and semantic context.
  • each of the image locations represents a pixel in (i) each of the training images, and (ii) the target image.
  • the extracting is performed with respect to all image locations in (i) each of the training images, and (ii) the target image.
  • Fig. 1 is a flowchart of the functional steps in a process of the present disclosure for automated detection of anomalous patterns in images, according to some embodiments of the present disclosure
  • Figs. 2A and 2B illustrate the results of various network depths (i.e., number of ResNet layers) with respect to the CIFAR10 and FashionMNIST datasets, according to some embodiments of the present disclosure
  • Figs. 3A-3C show a comparison of average CIFAR10 and FashionMNIST ROCAUC for different numbers of nearest neighbors, as well as a comparison between the present model and Geometric on CIFAR10 and FashionMNIST, according to some embodiments of the present disclosure
  • Fig. 4 shows the performance of the present model as a function of the percentage of anomalies in the training set, according to some embodiments of the present disclosure
  • Fig. 5 shows the average ROCAUC for anomaly detection using the present model on the concatenated features of each individual image in the set, according to some embodiments of the present disclosure
  • Figs. 6A-6B show t-SNE plots of the test set features of CIFAR10, according to some embodiments of the present disclosure
  • Fig. 7 is an illustration of the present feature adaptation procedure, wherein the pre-trained feature extractor ψ_0 is adapted to make the normal features more compact, resulting in feature extractor ψ, according to some embodiments of the present disclosure;
  • Figs. 8A-8B illustrate anomaly detection accuracy as correlated to the ratio between the average compactness loss of test set anomalies and the average compactness loss of training set normal images, according to some embodiments of the present disclosure;
  • Figs. 9A-9C show an evaluation of the present method on detecting anomalies between flowers with or without insects, and bird varieties, according to some embodiments of the present disclosure
  • Fig. 10 shows an anomalous image (a hazelnut which contains a scratched area) (A), the retrieved nearest-neighbor normal image, which contains a complete nut without scratches (B), the mask detected by the present method (C), and the predicted anomalous image pixels (D)
  • Fig. 11 shows an example of the effective contexts of CNNs and transformers, and the anomaly segmentation results on an anomalous image from MVTec Screw class, according to some embodiments of the present disclosure
  • Fig. 12 shows the attention maps of ViT drawn for the 2nd, 6th and 10th layers (left to right), for illustration, according to some embodiments of the present disclosure.
  • Fig. 13 illustrates (left to right) original input image and its 6th layer ViT attention maps (normalized) for normal and anomalous images, according to some embodiments of the present disclosure.
  • Disclosed herein are a system, method, and computer program product for automated detection of anomalous patterns in images.
  • the present disclosure provides for a machine learning model which uses deep-learning techniques to extract feature embeddings from a training image dataset.
  • the present machine learning model then applies one or more distribution-based approaches (e.g., nearest-neighbors approaches), to calculate a distance between features extracted from a target image and the embeddings of the training dataset learned during training, wherein the present model may designate the target image as anomalous when the calculated distance exceeds a specified threshold.
  • a machine learning model of the present disclosure may be trained in a semi-supervised manner, wherein the training dataset may be assumed to only include normal data instances.
  • a machine learning model of the present disclosure may be trained in an unsupervised manner, wherein the training dataset may be assumed to include a small proportion of anomalous data instances.
  • a machine learning model of the present disclosure may be trained to perform group image anomaly detection, wherein an input data sample consists of a set of images, and wherein each image in the set may be individually normal, but the set as a whole may be anomalous.
  • the present disclosure provides for deep-learning group-level feature embedding, based on orderless pooling over all the features of the images in a set.
  • the extracted group level features may then be classified as normal or anomalous based on, e.g., nearest-neighbors approaches.
  • the present disclosure provides for a pre-trained deep-learning model which extracts features from a generally-available dataset of images, wherein the training dataset may not be directly related to the anomaly detection task.
  • a pre-trained feature extracting model may be trained on a provided dataset, e.g., using self-supervised techniques.
  • the features extracted using the pre-trained model may undergo a feature adaptation stage, wherein the general pre-trained extracted features are adapted to the task of anomaly detection on the target distribution by, e.g., fine-tuning the pre-trained model with a compactness loss and/or using continual learning adaptive regularization.
  • the present disclosure provides for sub-image anomaly detection, wherein a segmentation map may be provided which describes a segment where an anomaly is present inside an image.
  • the present disclosure provides for a novel anomaly segmentation approach based on alignment between a target image and a specified number of nearest normal images.
  • the present disclosure provides for determining correspondences between the target image and the nearest images based on a multi-resolution feature pyramid.
  • a target image classified as anomalous may undergo sub-image anomaly detection, wherein a specified number of nearest normal images may be selected from the training dataset, based on a distance between the target image and the selected nearest images which may be measured using any suitable distance measure.
  • the present disclosure thus provides for determining, with respect to each pixel in a target image, an anomaly score which represents a distance between the relevant pixel and the nearest corresponding pixel in the nearest-neighbor normal images.
  • the features extracted from the training dataset images and the target image represent a pyramid of features, wherein bottom layers result in higher-resolution features which encode less semantic context, and upper layers encode lower spatial resolution features but with more semantic context.
  • each location is represented using features from the different layers of the feature pyramid, e.g., features from the output of the last specified number of blocks may be concatenated to represent a location in the images.
  • the feature representation of each location in the images encodes both fine-grained local features as well as global context. In some embodiments, this allows finding correspondences between the target image and nearest-neighbor normal images, without having to perform image alignment.
  • the present method is scalable and easy to deploy in practice.
  • the present disclosure provides for representing each location in the images based on calculating an anomaly score of each pixel using each feature layer individually, and combining the scores to obtain a total multi-layer anomaly score for each pixel.
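  • As an illustrative sketch only (the layer shapes, the value of k, and the output resolution are assumptions, not the disclosed configuration), the per-pixel, multi-layer scoring described above might be implemented as follows:

```python
import torch
import torch.nn.functional as F

def multilayer_anomaly_map(target_layers, train_galleries, k=1, out_size=224):
    """Score each pixel per feature layer via a kNN distance, upsample each
    layer's score map to a common resolution, and sum across layers."""
    total = torch.zeros(1, 1, out_size, out_size)
    for tgt, gallery in zip(target_layers, train_galleries):
        c, h, w = tgt.shape                                  # one layer: (C, H, W)
        flat = tgt.permute(1, 2, 0).reshape(-1, c)           # (H*W, C) per-pixel features
        d = torch.cdist(flat, gallery)                       # gallery: (M, C) normal features
        d = d.topk(k, dim=1, largest=False).values.mean(1)   # (H*W,) per-pixel kNN distance
        d = d.reshape(1, 1, h, w)
        total += F.interpolate(d, size=(out_size, out_size),
                               mode='bilinear', align_corners=False)
    return total[0, 0]                                       # combined (out_size, out_size) map
```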
  • the present disclosure further provides for sub-image anomaly detection and segmentation based on transferring pretrained features.
  • the present disclosure provides for using a Vision Transformers feature extraction architecture, wherein each pixel representation may gain its context from across the entire image, with a tendency to focus only on context features that are deemed relevant according to attention layers in the network architecture, and wherein the attention layers in each transformer unit allow the network to learn to avoid including irrelevant context.
  • the feature representation extracted by the Vision Transformers network may be combined in a multi-resolution construction to improve resolution performance while still providing strong local and global context.
  • the attentional patterns learned by the Vision Transformers focus on anomalous regions in the images. In some embodiments, this approach may be used for zero-shot anomaly detection and segmentation, i.e., detecting anomalies without having previously seen normal or anomalous images.
  • Fig. 1 is a flowchart of the functional steps in a process of the present disclosure for automated detection of anomalous patterns in images, according to some embodiments of the present disclosure.
  • in step 100, the present disclosure provides for receiving, as input, a set of training images, wherein at least a majority of the training images represent normal data instances.
  • a target image may be classified as anomalous as a whole.
  • a target image may undergo sub-image anomaly detection, to classify each pixel in the target image as anomalous.
  • in step 104, the present disclosure provides for extracting a set of deep features from multiple locations (e.g., individual pixels or groups of pixels) within each of the training images, as well as similar features from locations within the target image.
  • in step 106, the present disclosure provides for calculating distances between the features of each location in the target image, and the k nearest feature representations from the training images.
  • the present disclosure may classify a location in the target image as anomalous, when the calculated distance exceeds a predetermined threshold.
  • the present disclosure provides for designating a segment of the target image as comprising anomalous locations (e.g., pixels), based, at least in part, on determining that each location (e.g., pixel) in the segment is anomalous.
  • the present disclosure provides for applying a clustering algorithm to the deep feature representations, to obtain clusters of the feature representations.
  • the distance calculation then comprises calculating distances between the features of each location in the target image and the k nearest means of the clusters.
  • the deep features extracting is performed by applying a trained machine learning model to the training images and the target image.
  • the machine learning model is pre-trained on a provided dataset of images, e.g., a database of images.
  • the trained machine learning model may undergo additional training using the training images.
  • the extracted deep features encode high spatial resolution and semantic context.
  • the trained machine learning model comprises a deep-learning neural network architecture comprising a plurality of layers, wherein the extracting comprises concatenating features from two or more layers of the plurality of layers.
  • the two or more layers include the uppermost M layers of the plurality of layers.
  • the extracting comprises extracting the feature representations separately from each of two or more layers of the machine learning model, wherein the calculating of the distances comprises calculating a distance separately with respect to the feature representations extracted from each of the two or more layers, and wherein the determining is based on a summation of all of the distance calculations.
  • the trained machine learning model comprises a self-attention architecture comprising vision transformers.
  • the distance calculation comprises selecting, from the training images, a specified number n of nearest images to the target image, and calculating a distance between the features of each location in the target image and the feature representations from all of the image locations in the n nearest images.
  • the present disclosure provides for an anomaly detection process which learns general features (using any available level of supervision) on related datasets, and then uses the learned features to apply nearest-neighbors anomaly detection methods (e.g. kNN, k-means).
  • a pretrained feature extraction process may provide for faster deployment times than self-supervised methods.
  • the present disclosure employs one or more feature extraction methods, e.g., a ResNet extractor (He, Kaiming, et al., "Deep residual learning for image recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016) pre-trained on a provided image dataset (e.g., the Imagenet dataset, http://www.image-net.org/).
  • a training set of images X_train = {x_1, x_2, ..., x_N} is received as input.
  • all the images in the training set may be assumed to be within a normal distribution.
  • a feature extractor such as a ResNet feature extractor may be used, which may be pre-trained on a provided dataset (e.g., the Imagenet dataset). At first sight it might appear that this supervision is a strong requirement, however such feature extractors are widely available. We will later show experimentally that the normal or anomalous images do not need to be particularly closely related to the Imagenet dataset.
  • the present disclosure may then provide for calculating a k-nearest-neighbors (kNN) distance and using it as an anomaly score: d(y) = (1/k) · Σ_{f∈N_k(f_y)} ||f − f_y||² (2), where N_k(f_y) denotes the k nearest embeddings to f_y in the training set F_train.
  • the present model may use the Euclidean distance, which often achieves strong results on features extracted by deep networks, however, other distance measures may be used in a similar way. By verifying whether the distance d(y) is larger than a specified threshold, target data instance y may be designated as normal or anomalous.
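  • A minimal sketch of the kNN anomaly score of Eq. (2), assuming image embeddings have already been extracted (the names, the value of k, and the threshold usage are illustrative assumptions):

```python
import torch

def knn_anomaly_score(f_y: torch.Tensor, f_train: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Eq. (2): average squared Euclidean distance between the target
    embedding f_y (D,) and its k nearest embeddings in f_train (N, D)."""
    d = ((f_train - f_y) ** 2).sum(dim=1)            # squared distances to all train embeddings
    return d.topk(k, largest=False).values.mean()    # mean over N_k(f_y)

# y is designated anomalous if knn_anomaly_score(f_y, f_train) exceeds a chosen threshold.
```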
  • the present disclosure provides for an unsupervised approach, wherein the training dataset may not be assumed to consist of only normal data samples. In some embodiments, it is assumed that a small proportion of input images in the training dataset are anomalous.
  • the present disclosure provides for a data cleaning stage which removes at least some of the anomalous training images. Accordingly, after performing a feature extraction stage as explained above, the kNN distance between each input image and the rest of the input images is computed. Based on the assumption that anomalous images lie in low-density regions, a fraction of the images with the largest kNN distances may be removed, wherein this fraction is selected such that it is larger than the estimated proportion of anomalous input images in the training dataset.
  • the percentage of removed images may be large enough to ensure that the kept images are likely to be normal (e.g., the cleaning process may remove 50% of training images).
  • the remaining images are now assumed to have a very high proportion of normal images.
  • the present disclosure may then provide for calculating a kNN distance and use it as an anomaly score to determine whether a target data instance y may be designated as normal or anomalous.
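  • A possible sketch of the described cleaning stage, assuming pre-extracted training features (the removal fraction of 50% and the value of k are illustrative):

```python
import torch

def clean_training_features(feats: torch.Tensor, remove_frac: float = 0.5,
                            k: int = 2) -> torch.Tensor:
    """Remove the remove_frac of training images with the largest kNN
    distances, assuming anomalous images lie in low-density regions."""
    d = torch.cdist(feats, feats)
    d.fill_diagonal_(float('inf'))                        # ignore self-distances
    knn = d.topk(k, dim=1, largest=False).values.mean(1)  # kNN distance per image
    n_keep = int(len(feats) * (1.0 - remove_frac))
    keep = knn.argsort()[:n_keep]                         # keep the densest images
    return feats[keep]
```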
  • Group anomaly detection tackles the setting where the input sample consists of a set of images. The particular combination is important, but not the order. It is possible that each image in the set will individually be normal but the set as a whole will be anomalous.
  • a training set comprising a plurality of groups consisting of M normal images, each randomly sampled from multiple classes.
  • a trained image-level anomaly detection model will be able to detect anomalous groups containing individual anomalous images, e.g., images taken from classes not seen in training. However, an anomalous group containing multiple images from a seen class, but no images from any other class, will still be classified as normal, because all images in the group are individually normal.
  • the present disclosure provides for a kNN-based approach, which embeds the set by orderless-pooling (e.g., averaging) over all the features of the images in each group.
  • the disclosed method comprises:
  • a target group may similarly undergo a feature extraction stage to extract pooled group-level features.
  • the present disclosure may then provide for calculating a kNN distance and use it as an anomaly score to determine whether a target group instance may be designated as normal or anomalous.
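  • An illustrative sketch of group-level scoring via orderless pooling (averaging is shown as the pooling choice; the names and the value of k are assumptions):

```python
import torch

def group_embedding(image_feats: torch.Tensor) -> torch.Tensor:
    """Orderless (average) pooling over the per-image features of a group,
    making the embedding invariant to the order of images in the set."""
    return image_feats.mean(dim=0)                       # (M, D) -> (D,)

def group_anomaly_score(target_group, normal_groups, k=2):
    """kNN distance between the pooled target-group embedding and the pooled
    embeddings of the normal training groups."""
    g = group_embedding(target_group)                    # (D,)
    gallery = torch.stack([group_embedding(x) for x in normal_groups])
    d = ((gallery - g) ** 2).sum(dim=1)
    return d.topk(k, largest=False).values.mean()
```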
  • the present inventors conducted experiments to determine the performance of the present method.
  • ROCAUC (ROC area under the curve) is used as the evaluation metric.
  • the CIFAR10 dataset used in the experiments is a common dataset for evaluating unimodal anomaly detection.
  • CIFAR10 contains 32 × 32 color images from 10 object classes. Each class has 5000 training images and 1000 test images. The results are presented in Table 1 below. As can be seen, the present model significantly outperforms all other methods. Table 1: Anomaly Detection Accuracy on CIFAR10 (ROCAUC %)
  • the performance of the present model is deterministic for a given training and test set (i.e., no variation between runs). It may be observed that OC-SVM and Deep-SVDD are the weakest performers. This is because both the raw pixels as well as the features learned by Deep-SVDD are not discriminative enough for the distance to the center of the normal distribution to be successful. Geometric and later approaches (GOAD and MHRot) perform better, but do not exceed 90% ROCAUC. The performance evaluations were made without finetuning between the dataset and simulated anomalies (which improves performance on all methods).
  • Geometric, GOAD and the present method were further evaluated on the Fashion MNIST dataset, consisting of 6000 training images per class and a test set of 1000 images per class.
  • a comparison of the present method against OCSVM, Deep SVDD, Geometric and GOAD is shown in Table 2 below. As can be seen, the present method outperforms all other methods, despite the data being visually quite different from the Imagenet dataset from which the features were extracted.
  • CIFAR100 has 100 fine-grained classes with 500 training images each, or 20 coarse-grained classes with 2500 training images each. In the present experiments, the coarse-grained version is used. The experimental protocol is the same as for CIFAR10.
  • a comparison of the present method against OCSVM, Deep SVDD, Geometric and GOAD is shown in Table 2 below. As can be seen, the results are consistent with those obtained for CIFAR10.
  • Cats vs. Dogs: this dataset consists of 2 categories, dogs and cats, with 10,000 training images each.
  • the test set consists of 2,500 images for each class. Each image contains either a dog or a cat in various scenes and taken from different angles.
  • the data was extracted from the ASIRRA dataset; each class was split into the first 10,000 images for training and the last 2,500 for testing.
  • Figs. 2A and 2B illustrate the results of various network depths (i.e., number of ResNet layers) with respect to the CIFAR10 and FashionMNIST datasets. Effect of the number of neighbors:
  • Fig. 3A shows a comparison of average CIFAR10 and FashionMNIST ROCAUC for different numbers of nearest neighbors. The differences are not particularly large, but 2 neighbors usually provide the best results.
  • DIOR is an aerial image dataset.
  • the images are registered but do not have a preferred orientation.
  • the dataset consists of 19 object categories that have more than 50 images each, with resolution above 120 × 120 (the median number of images per class is 578).
  • bounding boxes are provided with the data, such that each object may be extracted with a bounding box of at least 120 pixels in each axis.
  • each bounding box is then resized to 256 × 256 pixels.
  • the same experimental protocol as for the earlier datasets is then followed. The results are summarized in Table 4 below.
  • the present model significantly outperforms MHRot. This is due both to the generally stronger performance of the feature extractor, as well as to the lack of the rotational prior that is strongly used by RotNet-type methods. Note that the images are centered, a prior used by the MHRot translation heads.
  • the WBC Image Dataset consists of high-resolution microscope images of different categories of white blood cells. The data do not have a preferred orientation. Additionally, the dataset is very small, with only a few tens of images per class. Dataset 1 was used, which was obtained from Jiangxi Telecom Science Corporation, China, and was split into the 4 different classes that contain more than 20 images each. The first 80% of images in each class were used for the training set, and the last 20% were used as the test set. The results are presented in Table 4 below. As expected, the present model outperforms MHRot by a significant margin, showing its greater applicability to real-world data.
  • the present inventors compared the present model against Geometric on CIFAR10 and CIFAR100 on this setting.
  • the average ROCAUC across all the classes is detailed in Table 5.
  • the present model achieves significantly stronger performance than Geometric. It is believed that this occurs because Geometric requires the network not to generalize on the anomalous data. However, once the training data is sufficiently varied, the network can generalize even on unseen classes, making the method less effective. This is particularly evident on CIFAR100.
  • Table 5 Anomaly Detection Accuracy on Multimodal Normal Image Distributions (ROCAUC %)
  • one of the advantages of the present model is its ability to generalize from very small datasets. This is not possible with self-supervised learning-based methods, which do not learn general enough features to generalize to normal test images.
  • a comparison between the present model and Geometric on CIFAR10 is presented in Fig. 3B, wherein the number of training images is plotted against the average ROCAUC. As can be seen, the present model can detect anomalies very accurately even from as few as 10 images, while Geometric deteriorates quickly with decreasing number of training images.
  • a similar plot is presented for FashionMNIST in Fig. 3C. Geometric is not shown as it suffered from numerical issues for small numbers of images. The present model again achieved strong performance from very few images.
  • the training set does not consist of purely normal images, but rather a mixture of unlabeled normal and anomalous images. In most cases, it may be assumed that anomalous images comprise only a small fraction of the number of the normal images.
  • the performance of the present model as a function of the percentage of anomalies in the training set is presented in Fig. 4. The performance degrades somewhat as the percentage of training set impurities increases.
  • a cleaning stage may be performed, which removes approx. 50% of the training set images that have the most distant kNN inside the training set. The cleaning procedure is clearly shown to significantly mitigate the performance degradation caused by training set impurities.
  • the normal class was designated as sets consisting of exactly one image from each of the M CIFAR10 classes (specifically the classes with IDs 0..M-1), while each anomalous set consisted of M images selected randomly among the same classes (some classes had more than one image and some had zero).
  • Fig. 5 shows the average ROCAUC for anomaly detection using the present model on the concatenated features of each individual image in the set.
  • this baseline works well for small values of M where there is a sufficient number of examples of all possible permutations of the class ordering.
  • as M grows larger (M > 3), its performance decreases, as the number of permutations grows exponentially.
  • this method, with 1000 image sets for training, is also compared to nearest neighbors of the orderless max-pooled and average-pooled features, wherein the results show that mean-pooling significantly outperforms the baseline for large values of M. While the performance of the concatenated features may be improved by augmenting the dataset with all possible orderings of the training sets, the number of orderings grows exponentially for non-trivial values of M, making this an ineffective approach.
  • the input images are resized to 256 × 256, a center crop of size 224 × 224 is taken, and a ResNet consisting of 101 layers, pre-trained on the Imagenet dataset, is used to extract the features after the global pooling layer. This feature is the image embedding.
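  • For illustration, the described preprocessing and embedding extraction might look as follows using torchvision (the weights enum shown and the variable img, assumed to be a PIL image, are illustrative, not the disclosed configuration):

```python
import torch
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),                # resize to 256 x 256
    transforms.CenterCrop(224),                   # take a 224 x 224 center crop
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()    # expose the globally pooled 2048-d embedding
model.eval()

with torch.no_grad():
    embedding = model(preprocess(img).unsqueeze(0))   # img: a PIL image (assumed)
```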
  • Figs. 6A-6B show t-SNE plots of the test set features of CIFAR10.
  • the normal class is plotted in light color, while the anomalous data is marked in dark color.
  • the t-SNE plots of the features learned by SVDD are shown on the left, Geometric in the center, and the Imagenet dataset pre-trained feature extractor on the right, where the normal class is Airplane (Fig. 6A) and Automobile (Fig. 6B).
  • the Imagenet-pretrained features clearly separate the normal class (light) and anomalies (dark).
  • Deep-SVDD does not learn features that allow clean separation. It is clear that the pre-trained features embed images from the same class into a fairly compact region. It is therefore expected that the density of normal training images is much higher around normal test images than around anomalous test images. This may explain the success of kNN methods.
  • kNN has linear complexity in the number of training data samples.
  • Methods such as One-Class SVM or SVDD attempt to learn a single hypersphere, and use the distance to the center of the hypersphere as a measure of anomaly.
  • the inference runtime is constant in the size of the training set, rather than linear as in the kNN case.
  • the drawback is the typical lower performance.
  • Another potential way of decreasing the inference time is using K-means clustering of the training features. Clustering the N training features into K clusters speeds up inference by a factor of roughly N/K. It may therefore be suggested to speed up the present model by clustering the training features into K clusters and then performing kNN on the cluster means rather than on the original features.
  • Table 6 presents a comparison of the performance of the present model and its K-means approximations with different numbers of means (we use the sum of the distances to the 2 nearest neighbors). As can be seen, for a small loss in accuracy, the retrieval time can be reduced significantly.
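  • A possible sketch of the K-means approximation, using scikit-learn's KMeans (the number of clusters, and the variables train_feats and f_y, are illustrative assumptions):

```python
import torch
from sklearn.cluster import KMeans

def kmeans_means(train_feats: torch.Tensor, n_clusters: int = 1000) -> torch.Tensor:
    """Compress the N training features into K cluster means, so each query
    is scored against K means instead of N features (roughly N/K speedup)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(train_feats.numpy())
    return torch.from_numpy(km.cluster_centers_).float()

# Score a target embedding against its 2 nearest means, as in Table 6.
means = kmeans_means(train_feats)                                     # (K, D)
score = torch.cdist(f_y.unsqueeze(0), means)[0].topk(2, largest=False).values.sum()
```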
  • the present disclosure provides for an anomaly detection method that adapts pretrained features and mitigates or avoids catastrophic collapse.
  • Experimental results show that the present disclosure significantly outperforms current methods while addressing their limitations.
  • Anomaly detection methods require high-quality features.
  • One way of obtaining strong features is to adapt pre-trained features to anomaly detection on the target distribution.
  • simple adaptation methods often result in feature deterioration and degraded performance.
  • DeepSVDD (see Lukas Ruff, et al., "Deep one-class classification," ICML, 2018) combats collapse by removing biases from architectures, but this limits the adaptation performance gain. Accordingly, in some embodiments, the present disclosure provides for two methods for combating feature collapse:
  • the learner observes a set of training examples. The learner is then tasked to classify novel test samples as normal or anomalous.
  • anomaly detection settings investigated in the literature, corresponding to different training conditions. One such setting assumes that only normal images are used for training. Another setting provides data samples simulating anomalies.
  • DeepSVDD was proposed to overcome collapse by removing biases from the model architecture, but this restricts network expressivity and limits the pre-trained models that can be borrowed off-the-shelf. It was also proposed to jointly train anomaly detection with the original task, which has several limitations and achieves only limited adaptation success.
  • the present disclosure provides for two techniques to overcome catastrophic collapse:
  • An adaptive early stopping method that selects the stopping iteration per-sample, using a novel generalization criterion, and an elastic regularization, motivated by continual learning, that postpones the collapse.
  • the present disclosure also provides an extensive evaluation of Imagenet-pretrained features on one-class anomaly detection. Thorough experiments demonstrate that the present method outperforms the state-of-the-art by a wide margin.
  • the present general framework examines several adaptation-based anomaly detection methods. Assume a set D_train of normal training samples: x_1, x_2, ..., x_N. The framework consists of three steps:
  • Feature extractor pretraining: a pre-trained feature extractor ψ_0 is typically learned using self-supervised learning (auto-encoding, rotation or jigsaw prediction).
  • the loss function of the auxiliary task may be denoted L_pretrain.
  • the auxiliary task can be learned either on the training set D_train or on an external dataset D_pretrain (such as the Imagenet dataset).
  • Feature adaptation Features trained on auxiliary tasks or datasets may require adaptation before being used for anomaly scoring on the target data. This can be seen as a finetuning stage of the pre-trained features on the target training data.
  • the feature extractor after adaptation may be denoted ψ.
  • Anomaly scoring: having adapted the features for anomaly detection, the features ψ(x_1), ψ(x_2), ..., ψ(x_N) of the training set samples are extracted.
  • the method then proceeds to learn a scoring function, which describes how anomalous a sample is.
  • the scoring function seeks to measure the density of normal data around the test sample feature ψ(x) (either by direct estimation or via some auxiliary task) and assign a high anomaly score to low-density regions.
  • the present disclosure provides for feature adaptation for anomaly detection, which adapts general pre-trained features to anomaly detection on the target distribution.
  • the present method is agnostic to the specific pretrained feature extractor. Based on experiments conducted by the present inventors, it was found that the Imagenet dataset pretrained features achieve better results.
  • the present method uses the compactness loss (Eq. 3) to adapt the general pre-trained features to the task of anomaly detection on the target distribution: L_compact(ψ) = Σ_i ||ψ(x_i) − c||² (3), where c denotes a fixed center, e.g., the mean of the pre-trained features of the training samples.
  • the present method tackles catastrophic collapse directly.
  • the present method provides for two options: (i) finetuning the pretrained extractor with the compactness loss (Eq. 3) and using sample-wise early stopping, and (ii) when collapse happens prematurely, before any significant adaptation happens, mitigating it using a Continual Learning-inspired adaptive regularization.
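  • A minimal sketch of compactness-loss finetuning (psi, psi0, train_batch, num_steps, and the optimizer settings are illustrative assumptions, not the disclosed configuration):

```python
import torch

def compactness_loss(feats: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    """Compactness loss (cf. Eq. 3): squared distance of the adapted
    features from a fixed center of the normal training features."""
    return ((feats - center) ** 2).sum(dim=1).mean()

# psi starts as a copy of the pre-trained extractor psi0 (both assumed defined);
# the center is computed once from psi0's features and then held fixed.
with torch.no_grad():
    center = psi0(train_batch).mean(dim=0)

optimizer = torch.optim.SGD(psi.parameters(), lr=1e-4, momentum=0.9)
for _ in range(num_steps):
    loss = compactness_loss(psi(train_batch), center)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```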
  • Fig. 7 is an illustration of the present feature adaptation procedure, wherein the pre-trained feature extractor ψ_0 is adapted to make the normal features more compact, resulting in feature extractor ψ. After adaptation, anomalous test images lie in a less dense region of the feature space.
  • Sample-wise Early Stopping (SES): early stopping is one of the simplest methods used to regularize neural networks. While stopping the training process after a constant number of iterations helps to control the collapse of the original features in most examined datasets, in other cases collapse occurs earlier in the training process; thus, the best number of early stopping iterations may vary between datasets. Accordingly, in some embodiments, the present disclosure provides for "sample-wise early stopping" (SES). The intuition for the method can be obtained from Figs. 8A-8B. As can be seen, anomaly detection accuracy is correlated to the ratio between the average compactness loss of test set anomalies and the average compactness loss of training set normal images.
  • the present disclosure provides for saving checkpoints of the network at fixed intervals during the training process, e.g., corresponding to different early stopping iterations (ψ_1, ψ_2, ..., ψ_T).
  • for each checkpoint ψ_t, the average loss s_t on the training set images is calculated.
  • the maximal normalized score is set as the anomaly score of this sample, as this roughly estimates the model that achieves the best separation between normal and anomalous samples. Note that each sample is scored using only its features f_t and the normal train set average score s_t, without seeing the labels of any other test set samples.
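  • A compact sketch of the sample-wise early stopping score (the inputs are the sample's per-checkpoint losses and the per-checkpoint training-set averages; the names are illustrative):

```python
def ses_score(sample_losses, train_avg_losses):
    """Sample-wise early stopping: normalize the sample's compactness loss
    at each saved checkpoint by the training-set average loss at that
    checkpoint, and take the maximum over checkpoints as the anomaly score."""
    return max(l / s for l, s in zip(sample_losses, train_avg_losses))
```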
  • the present disclosure provides for a novel solution for overcoming premature feature collapse that draws inspiration from the field of continual learning.
  • the task of continual learning tackles learning new tasks without forgetting previously learned ones. It may be noted, however, that the present task is not identical to standard continual learning, as (i) it deals with the one-class classification setting, whereas continual learning typically deals with multi-class classification, and (ii) it aims to avoid forgetting the expressivity of the features, but does not particularly care if the actual classification performance on the old task is degraded.
  • a simple solution for preventing feature collapse is regularization of the change in value of the weights of the feature extractor ψ from those of the pre-trained extractor ψ_0. However, this solution is lacking, as the features are more sensitive to some weights than others, and this can be "exploited" by the adaptation method.
  • the present disclosure provides for using elastic weight consolidation (EWC).
  • the diagonal of the Fisher information matrix is used to weight the squared Euclidean distance of the change between each network parameter θ_i of ψ and its corresponding pre-trained parameter θ*_i of ψ_0.
  • This weighted distance can be interpreted as a measure of the curvature of the loss landscape as a function of the parameters - larger values imply high curvature, i.e., inelastic weights.
  • Network ψ is initialized with the parameters of the pretrained extractor ψ_0 and trained with SGD.
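  • An illustrative sketch of the EWC penalty (theta0 and fisher_diag, holding the pre-trained parameters and the per-parameter Fisher diagonal, as well as the value of lam, are assumptions):

```python
import torch

def ewc_penalty(psi, theta0, fisher_diag, lam=1e4):
    """EWC regularizer: Fisher-weighted squared change of each parameter of
    psi relative to its pre-trained value; high-Fisher ('inelastic')
    parameters are penalized more strongly for moving."""
    penalty = 0.0
    for name, p in psi.named_parameters():
        penalty = penalty + (fisher_diag[name] * (p - theta0[name]) ** 2).sum()
    return 0.5 * lam * penalty

# total training objective (compactness_loss as sketched above; all assumed):
# loss = compactness_loss(psi(batch), center) + ewc_penalty(psi, theta0, fisher_diag)
```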
  • the present transformed data typically follows the standard anomaly detection assumption, i.e., high-density in regions of normal data.
  • scoring can be done by density estimation.
  • the present method performs better with strong non-parametric anomaly scoring methods.
  • Several anomaly scoring methods can be evaluated: (i) the Euclidean distance to the mean of the training features; (ii) the K nearest-neighbor distance between the target (test set) features and the features of the training set images; and/or (iii) computing the K-means of the training set features, and computing the distance between the target sample features and the nearest mean.
  • An extension of the typical image anomaly detection task assumes the existence of an auxiliary dataset of images D_OE, which are more similar to the anomalies than normal data.
  • a linear classification layer w may be trained together with the features ψ under a logistic regression loss (Eq. 7).
  • ψ is initialized with the weights from ψ_0.
  • w·ψ(x) may be used as the anomaly score.
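  • A minimal sketch of outlier-exposure training with a linear head (psi, w, and the two batches are illustrative assumptions; w could be, e.g., a torch.nn.Linear(D, 1) head):

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(psi, w, normal_batch, oe_batch):
    """Logistic-regression loss (cf. Eq. 7): the linear head w is trained to
    separate adapted features of normal samples (label 0) from samples of
    the auxiliary outlier-exposure dataset D_OE (label 1)."""
    logits = torch.cat([w(psi(normal_batch)), w(psi(oe_batch))]).squeeze(1)
    labels = torch.cat([torch.zeros(len(normal_batch)), torch.ones(len(oe_batch))])
    return F.binary_cross_entropy_with_logits(logits, labels)

# At test time, w(psi(x)) serves as the anomaly score.
```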
  • the present inventors have compared the EWC variant of the present method to One-Class SVM (see Bernhard Scholkopf, et al., "Support vector method for novelty detection," NIPS, 2000), DeepSVDD, and Multi-Head RotNet. The present method is also compared to raw (un-adapted) pretrained features.
  • the main results show that: (i) pre-trained features achieve significantly better results than self-supervised features on all datasets; (ii) feature adaptation significantly improves the performance on larger datasets; and (iii) outlier exposure (OE) can further improve performance in the case where the given outliers are more similar to the anomalies than the normal data.
  • OE achieves near perfect performance on CIFAR10/100 but hurts performance for Fashion MNIST/CatsVsDogs which are less similar to the 80M Tiny images dataset.
  • Tables 7 above and 8 below present a comparison between methods that use self- supervised and pre-trained feature representations.
  • the autoencoder used by DeepSVDD is particularly poor.
  • the results of the MHRotNet as a feature extractor are better, but still underperform the present methods.
  • the performance of the raw deep ResNet features without adaptation significantly outperforms all other methods, including on Fashion MNIST and DIOR, which differ significantly from the ImageNet dataset. It may therefore be concluded that ImageNet-pretrained features typically have significant advantages over self-supervised features.
  • Table 8 shows that self-supervised methods do not perform well on small datasets as such methods require large numbers of normal samples in order to learn strong features. On the other hand ImageNet-pretrained features obtain very strong results.
  • Table 8 Pretrained feature performance on various small datasets (Average ROC AUC %)
  • anomaly detection methods employ different levels of supervision: outlier exposure (OE) uses a large external dataset (e.g., the ImageNet dataset) at training time; pretrained features use an external dataset only at the pretraining stage; and self-supervised methods use no external supervision at all. The most extensive supervision is used by OE, which requires a large external dataset at training time, and performs well only when such a dataset is from a similar domain to the anomalies.
  • the network may not learn to distinguish between normal and anomalous data, as the normal and anomalous data may have more in common than the OE dataset.
  • Pretraining, like Outlier Exposure, is also achieved through an external labelled dataset, but differently from OE, the external dataset is only required once, at the pretraining stage, and is not used again. Additionally, the same features are applicable to image domains very different from that of the pretraining dataset. Self-supervised feature learning requires no external dataset at all, which can potentially be an advantage. While there might be image anomaly detection tasks where ImageNet-pretrained weights are not applicable, there was no evidence for such cases after examining a broad spectrum of domains and datasets. This indicates that the extra supervision of the ImageNet-pretrained weights comes at virtually no cost.
  • pretrained features improve the performance of RotNet-based AD methods.
  • as shown in Table 9, pretrained features improve the auxiliary task performance not only on the normal data, but also on the anomalous samples.
  • deep features actually reduce this gap, as a solution to the auxiliary task becomes feasible for both types of images.
  • Feature adaptation aims to make the distribution of the normal samples more compact, with respect to the anomalous samples.
  • the present approach of finetuning pretrained features for compactness under EWC regularization significantly improves the performance over "raw" pretrained features. While the distance from the normal train samples center, of both normal and anomalous test samples is reduced, the average distance from the center of anomalous test samples is typically further than that of normal samples, in relative terms. This makes anomalies easier to detect by standard classifiers such as kNN.
  • Fine-tuning all the layers is prone to feature collapse, even with continual learning (see Table 11 below).
  • Finetuning Blocks 3 & 4, or 2, 3 & 4, results in similar performance.
  • Finetuning only block 4 results in very similar performance to linear whitening of the features according to the train samples (94.6 with whitening vs. 94.8 with finetuning only the last block). A similar effect can be seen in the original DeepSVDD architecture. Accordingly, it is recommended to finetune Blocks 3 & 4.
  • kNN achieves an improvement of around 2% on average with respect to distance to the center.
  • a naive implementation of kNN has linear runtime complexity in the number of training samples.
  • K-means with a small number of clusters gives a decrease of only about 1%. It is noted that even for very large datasets, or many thousands of means, both kNN and K-means can run faster than real-time.
  • the present disclosure further provides for a novel anomaly segmentation approach based on alignment between the anomalous image and a constant number of the nearest normal images.
  • the present method, termed Semantic Pyramid Anomaly Detection, uses correspondences based on a multi-resolution feature pyramid. The present method is shown to achieve state-of-the-art performance on unsupervised anomaly detection and localization, while requiring virtually no training time.
  • a key human ability is to detect novel images that stand out in the succession of like images observed day-to-day, e.g., those images indicating opportunity or danger, that deviate from previous patterns. Such ability typically triggers particular vigilance on the part of the human agent. Due to the importance of this task, allowing computers to detect anomalies is a key task for artificial intelligence.
  • one example is assembly-line fault detection: assembly lines manufacture many instances of a particular product. Most products are normal and fault-free. However, on occasion, the manufactured products contain some faults, e.g., dents, wrong labels or part duplication. As reputable manufacturers strive to keep a consistent quality of products, prompt detection of the faulty products is very valuable.
  • test classes come from a similar distribution to the training data.
  • the distribution of anomalies is not observed during training time.
  • Different anomaly detection methods differ by the way the anomalies are observed at training time. For example, in some cases, at training time only normal data is observed. This is a practically useful setting, as obtaining normal data (e.g., products that contain no faults) is relatively easy. This setting is sometimes called semi-supervised or normal-only training setting.
  • An easier scenario is fully-supervised, i.e., both labelled normal and anomalous examples are presented during training.
  • Another challenge particular to visual anomaly detection is the localization of anomalies, i.e., segmenting the parts of the image which the algorithm deems anomalous. This is very important for explainability of the decision made by the algorithm, as well as for building trust between operators and novel AI systems. It is particularly important for anomaly detection, as the objective is to detect novel changes not seen before, and with which humans might not be familiar.
  • the algorithm may alert the human operator to the existence of new anomalies, or alternatively the human may decide that this anomaly is not of interest, thus not rejecting the product and resulting in cost savings.
  • the present disclosure provides for a novel method for solving the task of sub-image anomaly detection and segmentation.
  • the present method does not require an extended training stage; it is fast, robust, and achieves state-of-the-art performance. In some embodiments, the present method consists of several stages:
  • image feature extraction using a pre-trained deep neural network (e.g., a ResNet pre-trained on the ImageNet dataset, http://www.image-net.org/);
  • nearest-neighbor retrieval of the K nearest normal images to a target data sample; finding dense pixel-level correspondences between the target data sample and the nearest-neighbor normal images; and identification of target image regions that do not have near matches in the retrieved normal images as anomalous.
  • the present disclosure computes sub-image feature representations for each image in a set of normal images and for a given target image.
  • a sub-image feature representation may consist of a set of features, wherein each feature may give a description of the image around some image location.
  • One example of a set of locations can be the centers of each pixel.
  • the present disclosure classifies a target location within the target image as normal or anomalous, given the similarity of its feature representation to that of other sub-image feature representations.
  • the present disclosure may use one or more suitable classifiers to perform this task, e.g., K-nearest neighbors (kNN), K-means, OCSVM, SVDD, a neural network, and the like.
  • the classifier may search for the nearest features to the target feature within the sub-image feature representation of the normal images and/or within the sub-image feature representation of the target image. Locations with distances to the nearest features larger than a pre-specified threshold may be classified as anomalous. In some embodiments, such distance measures may include the Euclidean distance, as in the sketch below.
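A minimal sketch of this per-location kNN classification, assuming the sub-image features have already been extracted into arrays; the array names, neighbor count, and threshold value are illustrative assumptions, not values prescribed by the disclosure:

import numpy as np
from sklearn.neighbors import NearestNeighbors

def classify_locations(normal_feats, target_feats, k=5, threshold=2.0):
    # normal_feats: (N, D) sub-image features pooled from the normal images.
    # target_feats: (P, D) one feature vector per target image location.
    index = NearestNeighbors(n_neighbors=k, metric="euclidean").fit(normal_feats)
    dists, _ = index.kneighbors(target_feats)
    scores = dists.mean(axis=1)          # mean Euclidean distance to k nearest
    return scores, scores > threshold    # locations above threshold are anomalous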
  • features may be extracted by any suitable method, e.g., a deep neural network (pre-trained or otherwise); a hand-crafted pipeline (e.g., HOG, color histograms, image location); and/or using the raw data itself.
  • neural network activations extracted at multiple resolutions may be used.
  • a dense sub-image feature representation of uniform resolution may be formed by upscaling the activations of the different resolutions within a neural network to that of the highest resolution. The highest resolution can be the same as the input resolution or that of some intermediate layer.
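A minimal sketch of forming such a uniform-resolution representation with a pre-trained ResNet: activations from several blocks are captured with forward hooks and upscaled to the resolution of the earliest kept block. The specific layers and the bilinear upscaling are illustrative choices, not requirements of the method:

import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(pretrained=True).eval()
activations = {}

def save_to(name):
    def hook(module, inputs, output):
        activations[name] = output
    return hook

for name in ("layer1", "layer2", "layer3"):
    getattr(model, name).register_forward_hook(save_to(name))

@torch.no_grad()
def dense_features(image):
    # image: (1, 3, H, W) normalized input tensor.
    model(image)
    target_size = activations["layer1"].shape[-2:]   # highest kept resolution
    upscaled = [F.interpolate(activations[n], size=target_size,
                              mode="bilinear", align_corners=False)
                for n in ("layer1", "layer2", "layer3")]
    return torch.cat(upscaled, dim=1)                # (1, C1+C2+C3, h, w)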
  • training data may comprise normal-only images.
  • a method for detecting the normal images as a whole may first be performed (e.g., the whole-image anomaly detection method disclosed hereinabove).
  • the training dataset may be pruned by selecting the images that are most similar to the target image, e.g., as measured using, e.g., a global deep feature representation.
  • the present method may also be applied to video.
  • a target frame sequence within a video segment may be treated as the target segment.
  • other frame sequences in the video segment may be treated as the normal segments.
  • the kNN classification can be performed similarly to the above.
  • obtaining features for video may be performed using any suitable method, e.g., extraction by a deep neural network (pre-trained or otherwise), wherein the network may take in single- or multiple-frame inputs; a hand-crafted pipeline (e.g., HOG, color histograms, clip time or location); and/or the raw data itself. It is possible to use neural network activations extracted at multiple resolutions (a feature pyramid).
  • One way of forming a dense sub-image feature representation of uniform resolution is upscaling the activations of the different resolutions to that of the highest resolution.
  • the highest resolution can be the same as the input resolution or some intermediate layer. This can also be performed in the temporal domain.
  • the entire video training set or a part of it may be selected. If some of the segments given for training are anomalous, a method for detecting the normal segments can be first performed.
  • the present disclosure is more accurate, faster, and more stable than previous methods, and does not require a dedicated training stage.
  • the present inventors have evaluated the present method on two high-quality datasets for the sub-image anomaly detection task:
  • MVTec (Bergmann, P. et al. MVTec AD: a comprehensive real-world dataset for unsupervised anomaly detection. CVPR 2019): a dataset simulating industrial fault detection, where the objective is to detect the parts of images of products that contain faults such as dents or missing parts.
  • the first stage of the present method is the extraction of strong image level features.
  • the same features are later used for pixel-level image alignment.
  • the most commonly used option is self-supervised feature learning, i.e., learning features from scratch directly on the input normal images. Although this is an attractive option, it is not obvious that features learned on small training datasets will indeed be sufficient to serve as high-quality similarity measures.
  • the present disclosure employs a ResNet feature extractor pre-trained on the ImageNet dataset.
  • for image-level features, the present disclosure uses the feature vector obtained by global-pooling the last convolutional layer.
  • the first stage in the present method is determining which images contain anomalies using, e.g., the whole-image anomaly detection method disclosed hereinabove. For a given test image y, its K nearest normal images, N_K(f_y), are retrieved from the training set. The distance is measured using the Euclidean metric between the image-level feature representations.
  • Target image y is labelled at this stage as normal or anomalous. Positive classification is determined by verifying whether the kNN distance is larger than a threshold t. If classified as anomalous, target image y is further processed in order to determine the sub-image anomaly locations.
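A sketch of this whole-image stage under assumed names: f_y is the global-pooled feature of the target image and train_feats holds the features of the normal training images; the threshold t and K are illustrative:

import numpy as np

def whole_image_stage(f_y, train_feats, K=50, t=7.5):
    # Euclidean distances between the target feature and all training features.
    dists = np.linalg.norm(train_feats - f_y, axis=1)
    nearest = np.argsort(dists)[:K]         # indices of the K nearest normals
    knn_distance = dists[nearest].mean()    # image-level anomaly score
    return knn_distance > t, nearest        # anomalous flag + N_K(f_y) indices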
  • a sub-image anomaly detection via image alignment stage is performed.
  • the input to this stage is target image y that was classified as anomalous on a whole-image basis.
  • the objective is to locate and segment the pixels of one or multiple anomalies within the target image y. In the case that the target image y was falsely classified as anomalous, the present method would mark no pixels as anomalous.
  • the present disclosure provides for aligning the target image y to multiple retrieved normal images.
  • the present disclosure extracts deep features F_L(x_i, p) at every pixel location p ∈ P of the target image y and of the retrieved normal training images, using feature extractor F_L.
  • the anomaly score of pixel p in target image y is therefore given by:

d(y, p) = min_{f ∈ G} ‖F_L(y, p) − f‖²,  G = {F_L(x_i, p′) : x_i ∈ N_K(f_y), p′ ∈ P}  (10)

where G is the gallery of pixel-level features gathered from the K retrieved normal images.
  • a pixel is determined to be anomalous if d(y, p) > θ, for a threshold θ; that is, if no closely corresponding pixel can be found in the K nearest-neighbor normal images.
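A sketch of this pixel-scoring rule per equation (10), assuming per-pixel features have already been extracted for the target image and for the K retrieved normal images; shapes and the threshold are illustrative:

import numpy as np
from sklearn.neighbors import NearestNeighbors

def pixel_scores(target_feats, gallery, theta=2.0):
    # target_feats: (P, D) features F_L(y, p) for every pixel p of image y.
    # gallery: (K * P, D) pixel features of the K retrieved normal images.
    index = NearestNeighbors(n_neighbors=1).fit(gallery)
    d, _ = index.kneighbors(target_feats)
    d = d[:, 0] ** 2                 # squared distance to the closest match
    return d, d > theta              # anomalous where d(y, p) exceeds theta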
  • Alignment by dense correspondences is an effective way of determining the parts of the image that are normal vs. those that are anomalous. In order to perform the alignment effectively, it is necessary to determine the features for matching.
  • the present method uses features from a pre-trained ResNet deep CNN.
  • the ResNet results in a pyramid of features.
  • earlier layers (levels) result in higher-resolution features encoding less context.
  • Later layers encode features with more context, but at lower spatial resolution.
  • each location is described using features from the different levels of the feature pyramid. Specifically, features from the output of the last M blocks are concatenated. The features thus encode both fine-grained local features and global context. This allows the present method to find correspondence between the target image y and K > 1 normal images, rather than having to explicitly align the images, which is more technically challenging and less robust.
  • Figs. 9A-9C show an evaluation of the present method on detecting anomalies between flowers with or without insects, and bird varieties.
  • Figs. 9A-9B show an anomalous image (A), the retrieved top normal neighbor image (B), the mask detected by the present method (C), and the predicted anomalous image pixels (D).
  • Fig. 9C shows a red spot of an anomalous woodpecker (A), the retrieved top normal neighbor image (B), the mask detected by the present method (C), and the predicted anomalous image pixels (D).
  • the present inventors conducted an evaluation of the present method against the state- of-the-art in sub-image anomaly detection.
  • a first set of experiments was conducted on the MVTec dataset, which comprises images from 15 different classes. Five classes consist of textures such as wood or leather. The other 10 classes contain objects (mostly rigid).
  • the training set is composed of normal images.
  • the test set is composed of normal images as well as images containing different types of anomalies.
  • This dataset therefore follows the standard protocol where no anomalous images are used in training.
  • the anomalies in this dataset are more fine-grained than those typically used in the literature, e.g., in CIFAR10 evaluation, where anomalous images come from a completely different image category.
  • anomalies in MVTec take the form of, e.g., a slightly scratched object or a lightly deformed (e.g., bent) object.
  • the dataset provides segmentation maps indicating the precise pixel positions of the anomalous regions.
  • FIG. 10 shows an anomalous image (a hazelnut which contains a scratched area) (A), the retrieved nearest-neighbor normal image, which contains a complete nut without scratches (B), the mask detected by the present method (C), and the predicted anomalous image pixels (D).
  • By searching for correspondences between the two images, the present method is able to find correspondences for the normal image regions but not for the anomalous region. This results in an accurate detection of the anomalous image region.
  • the present method was compared against several methods that were introduced over the last several months, as well as longer-standing baselines such as OCSVM and nearest neighbors. For each setting, the present method was compared against the methods that reported the suitable metric.
  • the quality of deep nearest neighbor matching was evaluated as a means for finding anomalous images. This is computed by the distance between the test image and the K nearest neighbor normal images. Larger distances indicate more anomalous images.
  • the ROC area under the curve (ROCAUC) of the present method and other state-of-the-art methods are compared and the average ROCAUC across the 15 classes is reported in Table 12 below. This comparison is important as it verifies whether deep nearest neighbors are effective on these datasets.
  • the present method is shown to outperform a range of state-of-the-art methods utilizing various self-supervised anomaly detection learning techniques. This gives evidence that deep features trained on the ImageNet dataset (which is very different from MVTec) are very effective even on such a distant dataset.
  • the present method was then evaluated on the task of pixel-level anomaly detection.
  • the objective here is to segment the particular pixels that contain anomalies.
  • the present method was evaluated using two established metrics. The first is per-pixel ROCAUC. This metric is calculated by scoring each pixel by the distance to its K nearest correspondences. By scanning over the range of thresholds, the pixel-level ROCAUC curve can be computed. The anomalous category is designated as positive. It was noted by several previous works that ROCAUC is biased in favor of large anomalies. In order to reduce this bias, the PRO (per-region overlap) curve metric was previously proposed, which first separates anomaly masks into their connected components, thereby dividing them into individual anomaly regions.
  • the calculation scans over false positive rates (FPR), and for each FPR, PRO is computed, i.e., the proportion of the pixels of each region that are detected as anomalous.
  • the PRO score at this FPR is the average coverage across all regions.
  • the PRO curve metric computes the integral across FPR rates from 0 to 0.3.
  • the PRO score is the normalized value of this integral.
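A simplified, illustrative sketch of this PRO computation, assuming a per-pixel score map and a boolean ground-truth mask; production evaluations sweep thresholds more finely:

import numpy as np
from scipy import ndimage

def pro_score(score_map, gt_mask, num_thresholds=200, max_fpr=0.3):
    # gt_mask: boolean array; its connected components are anomaly regions.
    labels, n_regions = ndimage.label(gt_mask)
    fprs, pros = [], []
    for t in np.linspace(score_map.max(), score_map.min(), num_thresholds):
        pred = score_map >= t
        fpr = np.logical_and(pred, ~gt_mask).sum() / max((~gt_mask).sum(), 1)
        if fpr > max_fpr:
            break
        # Per-region overlap: fraction of each region detected as anomalous.
        overlaps = [np.logical_and(pred, labels == r).sum() / (labels == r).sum()
                    for r in range(1, n_regions + 1)]
        fprs.append(fpr)
        pros.append(np.mean(overlaps) if overlaps else 0.0)
    # Integrate PRO over FPR in [0, max_fpr] and normalize by the range.
    return np.trapz(pros, fprs) / max_fpr if len(fprs) > 1 else 0.0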
  • Table 13 compares the present method on the per-pixel ROCAUC metric against results reported by Bergmann et al. (Bergmann, P., et al. MVTec AD: a comprehensive real-world dataset for unsupervised anomaly detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9592-9600 (2019)), as well as newer results by Venkataramanan et al. (CAVGA-Ru, see Venkataramanan, S. et al. Attention guided anomaly detection and localization in images. arXiv preprint arXiv:1911.08616 (2019)). Most of the methods use different varieties of autoencoders, including the top performer CAVGA-Ru. The present method significantly outperforms all methods. This attests to the strength of the present method's pyramid-based correspondence approach.
  • Table 14 compares the present method in terms of PRO. As explained above, this is another per-pixel accuracy measure which gives larger weight to anomalies which cover few pixels.
  • STC (ShanghaiTech Campus)
  • STC simulates a surveillance setting, where the input consists of videos captured by surveillance cameras observing a busy campus.
  • the dataset contains 12 scenes, each scene consists of training videos and a smaller number of test images.
  • the training videos do not contain anomalies while the test images contain normal and anomalous images.
  • Anomalies are defined as pedestrians performing non-standard activities (e.g. fighting) as well as any moving object which is not a pedestrian (e.g. motorbikes).
  • the present method was evaluated at a first stage for detecting image-level anomalies against other state-of-the-art methods.
  • the pixel-level ROCAUC performance was then compared with the best reported method, CAVGA-Ru.
  • the present method outperforms the best previously reported method by a significant margin. The results are reported in Tables 15 and 16 below.
  • StackRNN: Luo, W., et al. A revisit of sparse coding based anomaly detection in stacked RNN framework. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 341-349 (2017).
  • AE-Conv3D: Zhao, Y., et al. Spatio-temporal autoencoder for video anomaly detection. In: Proceedings of the 25th ACM International Conference on Multimedia, pp. 1933-1941 (2017).
  • MemAE: Gong, D., et al. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection.
  • AE (2D): Hasan, M., et al. Learning temporal regularity in video sequences. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 733-742 (2016).
  • Table 17 compares using different levels of the feature pyramid. As can be observed, using too low a level by itself (56 × 56) significantly hurts performance, while using the higher levels on their own results in diminished performance due to lower resolution. Using a combination of all features in the pyramid results in the best performance.
  • Table 18 compares using the top K neighboring normal images, as retrieved by the first stage of the present method, vs. choosing them randomly from the dataset. It is observed that choosing the kNN images improves performance, though it does not affect all classes equally. As an example, the numbers for "Grid", which has much variation between images, are reported. For this category, using the kNN images results in much better performance than randomly choosing K images.
  • Table 17 Pyramid level ablation for Subpixel Anomaly Detection Accuracy on MVTec (PRO %)
  • Table 18 Evaluating the effectiveness of the present method's kNN retrieval stage.
  • the present method does not require feature training and can work on very small datasets.
  • a difference between the present method and standard image alignment is that the present method finds correspondences between the target image and K normal images, as opposed to a single normal image in simple alignment approaches.
  • the quality of the alignment or correspondence between the anomalous image and retrieved normal images is strongly affected by the quality of extracted features, wherein context is very important.
  • Local context is needed for achieving segmentation maps with high-pixel resolutions.
  • Such features may generally be found in the shallow layers of a deep neural network. Local context is typically insufficient for alignment without understanding the global context, i.e., where in the object the part lies.
  • Global context is generally found in the deepest layers of a neural network; however, global context features are of low resolution. The combination of features from different levels allows both global context and local resolution, giving high-quality correspondences.
  • the present method is significantly reliant on the K nearest neighbors algorithm.
  • the complexity of kNN scales linearly with the size of the dataset used for search which can be an issue when the dataset is very large or of high dimensionality.
  • the present method is designed to mitigate these complexity issues.
  • the initial image-level anomaly classification is computed on global-pooled features, which are 2048-dimensional vectors.
  • Such kNN computation can be performed very quickly for moderately sized datasets, and different speedup techniques (e.g., KD-trees, as sketched below) can be used for large-scale datasets.
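A sketch of such a speedup with a KD-tree index over the 2048-dimensional global-pooled features; the random data is a placeholder, and for very large or very high-dimensional galleries approximate-nearest-neighbor libraries may be preferable:

import numpy as np
from sklearn.neighbors import KDTree

train_feats = np.random.randn(10000, 2048).astype(np.float32)  # placeholder
tree = KDTree(train_feats)                  # built once, reused for all queries
query = np.random.randn(1, 2048).astype(np.float32)
dists, idx = tree.query(query, k=50)        # K nearest normal images
image_score = dists.mean()                  # image-level kNN anomaly score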
  • the anomaly segmentation stage requires pixel-level kNN computation which is significantly slower than image-level kNN.
  • the present method limits the sub-image kNN search to only the K nearest neighbors of the anomalous image, thus significantly limiting computation time. It is assumed that the vast majority of images are normal, therefore only a small fraction of images require the next stage of anomaly segmentation. The present method is therefore quite suitable for practical deployment from a complexity and runtime perspective.
  • Previous sub-image anomaly detection methods have either used self-learned features or a combination of self-learned and pre-trained image features.
  • Self-learned approaches in this context typically train an autoencoder and use its reconstruction error for anomaly detection.
  • Other approaches have used a combination of pre-trained and self-learned methods.
  • the present method's numerical results have shown that it significantly outperforms such approaches. It is believed that, given the limited supervision and small dataset size of the normal-only training setting tackled in this work, it is rather hard to beat very deep pre-trained networks. Therefore, pre-trained features are used without modification. The strong results achieved by the present method attest to the effectiveness of this approach.
  • the present disclosure presents new anomaly segmentation methods based on transferring pretrained features.
  • the present disclosure provides for a baseline method that outperforms all previous anomaly segmentation methods on the MVTec dataset.
  • the approach represents images using ImageNet-pretrained convolutional feature pyramids. Target image pixels are classified using multi-scale nearest neighbor retrieval, wherein large distances correspond to anomalous pixels.
  • the present disclosure further provides for fully exploiting contextual information from the whole image, based on the vision transformer (ViT), a recently introduced attentional approach.
  • the ViT architecture learns patch embeddings that encode global context well.
  • the present disclosure improves upon it by combining it in a multi-resolution construction, which significantly improves performance and enjoys strong local and global context.
  • the present method is based on retrieval of contextual features for detecting anomalies.
  • the present method uses standard feature extraction using a pre-trained ResNet.
  • using CNN-based methods involves issues associated with non-adaptive contexts, wherein the fixed receptive field may include areas of the image that make it hard to find similar normal contexts.
  • the present disclosure provides for using attentional mechanisms that learn the relevant context.
  • the present disclosure provides for a simple baseline method for anomaly segmentation.
  • the method consists of two stages:
  • Feature extraction: extracting a feature descriptor for each pixel, combining the activations of one or more layers of a deep convolutional network.
  • Similarity estimation: calculating the similarity of the descriptor of each pixel to the closest descriptors found in the training set.
  • Feature extraction may be performed to extract a feature f_p for every pixel p in the image x using a pre-trained feature extractor φ:
  • f_p = φ(x, p)  (11)
  • the activations of a deep ResNet pre-trained on the ImageNet dataset may be used.
  • a pre-trained deep neural network is applied on each of the training images x, to extract the feature activations at a particular layer l at position p.
  • all the training images are normal.
  • the number of stored features may be reduced by K-means clustering, storing only the K means themselves.
  • given a target image, features are extracted from each of its pixels in an identical way.
  • the present disclosure then proceeds to estimate the similarity of the features extracted from the training images and the target image.
  • the features of each of the pixels of the target image are compared with each of the features in the gallery G (which may have been reduced to the K means).
  • the similarity is scored using the sum of the L2 distances to the K nearest features:

s(p) = Σ_{f ∈ N_K(f_p, G)} ‖f − f_p‖  (12)

where N_K(f_p, G) denotes the K nearest neighbors in the gallery G to the target feature f_p.
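The two steps above (optional K-means reduction of the gallery, then per-pixel scoring as in equation (12)) might be sketched as follows; the number of means and K are illustrative assumptions:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def build_gallery(train_pixel_feats, n_means=2048):
    # Store only the K-means centroids to bound memory and search time.
    km = KMeans(n_clusters=n_means, n_init=4).fit(train_pixel_feats)
    return km.cluster_centers_

def score_pixels(target_pixel_feats, gallery, K=1):
    index = NearestNeighbors(n_neighbors=K).fit(gallery)
    dists, _ = index.kneighbors(target_pixel_feats)
    return dists.sum(axis=1)   # sum of L2 distances to K nearest, eq. (12)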
  • threshold invariant metrics such as ROCAUC may be used rather than a threshold.
  • φ_l denotes the feature extractor that outputs the activations of layer l.
  • the present disclosure provides for relaxing the rigid design of the spatial feature pyramid.
  • the context in CNNs is non-adaptive and is determined by the level of the pyramid.
  • Fig. 11 shows an example of the effective contexts of CNNs and transformers, and the anomaly segmentation results on an anomalous image from MVTec Screw class.
  • the effective context of the CNN is limited, while the actual attention pattern of the transformer is able to focus on the entire object.
  • the anomaly segmentation of the transformer is significantly more similar to the ground truth than that of the CNN.
  • CNN features that are reliant on the context may not find a good similarity correspondence, as random background patterns may not repeat between the training and the test sets.
  • the present disclosure provides for using Vision Transformers (ViT) for anomaly detection.
  • each pixel may gain its context from across the entire image on the one hand, but tends to focus only on context features that are deemed relevant according to the attention layers.
  • the attention layers in each transformer unit allow the network to learn to avoid including irrelevant context and therefore outperform CNNs.
  • Vision Transformers were very recently proposed by Dosovitskiy et al. (see Alexey Dosovitskiy, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)). Transformers consist of a set of multi-headed self-attention (MSA) layers and multi-layer perceptron (MLP) blocks.
  • Each layer l first takes as input a representation f (where the layer superscript l of f^l is dropped in the present notation for convenience) and linearly projects it to three representations, calculated for each patch p: value v, key k, and query q (N_d being the representation dimension).
  • Each of the representations v, k, q is then split along the channel dimension into H equal parts, which are called attention heads (v_h, k_h, q_h).
  • Each one of the attention heads calculates an attention map using an inner product between its query and the keys of all patches. The attention map is normalized by the square root of the per-head dimension N_d/H:

A_h = softmax(q_h k_hᵀ / √(N_d/H))  (13)
  • the multi-head self-attention layer concatenates the per-head attention maps A_h multiplied by the per-head values v_h, and projects the result to the representation dimension using a matrix U:

MSA(f) = [A_1 v_1; A_2 v_2; … ; A_H v_H] U  (14)
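An illustrative PyTorch sketch of equations (13)-(14) for a single MSA layer; it mirrors the standard ViT formulation rather than reproducing the patent's exact implementation, and the projection matrices are assumed given:

import torch

def msa(f, Wqkv, U, H):
    # f: (P, N_d) patch representations; Wqkv: (N_d, 3 * N_d); U: (N_d, N_d).
    P, D = f.shape
    q, k, v = (f @ Wqkv).chunk(3, dim=-1)                  # each (P, D)
    q, k, v = (x.reshape(P, H, D // H).transpose(0, 1) for x in (q, k, v))
    # Equation (13): per-head attention, scaled by the sqrt of the head dim.
    attn = torch.softmax(q @ k.transpose(-2, -1) / (D // H) ** 0.5, dim=-1)
    # Equation (14): concatenate per-head outputs and project with U.
    out = (attn @ v).transpose(0, 1).reshape(P, D)
    return out @ U, attn                                   # (P, D), (H, P, P)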
  • the patches are initialized using a trainable linear projection E of the input image x (split into P patches, where patch p is denoted x_p), together with an embedding representing the image position.
  • the class dimension x_class (called the "class token") is initialized with zeros, and is eventually used (at the last layer) as the final feature for classification in pretraining.
  • the representation f^(l−1) is first normalized using layer norm and updated in a residual fashion using MSA, i.e., f′ = f^(l−1) + MSA(LN(f^(l−1))). It is then normalized again and updated with a residual MLP block, f^l = f′ + MLP(LN(f′)), to achieve the next layer's representation f^l.
  • pixel-level anomaly scores are extracted using the same transformer representation twice: once when applying the network and similarity estimation on the entire image (φ_t1), and again by splitting the image into four quarters and applying the same method to each quarter (φ_t2).
  • each patch is scored using features extracted at each of the resolutions; the sum of the scores from the resolutions is taken as the total score for the high-resolution patch, as sketched below.
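A sketch of this two-resolution scoring; score_patches is a hypothetical wrapper around feature extraction and similarity estimation at one resolution, assumed to return a per-pixel score map the size of its input:

import numpy as np

def multi_resolution_scores(image, score_patches):
    h, w = image.shape[:2]
    full = score_patches(image)                  # scores at full resolution
    quarters = np.zeros_like(full)
    for i in (0, 1):
        for j in (0, 1):
            ys = slice(i * h // 2, (i + 1) * h // 2)
            xs = slice(j * w // 2, (j + 1) * w // 2)
            quarters[ys, xs] = score_patches(image[ys, xs])
    return full + quarters                       # summed score per pixel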
  • the present disclosure was quantitatively evaluated on the MVTec dataset, which is the main dataset used by most methods to evaluate anomaly segmentation performance. It simulates industrial fault detection, where the objective is to detect the parts of images of products that contain faults, e.g., dents, missing parts, misalignment, or unexpected textures.
  • Each of the 15 classes contains a training set of normal images, normal test images and images of faults of different types as anomalies.
  • the present disclosure was also evaluated on the CUB200 dataset, using two categories of woodpecker: the normal training images contain a breed that does not have a red dot on the head, while the anomalous images do.
  • examples are presented on the Oxford Flowers 101 dataset, wherein normal flowers do not have insects on them, while anomalous images do.
  • the present method is compared against a large set of known methods. Each method scores each of the pixels of the test image as normal or anomalous.
  • the previous methods include: classical anomaly detection methods (1-NN, OCSVM), autoencoders with L2 and SSIM losses, variational autoencoders with reconstruction loss and GRAD-CAM (VAE, CAVGA), texture distributional models (TI), shape-based matching (VM), K-means of deep features from context-less patches (CNN-Dict), GAN-based methods (AnoGAN), and student-teacher regression of pre-trained features (Student).
  • the experimental architectures used by the present disclosure comprise a BiT-M-R50x1 ResNet and ViT-Base, both pretrained on ImageNet-21k.
  • Tables 19 and 20 below present the results of the baseline method. As can be seen, the use of pretrained convolutional features and simple kNN retrieval is enough to outperform all the existing methods on both anomaly detection metrics.
  • the results of the present transformer-based method are reported in Tables 19 and 20. It outperforms all other methods, including the simple CNN-based baseline method. All the pretrained models used by the present methods, including the ResNet BiT models, were trained on the ImageNet-21k dataset.
  • the ViT transformer architecture serves as a better anomaly segmentation feature extractor even when it is a worse classifier (suggesting that the contextual patch description is the main factor here). Interestingly, it was often found that the present method was penalized for detecting, or failing to detect, anomalies where the ground truth was ambiguous.
  • Table 19 Anomaly segmentation accuracy on MVTec (ROCAUC %)
  • Table 20 Anomaly segmentation accuracy on MVTec (PRO %)
  • Fig. 12 shows the attention maps of ViT drawn for the 2nd, 6th and 10th layers (left to right), for illustration. The rightmost image is the input to the network. The attention map of the classification token is shown in the top row, and the attention map of the center pixel is shown in the bottom row. As can be seen, most attention is paid to the bird rather than the background; deeper layers pay most attention to the remarkable dot on the head of the bird. For the patch of interest, attention maps are first calculated for each attention head as explained above; the attention is then averaged across the different attention heads and plotted after normalizing to a grey-scale map between 0 and 255.
  • the attention maps of the classification token at low-level layers are able to identify the outline of the inspected object while refraining from including much of the background area.
  • the higher layer attention maps tend to lose their localization properties, as each patch already incorporates information from many other patches.
  • the attention maps of the center pixel show quite similar results. The center pixel incorporates more information from the representations of its previous layers and its neighboring patches.
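The plotting procedure described above might be sketched as follows, assuming the per-head attention maps of one layer are available as an array; under a standard ViT patch ordering, query_index 0 would correspond to the classification token, and another index to the center patch:

import numpy as np

def attention_to_grayscale(attn, query_index=0):
    # attn: (H, P, P) per-head attention maps of one transformer layer.
    avg = attn.mean(axis=0)             # average across the attention heads
    row = avg[query_index]              # attention paid by one query patch
    row = (row - row.min()) / (row.max() - row.min() + 1e-8)
    return (row * 255).astype(np.uint8) # grey-scale map between 0 and 255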
  • Fig. 13 illustrates (left to right) an original input image and its 6th-layer ViT attention maps (normalized) for normal and anomalous images. The top row shows results with no training set, wherein both transistor rotations can be considered normal, and the attention map cannot determine which transistor is anomalous. The bottom row shows that pixels containing anomalies attract much more attention than their neighboring pixels, suggesting where the anomalies are located. Inspection of the attention patterns of transformers in Fig. 13 illustrates an intriguing phenomenon: the transformer often pays disproportionate attention to image regions that contain anomalies. This provides some explanation for why the learned context is useful for anomaly segmentation; it highlights the parts of the context that provide evidence that a certain image region is anomalous.
  • This phenomenon can be used profitably for a new task: zero-shot anomaly segmentation.
  • the objective of the task is to detect the parts of the image that contain anomalies, just based on a single image and without being given other examples (normal or anomalous) from the same class.
  • the ability to segment anomalies based on a single image is based on the pretraining properties of the networks.
  • Table 22 Accuracy for Zero-Shot Anomaly Segmentation (avg. ROCAUC %)
  • [00199] It was further evaluated whether the attention-based method can be used for zero-shot image-level anomaly detection, where the objective is to determine whether an image is anomalous given just a single image and no training set of images from a similar class. A simple approach was tested: taking the maximum over the attention map averaged over all heads. The hypothesis is that anomalous images will have a larger maximal attention value than normal images. The method was evaluated over the MVTec dataset (Table 23 below). It was found that this works quite well on textures, where repetitions provide evidence for normal patterns and deviation from the repetitions indicates anomalous regions (the exception is Grid, probably because the scale of repetitions is larger than the patch size).
  • this also works on object classes where the anomaly is a texture, e.g., Hazelnut and Bottle.
  • the attention-map-based method outperforms the internal kNN baseline. While those results are of course weaker than the standard setting where normal-only training images are available, they illustrate the strength of the transformer-based approach for zero-shot anomaly detection.
  • This kind of algorithm can achieve near-perfect pixel-level ROCAUC and PRO (as it finds all the anomalous pixels with a very low false positive ratio), but without being informative as to whether the image is anomalous.
  • anomalies are indeed very small, and therefore this scenario is quite common.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro- magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object- oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration can be implemented by special purpose hardware -based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Abstract

A method comprising: receiving, as input, training images, wherein at least a majority of the training images represent normal data instances; receiving, as input, a target image; extracting (i) a set of feature representations from a plurality of image locations within each of the training images, and (ii) target feature representations from a plurality of target image locations within the target image; calculating, with respect to a target image location of the plurality of target image locations in the target image, a distance between (iii) the target feature representation of the target image location, and (iv) a subset from the set of feature representations comprising the k nearest feature representations to the target feature representation; and determining that the target image location is anomalous, when the calculated distance exceeds a predetermined threshold.

Description

DEEP LEARNING-BASED ANOMALY DETECTION IN IMAGES
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority from U.S. Provisional Patent Application No. 62/994,694, filed March 25, 2020, the contents of which are incorporated by reference herein in their entirety.
FIELD OF THE INVENTION
[0002] The invention relates to the field of machine learning.
BACKGROUND
[0003] Agents interacting with the world are constantly exposed to a continuous stream of data. Agents can benefit from classifying particular data as anomalous, i.e., particularly interesting or unexpected. Such discrimination is helpful in allocating attention to the observations that warrant particular scrutiny. Anomaly detection by artificial intelligence has many important applications, such as fraud detection, cyber intrusion detection, and predictive maintenance of critical industrial equipment.
[0004] In machine learning, the task of anomaly detection consists of learning a classifier that can label a data point as normal or anomalous. In supervised classification, methods attempt to perform well on normal data, whereas anomalous data is considered noise. The goal of anomaly detection methods is to specifically detect extreme cases, which are highly variable and hard to predict. This makes the task of anomaly detection challenging (and often poorly specified).
[0005] The foregoing examples of the related art and limitations related therewith are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent to those of skill in the art upon a reading of the specification and a study of the figures.
SUMMARY
[0006] The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools and methods which are meant to be exemplary and illustrative, not limiting in scope.
[0007] There is provided, in an embodiment, a system comprising at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: receive, as input, training images, wherein at least a majority of the training images represent normal data instances, receive, as input, a target image, extract (i) a set of feature representations from a plurality of image locations within each of the training images, and (ii) target feature representations from a plurality of target image locations within the target image, calculate, with respect to a target image location of the plurality of target image locations in the target image, a distance between (iii) the target feature representation of the target image location, and (iv) a subset from the set of feature representations comprising the k nearest feature representations to the target feature representation, and determine that the target image location is anomalous, when the calculated distance exceeds a predetermined threshold.
[0008] There is also provided, in an embodiment, a computer-implemented method comprising: receiving, as input, training images, wherein at least a majority of the training images represent normal data instances; receiving, as input, a target image; extracting (i) a set of feature representations from a plurality of image locations within each of the training images, and (ii) target feature representations from a plurality of target image locations within the target image; calculating, with respect to a target image location of the plurality of target image locations in the target image, a distance between (iii) the target feature representation of the target image location, and (iv) a subset from the set of feature representations comprising the k nearest feature representations to the target feature representation; and determining that the target image location is anomalous, when the calculated distance exceeds a predetermined threshold.
[0009] There is further provided, in an embodiment, a computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to: receive, as input, training images, wherein at least a majority of the training images represent normal data instances; receive, as input, a target image; extract (i) a set of feature representations from a plurality of image locations within each of the training images, and (ii) target feature representations from a plurality of target image locations within the target image; calculate, with respect to a target image location of the plurality of target image locations in the target image, a distance between (iii) the target feature representation of the target image location, and (iv) a subset from the set of feature representations comprising the k nearest feature representations to the target feature representation; and determine that the target image location is anomalous, when the calculated distance exceeds a predetermined threshold.
[0010] In some embodiments, the program instructions are further executable to perform, and the method further comprises performing, the calculating and the determining with respect to all of the plurality of target image locations.
[0011] In some embodiments, the program instructions are further executable to designate, and the method further comprises designating, a segment of the target image as comprising anomalous target image locations, based, at least in part, on the determining.
[0012] In some embodiments, the program instructions are further executable to apply, and the method further comprises applying, a clustering algorithm to the set of feature representations, to obtain clusters of the feature representations, wherein the calculating comprises calculating, with respect to a target image location of the plurality of target image locations, a distance between (i) the target feature representation of the target image location, and (ii) the k nearest means of the clusters to the target feature representation.
[0013] In some embodiments, the extracting is performed by applying a trained machine learning model to the training images and the target image, wherein the machine learning model is trained on a provided dataset of images.
[0014] In some embodiments, the trained machine learning model undergoes additional training using the training images.
[0015] In some embodiments, the trained machine learning model comprises a deep-learning neural network architecture comprising a plurality of layers, and wherein the extracting comprises concatenating features from two or more layers of the plurality of layers.
[0016] In some embodiments, the extracting comprises extracting the feature representations separately from each of two or more layers of the machine learning model; the calculating comprises calculating a distance separately with respect to the feature representations extracted from each of the two or more layers; and the determining is based on a summation of all of the distance calculations.
[0017] In some embodiments, the two or more layers include the uppermost M layers of the plurality of layers.
[0018] In some embodiments, the extracting is performed by applying a trained machine learning model to the training images and the target image, wherein the trained machine learning model comprises a self-attention architecture comprising vision transformers.
[0019] In some embodiments, the calculating comprises: selecting, from the training images, a specified number n of nearest images to the target image; and calculating, with respect to a target image location of the plurality of target image locations in the target image, a distance between (a) the target feature representation of the target image location, and (b) the feature representations from all of the image locations in the n nearest images; and determining that the target image location is anomalous, when the calculated distance exceeds a predetermined threshold.
[0020] In some embodiments, the feature representation encodes high spatial resolution and semantic context.
[0021] In some embodiments, each of the image locations represents a pixel in (i) each of the training images, and (ii) the target image.
[0022] In some embodiments, the extracting is performed with respect to all image locations in (i) each of the training images, and (ii) the target image.
[0023] In addition to the exemplary aspects and embodiments described above, further aspects and embodiments will become apparent by reference to the figures and by study of the following detailed description.
BRIEF DESCRIPTION OF THE FIGURES
[0024] Exemplary embodiments are illustrated in referenced figures. Dimensions of components and features shown in the figures are generally chosen for convenience and clarity of presentation and are not necessarily shown to scale. The figures are listed below.
[0025] Fig. 1 is a flowchart of the functional steps in a process of the present disclosure for automated detection of anomalous patterns in images, according to some embodiments of the present disclosure;
[0026] Figs. 2A and 2B illustrate the results of various network depths (i.e., number of ResNet layers) with respect to the Cifar10 and FashionMNIST datasets, according to some embodiments of the present disclosure;
[0027] Figs. 3A-3C show a comparison of average CIFAR10 and FashionMNIST ROCAUC for different numbers of nearest neighbors, as well as a comparison between the present model and Geometric on CIFAR10 and FashionMNIST, according to some embodiments of the present disclosure;
[0028] Fig. 4 shows the performance of the present model as function of the percentage of anomalies in the training set, according to some embodiments of the present disclosure;
[0029] Fig. 5 shows the average ROCAUC for anomaly detection using the present model on the concatenated features of each individual image in the set, according to some embodiments of the present disclosure;
[0030] Figs. 6A-6B show t-SNE plots of the test set features of CIFAR10, according to some embodiments of the present disclosure;
[0031] Fig. 7 is an illustration of the present feature adaptation procedure, wherein the pre-trained feature extractor ψ0 is adapted to make the normal features more compact, resulting in feature extractor ψ, according to some embodiments of the present disclosure;
[0032] Figs. 8A-8B illustrate anomaly detection accuracy as correlated to the ratio between the average compactness loss of test set anomalies and the average compactness loss of training set normal images, according to some embodiments of the present disclosure;
[0033] Figs. 9A-9C show an evaluation of the present method on detecting anomalies between flowers with or without insects, and bird varieties, according to some embodiments of the present disclosure;
[0034] Fig. 10 shows an anomalous image (a hazelnut which contains a scratched area) (A), the retrieved nearest neighbor normal image, which contains a complete nut without scratches (B), the mask detected by the present method (C), and the predicted anomalous image pixels (D);
[0035] Fig. 11 shows an example of the effective contexts of CNNs and transformers, and the anomaly segmentation results on an anomalous image from MVTec Screw class, according to some embodiments of the present disclosure;
[0036] Fig. 12 shows the attention maps of ViT drawn for the 2, 6 and 10 layers (left to right), for illustration, according to some embodiments of the present disclosure; and
[0037] Fig. 13 illustrates (left to right) original input image and its 6th layer ViT attention maps (normalized) for normal and anomalous images, according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
[0038] Disclosed herein are a system, method, and computer program product for automated detection of anomalous patterns in images.
[0039] In some embodiments, the present disclosure provides for a machine learning model which uses deep-learning techniques to extract feature embeddings from a training image dataset. In some embodiments, the present machine learning model then applies one or more distribution-based approaches (e.g., nearest-neighbors approaches) to calculate a distance between features extracted from a target image and the embeddings of the training dataset learned during training, wherein the present model may designate the target image as anomalous when the calculated distance exceeds a specified threshold.
[0040] In some embodiments, a machine learning model of the present disclosure may be trained in a semi-supervised manner, wherein the training dataset may be assumed to only include normal data instances. In some embodiments, a machine learning model of the present disclosure may be trained in an unsupervised manner, wherein the training dataset may be assumed to include a small proportion of anomalous data instances.
[0041] In some embodiments, a machine learning model of the present disclosure may be trained to perform group image anomaly detection, wherein an input data sample consists of a set of images, and wherein each image in the set may be individually normal, but the set as a whole may be anomalous. In some embodiments, the present disclosure provides for deep-learning group- level feature embedding, based on orderless pooling over all the features of the images in a set. In some embodiments, the extracted group level features may then be classified as normal or anomalous based on, e.g., nearest-neighbors approaches.
[0042] In some embodiments, the present disclosure provides for a pre-trained deep-learning model which extracts features from a provided dataset of images of general availability, wherein the training dataset may not be directly related to the anomaly detection task. Accordingly, in some embodiments, a pre-trained feature extracting model may be trained on a provided dataset, e.g., using self-supervised techniques. In some embodiments, the features extracted using the pre-trained model may undergo a feature adaptation stage, wherein the general pre-trained extracted features are adapted to the task of anomaly detection on the target distribution by, e.g., fine-tuning the pre-trained model with a compactness loss and/or using continual learning adaptive regularization.
[0043] In some embodiments, the present disclosure provides for sub-image anomaly detection, wherein a segmentation map may be provided which describes a segment where an anomaly is present inside an image. In some embodiments, the present disclosure provides for a novel anomaly segmentation approach based on alignment between a target image and a specified number of nearest normal images. In some embodiments, the present disclosure provides for determining correspondences between the target image and the nearest images based on a multi-resolution feature pyramid.
[0044] Accordingly, in some embodiments, the present disclosure provides for a machine learning model which uses deep-learning techniques to extract feature embeddings from a training image dataset. In some embodiments, the present machine learning model then applies one or more distribution-based approaches (e.g., nearest-neighbors approaches) to calculate a distance between features extracted from a target image and the embeddings of the training dataset learned during training, wherein the present model may designate the target image as anomalous when the calculated distance exceeds a specified threshold.
[0045] In some embodiments, a target image classified as anomalous may undergo sub-image anomaly detection, wherein a specified number of nearest normal images may be selected from the training dataset, based on a distance between the target image and the selected nearest images which may be measured using any suitable distance measure. In some embodiments, the present disclosure thus provides for determining, with respect to each pixel in a target image, an anomaly score which represents a distance between the relevant pixel and the nearest corresponding pixel in the nearest-neighbor normal images.
[0046] In some embodiments, the features extracted from the training dataset images and the target image represent a pyramid of features, wherein bottom layers result in higher-resolution features which encode less semantic context, and upper layers encode lower-spatial-resolution features with more semantic context. In some embodiments, to find correspondence between pixels in the selected nearest-neighbor images and the target image, each location is represented using features from the different layers of the feature pyramid, e.g., features from the output of the last specified number of blocks may be concatenated to represent a location in the images. Thus, the feature representation of each location in the images encodes both fine-grained local features as well as global context. In some embodiments, this allows finding correspondence between the target image and nearest-neighbor normal images without having to perform image alignment. In some embodiments, the present method is scalable and easy to deploy in practice. In some embodiments, the present disclosure provides for representing each location in the images based on calculating an anomaly score of each pixel using each feature layer individually, and combining the scores to obtain a total multi-layer anomaly score for each pixel.
[0047] In some embodiments, the present disclosure further provides for sub-image anomaly detection and segmentation based on transferring pretrained features. In some embodiments, the present disclosure provides for using a Vision Transformers feature extraction architecture, wherein each pixel representation may gain its context from across the entire image, with a tendency to focus only on context features that are deemed relevant according to attention layers in the network architecture, and wherein the attention layers in each transformer unit allow the network to learn to avoid including irrelevant context. In some embodiments, the feature representation extracted by the Vision Transformers network may be combined in a multi-resolution construction to improve resolution performance while still providing strong local and global context. In some embodiments, the attentional patterns learned by the Vision Transformers focus on anomalous regions in the images. In some embodiments, this approach may be used for zero-shot anomaly detection and segmentation, i.e., detecting anomalies without having previously seen normal or anomalous images.
[0048] Fig. 1 is a flowchart of the functional steps in a process of the present disclosure for automated detection of anomalous patterns in images, according to some embodiments of the present disclosure.
[0049] In some embodiments, in step 100, the present disclosure provides for receiving, as input, a set of training images, wherein at least a majority of the training images represent normal data instances.
[0050] In some embodiments, in step 102, the present disclosure provides for receiving a target image for classification. In some embodiments, a target image may be classified as anomalous as a whole. In some embodiments, a target image may undergo sub-image anomaly detection, to classify each pixel in the target image as anomalous.
[0051] In some embodiments, in step 104, the present disclosure provides for extracting a set of deep features from multiple locations (e.g., individual pixels or groups of pixels) within each of the training images, as well as similar features from locations within the target image. [0052] In some embodiments, in step 106, the present disclosure provides for calculating distances between the features of each location in the target image, and the k nearest feature representations from the training images.
[0053] In some embodiments, in step 108, the present disclosure may classify a location in the target image as anomalous, when the calculated distance exceeds a predetermined threshold.
[0054] In some embodiments, in step 110, the present disclosure provides for designating a segment of the target image as comprising anomalous locations (e.g., pixels), based, at least in part, on determining that each location (e.g., pixel) in the segment is anomalous.
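By way of illustration, the following Python sketch outlines how steps 104-110 might be implemented once per-location features have been extracted. The function names, array shapes, and the use of NumPy are illustrative assumptions and not part of the claimed method; a practical implementation would typically use an approximate nearest-neighbor index for large training sets.

```python
import numpy as np

def score_locations(train_feats: np.ndarray, target_feats: np.ndarray,
                    k: int = 2) -> np.ndarray:
    """Steps 104-106: mean distance from each target-image location to the
    k nearest feature representations gathered from the training images."""
    # train_feats: (N, D) features from locations across the training images
    # target_feats: (P, D) one feature per location (e.g., pixel) of the target
    dists = np.linalg.norm(target_feats[:, None, :] - train_feats[None, :, :],
                           axis=-1)
    return np.sort(dists, axis=1)[:, :k].mean(axis=1)

def anomalous_mask(scores: np.ndarray, threshold: float) -> np.ndarray:
    """Steps 108-110: a location is anomalous when its score exceeds the
    threshold; the resulting boolean mask delineates the anomalous segment."""
    return scores > threshold
```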
[0055] In some embodiments, the present disclosure provides for applying a clustering algorithm to the deep feature representations, to obtain clusters of the feature representations. In some embodiments, the distance calculation then comprises calculating distances between the features of each location in the target image and the k nearest means of the clusters.
[0056] In some embodiments, the deep feature extraction is performed by applying a trained machine learning model to the training images and the target image. In some embodiments, the machine learning model is pre-trained on a provided dataset of images, e.g., a database of images. In some embodiments, the trained machine learning model may undergo additional training using the training images. In some embodiments, the extracted deep features encode high spatial resolution and semantic context.
[0057] In some embodiments, the trained machine learning model comprises a deep-learning neural network architecture comprising a plurality of layers, wherein the extracting comprises concatenating features from two or more layers of the plurality of layers. In some embodiments, the two or more layers include the uppermost M layers of the plurality of layers.
[0058] In some embodiments, the extracting comprises extracting the feature representations separately from each of two or more layers of the machine learning model, wherein the calculating of the distances comprises calculating a distance separately with respect to the feature representations extracted from each of the two or more layers, and wherein the determining is based on a summation of all of the distance calculations. [0059] In some embodiments, the trained machine learning model comprises a self-attention architecture comprising vision transformers.
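As a minimal sketch of the per-layer variant described in paragraph [0058], the snippet below extracts features separately from two blocks of a pretrained ResNet, computes a kNN distance per layer, and sums the per-layer distances. The choice of ResNet-18, of the layer3/layer4 blocks, and of average pooling is an illustrative assumption, not mandated by the disclosure.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Register hooks that capture intermediate block outputs during a forward pass.
resnet = models.resnet18(weights="IMAGENET1K_V1").eval()
activations = {}
for name in ("layer3", "layer4"):
    getattr(resnet, name).register_forward_hook(
        lambda mod, inp, out, name=name: activations.update({name: out})
    )

@torch.no_grad()
def per_layer_score(x: torch.Tensor, train_feats: dict, k: int = 2) -> float:
    """Sum over layers of the mean distance to the k nearest training
    embeddings, one distance calculation per feature layer."""
    resnet(x)                                  # populates `activations` via hooks
    total = 0.0
    for name, bank in train_feats.items():     # bank: (N, C_layer) per layer
        f = F.adaptive_avg_pool2d(activations[name], 1).flatten(1)  # (1, C_layer)
        d = torch.cdist(f, bank)
        total += d.topk(k, largest=False).values.mean().item()
    return total
```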
[0060] In some embodiments, the distance calculation comprises selecting, from the training images, a specified number n of nearest images to the target image, and calculating a distance between the features of each location in the target image and the feature representations from all of the image locations in the n nearest images.
WHOLE IMAGE ANOMALY DETECTION
Semi-Supervised Anomaly Detection
[0061] In some embodiments, the present disclosure provides for an anomaly detection process which learns general features (using any available level of supervision) on related datasets, and then uses the learned features to apply nearest-neighbors anomaly detection methods (e.g. kNN, k-means). In some embodiments, a pretrained feature extraction process may provide for faster deployment times than self-supervised methods. In some embodiments, the present disclosure employs one or more feature extraction methods, e.g., ResNet extractor (He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.) pre-trained on a provided image dataset (e.g., the Imagenet dataset, http://www.image-net.org/).
[0062] In some embodiments, a machine learning model of the present disclosure provides for a training set comprising images (e.g., Imagenet), denoted Xtrain = {x1, x2, ..., xN}. In some embodiments, all the images in the training set may be assumed to be within a normal distribution. The present model then uses a feature extractor F, e.g., a provided pre-trained feature extractor, to extract features from the entire training set:

fi = F(xi)   (1)
[0063] In some embodiments, a feature extractor such as a ResNet feature extractor may be used, which may be pre-trained on the provided training dataset. At first sight, it might appear that this supervision is a strong requirement; however, such feature extractors are widely available. It is shown experimentally below that the normal or anomalous images do not need to be particularly closely related to the Imagenet dataset.
[0064] In some embodiments, the feature extraction stage results in a set of embeddings of the images in the training dataset, denoted Ftrain = {f1, f2, ..., fN}.
[0065] In some embodiments, a target data sample y may similarly undergo a feature extraction stage, denoted fy = F(y). In some embodiments, the present disclosure may then provide for calculating a K nearest-neighbors (kNN) distance and use it as an anomaly score:

d(y) = (1/k) Σ_{f ∈ Nk(fy)} ||f − fy||²   (2)
where Nk(fy) denotes the k nearest embeddings to fy in the training set Ftrain. In some embodiments, the present model may use the Euclidean distance, which often achieves strong results on features extracted by deep networks; however, other distance measures may be used in a similar way. By verifying whether the distance d(y) is larger than a specified threshold, target data instance y may be designated as normal or anomalous.
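A minimal sketch of this scoring step is given below, assuming image embeddings have already been extracted (e.g., as in Eq. 1); the function names and the choice of PyTorch are assumptions made for illustration.

```python
import torch

@torch.no_grad()
def knn_score(f_train: torch.Tensor, f_y: torch.Tensor, k: int = 2) -> float:
    """Eq. 2: mean Euclidean distance from the target embedding f_y to its
    k nearest embeddings N_k(f_y) in the training set F_train (N, D)."""
    d = torch.cdist(f_y.unsqueeze(0), f_train)            # (1, N) distances
    return d.topk(k, largest=False).values.mean().item()

def is_anomalous(f_train: torch.Tensor, f_y: torch.Tensor,
                 threshold: float, k: int = 2) -> bool:
    """Designate the target as anomalous when d(y) exceeds the threshold."""
    return knn_score(f_train, f_y, k) > threshold
```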
Unsupervised Anomaly Detection
[0066] In some embodiments, the present disclosure provides for an unsupervised approach, wherein the training dataset may not be assumed to consist of only normal data samples. In some embodiments, it is assumed that a small proportion of input images in the training dataset are anomalous.
[0067] In some embodiments, the present disclosure provides for a data cleaning stage which removes at least some of the anomalous training images. Accordingly, after performing a feature extraction stage as further explained above, the kNN distance between each input image and the rest of the input images is calculated; based on the assumption that anomalous images lie in low-density regions, a fraction of the images with the largest kNN distances may be removed, wherein this fraction is selected such that it is larger than the estimated proportion of anomalous input images in the training dataset. As will be further explained below, because the present model requires only a small number of training data instances, the percentage of removed images may be large enough to ensure that the kept images are likely to be normal (e.g., the cleaning process may remove 50% of training images). After removal of the suspected anomalous input images, the remaining images may be assumed to have a very high proportion of normal images.
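The cleaning stage might be sketched as follows, assuming per-image embeddings are available; the 50% removal fraction follows the example above, while the helper name and tensor layout are illustrative assumptions.

```python
import torch

@torch.no_grad()
def clean_training_set(feats: torch.Tensor, remove_frac: float = 0.5,
                       k: int = 2) -> torch.Tensor:
    """Drop the remove_frac of training images with the largest kNN distance
    to the rest of the training set, assumed to be the likely anomalies."""
    d = torch.cdist(feats, feats)                  # (N, N) pairwise distances
    d.fill_diagonal_(float("inf"))                 # exclude self-distance
    scores = d.topk(k, largest=False, dim=1).values.mean(dim=1)
    keep = scores.argsort()[: int(len(feats) * (1 - remove_frac))]
    return feats[keep]                             # embeddings of kept images
```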
[0068] The remainder of the process is identical to the semi-supervised approach described above, wherein the feature extraction stage results in a set of embeddings of the remaining images in the training dataset, denoted Ftrain = {f1, f2, ..., fN}. In some embodiments, a target data sample y may similarly undergo a feature extraction stage, denoted fy = F(y). In some embodiments, the present disclosure may then provide for calculating a kNN distance and use it as an anomaly score to determine whether a target data instance y may be designated as normal or anomalous.
Group Image Anomaly Detection
[0069] Group anomaly detection tackles the setting where the input sample consists of a set of images. The particular combination is important, but not the order. It is possible that each image in the set will individually be normal but the set as a whole will be anomalous. As an example, assume a training set comprising a plurality of groups consisting of M normal images, each randomly sampled from multiple classes. A trained image-level anomaly detection model will be able to detect anomalous groups containing individual anomalous images, e.g., images taken from classes not seen in training. However, an anomalous group containing multiple images from a seen class, but no images from any other class, will still be classified as normal, because all images in the group are individually normal. Known autoencoder-based group anomaly detection models typically suffer from multiple drawbacks, e.g., high sample complexity, sensitivity to reconstruction metrics, and potential lack of sensitivity to the groups. Accordingly, in some embodiments, the present disclosure provides for a kNN-based approach, which embeds the set by orderless-pooling (e.g., averaging) over all the features of the images in each group. In some embodiments, the disclosed method comprises:
[0070] Feature extraction from all images in the group g: fg,i = F(xg,i), for each image xg,i in the group; and

[0071] orderless pooling of the features across the group: fg = (1/M) Σi fg,i.
[0072] The remainder of the process is similar to the semi-supervised and unsupervised approaches described above, wherein the feature extraction stage results in a set of pooled group features for the training dataset. In some embodiments, a target group may similarly undergo a feature extraction stage to extract pooled group-level features. In some embodiments, the present disclosure may then provide for calculating a kNN distance and use it as an anomaly score to determine whether a target group instance may be designated as normal or anomalous.
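A brief sketch of this group-level variant, under the same assumptions as the earlier snippets (PyTorch tensors of pre-extracted features; illustrative function names):

```python
import torch

@torch.no_grad()
def group_embedding(image_feats: torch.Tensor) -> torch.Tensor:
    """Orderless (mean) pooling of per-image features across one group;
    the result is invariant to the order of the images in the group."""
    return image_feats.mean(dim=0)                 # (M, D) -> (D,)

@torch.no_grad()
def group_score(train_groups: torch.Tensor, target_group: torch.Tensor,
                k: int = 2) -> float:
    """kNN anomaly score of a pooled target-group embedding against the
    pooled embeddings of the training groups (G, D)."""
    d = torch.cdist(target_group.unsqueeze(0), train_groups)
    return d.topk(k, largest=False).values.mean().item()
```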
Experimental Results
[0073] The present inventors conducted experiments to determine the performance of the present method.
Unimodal Anomaly Detection
[0074] The most common setting for evaluating anomaly detection methods is unimodal. In this setting, a classification dataset is adapted by designating one class as normal, while designating the other classes as anomalies. The normal training set is used to train a model of the present disclosure, wherein all the test data are used to evaluate the inference performance of the model, reported as ROC area under the curve (ROCAUC).
[0075] The experiments were conducted against state-of-the-art methods, including Deep-SVDD (Ruff, L., et al. Deep one-class classification. In ICML, 2018), which combines OCSVM with deep feature learning; Geometric (Golan, I. and El-Yaniv, R. Deep anomaly detection using geometric transformations. In NeurIPS, 2018); GOAD (Bergman, L. and Hoshen, Y. Classification-based anomaly detection for general data. In ICLR, 2020); and Multi-Head RotNet (MHRot) (Hendrycks, D., et al. Using self-supervised learning can improve model robustness and uncertainty. In NeurIPS, 2019).
[0076] The CIFAR10 dataset used in the experiments is a common dataset for evaluating unimodal anomaly detection. CIFAR10 contains 32 × 32 color images from 10 object classes. Each class has 5000 training images and 1000 test images. The results are presented in Table 1 below. As can be seen, the present model significantly outperforms all other methods.

Table 1: Anomaly Detection Accuracy on CIFAR10 (ROCAUC %)
[0077] Note that the performance of the present model is deterministic for a given training and test set (e.g., no variation between runs). It may be observed that OC-SVM and Deep-SVDD are the weakest performers. This is because both the raw pixels as well as the features learned by Deep-SVDD are not discriminative enough for the distance to the center of the normal distribution to be successful. Geometric and later approaches (GOAD and MHRot) perform better, but do not exceed 90% ROCAUC. The performance evaluations were made without finetuning between the dataset and simulated anomalies (which improves performance on all methods).
[0078] Geometric, GOAD and the present method were further evaluated on the Fashion MNIST dataset, consisting of 6000 training images per class and a test set of 1000 images per class. A comparison of the present method against OCSVM, Deep SVDD, Geometric and GOAD is shown in Table 2 below. As can be seen, the present method outperforms all other methods, despite the data being visually quite different from the Imagenet dataset from which the features were extracted.
[0079] Geometric, GOAD and the present method were further evaluated on the CIFAR100 dataset. CIFAR100 has 100 fine-grained classes with 500 training images each, or 20 coarse-grained classes with 2500 training images each. In the present experiments, the coarse-grained version is used. The experiment protocol is the same as for CIFAR10. A comparison of the present method against OCSVM, Deep SVDD, Geometric and GOAD is shown in Table 2 below. As can be seen, the results are consistent with those obtained for CIFAR10.
Table 2: Anomaly Detection Accuracy on Fashion MNIST and CIFAR100 (ROCAUC %)
Comparisons Against MHRot:
[0080] A further comparison between the present model and MHRot was conducted on several commonly-used datasets. This comparison gives further evidence for the generality of the present model, on datasets where RotNet-based methods are not restricted by low resolution, or by image invariance to rotations. A ROCAUC score was computed with respect to each of the first 20 categories in each dataset, by alphabetical order, designated as normal for training. The standard training and test splits are used. All test images from all dataset categories are used for inference, with the respective category designated as normal and all the rest as anomalies. For brevity of presentation, the average ROCAUC score of the tested classes is reported for the following datasets:
• 102 Category Flowers: This dataset consists of 102 categories of flowers, with 10 training images per category. The test set consists of between 30-200 images per class.

• Caltech-UCSD Birds 200: This dataset consists of 200 categories of bird species. Classes typically contain between 55-60 images, split evenly between training and test.
• Cats Vs Dogs: This dataset consists of 2 categories - dogs and cats, with 10,000 training images each. The test set consists of 2,500 images for each class. Each image contains either a dog or a cat in various scenes and taken from different angles. The data was extracted from the ASIRRA dataset; each class was split into the first 10,000 images as training and the last 2,500 as test.
[0081] The results are shown in Table 3 below. As can be seen, the present model significantly outperforms MHRot on all datasets.
Table 3: MHRot vs. the present model on Flowers, Birds, CatsVsDogs (Average Class ROCAUC %)
Effect of Network Depth:
[0082] Deeper networks trained on large datasets such as the Imagenet dataset learn features that generalize better than those of shallow networks. Accordingly, the present inventors investigated the performance of the present model when using features from networks of different depths. Specifically, ROCAUC was plotted for ResNet-based neural networks with 50, 101, and 152 layers. The present model works well with all networks, but performance improves with greater network depth.
[0083] Figs. 2A and 2B illustrate the results of various network depths (i.e., number of ResNet layers) with respect to the CIFAR10 and FashionMNIST datasets.

Effect of the Number of Neighbors:
[0084] The only free parameter in the present model is the number of neighbors used in kNN. Fig. 3A shows a comparison of average CIFAR10 and FashionMNIST ROCAUC for different numbers of nearest neighbors. The differences are not particularly large, but 2 neighbors usually provide the best results.
Effect of Data Invariance:
[0085] Methods that rely on predicting geometric transformations typically use a data prior to the effect that images have a predetermined orientation (for rotation prediction) and centering (for translation prediction). This assumption is often unwarranted in the case of actual real-life images. Two interesting cases not satisfying this assumption are aerial and microscope images, as they do not have a preferred orientation, making rotation prediction ineffective. Accordingly, the present inventors have conducted experiments with respect to the following datasets:
• DIOR: DIOR is an aerial image dataset. The images are registered but do not have a preferred orientation. The dataset consists of 19 object categories that have more than 50 images each, with resolution above 120 × 120 (the median number of images per class is 578). Bounding boxes are provided with the data, such that each object may be extracted with a bounding box of at least 120 pixels in each axis. The bounding box is then resized to 256 × 256 pixels. The same experimental protocol as in the earlier datasets is then followed. The results are summarized in Table 4 below. As can be seen, the present model significantly outperforms MHRot. This is due both to the generally stronger performance of the feature extractor, as well as the lack of the rotational prior that is strongly used by RotNet-type methods. Note that the images are centered, a prior used by the MHRot translation heads.
• WBC: To further investigate the performance on difficult real-world data, the present inventors performed an experiment on the WBC Image Dataset, which consists of high-resolution microscope images of different categories of white blood cells. The data do not have a preferred orientation. Additionally, the dataset is very small, with only a few tens of images per class. Dataset 1 was used, which was obtained from Jiangxi Telecom Science Corporation, China, and was split into the 4 different classes that contain more than 20 images each. The first 80% of images in each class were used for the training set, and the last 20% were used as the test set. The results are presented in Table 4 below. As expected, the present model outperforms MHRot by a significant margin, showing its greater applicability to real-world data.
Table 4: Anomaly Detection Accuracy on DIOR and WBC (ROCAUC %)
Multimodal Anomaly Detection
[0086] It has been argued that unimodal anomaly detection is less realistic, as in practice normal distributions contain multiple classes. While it may be assumed that both settings occur in practice, the present inventors further present results on the scenario where all classes are designated as normal apart from a single class that is taken as anomalous (e.g., all CIFAR10 classes are normal apart from "Cat"). Note that class labels of the different classes that compose the normal class are not provided; rather, they are considered to be a single multimodal class. This setup is believed to simulate the realistic case of having a complex normal class consisting of many different unlabeled types of data.
[0087] Accordingly, the present inventors compared the present model against Geometric on CIFAR10 and CIFAR100 in this setting. The average ROCAUC across all the classes is detailed in Table 5. The present model achieves significantly stronger performance than Geometric. It is believed that this occurs because Geometric requires the network not to generalize on the anomalous data. However, once the training data is sufficiently varied, the network can generalize even on unseen classes, making the method less effective. This is particularly evident on CIFAR100.

Table 5: Anomaly Detection Accuracy on Multimodal Normal Image Distributions (ROCAUC %)
Generalization from Small Training Datasets
[0088] One of the advantages of the present model is its ability to generalize from very small datasets. This is not possible with self-supervised learning-based methods, which do not learn features general enough to generalize to normal test images. A comparison between the present model and Geometric on CIFAR10 is presented in Fig. 3B, wherein the number of training images is plotted against the average ROCAUC. As can be seen, the present model can detect anomalies very accurately even from as few as 10 images, while Geometric deteriorates quickly with a decreasing number of training images. A similar plot is presented for FashionMNIST in Fig. 3C. Geometric is not shown, as it suffered from numerical issues for small numbers of images. The present model again achieved strong performance from very few images.
Unsupervised Anomaly Detection
[0089] There are settings where the training set does not consist of purely normal images, but rather a mixture of unlabeled normal and anomalous images. In most cases, it may be assumed that anomalous images comprise only a small fraction of the number of the normal images. The performance of the present model as a function of the percentage of anomalies in the training set is presented in Fig. 4. The performance is somewhat degraded as the percentage of training set impurities grows. To improve the performance, a cleaning stage may be performed, which removes approx. 50% of the training set images that have the largest kNN distances inside the training set. The cleaning procedure is clearly shown to significantly mitigate the performance degradation as the percentage of impurities grows.

Group Anomaly Detection
[0090] To compare to existing baselines, the present method was tested on a group anomaly detection task detailed in D'Oro, P., et al. Group anomaly detection via graph autoencoders. 2019. The data consists of normal sets containing 10-50 MNIST images of the same digit, and anomalous sets containing 10-50 images of different digits. By simply computing the trace-diagonal of the covariance matrix of the per-image ResNet features in each set of images, a 0.92 ROCAUC was achieved.
[0091] As a harder task for group anomaly detection in unordered image sets, the normal class was designated as sets consisting of exactly one image from each of the M CIFAR10 classes (specifically the classes with IDs 0..M-1), while each anomalous set consisted of M images selected randomly among the same classes (some classes had more than one image and some had zero). Fig. 5 shows the average ROCAUC for anomaly detection using the present model on the concatenated features of each individual image in the set. As expected, this baseline works well for small values of M, where there is a sufficient number of examples of all possible permutations of the class ordering. However, as M grows larger (M > 3), its performance decreases, as the number of permutations grows exponentially. This method, with 1000 image sets for training, is also compared to nearest neighbors of the orderless max-pooled and average-pooled features, wherein the results show that mean-pooling significantly outperforms the baseline for large values of M. While the performance of the concatenated features may be improved by augmenting the dataset with all possible orderings of the training sets, the number of orderings grows exponentially for non-trivial values of M, making this an ineffective approach.
Implementation
[0092] In all experiments of the present model reported hereinabove, the input images are resized to 256 × 256, a center crop of size 224 × 224 is taken, and a ResNet consisting of 101 layers, pre-trained on the Imagenet dataset, is used to extract the features after the global pooling layer. This feature is the image embedding.
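A sketch of this feature-extraction pipeline is shown below. The normalization statistics are the usual ImageNet values, which the text does not specify and are therefore an assumption; the torchvision weight identifier is likewise illustrative.

```python
import torch
import torchvision
from torchvision import transforms

# ImageNet-pretrained ResNet-101 with the classification head removed, so the
# forward pass returns the 2048-d embedding after the global pooling layer.
backbone = torchvision.models.resnet101(weights="IMAGENET1K_V1").eval()
backbone.fc = torch.nn.Identity()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),          # resize to 256 x 256
    transforms.CenterCrop(224),             # center crop of size 224 x 224
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def image_embedding(pil_image) -> torch.Tensor:
    """Returns the global-pooled ResNet-101 embedding of one image."""
    return backbone(preprocess(pil_image).unsqueeze(0)).squeeze(0)  # (2048,)
```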
Analysis — kNN vs. One-Class Classification

[0093] In the experiments reported hereinabove, it was found that kNN achieved very strong performance for anomaly detection tasks. Figs. 6A-6B show t-SNE plots of the test set features of CIFAR10. The normal class is plotted in light color, while the anomalous data is marked in dark color. The t-SNE plots of the features learned by SVDD are shown on the left, Geometric in the center, and the Imagenet dataset pre-trained feature extractor on the right, where the normal class is Airplane (Fig. 6A) and Automobile (Fig. 6B). As can be seen, the Imagenet-pretrained features clearly separate the normal class (light) and anomalies (dark). Geometric learns poor features of Airplane and reasonable features on Automobile. Deep-SVDD does not learn features that allow clean separation. It is clear that the pre-trained features embed images from the same class into a fairly compact region. It is therefore expected that the density of normal training images is much higher around normal test images than around anomalous test images. This may explain the success of kNN methods.
[0094] kNN has linear complexity in the number of training data samples. Methods such as One-Class SVM or SVDD attempt to learn a single hypersphere, and use the distance to the center of the hypersphere as a measure of anomaly. In this case, the inference runtime is constant in the size of the training set, rather than linear as in the kNN case. The drawback is the typically lower performance. Another potential way of decreasing the inference time is using K-means clustering of the training features. This speeds up inference by a ratio of N/K. It may therefore be suggested to speed up the present model by clustering the training features into K clusters and then performing kNN on the clusters rather than the original features. Table 6 below presents a comparison of the performance of the present model and its K-means approximations with different numbers of means (the sum of the distances to the 2 nearest neighbors is used). As can be seen, for a small loss in accuracy, the retrieval speed can be reduced significantly.
Table 6: Accuracy on CIFAR10 using K-means approximations and full kNN (ROCAUC %)
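The K-means approximation might be sketched as follows, using scikit-learn's KMeans; the cluster count and the sum over the 2 nearest means follow the text above, while the function name and array layout are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_knn_scores(train_feats: np.ndarray, target_feats: np.ndarray,
                      n_clusters: int = 1000, k: int = 2) -> np.ndarray:
    """Approximate kNN scoring against K cluster means instead of all N
    training features, cutting inference cost by roughly a factor of N/K."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(train_feats)
    d = np.linalg.norm(target_feats[:, None, :] - km.cluster_centers_[None, :, :],
                       axis=-1)
    return np.sort(d, axis=1)[:, :k].sum(axis=1)  # sum of 2 nearest means
```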
Use of Pre-Trained Features
[0095] In some embodiments, the present disclosure provides for an anomaly detection method that adapts pretrained features and mitigates or avoids catastrophic collapse. Experimental results show that the present disclosure significantly outperforms current methods while addressing their limitations.
[0096] Anomaly detection methods require high-quality features. One way of obtaining strong features is to adapt pre-trained features to anomaly detection on the target distribution. Unfortunately, simple adaptation methods often result in feature deterioration and degraded performance. DeepSVDD (see Lukas Ruff, et al. Deep one-class classification. In ICML, 2018) combats collapse by removing biases from architectures, but this limits the adaptation performance gain. Accordingly, in some embodiments, the present disclosure provides for two methods for combating feature collapse:
A variant of early stopping that dynamically learns the stopping iteration, and elastic regularization inspired by continual learning.
[0097] As noted earlier, in the computational anomaly detection task, the learner observes a set of training examples. The learner is then tasked to classify novel test samples as normal or anomalous. There are multiple anomaly detection settings investigated in the literature, corresponding to different training conditions. One such setting assumes that only normal images are used for training. Another setting provides data samples simulating anomalies.
[0098] In recent years, deep learning methods have been introduced for anomaly detection, typically extending classical methods with deep neural networks. Different auxiliary tasks (e.g., autoencoders or rotation classification) are used to learn representations of the data, while a great variety of anomaly criteria are then used to determine if a given sample is normal or anomalous. An important issue for current methods is the reliance on limited normal training data for representation learning, which limits the quality of learned representations. One solution is to pre-train features on a large external dataset, and use the features for anomaly detection. However, as there is likely to be some mismatch between the external dataset and the task of anomaly detection on the target distribution, feature adaptation is an attractive option. Unfortunately, feature adaptation for anomaly detection often suffers from catastrophic collapse - a form of deterioration of the pre-trained features, where all the samples, including anomalous ones, are mapped to the same point. DeepSVDD was proposed to overcome collapse by removing biases from the model architecture, but this restricts network expressivity and limits the pre-trained models that can be borrowed off-the-shelf. It was also proposed to jointly train anomaly detection with the original task, which has several limitations and achieves only limited adaptation success.
[0099] Accordingly, the present disclosure provides for two techniques to overcome catastrophic collapse:
An adaptive early stopping method that selects the stopping iteration per-sample, using a novel generalization criterion, and an elastic regularization, motivated by continual learning, that postpones the collapse.
[00100] The present disclosure also provides an extensive evaluation of Imagenet-pretrained features on one-class anomaly detection. Thorough experiments demonstrate that the present method outperforms the state-of-the-art by a wide margin.
Feature Adaptation for Anomaly Detection
[00101] The present general framework examines several adaptation-based anomaly detection methods. Assume a set Dtrain of normal training samples: {x1, x2, ..., xN}. The framework consists of three steps:
Feature extractor pretraining: A pre-trained feature extractor ψ0 is typically learned using self-supervised learning (auto-encoding, rotation or jigsaw prediction). The loss function of the auxiliary task may be denoted Lpretrain. The auxiliary task can be learned either on the training set Dtrain or on an external dataset Dpretrain (such as the Imagenet dataset).
Feature adaptation: Features trained on auxiliary tasks or datasets may require adaptation before being used for anomaly scoring on the target data. This can be seen as a finetuning stage of the pre-trained features on the target training data. The feature extractor after adaptation may be denoted ψ.

Anomaly scoring: Having adapted the features for anomaly detection, the features ψ(x1), ψ(x2), ..., ψ(xN) of the training set samples are extracted. The method then proceeds to learn a scoring function, which describes how anomalous a sample is. Typically, the scoring function seeks to measure the density of normal data around the test sample ψ(x) (either by direct estimation or via some auxiliary task) and assign a high anomaly score to low-density regions.
[00102] DeepSVDD was proposed, which suggests to first train an autoencoder E on the normal-only train images. The encoder is then used as the initial feature extractor ψ0(x) = E(x). As the features of the encoder are not specifically adapted to anomaly detection, DeepSVDD adapts ψ on the training data. The adaptation takes place by minimizing the compactness loss:

Lcompact(ψ) = Σ_{x ∈ Dtrain} ||ψ(x) − c||²   (3)
where c is a constant vector, typically the average of ψ0(x) on the training set. However, the trivial solution ψ = c poses a concern, and therefore architectural restrictions may be implemented to mitigate it, most importantly removing the biases from all layers. However, the effect of the adaptation of the features in DeepSVDD does not outperform simple feature whitening.
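A minimal sketch of compactness-loss adaptation (Eq. 3) is given below; the way c is computed from the pre-adaptation features follows the text, while the tensor shapes and batch averaging are assumptions.

```python
import torch

def compactness_loss(feats: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """Eq. 3: squared distance of the adapted features psi(x) from the fixed
    center c, averaged over the batch."""
    return ((feats - c) ** 2).sum(dim=1).mean()

# c is typically computed once from the *pre-adaptation* features and then
# held fixed while psi is finetuned:
#   c = psi0_train_features.mean(dim=0)
```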
[00103] Joint optimization (JO) was proposed, which suggests using a deep feature extractor trained for object classification on the ImageNet dataset. Due to fear of "learning a trivial solution due to the absence of a penalty for misclassification," the method does not adapt by finetuning on the compactness loss only. Instead, the task setting is relaxed, by assuming that a number (~50k) of labelled original ImageNet images, Dpretrain, are still available at adaptation time. It was proposed to train the features ψ under the compactness loss jointly with the original ImageNet classification linear layer W and its classification loss, here the cross-entropy (CE) loss with the true label y(x):

Ljoint(ψ, W) = Σ_{x ∈ Dtrain} ||ψ(x) − c||² + α Σ_{x ∈ Dpretrain} LCE(W · ψ(x), y(x))   (4)
where W is the final linear classification layer and α is a hyper-parameter weighting the two losses. It is noted that the method has two main weaknesses: (i) it requires retaining a significant number of the original training images which can be storage intensive, and (ii) jointly training the two tasks may reduce the anomaly detection task accuracy, which is the only task of interest in this context.
[00104] Accordingly, in some embodiments, the present disclosure provides for feature adaptation for anomaly detection, which adapts general pre-trained features to anomaly detection on the target distribution. In some embodiments, the present method is agnostic to the specific pretrained feature extractor. Based on experiments conducted by the present inventors, it was found that ImageNet-pretrained features achieve better results.
[00105] In some embodiments, the present method uses the compactness loss (Eq. 3) to adapt the general pre-trained features to the task of anomaly detection on the target distribution. However, instead of constraining the architecture or introducing external data into the adaptation procedure, the present method tackles catastrophic collapse directly. The main issue is that the optimal solution of the compactness loss can result in "collapse," where all possible input values are mapped to the same point (ψ(x) = c, ∀x). Learning such features will not be useful for anomaly detection, as both normal and anomalous images will be mapped to the same output, preventing separability. The issue is broader than the trivial "collapsed" solution after full convergence; it is the more general issue of feature deterioration, where the original good properties of the pretrained features are lost. Even a non-trivial solution might not require the full discriminative ability of the original features, which is nonetheless important for anomaly detection.
[00106] To avoid this collapse, the present method provides for two options: (i) finetuning the pretrained extractor with the compactness loss (Eq. 3) and using sample-wise early stopping, and (ii) when collapse happens prematurely, before any significant adaptation happens, mitigating it using a continual-learning-inspired adaptive regularization.
[00107] Fig. 7 is an illustration of the present feature adaptation procedure, wherein the pre-trained feature extractor ψ0 is adapted to make the normal features more compact, resulting in feature extractor ψ. After adaptation, anomalous test images lie in a less dense region of the feature space.

Sample-wise Early Stopping (SES):
[00108] Early stopping is one of the simplest methods used to regularize neural networks. While stopping the training process after a constant number of iterations helps to control the collapse of the original features in most examined datasets, in other cases collapse occurs earlier in the training process; thus, the best number of early stopping iterations may vary between datasets. Accordingly, in some embodiments, the present disclosure provides for "sample-wise early stopping" (SES). The intuition for the method can be obtained from Figs. 8A-8B. As can be seen, anomaly detection accuracy is correlated to the ratio between the average compactness loss of test set anomalies and the average compactness loss of training set normal images. Accordingly, the present disclosure provides for saving checkpoints of the network at fixed intervals during the training process, e.g., corresponding to different early stopping iterations (ψ1, ψ2, ..., ψT). For each network ψt, the average loss on the training set images, st, is calculated. During inference, a target image x is scored using each model ψt(x) = ft, and the score is normalized by the relevant average score st. The maximal normalized score is set as the anomaly score of this sample, as this roughly estimates the model that achieves the best separation between normal and anomalous samples. Note that each sample is scored using only its features ft and the normal train set average score st, without seeing the labels of any other test set samples.
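The SES scoring rule might be sketched as follows; `checkpoints` and `train_means` stand for the saved extractors ψt and their average training losses st, and all names are illustrative assumptions.

```python
import torch

@torch.no_grad()
def ses_score(target_image: torch.Tensor, checkpoints: list,
              train_means: list, c: torch.Tensor) -> float:
    """Score the target under every saved checkpoint psi_t, normalize by that
    checkpoint's average training-set compactness loss s_t, and keep the
    maximal normalized score as the anomaly score."""
    scores = []
    for psi_t, s_t in zip(checkpoints, train_means):
        f_t = psi_t(target_image.unsqueeze(0)).squeeze(0)
        scores.append(((f_t - c) ** 2).sum().item() / s_t)
    return max(scores)
```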
Continual Learning (EWC)
[00109] In some embodiments, the present disclosure provides for a novel solution for overcoming premature feature collapse that draws inspiration from the field of continual learning. The task of continual learning tackles learning new tasks without forgetting the previously learned ones. It may be noted, however, that the present task is not identical to standard continual learning, as (i) it deals with the one-class classification setting, whereas continual learning typically deals with multi-class classification, and (ii) it aims to avoid forgetting the expressivity of the features, but does not particularly care if the actual classification performance on the old task is degraded. A simple solution for preventing feature collapse is regularization of the change in value of the weights of the feature extractor ψ from those of the pre-trained extractor ψ0. However, this solution is lacking, as the features are more sensitive to some weights than others, and this can be "exploited" by the adaptation method.
[00110] Accordingly, in some embodiments, the present disclosure provides for using elastic weight consolidation (EWC). Using a number of mini-batches (e.g., 100 batches) of pretraining on the auxiliary task, the diagonal of the Fisher information matrix F is computed for all weight parameters of the network. Note that this only needs to happen once, at the end of the pretraining stage, and does not need to be repeated. The value of the Fisher matrix for diagonal element θi is given by:

Fθi = E_{x ∈ Dpretrain} [ (∂/∂θi Lpretrain(x; θ))² ]   (5)
[00111] The diagonal of the Fisher information matrix is used to weight the Euclidean distance of the change between each network parameter θi ∈ ψ0 and its corresponding adapted parameter θi* ∈ ψ. This weighted distance can be interpreted as a measure of the curvature of the loss landscape as a function of the parameters - larger values imply high curvature, inelastic weights. This regularization is used in combination with the compactness loss; the losses are weighted by the factor λ, which is a hyperparameter of the method (λ = 10⁴ is always used):

L(ψ) = Lcompact(ψ) + (λ/2) Σi Fθi (θi − θi*)²   (6)
[00112] Network ψ is initialized with the parameters of the pretrained extractor ψ0 and trained with SGD.
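Combining Eqs. 3 and 6, the adaptation objective might be sketched as below; `theta0` and `fisher` are assumed to be dictionaries keyed by parameter name, holding the pretrained weights and the Fisher diagonal, and all helper names are illustrative.

```python
import torch

def ewc_regularized_loss(feats: torch.Tensor, c: torch.Tensor,
                         model: torch.nn.Module, theta0: dict,
                         fisher: dict, lam: float = 1e4) -> torch.Tensor:
    """Eq. 6: compactness loss plus the Fisher-weighted squared change of
    every parameter relative to the pretrained extractor psi_0."""
    compact = ((feats - c) ** 2).sum(dim=1).mean()
    penalty = sum((fisher[n] * (p - theta0[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return compact + (lam / 2) * penalty
```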
Anomaly Scoring
[00113] Given strong features and appropriate adaptation, the present transformed data typically follows the standard anomaly detection assumption, i.e., high-density in regions of normal data. As in classical anomaly detection, scoring can be done by density estimation. The present method performs better with strong non-parametric anomaly scoring methods. Several anomaly scoring methods can be evaluated: (i) Euclidean Distance to the mean of the training features, (ii) the K nearest-neighbor distance between the target (test set) features and the features of the training set images, and/or (iii) computing the K-means of the training set features, and computing the distance between the target sample features to the nearest mean.
Outlier Exposure
[00114] An extension of the typical image anomaly detection task assumes the existence of an auxiliary dataset of images DOE, which are more similar to the anomalies than normal data. In case such information is available, a linear classification layer w may be trained together with the features ψ under a logistic regression loss (Eq. 7). As before, ψ is initialized with the weights from ψ0. After training ψ and w, w · ψ(x) may be used as the anomaly score, e.g., using a logistic loss that treats the normal training images and the outlier-exposure images as the two classes:

L(w, ψ) = Σ_{x ∈ Dtrain} log(1 + e^(w·ψ(x))) + Σ_{x ∈ DOE} log(1 + e^(−w·ψ(x)))   (7)
Experimental Results - EWC
[00115] The present inventors have compared the EWC variant of the present method to One-Class SVM (see Bernhard Scholkopf, et al. Support vector method for novelty detection. In NIPS, 2000), DeepSVDD, and Multi-Head RotNet. The present method is also compared to raw (un-adapted) pretrained features. To investigate performance in domains significantly different from the dataset used to pretrain the features, the present method was evaluated across a large range of datasets: standard datasets (CIFAR10/100, CatsVsDogs), a black-and-white dataset (Fashion MNIST), small fine-grained datasets (Birds200/Oxford Flowers), a medical dataset (WBC), very fine-grained anomalies (MVTec), and aerial images (DIOR). Table 7 below shows the results.
Table 7: Anomaly detection performance (Average ROC AUC %)
[00116] The main results show that: (i) pre-trained features achieve significantly better results than self-supervised features on all datasets; (ii) feature adaptation significantly improves the performance on larger datasets; and (iii) outlier exposure (OE) can further improve performance in the case where the given outliers are more similar to the anomalies than the normal data. OE achieves near-perfect performance on CIFAR10/100 but hurts performance for Fashion MNIST/CatsVsDogs, which are less similar to the 80M Tiny Images dataset.
Analysis and Further Evaluation
[00117] Tables 7 above and 8 below present a comparison between methods that use self-supervised and pre-trained feature representations. As can be seen, the autoencoder used by DeepSVDD is particularly poor. The results of the MHRotNet as a feature extractor are better, but still underperform the present methods. The performance of the raw deep ResNet features without adaptation significantly outperforms all methods, including on Fashion MNIST and DIOR, which have significant differences from the ImageNet dataset. It may therefore be concluded that ImageNet-pretrained features typically have significant advantages over self-supervised features. Table 8 shows that self-supervised methods do not perform well on small datasets, as such methods require large numbers of normal samples in order to learn strong features. On the other hand, ImageNet-pretrained features obtain very strong results.
Table 8: Pretrained feature performance on various small datasets (Average ROC AUC %)
[00118] The results in Table 7 on FMNIST, DIOR, WBC, and MVTec suggest that pretrained features generalize to anomaly detection on domains far from the pretraining dataset. The ImageNet-pretrained features were evaluated on datasets of various sizes, domains, resolutions and symmetries. On all those datasets, pretrained features outperformed other methods. These datasets include significantly different objects from those of ImageNet, but also fine-grained intra-object anomalies, and represent a spectrum of data types: aerial images, microscopy, industrial images. This shows that one of the main concerns of using pre-trained features, namely, generalizing to distant domains, is not an issue in practice.
[00119] Typically, anomaly detection methods employ different levels of supervision. Within the one-class classification task, one may use outlier exposure (OE) - an external dataset (e.g. the ImageNet dataset), pretrained features, or no external supervision at all. The most extensive supervision is used by OE, which requires a large external dataset at training time, and performs well only when such a dataset is from a similar domain to the anomalies. In cases where the dataset used for OE has significantly different properties, the network may not learn to distinguish between normal and anomalous data, as the normal and anomalous data may have more in common than the OE dataset.
[00120] Pretraining, like Outlier Exposure, is also achieved through an external labelled dataset, but differently from OE, the external dataset is only required once - at the pretraining stage - and is not used again. Additionally, the same features are applicable for very different image domains from that of the pretraining dataset. Self-supervised feature learning requires no external dataset at all, which can potentially be an advantage. While there might be image anomaly detection tasks where ImageNet-pretrained weights are not applicable, there was no evidence for such cases after examining a broad spectrum of domains and datasets. This indicates that the extra supervision of the ImageNet-pretrained weights comes at virtually no cost.
[00121] The present inventors did not find evidence that pretrained features improve the performance of RotNet-based AD methods. As can be seen in Table 9 below, pretrained features improve the auxiliary task performance on the normal data, but also on the anomalous samples. As such methods rely on a generalization gap between normal and anomalous samples, deep features actually reduce this gap, as a solution to the auxiliary task becomes feasible for both types of images.
Table 9: Comparison of average transformation prediction accuracy (%)
Figure imgf000034_0001
[00122] Feature adaptation aims to make the distribution of the normal samples more compact, with respect to the anomalous samples. The present approach of finetuning pretrained features for compactness under EWC regularization significantly improves the performance over "raw" pretrained features. While the distance from the normal train samples center, of both normal and anomalous test samples, is reduced, the average distance from the center of anomalous test samples is typically further than that of normal samples, in relative terms. This makes anomalies easier to detect by standard classifiers such as kNN.
[00123] While the present method-EWC may train more than 7.8k minibatches without catastrophic collapse on CIFAR10, the performance of training without regularization usually peaks higher but collapses earlier. Therefore, the constant early stopping epoch was set such that the network trains for 2.3k minibatches on all datasets for comparison. The present method-SES usually achieves an anomaly score not far from the unregularized early stopping peak performance, but is most important in cases where unregularized training fails completely.

[00124] Table 10 below compares the present method against:
Joint optimization (JO), co-training compactness with ImageNet classification which requires ImageNet data at training time. It can be seen that the present method-EWC always outperforms JO feature adaptation.
Early stopping (ImageNet pretraining + adaptation, with early stopping after constant iterations number), generally has higher performance than the present method-EWC, but has severe collapse issues on some classes.
Present method-SES is similar to early stopping, but the present method-SES does not collapse as badly on CatsVsDogs dataset. It is noted that weighting equally the changes in all parameters achieves similar results to early stopping.
Table 10: A comparison of different feature adaptation methods (Avg. ROC AUC %)
[00125] Fine-tuning all the layers is prone to feature collapse, even with continual learning (see Table 11 below). Finetuning Blocks 3 & 4, or 2, 3 & 4, results in similar performance. Finetuning only Block 4 results in a very similar performance to linear whitening of the features according to the train samples (94.6 with whitening vs. 94.8 with finetuning only the last block). A similar effect can be seen in the original DeepSVDD architecture. Accordingly, it is recommended to finetune Blocks 3 & 4.

Table 11: Performance of finetuning different ResNet blocks (CIFAR10 w. EWC, ROC AUC %)
Anomaly Scoring Functions
[00126] kNN achieves an improvement of around 2% on average with respect to the distance to the center. A naive implementation of kNN has linear runtime complexity in the number of training samples. K-means with a small number of clusters gives a ~1% decrease. It is noted that even for very large datasets, or many thousands of means, both kNN and K-means can run faster than real-time.
SUB-IMAGE ANOMALY DETECTION WITH DEEP PYRAMID CORRESPONDENCES
[00127] Nearest neighbor (kNN) methods utilizing deep pre-trained features exhibit very strong anomaly detection performance when applied to entire images, as described above. However, a potential limitation of kNN methods is the lack of a segmentation map describing where the anomaly lies inside the image.
[00128] Accordingly, in some embodiments, the present disclosure further provides for a novel anomaly segmentation approach based on alignment between the anomalous image and a constant number of the nearest normal images. The present method, termed Semantic Pyramid Anomaly Detection, uses correspondences based on a multi-resolution feature pyramid. The present method is shown to achieve state-of-the-art performance on unsupervised anomaly detection and localization while requiring virtually no training time.
[00129] A key human ability is to detect novel images that stand out in the succession of like images observed day-to-day, e.g., those images indicating opportunity or danger, that deviate from previous patterns. Such ability typically triggers particular vigilance on the part of the human agent. Due to the importance of this ability, allowing computers to detect anomalies is a key task for artificial intelligence.

[00130] As a motivational example, let us consider assembly-line fault detection. Assembly lines manufacture many instances of a particular product. Most products are normal and fault-free. However, on occasion, the manufactured products contain some faults, e.g., dents, wrong labels or part duplication. As reputable manufacturers strive to keep a consistent quality of products, prompt detection of the faulty products is very valuable. As mentioned earlier, humans are quite adept at anomaly detection; however, having a human operator oversee every product manufactured by an assembly line has several key limitations, e.g., costs associated with employing skilled operators, difficulty in obtaining and training skilled operators, limited human attention span, and difficulty in obtaining consistent results over time and across various operators.
[00131] Although computer visual anomaly detection is very valuable, it is also quite challenging. One challenge common to all anomaly detection methods is the unexpectedness of anomalies. Typically, in supervised classification, test classes come from a similar distribution to the training data. In most anomaly detection settings, the distribution of anomalies is not observed during training time. Different anomaly detection methods differ by the way the anomalies are observed at training time. For example, in some cases, at training time only normal data is observed. This is a practically useful setting, as obtaining normal data (e.g., products that contain no faults) is relatively easy. This setting is sometimes called semi-supervised or normal-only training setting. An easier scenario is fully-supervised, i.e., both labelled normal and anomalous examples are presented during training.
[00132] Another challenge particular to visual anomaly detection (rather than non-image anomaly detection methods) is the localization of anomalies, i.e., segmenting the parts of the image which the algorithm deems anomalous. This is very important for the explainability of the decision made by the algorithm, as well as for building trust between operators and novel AI systems. It is particularly important for anomaly detection, as the objective is to detect novel changes not seen before, with which humans might not be familiar. In this case, the algorithm may teach the human operator of the existence of new anomalies, or alternatively the human may decide that this anomaly is not of interest, thus not rejecting the product and resulting in cost savings.
[00133] Accordingly, in some embodiments, the present disclosure provides for a novel method for solving the task of sub-image anomaly detection and segmentation. The present method does not require an extended training stage; it is fast, robust, and achieves state-of-the-art performance. In some embodiments, the present method consists of several stages:
Image feature extraction using a pre-trained deep neural network (e.g., a ResNet pre-trained on the ImageNet dataset, http://www.image-net.org/); nearest neighbor retrieval of the nearest K normal images to a target data sample; finding dense pixel-level correspondence between the target data sample and the nearest neighbor normal images; and identification of target image regions that do not have near matches in the retrieved normal images as anomalous.
[00134] In some embodiments, the present disclosure computes sub-image feature representations for each image in a set of normal images and for a given target image. A sub-image feature representation may consist of a set of features, wherein each feature may give a description of the image around some image location. One example of a set of locations can be the center of each pixel.
[00135] In some embodiments, the present disclosure classifies a target location within the target image as normal or anomalous, given the similarity of its feature representation to that of other sub-image feature representations. In some embodiments, the present disclosure may use one or more suitable classifiers to perform this task, e.g., K-nearest neighbors (kNN), K-means, OCSVM, SVDD, a neural network, and the like.
[00136] In some embodiments, the classifier may search for the nearest features to the target feature within the sub-image feature representation of the normal images and/or within the sub-image feature representation of the target image. Locations with distances to the nearest features larger than a pre-specified threshold may be classified as anomalous. In some embodiments, such distance measures may include the Euclidean distance.
[00137] In some embodiments, features may be extracted by any suitable method, e.g., a deep neural network (pre-trained or otherwise); a hand-crafted pipeline (e.g., HOG, color histograms, image location); and/or using the raw data itself. In some embodiments, neural network activations extracted at multiple resolutions (feature pyramid) may be used. In some embodiments, a dense sub-image feature representation of uniform resolution may be formed by upscaling the activations of the different resolutions within a neural network to that of the highest resolution. The highest resolution can be the same as the input resolution or some intermediate layer.
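The multi-resolution construction might be sketched as follows; the assumption that the finest feature map comes first in the list, and the use of bilinear upsampling, are illustrative choices.

```python
import torch
import torch.nn.functional as F

def dense_pyramid_features(activations: list) -> torch.Tensor:
    """Upsample feature maps from several network resolutions to the finest
    one and concatenate along channels, yielding one descriptor per spatial
    location (a feature pyramid of uniform resolution)."""
    target_hw = activations[0].shape[-2:]      # finest (B, C, H, W) map first
    ups = [F.interpolate(a, size=target_hw, mode="bilinear",
                         align_corners=False) for a in activations]
    return torch.cat(ups, dim=1)               # (B, sum of channels, H, W)
```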
[00138] In some embodiments, training data may comprise normal-only images. In some embodiments, if some of the images in a training dataset are anomalous, a method for detecting the whole normal images may be first performed (e.g., the whole image anomaly detection method disclosed hereinabove). In some embodiments, the training dataset may be pruned by selecting the images that are most similar to the target image, e.g., as measured using, e.g., a global deep feature representation.
[00139] In some embodiments, the present method may also be applied to video. Thus, a target frame sequence within a video segment may be treated as the target segment, wherein other frame sequences in the video segment may be treated as the normal segments. The kNN classification can be performed similarly to the above. In some embodiments, obtaining features for video may be performed using any suitable method, e.g., extraction by a deep neural network (pre-trained or otherwise), wherein the network may take in single or multiple frame inputs; a hand-crafted pipeline (e.g., HOG, color histograms, clip time or location); and/or the raw data itself. It is possible to use neural network activations extracted at multiple resolutions (feature pyramid). One way of forming a dense sub-image feature representation of uniform resolution is upscaling the activations of the different resolutions to that of the highest resolution. The highest resolution can be the same as the input resolution or some intermediate layer. This can also be performed in the temporal domain. In some embodiments, for the normal video segments, the entire video training set or a part of it may be selected. If some of the segments given for training are anomalous, a method for detecting the normal segments can be performed first.
[00140] The present disclosure is more accurate, faster, and more stable than previous methods, and does not require a dedicated training stage. The present inventors have evaluated the present method on two high-quality datasets for evaluating the sub-image anomaly detection task:
MVTec (Bergmann, P. et al. MVTec AD-a comprehensive real-world dataset for unsupervised anomaly detection. In: CVPR (2019)): A dataset simulating industrial fault detection, where the objective is to detect parts of images of products that contain faults such as dents or missing parts.
The ShanghaiTech Campus dataset (STC, Luo, W. et al. A revisit of sparse coding based anomaly detection in stacked RNN framework. In: ICCV (2017)): Simulates a surveillance setting where cameras observe a busy campus and the objective is to detect anomalous objects and activities such as fights.
Correspondence-based Sub-Image Anomaly Detection
[00141] The first stage of the present method is the extraction of strong image-level features. The same features are later used for pixel-level image alignment. There are multiple options for extracting features. The most commonly used option is self-supervised feature learning, that is, learning features from scratch directly on the input normal images. Although it is an attractive option, it is not obvious that the features learned on small training datasets will indeed be sufficient for serving as high-quality similarity measures. Accordingly, in some embodiments, the present disclosure employs a ResNet feature extractor pre-trained on the ImageNet dataset. As image-level features, the present disclosure uses the feature vector obtained after global-pooling the last convolutional layer. The global feature extractor may be denoted F, wherein for a given image x_i, the extracted features are denoted f_i:

f_i = F(x_i)    (8)
[00142] At initialization, the features for all training images (which are all normal) are computed and stored. At inference, only the features of the target image are extracted.
[00143] The next stage in the present method is determining which images contain anomalies using, e.g., the whole-image anomaly detection method disclosed hereinabove. For a given test image y, its K nearest normal images, N_K(f_y), are retrieved from the training set. The distance is measured using the Euclidean metric between the image-level feature representations:
d(y) = (1/K) Σ_{f ∈ N_K(f_y)} ‖f − f_y‖²    (9)
[00144] Target image y is labelled at this stage as normal or anomalous. A positive (anomalous) classification is determined by verifying whether the kNN distance is larger than a threshold t. If classified as anomalous, target image y is further processed in order to determine the sub-image anomaly locations.
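The whole-image stage of Eqs. (8)-(9) may be sketched as follows; the feature arrays and the threshold t below are random placeholders introduced for illustration, not values from the disclosure.

    import numpy as np

    def image_level_knn(f_y, train_feats, K=50):
        # Squared Euclidean distances to all normal image-level features
        d = np.sum((train_feats - f_y) ** 2, axis=1)
        nearest = np.argsort(d)[:K]
        return d[nearest].mean(), nearest  # d(y) of Eq. (9) and the kNN indices

    rng = np.random.default_rng(0)
    train_feats = rng.normal(size=(1000, 2048))  # placeholder normal features
    f_y = rng.normal(size=2048)                  # placeholder target feature
    t = 1.0                                      # placeholder threshold
    score, nn_idx = image_level_knn(f_y, train_feats)
    is_anomalous = score > t  # only such images proceed to sub-image analysis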
[00145] Next, a sub-image anomaly detection via image alignment stage is performed. The input to this stage is target image y that was classified as anomalous on a whole-image basis. The objective is to locate and segment the pixels of one or multiple anomalies within the target image y. In the case that the target image y was falsely classified as anomalous, the present method would mark no pixels as anomalous.
[00146] In some embodiments, the present disclosure provides for aligning the target image y to multiple retrieved normal images. In some embodiments, the present disclosure extracts deep features at every pixel location p ∈ P using a feature extractor F_L(x_i, p) applied to the target image y and to the retrieved normal training images. A gallery of features is constructed comprising all pixel locations of the K nearest neighbors, G = {F_L(x_1, p) | p ∈ P} ∪ {F_L(x_2, p) | p ∈ P} ∪ … ∪ {F_L(x_K, p) | p ∈ P}. The anomaly score of pixel p in target image y is therefore given by:

d(y, p) = min_{f ∈ G} ‖f − F_L(y, p)‖²    (10)
[00147] For a given threshold θp, a pixel is determined as anomalous if d(y, p) > θp, that is, if no closely corresponding pixel in the K nearest neighbor normal images may be found.
[00148] Alignment by dense correspondences is an effective way of determining the parts of the image that are normal vs. those that are anomalous. In order to perform the alignment effectively, it is necessary to determine the features for matching. As in the previous stage, the present method uses features from a pre-trained ResNet deep CNN. The ResNet results in a pyramid of features. Similarly to an image pyramid, earlier layers (levels) result in higher-resolution features encoding less context. Later layers encode features which encode more context but at lower spatial resolution. To perform effective alignment, each location is described using features from the different levels of the feature pyramid. Specifically, features from the output of the last M blocks are concatenated. The features thus encode both fine-grained local features and global context. This allows the present method to find correspondences between the target image y and K > 1 normal images, rather than having to explicitly align the images, which is more technically challenging and less robust.
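A hedged sketch of the per-pixel scoring of Eq. (10), assuming the per-pixel features have already been extracted and the gallery G assembled from the K retrieved normal images; array shapes are assumptions.

    import numpy as np

    def pixel_anomaly_scores(target_feats, gallery):
        # target_feats: (P, C) per-pixel features F_L(y, p) of the target image
        # gallery:      (K*P, C) features from all pixels of the K nearest normals
        scores = np.empty(len(target_feats))
        for i, f in enumerate(target_feats):
            d = np.sum((gallery - f) ** 2, axis=1)
            scores[i] = d.min()  # d(y, p) = min over f' in G of ||f' - f||^2
        return scores  # pixels with scores above theta_p are marked anomalous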
[00149] Figs. 9A-9B show an evaluation of the present method on detecting anomalies between flowers with or without insects, and bird varieties. Each of Figs. 9A-9B shows an anomalous image (A), the retrieved top normal neighbor image (B), the mask detected by the present method (C), and the predicted anomalous image pixels (D). Fig. 9C shows a red spot of an anomalous woodpecker (A), the retrieved top normal neighbor image (B), the mask detected by the present method (C), and the predicted anomalous image pixels (D).
Experimental Results
[00150] The present inventors conducted an evaluation of the present method against the state-of-the-art in sub-image anomaly detection.
[00151] The experiments used a Wide-ResNet50 X 2 feature extractor, which was pre-trained on the ImageNet dataset (http://www.image-net.org/). MVTec images were resized to 256 X 256 and cropped to 224 X 224. ShanghaiTech Campus dataset (STC) images were resized to 256 X 256 using cv2.INTER_AREA. Due to the large size of STC, the data samples were subsampled by a factor of 5 to roughly 5000 images. All metrics were calculated at 256 X 256 image resolution. The features from the ResNet were obtained at the end of the first block (56 X 56), second block (28 X 28) and third block (14 X 14), all with equal weights. K = 50 nearest neighbors were used for the MVTec experiments, and K = 1 nearest neighbor for the STC experiments (due to the larger dataset size). After obtaining the pixel-wise anomaly score for each image, a skimage Gaussian filter was applied with sigma = 4.
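The pre- and post-processing steps mentioned in this paragraph may be sketched as follows; the image contents are placeholders for illustration.

    import cv2
    import numpy as np
    from skimage.filters import gaussian

    img = np.zeros((480, 856, 3), dtype=np.uint8)            # placeholder frame
    img = cv2.resize(img, (256, 256), interpolation=cv2.INTER_AREA)

    anomaly_map = np.random.rand(256, 256)                   # placeholder scores
    smoothed = gaussian(anomaly_map, sigma=4)                # final per-pixel map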
[00152] A first set of experiments was conducted on the MVTec dataset, which comprises images from 15 different classes. Five classes consist of textures such as wood or leather. The other 10 classes contain objects (mostly rigid). For each class, the training set is composed of normal images. The test set is composed of normal images as well as images containing different types of anomalies. This dataset therefore follows the standard protocol where no anomalous images are used in training. The anomalies in this dataset are more fine-grained than those typically used in the literature, e.g., in CIFAR10 evaluation, where anomalous images come from a completely different image category. Instead, anomalies in MVTec take the form of, e.g., a slightly scratched object or a lightly deformed (e.g., bent) object. As the anomalies are at the sub-image level, i.e., only affect a part of the image, the dataset provides segmentation maps indicating the precise pixel positions of the anomalous regions.
[00153] An example of the operation of the present method on the MVTec dataset can be observed in Fig. 10, which shows an anomalous image (a hazelnut which contains a scratched area) (A), the retrieved nearest neighbor normal image, which contains a complete nut without scratches, the mask detected by the present method (C), and the predicted anomalous image pixels (D).
[00154] By searching for correspondences between the two images, the present method is able to find correspondences for the normal image regions but not for the anomalous region. This results in an accurate detection of the anomalous image region.
[00155] The present method was compared against several methods that were introduced over the last several months, as well as longer-standing baselines such as OCSVM and nearest neighbors. For each setting, the present method was compared against the methods that reported the suitable metric.
[00156] First, the quality of deep nearest neighbor matching was evaluated as a means for finding anomalous images. This is computed by the distance between the test image and the K nearest neighbor normal images. Larger distances indicate more anomalous images. The ROC area under the curve (ROCAUC) of the present method and other state-of-the-art methods are compared and the average ROCAUC across the 15 classes is reported in Table 12 below. This comparison is important as it verifies whether deep nearest neighbors are effective on these datasets. The present method is shown to outperform a range of state-of-the-art methods utilizing a range of self-supervised anomaly detection learning techniques. This gives evidence that deep features trained on the ImageNet dataset (which is very different from MVTec) are very effective even on such a distant dataset.
Table 12: Image-level Anomaly Detection Accuracy on MVTec (Average ROCAUC %)
[00157] The present method was then evaluated on the task of pixel-level anomaly detection. The objective here is to segment the particular pixels that contain anomalies. The present method was evaluated using two established metrics. The first is per-pixel ROCAUC. This metric is calculated by scoring each pixel by the distance to its K nearest correspondences. By scanning over the range of thresholds, the pixel-level ROCAUC curve can be computed. The anomalous category is designated as positive. It was noted by several previous works that ROCAUC is biased in favor of large anomalies. In order to reduce this bias, the PRO (per-region overlap) curve metric was previously proposed, which first separates anomaly masks into their connected components, thereby dividing them into individual anomaly regions. By changing the detection threshold, the calculation scans over false positive rates (FPR), and for each FPR, PRO is computed, i.e., the proportion of the pixels of each region that are detected as anomalous. The PRO score at this FPR is the average coverage across all regions. The PRO curve metric computes the integral across FPR rates from 0 to 0.3. The PRO score is the normalized value of this integral.
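A hedged sketch of the PRO computation as described in this paragraph, using scipy connected-component labelling; the threshold sweep granularity is an assumption, and gt_mask is a boolean ground-truth array.

    import numpy as np
    from scipy import ndimage

    def pro_score(score_map, gt_mask, fpr_limit=0.3, n_thresholds=100):
        labels, n_regions = ndimage.label(gt_mask)   # individual anomaly regions
        negatives = ~gt_mask
        fprs, pros = [], []
        for t in np.linspace(score_map.max(), score_map.min(), n_thresholds):
            pred = score_map >= t
            fpr = (pred & negatives).sum() / max(negatives.sum(), 1)
            if fpr > fpr_limit:
                break
            per_region = [(pred & (labels == r)).sum() / (labels == r).sum()
                          for r in range(1, n_regions + 1)]
            fprs.append(fpr)
            pros.append(np.mean(per_region) if per_region else 0.0)
        # normalized integral of the PRO curve up to the FPR limit
        return np.trapz(pros, fprs) / fpr_limit if len(fprs) > 1 else 0.0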
[00158] Table 13 compares the present method on the per-pixel ROCAUC metric against results reported by Bergmann et al. (Bergmann, P., et al. MVTec AD-a comprehensive real-world dataset for unsupervised anomaly detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9592-9600 (2019)), as well as newer results by Venkataramanan et al. (CAVGA-Ru, see Venkataramanan, S. et al. Attention guided anomaly detection and localization in images. arXiv preprint arXiv:1911.08616 (2019)). Most of the methods use different varieties of autoencoders, including the top performer CAVGA-Ru. The present method significantly outperforms all methods. This attests to the strength of the present method's pyramid-based correspondence approach.
Table 13: Subpixel Anomaly Detection Accuracy on MVTec (ROCAUC %)
[00159] Table 14 compares the present method in terms of PRO. As explained above, this is another per-pixel accuracy measure which gives larger weight to anomalies which cover few pixels.
Table 14: Subpixel Anomaly Detection Accuracy on MVTec (PRO %)
[00160] A further set of experiments was conducted with respect to the ShanghaiTech Campus (STC) dataset. STC simulates a surveillance setting, where the input consists of videos captured by surveillance cameras observing a busy campus. The dataset contains 12 scenes; each scene consists of training videos and a smaller number of test images. The training videos do not contain anomalies, while the test images contain normal and anomalous images. Anomalies are defined as pedestrians performing non-standard activities (e.g. fighting) as well as any moving object which is not a pedestrian (e.g. motorbikes).

[00161] The present method was evaluated at a first stage for detecting image-level anomalies against other state-of-the-art methods. The pixel-level ROCAUC performance was then compared with the best reported method, CAVGA-Ru. The present method outperforms the best reported method by a significant margin. The results are reported in Tables 15 and 16 below.
Table 15: Image-level Anomaly Detection Accuracy on STC (Average ROCAUC %)
TSC: Luo, W., et al. A revisit of sparse coding based anomaly detection in stacked RNN framework. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 341-349 (2017).

StackRNN: Luo, W., et al. A revisit of sparse coding based anomaly detection in stacked RNN framework. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 341-349 (2017).

AE-Conv3D: Zhao, Y., et al. Spatio-temporal autoencoder for video anomaly detection. In: Proceedings of the 25th ACM International Conference on Multimedia, pp. 1933-1941 (2017).

MemAE: Gong, D., et al. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1705-1714 (2019).

AE(2D): Hasan, M., et al. Learning temporal regularity in video sequences. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 733-742 (2016).

ITAE: Huang, C., et al. Inverse-transform autoencoder for anomaly detection. arXiv preprint arXiv:1911.10676 (2019).

Table 16: Pixel-level Anomaly Detection Accuracy on STC (Average ROCAUC %)
[00162] The present inventors further conducted an ablation study on the present method in order to understand the relative performance of its different parts. Table 17 compares using different levels of the feature pyramid. As can be observed, using too low a level by itself (56 X 56) significantly hurts performance, while using the higher levels on their own results in diminished performance due to lower resolution. Using a combination of all features in the pyramid results in the best performance. Table 18 compares using the top K neighboring normal images, as performed by the first stage of the present method, vs. choosing them randomly from the dataset. It is observed that choosing the kNN images improves performance. This does not affect all classes equally. As an example, the numbers for "Grid", which has much variation between images, are reported. For this category, using the kNN images results in much better performance than randomly choosing K images.
Table 17: Pyramid level ablation for Subpixel Anomaly Detection Accuracy on MVTec (PRO %)
Table 18: Evaluating the effectiveness of the present method kNN retrieval stage.
[00163] In Table 18, 10 nearest neighbor images are used, chosen according to stage 1, or randomly selected.
[00164] In some embodiments, the present method does not require feature training and can work on very small datasets. A difference between the present method and standard image alignment is that the present method finds correspondences between the target image and K normal images, as opposed to a single normal image in simple alignment approaches. In some embodiments, the quality of the alignment or correspondence between the anomalous image and retrieved normal images is strongly affected by the quality of the extracted features, wherein context is very important. Local context is needed for achieving segmentation maps with high pixel resolution. Such features may generally be found in the shallow layers of a deep neural network. Local context is typically insufficient for alignment without understanding the global context, i.e., where in the object the part lies. Global context is generally found in the deepest layers of a neural network; however, global context features are of low resolution. The combination of features from different levels allows both global context and local resolution, giving high-quality correspondences.
[00165] In some embodiments, the present method is significantly reliant on the K nearest neighbors algorithm. The complexity of kNN scales linearly with the size of the dataset used for search, which can be an issue when the dataset is very large or of high dimensionality. The present method is designed to mitigate these complexity issues. First, the initial image-level anomaly classification is computed on global-pooled features, which are 2048-dimensional vectors. Such kNN computation can be performed very quickly for moderately sized datasets, and different speedup techniques (e.g. KDTrees) can be used for large-scale datasets. The anomaly segmentation stage requires pixel-level kNN computation, which is significantly slower than image-level kNN. However, the present method limits the sub-image kNN search to only the K nearest neighbors of the anomalous image, thus significantly limiting computation time. It is assumed that the vast majority of images are normal; therefore, only a small fraction of images require the next stage of anomaly segmentation. The present method is therefore quite suitable for practical deployment from a complexity and runtime perspective.
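An illustrative retrieval pipeline along these lines, using scikit-learn; the gallery contents are random placeholders, and the library choice is an assumption rather than part of the disclosure.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    train_feats = np.random.rand(10_000, 2048)   # placeholder global features
    index = NearestNeighbors(n_neighbors=50).fit(train_feats)
    dists, idx = index.kneighbors(np.random.rand(1, 2048))
    # For very large galleries, a tree index (e.g., algorithm="kd_tree") or an
    # approximate-nearest-neighbor library may be substituted for brute force.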
[00166] Previous sub-image anomaly detection methods have either used self-learned features or a combination of self-learned and pre-trained image features. Self-learned approaches in this context typically train an autoencoder and use its reconstruction error for anomaly detection. Other approaches have used a combination of pre-trained and self-learned methods. The present numerical results have shown that the present method significantly outperforms such approaches. It is believed that, given the limited supervision and small dataset size of the normal-only training set as tackled in this work, it is rather hard to beat very deep pre-trained networks. Therefore, pre-trained features are used, without modification. The strong results achieved by the present method attest to the effectiveness of this approach.

Transformer-Based Anomaly Segmentation
[00167] In some embodiments, the present disclosure presents new anomaly segmentation methods based on transferring pretrained features.
[00168] In some embodiments, the present disclosure provides for a baseline method that outperforms all previous anomaly segmentation methods on the MVTec dataset. The approach represents images using ImageNet-pretrained convolutional feature pyramids. Target image pixels are classified using multi-scale nearest neighbor retrieval, wherein large distances correspond to anomalous pixels.
[00169] In some embodiments, the present disclosure further provides for fully exploiting contextual information from the whole image, based on the vision transformer (ViT), a recently introduced attentional approach. It is found that the ViT architecture learns patch embeddings that encode global context well. As the resolution of ViT is limited, the present disclosure improves it by combining it in a multi-resolution construction, which significantly improves performance and enjoys strong local and global context.
[00170] In some embodiments, the present method is based on retrieval of contextual features for detecting anomalies. In some embodiments, the present method uses standard feature extraction using a pre-trained ResNet. In some embodiments, CNN-based methods involve issues associated with non-adaptive contexts, which may include areas of the image that make it hard to find similar normal contexts. In some embodiments, the present disclosure provides for using attentional mechanisms that learn the relevant context.
[00171] In some embodiments, the present disclosure provides for a simple baseline method for anomaly segmentation. The method consists of two stages:
Feature extraction: Extracting a feature descriptor for each pixel, combining the activations of one or more layers of a convolutional deep network.
Similarity estimation: Calculating the similarity of the descriptor of each pixel to the closest descriptor found in the train set.

[00172] Feature extraction may be performed to extract features f_p for every pixel p in the image x using a pre-trained feature extractor Φ:

f_p = Φ(x, p)    (11)
[00173] In some embodiments, the activations of a deep ResNet pre-trained on the ImageNet dataset may be used. To extract deep features f_p, a pre-trained deep neural network is applied on each of the training images x, to extract the feature activations at a particular layer l at position p. Note that in this setting, all the training images are normal. All the features are stored in a gallery G. Optionally, the number of stored features may be reduced by K-means, storing only the K means themselves. For the target image, features are extracted from each of its pixels in an identical way.
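A sketch of this gallery construction with the optional K-means reduction; the feature array and the number of centroids are placeholders.

    import numpy as np
    from sklearn.cluster import KMeans

    train_feats = np.random.rand(20_000, 256)    # placeholder per-pixel features
    gallery = train_feats                        # the full gallery G

    # Optional reduction: keep only K centroids in place of the raw features
    kmeans = KMeans(n_clusters=2000, n_init=1, random_state=0).fit(train_feats)
    gallery = kmeans.cluster_centers_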
[00174] In some embodiments, the present disclosure then proceeds to estimate the similarity of the features extracted from the training images and the target image. The features f_p^y of each of the pixels of the target image are compared with each of the features in the gallery G (which may have been reduced to the K means). The similarity is scored using the sum of the L2 distances to the K nearest features:

d(p) = Σ_{f ∈ N_K(f_p^y, G)} ‖f − f_p^y‖²    (12)

where N_K(f_p^y, G) indicates the K nearest neighbors in the gallery G to the target feature f_p^y.
[00175] In some embodiments, by comparing the distance d(p) with some threshold t, which is a hyperparameter of the method, the pixel p in the target image is classified as normal or anomalous. In some embodiments, threshold-invariant metrics such as ROCAUC may be used rather than a fixed threshold.
[00176] In convolutional neural networks (CNNs), lower layers result in higher-resolution features encoding less context, while deeper layers extract features that encode more context at lower spatial resolution. The feature extractor that outputs the activations of layer l may be denoted Φ_l. In some embodiments, the present disclosure describes each pixel by combining the levels of the feature pyramid. Although the features from different layers of the pyramid are typically concatenated, the present disclosure instead computes the score s_l(p) of each pixel using each feature layer l individually, and combines the scores to obtain a total multi-layer score:

s(p) = Σ_l s_l(p)
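A sketch of this per-layer scoring and summation, under the reconstruction of Eq. (12) given above; the per-layer feature and gallery arrays are assumed inputs.

    import numpy as np

    def layer_score(f_p, gallery, K=1):
        # s_l(p): sum of squared distances to the K nearest gallery features
        d = np.sort(np.sum((gallery - f_p) ** 2, axis=1))
        return d[:K].sum()

    def multi_layer_score(per_layer_feats, per_layer_galleries, K=1):
        # s(p) = sum over layers l of s_l(p), each layer scored independently
        return sum(layer_score(f, g, K)
                   for f, g in zip(per_layer_feats, per_layer_galleries))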
Transformer-Based Anomaly Segmentation
[00177] In some embodiments, the present disclosure provides for relaxing the rigid design of the spatial feature pyramid. It is noted that the context in CNNs is non-adaptive and is determined by the level of the pyramid. Fig. 11 shows an example of the effective contexts of CNNs and transformers, and the anomaly segmentation results on an anomalous image from MVTec Screw class. As can be seen, the effective context of the CNN is limited, while the actual attention pattern of the transformer is able to focus on the entire object. The anomaly segmentation of the transformer is significantly more similar to the ground truth than that of the CNN.
[00178] Although work has been presented previously on mitigating this issue, it has mostly not been widely adopted due to the deviation from the main design principles of CNNs. CNN features that are reliant on the context may not find a good similarity correspondence, as random background patterns may not repeat between the training and the test sets. Instead, the present disclosure provides for using Vision Transformers (ViT) for anomaly detection. In this architecture, each pixel may gain its context from across the entire image on the one hand, but tends to focus only on context features that are deemed relevant according to the attention layers. The attention layers in each transformer unit allow the network to learn to avoid including irrelevant context and therefore to outperform CNNs.
[00179] To overcome the limitation of the fixed context of CNNs, the present disclosure provides for using attention-based architectures. Vision Transformers were very recently proposed by Dosovitskiy et al. (see, Alexey Dosovitskiy, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020). Transformers consist of a set of multi-headed self-attention (MSA) layers and multi-layer perceptron (MLP) blocks. Each layer l first takes as input a representation f (where the layer superscript l of f^l is dropped in the present notation for convenience) and linearly projects it to three representations, calculated for each patch p: value v, key k, and query q (N_d is the representation dimension per head). Each of the representations v, k, q is then split along the channel dimension into H equal parts, which are called attention heads (v_h, k_h, q_h). Each one of the attention heads calculates an attention map A_h between its query and the keys of all patches, using an inner product normalized by the square root of the per-head dimension:

A_h = softmax(q_h · k_h^T / √N_d)    (13)
[00180] The multi-head self-attention layer concatenates the per-head attention maps A_h multiplied by the per-head values v_h, and projects the result to the representation dimension using a matrix U:

MSA(f) = [A_1 v_1 ; A_2 v_2 ; … ; A_H v_H] U    (14)
[00181] The patches are initialized using a trainable linear projection E of the input image x (split into P patches, where patch p is denoted x_p), together with an embedding E_pos representing the image position:

f^0 = [x_class ; x_1 E ; x_2 E ; … ; x_P E] + E_pos    (15)
[00182] The x_class dimension (called the "token") is initialized with zeros, and is eventually used (at the last layer) as the final features for classification in pretraining. At each transformer layer l, the representation f^(l-1) is normalized using layer norm and then updated in a residual fashion using MSA, i.e., f̂^l = MSA(LN(f^(l-1))) + f^(l-1). It is then normalized again and updated with a residual MLP block, f^l = MLP(LN(f̂^l)) + f̂^l, to achieve the next layer's representation f^l.
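A minimal numpy sketch of the per-head attention of Eqs. (13)-(14) as reconstructed above; the weight matrices, dimensions, and the per-head normalization follow that reconstruction and are assumptions for illustration.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def msa(f, Wq, Wk, Wv, U, H):
        # f: (P, D) patch representations; linear projections per Eq. (13)
        q, k, v = f @ Wq, f @ Wk, f @ Wv
        P, D = q.shape
        heads = []
        for h in range(H):
            sl = slice(h * D // H, (h + 1) * D // H)   # per-head channel slice
            A = softmax(q[:, sl] @ k[:, sl].T / np.sqrt(D // H))  # Eq. (13)
            heads.append(A @ v[:, sl])
        return np.concatenate(heads, axis=1) @ U                  # Eq. (14)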
[00183] In the transformer architecture described above, all layers may potentially have all the input pixels as their context. Nevertheless, early layers learn to use relatively well-localized context, while later layers learn higher-level features, which require a wide context. Therefore, the activations of the 6th layer may be selected, which incorporate a sufficient amount of context while still retaining locality, yielding strong anomaly segmentation performance. Moreover, the attention maps tend to choose semantically meaningful contexts (Fig. 11), such as the object in which an anomaly may occur, rather than random background or boundary elements.

[00184] ViT operates on input grids of size 14 X 14 or 16 X 16. This severely limits the resolution of the obtainable segmentation. In order to scale up the resolution of the segmentation, the present disclosure provides for a multi-scale transformer. In this variant, pixel-level anomaly scores are extracted using the same transformer representation twice: once when applying the network and similarity estimation on the entire image, Φ_t1, and again by splitting the image into four quarters and applying the same method to each quarter, Φ_t2. Each patch is scored using features extracted from each of the resolutions. The sum of the scores from each resolution is taken as the total score for the high-resolution patch.
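A hedged sketch of this multi-scale scheme; here patch_scores stands in for the full transformer-feature extraction plus kNN scoring, and the square-input assumption is made for brevity.

    import numpy as np

    def multiscale_scores(image, patch_scores):
        # patch_scores(img) -> anomaly map of the same spatial size as img
        # (stand-in for the ViT feature extraction plus kNN scoring)
        S = image.shape[0]            # square input assumed for brevity
        full = patch_scores(image)    # applied once on the entire image
        quarters = np.zeros_like(full)
        half = S // 2
        for i in (0, 1):
            for j in (0, 1):
                crop = image[i*half:(i+1)*half, j*half:(j+1)*half]
                quarters[i*half:(i+1)*half, j*half:(j+1)*half] = patch_scores(crop)
        return full + quarters        # summed scores per high-resolution patch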
Experimental Results
[00185] The present disclosure was quantitatively evaluated on the MVTec dataset, which is the main dataset used by most methods to evaluate anomaly segmentation performance. It simulates industrial fault detection where the objective is to detect parts of images of products that contain faults, e.g., dents, missing parts, misalignment or unexpected textures. Each of the 15 classes contains a training set of normal images, normal test images and images of faults of different types as anomalies. The present disclosure was also evaluated on the CUB200 dataset, using two categories of Woodpecker: the normal training images show a breed that does not have a red dot on the head, while the anomalous images do. Similarly, examples are presented on the Oxford Flowers 101 dataset, wherein normal flowers do not have insects on them, while anomalous images do.
[00186] The present method is compared against a large set of known methods. Each method scores each of the pixels of the test image as normal or anomalous. The previous methods include: classical anomaly detection methods (1-NN, OCSVM), autoencoders with L2 and SSIM losses, variational autoencoders with reconstruction loss and GRAD-CAM (VAE, CAVGA), texture distributional models (TI), shape-based matching (VM), K-means of deep features from context-less patches (CNN-Dict), GAN-based methods (AnoGAN), and student-teacher regression of pre-trained features (Student).
[00187] To evaluate the quality of segmentation, different evaluation metrics were proposed in the literature. As some baselines reported pixel ROCAUC while others reported PRO (and some reported both), the present disclosure compares each method on the metric that it reported. Pixel ROCAUC computes the area under the ROC curve for the pixel-segmentation accuracy. The other metric is PRO, which gives equal weighting to all the connected components of the ground truth anomaly segmentation. It integrates over different pixel-wise false positive ratios (between 0 and 0.3), and takes the cover ratio of each anomaly, averaged over all the individual anomalies in the test set (different connected components are deemed different anomalies). In cases where the test images contain small anomalies as well as very big ones, ROCAUC can be dominated by the big anomalies (containing many pixels) while neglecting the small ones. PRO, on the other hand, gives all anomalies an equal weight.
[00188] In some embodiments, the experimental architectures used by the present disclosure comprise a BiT-M-R50x1 ResNet and ViT-Base, both pretrained on ImageNet-21k.
[00189] Tables 19 and 20 below present the results of the baseline method. As can be seen, the use of pretrained convolutional features and simple kNN retrieval is enough to outperform all the existing methods on both anomaly detection metrics.
[00190] The results of the present transformer-based method are reported in Tables 19 and 20. It outperforms all other methods, including the simple CNN-based baseline method. All the pretrained models used by the present methods, including the ResNet BiT models, were trained on the ImageNet-21k dataset. The ViT transformer architecture serves as a better anomaly segmentation feature extractor even when it is worse as a classifier (suggesting that the contextual patch description is the main factor here). Interestingly, it was often found that the present method was penalized for detecting, or failing to detect, anomalies where the ground truth was ambiguous.
Table 19: Anomaly segmentation accuracy on MVTec (ROCAUC %)
Table 20: Anomaly segmentation accuracy on MVTec (PRO %)
[00191] Multiple ablations of the present method are reported in Table 21 below. The full method uses a multi-scale transformer with kNN retrieval and achieves the best results. Replacing the base kNN retrieval by K-means with 2000 centroids was also evaluated; while K-means results in significant retrieval runtime and storage savings (particularly for very large datasets), it has only a minor impact on performance. The present multi-resolution transformer was also compared against the standard ViT without the addition of the higher-resolution transformer (using only the 14 X 14 feature map output by the 6th layer, denoted 'ViT 14 X 14'). It is clear that the multi-resolution formulation is essential for the strong performance.
[00192] Another CNN multi-scale feature combination approach was evaluated, wherein the features from all levels were concatenated to achieve a single feature pyramid descriptor per pixel, f_p = (Φ_{l_1}(x, p), Φ_{l_2}(x, p), …). Each pixel p is then scored using the kNN distance of this concatenated descriptor to the corresponding gallery, score(p) = Σ_{f ∈ N_K(f_p, G)} ‖f − f_p‖². As can be seen, the accuracy is quite similar between the two CNN-based approaches (97.3% vs. 97.2%), with a slight advantage to the approach of combining scores rather than features.
[00193] It was further found that a Wide-ResNet50x3 CNN trained on ImageNet-1k achieved very similar results to the base CNN, and yielded a PRO of 92.5 (vs. 92.6 for ImageNet-21k). The largest ImageNet-21k pretrained architecture that was run (BiT-M-101x3) still underperformed ViT, with a PRO of 93.4, while being much larger than the present transformer and much slower (using larger transformer architectures is very likely to improve results).
[00194] Finally, the CNN approach was evaluated with the same multi-scale method used by the transformer (combining features from the full image and the 4 quarters), but it gave worse results than the multi-scale transformer method or any other convolutional method.
Table 21: Ablation on MVTec (avg. ROCAUC%)
[00195] In some embodiments, the present disclosure used transformer-based architectures to capture relevant context, while avoiding irrelevant context. Fig. 12 shows the attention maps of ViT drawn for the 2nd, 6th and 10th layers (left to right), for illustration. The rightmost image is the input to the network. The attention map to the classification token is shown in the top row, and the attention map to the center pixel is shown in the bottom row. As can be seen, most attention is paid to the bird rather than the background. Deeper layers pay most attention to the distinctive dot on the head of the bird. For the patch of interest, first, attention maps are calculated for each attention head as explained above. Then, the attention is averaged across the different attention heads, and plotted after normalizing to a grey-scale map between 0 and 255. The attention maps of the classification token at the lower layers are able to identify the outline of the inspected object while refraining from including much of the background area. The higher-layer attention maps tend to lose their localization properties, as each patch already incorporates information from many other patches. The attention maps of the center pixel show quite similar results. The center pixel incorporates more information from the representations of its previous layers and its neighboring patches.
[00196] The performance of the present transformer-based approach for anomaly detection was tested at the level of the entire image. It was found that in this case, the performance of the transformer features is lower than that of the CNN-based method (87.8% vs. 85.4% ROCAUC averaged over all classes of MVTec). This demonstrates that the stronger performance of transformers on anomaly segmentation is not due to transformers having stronger features. Another supporting fact is that, when trained on ImageNet-21k (rather than the non-public JFT-300M), ViT achieves lower object classification accuracy than the CNN. Instead, the better performance on anomaly segmentation is due to the better patch contextual embedding.
[00197] Fig. 13 illustrates (left to right) the original input image and its 6th-layer ViT attention maps (normalized) for normal and anomalous images. The top row shows a case in which, with no training set, both transistor rotations can be considered normal, and the attention map cannot determine which transistor is anomalous. The bottom row shows pixels that contain anomalies and attract much more attention than their neighboring pixels, suggesting where the anomalies are located. Inspection of the attention patterns of transformers in Fig. 13 illustrates an intriguing phenomenon: the transformer often pays disproportionate attention to image regions that contain anomalies. This provides some explanation for why the learned context is useful for anomaly segmentation: it highlights the parts of the context that provide evidence that a certain image region is anomalous.
[00198] This phenomenon can be used in a profitable way for a new task, zero-shot anomaly segmentation. The objective of the task is to detect the parts of the image that contain anomalies, based on a single image alone and without being given other examples (normal or anomalous) from the same class. The ability to segment anomalies based on a single image relies on the pretraining properties of the networks. Specifically, the anomaly segmentation score is computed by computing the attention from the classification token to each of the patches at layer l (e.g., l = 6). As each head has a different attention pattern, the result is averaged over the attention of all heads. While it cannot be expected to segment some anomalies, such as the misaligned transistor (see Fig. 13, top row), as it is hard to define the normal alignment without a normal training set, other types of anomalies are well located (see Fig. 13, bottom row). The accuracy of zero-shot anomaly segmentation is evaluated quantitatively in Table 22 below. As can be seen, the present method obtains non-trivial segmentation accuracy of > 70% pixel-ROCAUC. It is also compared to the baseline of the kNN distance between the feature representation of the patch and its nearest neighbor. As can be seen, the attention-based approach outperforms this internal kNN baseline.
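A sketch of this zero-shot scoring; the layout of the attention tensor (H heads, with index 0 along the token axes denoting the classification token) is an assumption for illustration.

    import numpy as np

    def zero_shot_anomaly_map(attn, grid=14):
        # attn: (H, P+1, P+1) attention maps of one layer (e.g., l = 6)
        cls_to_patches = attn[:, 0, 1:]        # token-to-patch attention per head
        avg = cls_to_patches.mean(axis=0)      # average over the H heads
        return avg.reshape(grid, grid)         # per-patch anomaly scores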
Table 22: Accuracy for Zero Shot Anomaly Segmentation (avg. ROCAUC %)
[00199] It was further evaluated whether the attention-based method can be used for zero-shot image-level anomaly detection, where the objective is to determine whether an image is anomalous given just a single image and no training set of images from a similar class. A simple approach was tested of taking the maximum over the attention map averaged over all heads. The hypothesis is that anomalous images will have a larger maximal attention value than normal images. The method was evaluated over the MVTec dataset (Table 23 below). It was found that this works quite well on textures, where repetitions provide evidence for normal patterns and deviation from the repetitions indicates anomalous regions (the exception is Grid, probably because the scale of repetitions is larger than the patch size). It also works very well on objects where the anomaly is a texture, e.g., Hazelnut and Bottle. In some other classes, e.g., Transistor, it is hard to infer anomalies without training images. It may also be seen that the attention-map-based method outperforms the internal kNN baseline. While these results are of course weaker than the standard setting where normal-only training images are available, they illustrate the strength of the transformer-based approach for zero-shot anomaly detection.
Table 23: Accuracy for Zero Shot Anomaly Detection (avg. ROCAUC %)
[00200] In some embodiments, the present disclosure provides for results wherein the pixel-level ROCAUC may be higher than the image-level ROCAUC. Consider, for example, the case where only half of the images contain very small anomalies, of the size of one pixel each. Suppose an algorithm scores, in each image, a single pixel with the score s = 1: the anomalous pixel if one exists, and a random pixel otherwise. Such an algorithm can achieve near-perfect pixel-level ROC and PRO (as it finds all the anomalous pixels with a very low false positive ratio), yet it is uninformative as to whether the image is anomalous. Typically, anomalies are indeed very small, and therefore this scenario is quite common.
[00201] As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
[00202] Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
[00203] A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
[00204] Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
[00205] Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
[00206] Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a hardware processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. [00207] These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
[00208] The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[00209] The flowcharts and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware -based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
[00210] The descriptions of the various embodiments of the present invention have been presented for purposes of illustration but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
[00211] In the description and claims of the application, each of the words "comprise" "include" and "have", and forms thereof, are not necessarily limited to members in a list with which the words may be associated. In addition, where there are inconsistencies between this application and any document incorporated by reference, it is hereby intended that the present application controls.


CLAIMS

What is claimed is:
1. A system comprising: at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: receive, as input, training images, wherein at least a majority of said training images represent normal data instances, receive, as input, a target image, extract (i) a set of feature representations from a plurality of image locations within each of said training images, and (ii) target feature representations from a plurality of target image locations within said target image, calculate, with respect to a target image location of said plurality of target image locations in said target image, a distance between (iii) said target feature representation of said target image location, and (iv) a subset from said set of feature representations comprising the k nearest said feature representations to said target feature representation, and determine that said target image location is anomalous, when said calculated distance exceeds a predetermined threshold.
2. The system of claim 1, wherein said program instructions are further executable to perform said calculating and said determining with respect to all of said plurality of target image locations.
3. The system of claim 2, wherein said program instructions are further executable to designate a segment of said target image as comprising anomalous target image locations, based, at least in part, on said determining.
4. The system of any one of claims 1-3, wherein said program instructions are further executable to apply a clustering algorithm to said set of feature representations, to obtain clusters of said feature representations, wherein said calculating comprises calculating, with respect to a target image location of said plurality of target image locations, a distance between (i) said target feature representation of said target image location, and (ii) the k nearest means of said clusters to said target feature representation.
5. The system of any one of claims 1-4, wherein said extracting is performed by applying a trained machine learning model to said training images and said target image, wherein said machine learning model is trained on a provided dataset of images.
6. The system of claim 5, wherein said trained machine learning model undergoes additional training using said training images.
7. The system of any one of claims 5 or 6, wherein said trained machine learning model comprises a deep-learning neural network architecture comprising a plurality of layers, and wherein said extracting comprises concatenating features from two or more layers of said plurality of layers.
8. The system of any one of claims 5 or 6, wherein:
(i) said extracting comprises extracting said feature representations separately from each of two or more layers of said machine learning model;
(ii) said calculating comprises calculating a distance separately with respect to said feature representations extracted from each of said two or more layers; and
(iii) said determining is based on a summation of all of said distance calculations.
9. The system of any one of claims 6-8, wherein said two or more layers include the uppermost M layers of said plurality of layers.
10. The system of any one of claims 1-5, wherein said extracting is performed by applying a trained machine learning model to said training images and said target image, wherein said trained machine learning model comprises a self-attention architecture comprising vision transformers.
11. The system of claim 1, wherein said calculating comprises:
(i) selecting, from said training images, a specified number n of nearest images to said target image; and (ii) calculating, with respect to a target image location of said plurality of target image locations in said target image, a distance between (a) said target feature representation of said target image location, and (b) said feature representations from all of said image locations in said n nearest images; and
(iii) determining that said target image location is anomalous, when said calculated distance exceeds a predetermined threshold.
12. The system of any one of claims 1-11, wherein said feature representation encodes high spatial resolution and semantic context.
13. The system of any one of claims 1-12, wherein each of said image locations represents a pixel in (i) each of said training images, and (ii) said target image.
14. The system of any one of claims 1-13, wherein said extracting is performed with respect to all image locations in (i) each of said training images, and (ii) said target image.
15. A computer-implemented method comprising: receiving, as input, training images, wherein at least a majority of said training images represent normal data instances; receiving, as input, a target image; extracting (i) a set of feature representations from a plurality of image locations within each of said training images, and (ii) target feature representations from a plurality of target image locations within said target image; calculating, with respect to a target image location of said plurality of target image locations in said target image, a distance between (iii) said target feature representation of said target image location, and (iv) a subset from said set of feature representations comprising the k nearest said feature representations to said target feature representation; and determining that said target image location is anomalous, when said calculated distance exceeds a predetermined threshold.
16. The computer-implemented method of claim 15, further comprising performing said calculating and said determining with respect to all of said plurality of target image locations.
17. The computer-implemented method of claim 16, further comprising designating a segment of said target image as comprising anomalous target image locations, based, at least in part, on said determining.
18. The computer-implemented method of any one of claims 15-17, further comprising applying a clustering algorithm to said set of feature representations, to obtain clusters of said feature representations, wherein said calculating comprises calculating, with respect to a target image location of said plurality of target image locations, a distance between (i) said target feature representation of said target image location, and (ii) the k nearest means of said clusters to said target feature representation.
19. The computer-implemented method of any one of claims 15-18, wherein said extracting is performed by applying a trained machine learning model to said training images and said target image, wherein said machine learning model is trained on a provided dataset of images.
20. The computer-implemented method of claim 19, wherein said trained machine learning model undergoes additional training using said training images.
21. The computer-implemented method of any one of claims 19 or 20, wherein said trained machine learning model comprises a deep-learning neural network architecture comprising a plurality of layers, and wherein said extracting comprises concatenating features from two or more layers of said plurality of layers.
22. The computer-implemented method of any one of claims 19 or 20, wherein:
(i) said extracting comprises extracting said feature representations separately from each of two or more layers of said machine learning model;
(ii) said calculating comprises calculating a distance separately with respect to said feature representations extracted from each of said two or more layers; and
(iii) said determining is based on a summation of all of said distance calculations.
23. The computer-implemented method of any one of claims 20-22, wherein said two or more layers include the uppermost M layers of said plurality of layers.
24. The computer-implemented method of any one of claims 15-19, wherein said extracting is performed by applying a trained machine learning model to said training images and said target image, wherein said trained machine learning model comprises a self-attention architecture comprising vision transformers.
25. The computer-implemented method of claim 15, wherein said calculating comprises:
(i) selecting, from said training images, a specified number n of nearest images to said target image; and
(ii) calculating, with respect to a target image location of said plurality of target image locations in said target image, a distance between (a) said target feature representation of said target image location, and (b) said feature representations from all of said image locations in said n nearest images; and
(iii) determining that said target image location is anomalous, when said calculated distance exceeds a predetermined threshold.
26. The computer-implemented method of any one of claims 15-25, wherein said feature representation encodes high spatial resolution and semantic context.
27. The computer-implemented method of any one of claims 15-26, wherein each of said image locations represents a pixel in (i) each of said training images, and (ii) said target image.
28. The computer-implemented method of any one of claims 15-27, wherein said extracting is performed with respect to all image locations in (i) each of said training images, and (ii) said target image.
29. A computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to: receive, as input, training images, wherein at least a majority of said training images represent normal data instances; receive, as input, a target image; extract (i) a set of feature representations from a plurality of image locations within each of said training images, and (ii) target feature representations from a plurality of target image locations within said target image; calculate, with respect to a target image location of said plurality of target image locations in said target image, a distance between (iii) said target feature representation of said target image location, and (iv) a subset from said set of feature representations comprising the k nearest said feature representations to said target feature representation; and determine that said target image location is anomalous, when said calculated distance exceeds a predetermined threshold.
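Tying the steps of claim 29 together, a toy end-to-end scorer might look as follows; feature extraction is stubbed out (the claim-21 sketch above would fit), and k and the threshold are illustrative free parameters.

```python
# Toy end-to-end scorer for the pipeline of claim 29: flag each target
# location whose mean distance to its k nearest gallery features exceeds
# a threshold. Feature extraction is assumed to happen upstream.
import numpy as np

def detect_anomalous_locations(train_feats: np.ndarray, target_feats: np.ndarray,
                               k: int = 5, threshold: float = 2.0) -> np.ndarray:
    """train_feats: (M, D) gallery pooled over all training-image locations;
    target_feats: (P, D) per-location target features.
    Returns a boolean (P,) mask: True where the mean distance to the k
    nearest gallery features exceeds the threshold."""
    flags = np.empty(len(target_feats), dtype=bool)
    for i, f in enumerate(target_feats):
        d = np.linalg.norm(train_feats - f, axis=1)
        flags[i] = np.sort(d)[:k].mean() > threshold
    return flags
```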
30. The computer program product of claim 29, wherein said program instructions are further executable to perform said calculating and said determining with respect to all of said plurality of target image locations.
31. The computer program product of claim 30, wherein said program instructions are further executable to designate a segment of said target image as comprising anomalous target image locations, based, at least in part, on said determining.
32. The computer program product of any one of claims 29-31, wherein said program instructions are further executable to apply a clustering algorithm to said set of feature representations, to obtain clusters of said feature representations, wherein said calculating comprises calculating, with respect to a target image location of said plurality of target image locations, a distance between (i) said target feature representation of said target image location, and (ii) the k nearest means of said clusters to said target feature representation.
33. The computer program product of any one of claims 29-32, wherein said extracting is performed by applying a trained machine learning model to said training images and said target image, wherein said machine learning model is trained on a provided dataset of images.
34. The computer program product of claim 33, wherein said trained machine learning model undergoes additional training using said training images.
35. The computer program product of any one of claims 33 or 34, wherein said trained machine learning model comprises a deep-learning neural network architecture comprising a plurality of layers, and wherein said extracting comprises concatenating features from two or more layers of said plurality of layers.
36. The computer program product of any one of claims 33 or 34, wherein:
(i) said extracting comprises extracting said feature representations separately from each of two or more layers of said machine learning model;
(ii) said calculating comprises calculating a distance separately with respect to said feature representations extracted from each of said two or more layers; and
(iii) said determining is based on a summation of all of said distance calculations.
37. The computer program product of any one of claims 34-36, wherein said two or more layers include the uppermost M layers of said plurality of layers.
38. The computer program product of any one of claims 29-33, wherein said extracting is performed by applying a trained machine learning model to said training images and said target image, wherein said trained machine learning model comprises a self-attention architecture comprising vision transformers.
39. The computer program product of claim 29, wherein said calculating comprises:
(i) selecting, from said training images, a specified number n of nearest images to said target image; and
(ii) calculating, with respect to a target image location of said plurality of target image locations in said target image, a distance between (a) said target feature representation of said target image location, and (b) said feature representations from all of said image locations in said n nearest images; and
(iii) determining that said target image location is anomalous, when said calculated distance exceeds a predetermined threshold.
40. The computer program product of any one of claims 29-39, wherein said feature representation encodes high spatial resolution and semantic context.
41. The computer program product of any one of claims 29-40, wherein each of said image locations represents a pixel in (i) each of said training images, and (ii) said target image.
42. The computer program product of any one of claims 29-41, wherein said extracting is performed with respect to all image locations in (i) each of said training images, and (ii) said target image.
PCT/IL2021/050339 2020-03-25 2021-03-25 Deep learning-based anomaly detection in images WO2021191908A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/913,905 US20230281959A1 (en) 2020-03-25 2021-03-25 Deep learning-based anomaly detection in images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202062994694P 2020-03-25 2020-03-25
US62/994,694 2020-03-25

Publications (1)

Publication Number Publication Date
WO2021191908A1 (en) 2021-09-30

Family

ID=75639949

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2021/050339 WO2021191908A1 (en) 2020-03-25 2021-03-25 Deep learning-based anomaly detection in images

Country Status (2)

Country Link
US (1) US20230281959A1 (en)
WO (1) WO2021191908A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4027300B1 (en) * 2021-01-12 2023-12-27 Fujitsu Limited Apparatus, program, and method for anomaly detection and classification
CN117597703B (en) * 2021-07-01 2024-09-10 谷歌有限责任公司 Multi-scale converter for image analysis
EP4145401A1 (en) * 2021-09-06 2023-03-08 MVTec Software GmbH Method for detecting anomalies in images using a plurality of machine learning programs
US20240212350A1 (en) * 2022-06-07 2024-06-27 Sri International Spatial-temporal anomaly and event detection using night vision sensors
CN114943865B (en) * 2022-06-17 2024-05-07 平安科技(深圳)有限公司 Target detection sample optimization method based on artificial intelligence and related equipment
US20240111868A1 (en) * 2022-10-04 2024-04-04 Dell Products L.P. Delayed inference attack detection for image segmentation-based video surveillance applications
CN117372720B (en) * 2023-10-12 2024-04-26 南京航空航天大学 Unsupervised anomaly detection method based on multi-feature cross mask repair
CN117274270B (en) * 2023-11-23 2024-01-26 吉林大学 Digestive endoscope real-time auxiliary system and method based on artificial intelligence
CN117611930B (en) * 2024-01-23 2024-04-26 中国海洋大学 Fine granularity classification method of medical image based on CLIP
CN118037678A (en) * 2024-02-23 2024-05-14 四川数聚智造科技有限公司 Industrial surface defect detection method and device based on improved variation self-encoder
CN117992898B (en) * 2024-04-07 2024-06-14 腾讯科技(深圳)有限公司 Training method of anomaly detection model, object anomaly detection method and device
CN118379602A (en) * 2024-06-25 2024-07-23 浙江大学 Method and system for enhancing semiconductor defect analysis by semantic and visual interpretation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170032285A1 (en) * 2014-04-09 2017-02-02 Entrupy Inc. Authenticating physical objects using machine learning from microscopic variations
US20190244513A1 (en) * 2018-02-05 2019-08-08 Nec Laboratories America, Inc. False alarm reduction system for automatic manufacturing quality control
US20200070352A1 (en) * 2018-09-05 2020-03-05 Vicarious Fpc, Inc. Method and system for machine concept understanding

Non-Patent Citations (18)

* Cited by examiner, † Cited by third party
Title
ALEXEY DOSOVITSKIY ET AL.: "An image is worth 16x16 words: Transformers for image recognition at scale", ARXIV PREPRINT ARXIV:2010.11929, 2020
BERGMAN, L., HOSHEN, Y.: "Classification-based anomaly detection for general data", ICLR, 2020
BERGMANN, P. ET AL.: "MVTec ad-a comprehensive real-world dataset for unsupervised anomaly detection", CVPR, 2019
BERGMANN, P. ET AL.: "MVTec ad-a comprehensive real-world dataset for unsupervised anomaly detection", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2019, pages 9592 - 9600
BERNHARD SCHOLKOPF ET AL.: "Support vector method for novelty detection", NIPS, 2000
GOLAN, I.EL-YANIV, R.: "Deep anomaly detection using geometric transformations", NEURIPS, 2018
GONG, D. ET AL.: "Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection", PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, 2019, pages 1705 - 1714, XP033724029, DOI: 10.1109/ICCV.2019.00179
HASAN, M. ET AL.: "Learning temporal regularity in video sequences", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2016, pages 733 - 742, XP033021250, DOI: 10.1109/CVPR.2016.86
HE, KAIMING ET AL.: "Deep residual learning for image recognition", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2016
HENDRYCKS, D. ET AL.: "Using self-supervised learning can improve model robustness and uncertainty", NEURIPS, 2019
HUANG, C., CAO ET AL.: "Inverse-transform autoencoder for anomaly detection", ARXIV PREPRINT ARXIV: 1911.10676, 2019
LIRON BERGMAN ET AL: "Deep Nearest Neighbor Anomaly Detection", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 24 February 2020 (2020-02-24), XP081606781 *
LUKAS RUFF ET AL.: "Deep one-class classification", ICML, 2018
LUO, W. ET AL.: "A revisit of sparse coding based anomaly detection in stacked RNN framework", ICCV, 2017
LUO, W. ET AL.: "A revisit of sparse coding based anomaly detection in stacked RNN framework", PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, 2017, pages 341 - 349, XP033282888, DOI: 10.1109/ICCV.2017.45
SABOKROU MOHAMMAD ET AL: "Deep-Cascade: Cascading 3D Deep Neural Networks for Fast Anomaly Detection and Localization in Crowded Scenes", IEEE TRANSACTIONS ON IMAGE PROCESSING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 26, no. 4, 1 April 2017 (2017-04-01), pages 1992 - 2004, XP011644350, ISSN: 1057-7149, [retrieved on 20170329], DOI: 10.1109/TIP.2017.2670780 *
VENKATARAMANAN, S. ET AL.: "Attention guided anomaly detection and localization in images", ARXIV PREPRINT ARXIV: 1911.08616, 2019
ZHAO, Y. ET AL.: "Spatio-temporal autoencoder for video anomaly detection", PROCEEDINGS OF THE 25TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2017, pages 1933 - 1941

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12106225B2 (en) 2019-05-30 2024-10-01 The Research Foundation For The State University Of New York System, method, and computer-accessible medium for generating multi-class models from single-class datasets
CN113902710A (en) * 2021-10-12 2022-01-07 菲特(天津)检测技术有限公司 Method and system for detecting surface defects of industrial parts based on anomaly detection algorithm
WO2023071577A1 (en) * 2021-10-28 2023-05-04 北京有竹居网络技术有限公司 Feature extraction model training method and apparatus, picture searching method and apparatus, and device
CN114078230A (en) * 2021-11-19 2022-02-22 西南交通大学 Small target detection method for self-adaptive feature fusion redundancy optimization
CN114078230B (en) * 2021-11-19 2023-08-25 西南交通大学 Small target detection method for self-adaptive feature fusion redundancy optimization
CN114022475A (en) * 2021-11-23 2022-02-08 上海交通大学 Image anomaly detection and anomaly positioning method and system based on self-supervision mask
CN113870422B (en) * 2021-11-30 2022-02-08 华中科技大学 Point cloud reconstruction method, device, equipment and medium
CN113870422A (en) * 2021-11-30 2021-12-31 华中科技大学 Pyramid Transformer-based point cloud reconstruction method, device, equipment and medium
CN114241273A (en) * 2021-12-01 2022-03-25 电子科技大学 Multi-modal image processing method and system based on Transformer network and hypersphere space learning
WO2023125671A1 (en) * 2021-12-31 2023-07-06 中兴通讯股份有限公司 Image processing method, electronic device, storage medium and program product
DE102022103844B3 (en) 2022-02-17 2023-06-22 Synsor.ai GmbH Method for optimizing a production process based on visual information and device for carrying out the method
CN114463329A (en) * 2022-04-12 2022-05-10 苏芯物联技术(南京)有限公司 Welding defect detection method and system based on image and time sequence data fusion
CN114615507A (en) * 2022-05-11 2022-06-10 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Image coding method, decoding method and related device
CN115146762A (en) * 2022-06-14 2022-10-04 兰州理工大学 Method for enhancing ViT model robustness based on SE module
CN114897909A (en) * 2022-07-15 2022-08-12 四川大学 Crankshaft surface crack monitoring method and system based on unsupervised learning
WO2024097126A1 (en) * 2022-10-31 2024-05-10 Cellcarta Fremont Llc System and method for automatic gating in flow cytometry
CN116311427A (en) * 2023-02-07 2023-06-23 国网数字科技控股有限公司 Face counterfeiting detection method, device, equipment and storage medium
CN115860091A (en) * 2023-02-15 2023-03-28 武汉图科智能科技有限公司 Depth feature descriptor learning method based on orthogonal constraint
CN116383771A (en) * 2023-06-06 2023-07-04 云南电网有限责任公司信息中心 Network anomaly intrusion detection method and system based on variation self-coding model
CN116383771B (en) * 2023-06-06 2023-10-27 云南电网有限责任公司信息中心 Network anomaly intrusion detection method and system based on variation self-coding model
CN116645525A (en) * 2023-07-27 2023-08-25 深圳市豆悦网络科技有限公司 Game image recognition method and processing system
CN116645525B (en) * 2023-07-27 2023-10-27 深圳市豆悦网络科技有限公司 Game image recognition method and processing system
CN116993694A (en) * 2023-08-02 2023-11-03 江苏济远医疗科技有限公司 Non-supervision hysteroscope image anomaly detection method based on depth feature filling
CN116993694B (en) * 2023-08-02 2024-05-14 江苏济远医疗科技有限公司 Non-supervision hysteroscope image anomaly detection method based on depth feature filling
CN116758400A (en) * 2023-08-15 2023-09-15 安徽容知日新科技股份有限公司 Method and device for detecting abnormality of conveyor belt and computer readable storage medium
CN116758400B (en) * 2023-08-15 2023-10-17 安徽容知日新科技股份有限公司 Method and device for detecting abnormality of conveyor belt and computer readable storage medium
CN116824278A (en) * 2023-08-29 2023-09-29 腾讯科技(深圳)有限公司 Image content analysis method, device, equipment and medium
CN116824278B (en) * 2023-08-29 2023-12-19 腾讯科技(深圳)有限公司 Image content analysis method, device, equipment and medium
CN117437518A (en) * 2023-11-03 2024-01-23 苏州鑫康成医疗科技有限公司 GLNET and self-attention-based heart ultrasonic image recognition method
CN117314900B (en) * 2023-11-28 2024-03-01 诺比侃人工智能科技(成都)股份有限公司 Semi-self-supervision feature matching defect detection method
CN117314900A (en) * 2023-11-28 2023-12-29 诺比侃人工智能科技(成都)股份有限公司 Semi-self-supervision feature matching defect detection method
CN118552956A (en) * 2024-07-29 2024-08-27 济南大学 Automobile part detection method based on super-resolution transducer

Also Published As

Publication number Publication date
US20230281959A1 (en) 2023-09-07

Similar Documents

Publication Publication Date Title
US20230281959A1 (en) Deep learning-based anomaly detection in images
Cohen et al. Sub-image anomaly detection with deep pyramid correspondences
Chen et al. Once for all: a two-flow convolutional neural network for visual tracking
Saedi et al. A deep neural network approach towards real-time on-branch fruit recognition for precision horticulture
Kalsotra et al. Background subtraction for moving object detection: explorations of recent developments and challenges
CA2953394C (en) System and method for visual event description and event analysis
Kristan et al. The visual object tracking vot2015 challenge results
US10824935B2 (en) System and method for detecting anomalies in video using a similarity function trained by machine learning
Karlinsky et al. The chains model for detecting parts by their context
JP7101805B2 (en) Systems and methods for video anomaly detection
Balafas et al. Machine learning and deep learning for plant disease classification and detection
Gündoğdu et al. The visual object tracking VOT2016 challenge results
Bahl et al. Single-shot end-to-end road graph extraction
Lu et al. Comparative evaluations of human behavior recognition using deep learning
Ghenescu et al. Object recognition on long range thermal image using state of the art dnn
Suwais et al. A review on classification methods for plants leaves recognition
Huang et al. Person re-identification across multi-camera system based on local descriptors
Qureshi et al. Dense segmentation of textured fruits in video sequences
Zheng et al. A winner-take-All strategy for improved object tracking
Liu et al. An adaptive feature-fusion method for object matching over non-overlapped scenes
Goyal et al. Moving Object Detection in Video Streaming Using Improved DNN Algorithm
Agrawal et al. Systematic Review on Various Deep Learning Models for Object Detection in Videos
Pava et al. Object Detection and Motion Analysis in a Low Resolution 3-D Model
Groefsema Uncertainty Quantification in DETR for Pedestrian Detection

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21720860; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 21720860; Country of ref document: EP; Kind code of ref document: A1)