CN111861967A - Network, method and apparatus for detecting local anomalies in radiation images - Google Patents
- Publication number
- CN111861967A (application number CN201910315913.5A)
- Authority
- CN
- China
- Prior art keywords
- network
- image
- reconstructed
- eigenvectors
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30268—Vehicle interior
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
A network for detecting local anomalies in a radiation image is provided, comprising a reconstructed network portion and a twin network portion. The reconstruction network part comprises a first coding network, a generation network, a discrimination network and a second coding network, wherein the first coding network is used for generating the eigenvector and the category of the original image according to the original image, the generation network is used for generating the reconstructed image according to the eigenvector and the category, and the second coding network is used for generating the eigenvector of the reconstructed image according to the reconstructed image. The twin network portion determines a similarity between the original image and the reconstructed image based on the eigenvectors of the original image and the corresponding eigenvectors of the reconstructed image to determine whether a local anomaly exists. The invention also provides a corresponding training method and a method for detecting local anomalies in the radiation image by using the network.
Description
Technical Field
The present invention relates to the field of radiation images, image understanding, deep learning, and the like, and in particular to a network, method, apparatus, and computer-readable storage medium for detecting local anomalies in radiation images.
Background
The problem of smuggled goods concealed in vehicles at land ports has long been a major concern for customs. Lawbreakers often hide smuggled objects in concealed locations such as the wheels, engines, and fuel tanks of containers, vans, and cars. Because the types of concealed samples are complex and varied and the concealment positions are hard to determine, radiation image samples of such cases obtained by security inspection equipment are difficult to collect. The traditional detection method uses the difference between the vehicle under inspection and a template to judge whether concealment exists. However, this method is affected by many factors, such as template precision, acquisition equipment, and vehicle type, and its performance is often unsatisfactory.
Disclosure of Invention
In view of the shortcomings of the prior art, the present invention provides a network, method, apparatus and computer-readable storage medium for vehicle body local anomaly detection, which can significantly improve the accuracy of detecting local anomalies in radiation images.
According to a first aspect of the present invention there is provided a network for detecting local anomalies in radiation images, the network comprising:
a reconstruction network part including a first encoding network E1, a generating network G, a discriminating network D and a second encoding network E2, the first encoding network E1 being used for generating the eigenvector z and the class c of the original image x from the original image x, the generating network G being used for generating a reconstructed image x̂ from the eigenvector z and the class c, and the second encoding network E2 being used for encoding the reconstructed image x̂ to obtain its eigenvector ẑ; and
a twin network part for judging the similarity between the original image x and the reconstructed image x̂ based on the eigenvector z of the original image x and the eigenvector ẑ of the corresponding reconstructed image x̂, so as to determine whether a local anomaly exists.
In one embodiment, the models of the first encoding network E1 and the second encoding network E2 may be the same.
In one embodiment, the twin network portion generates a feature pair based on the eigenvector z of the original image x and the eigenvector ẑ of the corresponding reconstructed image x̂, and determines the presence or absence of a local anomaly from a dissimilarity measure of the feature pair. For example, the dissimilarity measure may be a Euclidean metric.
In one embodiment, the reconstruction network portion may be trained with radiation images without local anomalies.
In one embodiment, after training of the reconstruction network portion is completed, the twin network portion is trained with radiation images both with and without local anomalies.
According to a second aspect of the present invention, there is also provided a training method for a network as described in the first aspect of the present invention, the method comprising: the reconstructed network portion is trained using images without local anomalies and a loss function L,
L = L_Re + L_D + L_C + L_En,
wherein L_Re represents the L1 norm between the reconstructed image x̂ and the original image x, L_D represents the L2 norm of the difference between the intermediate features of the original image x and the reconstructed image x̂ obtained through the discrimination network D, L_C represents the classification function of the first encoding network E1, and L_En represents the L1 norm between the eigenvector z and the eigenvector ẑ.
In one embodiment, the above terms are defined as follows:
L_Re = ||x - x̂||_1,
L_D = ||f(x) - f(x̂)||_2,
L_C = -E_{x~P}[log P(c|x)],
L_En = ||z - ẑ||_1,
where f is the function of the discrimination network D, E_{x~P} represents the expectation over x drawn from the distribution P, and P(c|x) represents the probability density that x belongs to class c.
In one embodiment, the method may further comprise: the images with and without local anomalies and the Loss function Loss are used to train the twin network part,
Loss = (1 - y_{i,j}) · D(f_i, f_j) + y_{i,j} · max(0, m - D(f_i, f_j)),
wherein D(f_i, f_j) represents the distance between the feature pair extracted from the original image and the reconstructed image via the twin network portion; for images containing local anomalies, y_{i,j} = 1, for images without local anomalies, y_{i,j} = 0, and m is a predefined threshold (margin).
According to a third aspect of the present invention, there is provided a method for detecting local anomalies in a radiation image, comprising: substituting an image to be measured, comprising a local radiation image of a vehicle body, into the reconstruction network part of the network according to the first aspect of the invention to generate a reconstructed image; and substituting the image to be measured and the reconstructed image into the twin network part to determine whether the image to be measured contains an abnormality.
In one embodiment, the network may be obtained by a training method according to the second aspect of the present invention.
The invention also provides a computer readable storage medium having stored thereon instructions which, when executed by a processor, cause the processor to implement a network as described in the first aspect of the invention or to perform a method as described in the second aspect of the invention.
The present invention also provides an electronic device, comprising: a processor and a memory, the memory having computer readable code stored therein which, when executed by the processor, causes the processor to implement a network as described in the first aspect of the invention or to perform a method as described in the second aspect of the invention.
The invention predicts directly on the radiation image using a deep learning method, and can therefore simultaneously judge whether contraband is concealed in different vehicle body parts (such as fuel tanks, wheels, and transmissions). In addition, the invention uses multiple classes of suspicion-free images as training samples and trains them jointly, which avoids redundant use of computing resources and improves operating efficiency; it achieves high accuracy for concealment detection at different positions, and has the characteristics of simplicity, efficiency, and accuracy.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the detailed description, serve to explain the principles of the invention. The invention will be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
FIG. 1 shows a schematic diagram of a network for detecting local anomalies in a vehicle body, according to an embodiment of the invention;
fig. 2 shows an image to be measured and a corresponding reconstructed image according to an embodiment of the invention.
Fig. 3 shows a network for detecting local anomalies in radiation images, according to another embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The idea adopted by the invention is to use an encoding model to obtain a coding vector that accurately describes the data characteristics, from which a decoding model can reconstruct the image information. To enhance the network's ability to learn different vehicle body positions, different body parts can be divided into different categories and used as training data of different classes for training the model. Because the resolution of images reconstructed by a plain self-encoding network is not high, the invention introduces an adversarial network to enhance the network's learning ability during model reconstruction. After reconstruction, an encoding network with the same structure as the original encoding model is introduced to obtain the eigenvector of the reconstructed image, and the reconstruction ability of the model is optimized by constraining the L1 norm between the eigenvectors of the image to be measured and of the reconstructed image. In addition, since the reconstructed image alone cannot provide an automatic judgment of abnormality, a twin network is introduced in the second half of network training to compare the image to be measured with the reconstructed image; automatic anomaly detection is achieved by judging the similarity between the two.
Fig. 1 shows a schematic diagram of a network for detecting local anomalies of a vehicle body according to an embodiment of the invention. The network is mainly composed of convolutional layers and, as shown in Fig. 1, consists of three parts: a conditional variational encoding network (encoding network E1 and reconstruction network G), a discrimination network D, and an encoding network E2. The conditional encoding network can be used to generate samples of different classes C: an original image O of class C produces a latent variable Z (also called an eigenvector) via the encoding network E1, which then produces a generated image R via the reconstruction network G. The discrimination network D discriminates between the original image O and the generated image R, mainly to improve the performance of the reconstruction network. The generated image R may further be input to the encoding network E2 to obtain the latent variable Z'. In addition, the original image O and the generated image R may be input into the corresponding twin (Siamese) networks, respectively, yielding the features feat_O and feat_R, which are used to judge whether the two images are similar.
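As a rough illustration of this data flow only (not the patented implementation; the real E1, G, D, and E2 are convolutional networks, and all dimensions and the toy linear "networks" below are placeholder assumptions), the path O → E1 → z → G → R → E2 → ẑ can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM, LATENT_DIM, N_CLASSES = 64, 100, 4  # assumed sizes; the text fixes only the eigenvector dim (100)

# Toy stand-ins for the networks: each is a single random linear map.
W_E1 = rng.normal(size=(LATENT_DIM, IMG_DIM))
W_G = rng.normal(size=(IMG_DIM, LATENT_DIM + N_CLASSES))
W_E2 = rng.normal(size=(LATENT_DIM, IMG_DIM))

def encode_E1(x):
    """Encoding network E1: original image -> eigenvector z."""
    return W_E1 @ x

def generate_G(z, c):
    """Generating network G: (z, one-hot class c) -> reconstructed image."""
    onehot = np.eye(N_CLASSES)[c]
    return W_G @ np.concatenate([z, onehot])

def encode_E2(x_hat):
    """Encoding network E2: reconstructed image -> eigenvector z_hat."""
    return W_E2 @ x_hat

x = rng.normal(size=IMG_DIM)   # original image O (flattened)
z = encode_E1(x)
x_hat = generate_G(z, c=2)     # reconstructed image R
z_hat = encode_E2(x_hat)
print(z.shape, x_hat.shape, z_hat.shape)  # (100,) (64,) (100,)
```

The shapes show the conditional structure: the class c enters only through G, so the same latent z can be decoded per class.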
For the above network, the present invention may implement the following procedure.
Step 1: and (4) preprocessing and sorting the data. For example, the radiation image may have problems such as streak noise and uneven brightness, and therefore, it is necessary to perform preprocessing such as streak removal and image enhancement before training.
Step 2: training of the reconstructed network part. The non-entrained images are collected as a training sample set. The cost function of the network consists of 4 parts, namely (1) LReL1 norm of reconstructed image R and original image O, and (2) LDThe model of the original image and the reconstructed image is determined by the L2 norm of the intermediate feature difference obtained by the network, (3) the classification function of the coding network, and (4) the L1 norm of the eigenvector of the self-coding network and the eigenvector of the coding network. Thus, the Loss function can be expressed as
L = L_Re + L_D + L_C + L_En,
wherein
L_Re = ||O - R||_1,
L_D = ||f(O) - f(R)||_2,
L_C = -E_{x~P}[log P(c|x)],
L_En = ||z - ẑ||_1,
where f is the function of the discrimination network D, E_{x~P} represents the expectation over x drawn from the distribution P, and P(c|x) represents the probability density that x belongs to class c.
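A minimal NumPy sketch of the four loss terms for a single sample (the intermediate features and the class probability below are stand-in inputs, not the patent's actual networks):

```python
import numpy as np

def composite_loss(O, R, z, z_hat, feat_O, feat_R, p_c_given_x):
    """L = L_Re + L_D + L_C + L_En, following the four terms in the text."""
    L_Re = np.abs(O - R).sum()              # L1 norm, reconstructed vs original image
    L_D = np.linalg.norm(feat_O - feat_R)   # L2 norm of discriminator feature difference
    L_C = -np.log(p_c_given_x)              # classification term -E[log P(c|x)], one sample
    L_En = np.abs(z - z_hat).sum()          # L1 norm between the two eigenvectors
    return L_Re + L_D + L_C + L_En

O = np.array([0.0, 1.0]); R = np.array([0.5, 1.0])
z = np.array([1.0, -1.0]); z_hat = np.array([1.0, 0.0])
feat_O = np.array([3.0, 4.0]); feat_R = np.array([0.0, 0.0])
print(composite_loss(O, R, z, z_hat, feat_O, feat_R, p_c_given_x=1.0))
# L_Re=0.5, L_D=5.0, L_C=0.0, L_En=1.0 -> 6.5
```

In training, these terms would be averaged over a batch and differentiated with an autograd framework; the equal weighting of the four terms follows the formula as given.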
In the training process of the network, the reconstructed network structure is trained first, and when the generated image is stable and accurate, the next part of network training is started. The encoded reconstruction network G is obtained by optimizing the cost function of the first stage.
And step 3: training of the twin network portion. In order to determine whether there is a difference between the reconstructed image and the original image (image to be measured) (where the difference refers to an abnormal problem such as entrainment), a twin network portion (not shown in fig. 1) may be provided after the network structure is reconstructed. Two identical or similar networks of the twin network part can respectively receive the reconstructed image and the image to be detected, if the image to be detected does not have the occlusion problem, the image to be detected and the reconstructed image are not changed, otherwise, the image to be detected and the reconstructed image are different. To be able to simplify the measure of this difference, this transformation is here converted into a measure of distance. And judging whether the image contains entrainment or not by judging the affinity and the sparseness of the feature vector.
The image to be measured and the reconstructed image {img1, img2} are brought into the twin network in pairs to obtain a feature pair {feat1, feat2}; the distance between the two features is denoted D(f1, f2) and may be computed with the Euclidean distance, although other measures can be substituted. The loss function at this stage is a contrastive loss, which has the property of increasing inter-class differences while reducing intra-class differences, and is expressed as
Loss_Similarity = (1 - y_{i,j}) · D(f_i, f_j) + y_{i,j} · max(0, m - D(f_i, f_j)),
wherein f_i and f_j respectively represent the features obtained by passing the original image and the reconstructed image through the network. When y_{i,j} = 0, the original image is consistent with the reconstructed image and the image to be measured contains no concealment; y_{i,j} = 1 indicates that the image to be measured contains concealment. m is a predefined threshold (margin).
Specifically, it is assumed that there is a boundary distance m: within a sphere of radius m, an attractive force exists between points (e.g., the points corresponding to the reconstructed image and the image to be measured), and this force disappears once the distance between points exceeds m; the boundary m is given in order to maintain the constraint of this force. When the two images are similar, minimizing Loss_Similarity = D(f_i, f_j) draws them closer; when the two images are not similar, minimizing Loss_Similarity = max(0, m - D(f_i, f_j)) pushes them apart, up to the margin m.
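The margin behavior just described can be illustrated numerically (a sketch; the convention y = 0 for a similar pair and y = 1 for a dissimilar pair follows the two minimization cases above):

```python
def contrastive_loss(d, y, m=1.0):
    """Loss_Similarity = (1 - y)*d + y*max(0, m - d), with d = D(f_i, f_j)."""
    return (1 - y) * d + y * max(0.0, m - d)

# Similar pair (y=0): the loss is simply the distance, pulling the pair together.
print(contrastive_loss(0.3, y=0))          # 0.3
# Dissimilar pair (y=1) inside the margin: loss m - d pushes the pair apart.
print(contrastive_loss(0.3, y=1, m=1.0))   # ~0.7
# Dissimilar pair already beyond the margin: no force, zero loss.
print(contrastive_loss(1.5, y=1, m=1.0))   # 0.0
```

Note that the gradient of the hinge term vanishes once d > m, which is exactly the "force disappears beyond the sphere of radius m" intuition.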
And 4, step 4: and after the network training is finished, substituting the images to be detected to judge whether foreign matters exist. It should be noted that after the network training, the network D and the network E2 do not participate in the calculation of the test part. And acquiring a reconstructed image from the image to be detected through a condition variation coding network. Then, the image to be measured and the reconstructed image can be substituted into the twin network to determine whether the image contains an anomaly.
A method for detecting a local abnormality of a vehicle body according to an embodiment of the present invention will be described below. The method comprises the following specific steps.
Step 1: data cleaning and processing
The radiation image contains stripe-type noise, which must be removed before network training. The preprocessed data are divided into two parts. The first part consists of normal data (containing no concealment) and is used for training the image reconstruction part of the network. Different positions of the vehicle body can be cropped out and organized into different classes of data; for example, to focus inspection on the four positions of the wheel, the fuel tank, the engine, and the battery, these four positions can be cropped out and sorted into four classes of data. The second part consists of suspicion-free data and suspect (concealment) data, for example in a ratio of 1:1, though not limited to this. The concealment data can be taken from real concealment images, manually assembled concealment images, or images generated by an algorithm, but is not limited to these. Any object detection or segmentation method may be used to acquire the above data.
Step 2: network training of reconstructed image portions
To obtain reconstructed images of different positions, the data and their corresponding categories are brought into the network described with reference to Fig. 1 for training. The training data first pass through the encoding network E1 to obtain the eigenvector z; z is then fed into the decoding network to obtain the reconstructed image R; and R is substituted into the encoding network E2 to obtain the eigenvector ẑ. Meanwhile, the original image O and the reconstructed image R may be substituted into the discrimination network D for discrimination training, i.e., the original image is judged true and the reconstructed image false. For example, the encoding networks E1 and E2 and the discrimination network D mainly use convolutional layers; the number of convolution kernels per layer is a power of 2, the kernel size is 3, the stride is 2, and the dimension of the eigenvector is 100. The network parameters are not unique and may be adjusted according to actual conditions. For the discrimination network, a fully connected layer f can be attached after the convolutional layers.
And step 3: training of twin network portions
The image to be measured, img1, is substituted into the network trained in Step 2 to obtain a reconstructed image img2. The training data for this part of the network are substituted in the form of image pairs {img1, img2}. The network extracts convolution features through its convolutional layers, pools them at the end of the network along the kernel-count (channel) dimension, and constrains the pooled features with the contrastive loss function to judge whether the substituted images are similar. If the test image contains no concealment, the two images are similar; if it does, they are dissimilar, i.e., the image to be measured is abnormal.
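The end-of-network channel pooling followed by a distance comparison can be sketched as follows (the feature-map shape and the decision threshold are illustrative assumptions):

```python
import numpy as np

def pooled_feature(feat_map):
    """Average-pool a (channels, H, W) feature map along the channel
    (kernel-count) dimension, then flatten to a feature vector."""
    return feat_map.mean(axis=0).ravel()

def is_similar(feat1, feat2, threshold=1.0):
    """Declare the pair similar (no concealment) if the Euclidean
    distance falls below the threshold."""
    return np.linalg.norm(feat1 - feat2) < threshold

f1 = np.ones((8, 4, 4))          # features of the image under test
f2 = np.ones((8, 4, 4)) * 1.01   # features of its reconstruction (nearly identical)
print(is_similar(pooled_feature(f1), pooled_feature(f2)))  # True
```

The threshold here plays the role of a decision boundary induced by the margin m learned during contrastive training.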
And 4, step 4: image testing
Referring to fig. 2, an image under test and the corresponding reconstructed image according to an embodiment of the invention are shown. The image to be measured is substituted into the conditional variational network (i.e., the encoding network E1 and the generating network) to obtain a reconstructed image of the corresponding category. The image to be measured (left in Fig. 2) and the reconstructed image (right in Fig. 2) are then substituted into the twin network for comparison to obtain the corresponding detection result; a possible abnormal object is marked with a white box.
Fig. 3 shows a network 300 for detecting local anomalies in radiation images according to another embodiment of the present invention. The network 300 comprises a reconstruction network portion 310 and a twin network portion 320. The reconstruction network portion includes a first encoding network E1 311, a generating network G 312, a discriminating network D 313, and a second encoding network E2 314. The first encoding network E1 311 is used for generating the eigenvector z and the class c of the original image x from the original image x; the generating network G 312 is used for generating a reconstructed image x̂ from the eigenvector z and the class c; and the second encoding network E2 314 is used for generating the eigenvector ẑ of the reconstructed image x̂ from the reconstructed image x̂. The twin network portion 320 judges the similarity between the original image x and the reconstructed image x̂ based on the eigenvector z of the original image x and the eigenvector ẑ of the corresponding reconstructed image x̂, and thereby whether a local anomaly exists.
As mentioned above, the models of the first encoding network E1 311 and the second encoding network E2 314 may be the same, but their trained parameters may differ. In one embodiment, the twin network portion 320 generates a feature pair based on the eigenvector z of the original image x and the eigenvector ẑ of the corresponding reconstructed image x̂, and the presence or absence of a local anomaly can be determined from a dissimilarity measure of the feature pair. As described above, such a dissimilarity measure may be a distance measure, such as the Euclidean metric, although the invention is not limited thereto.
In addition, the reconstruction network portion 310 may be trained with radiation images free of local anomalies; that is, the original images input to the first encoding network 311 during training are radiation images without local anomalies. In contrast, after the training of the reconstruction network portion 310 is completed, the twin network portion 320 is trained with two types of images: radiation images with local anomalies and radiation images without local anomalies.
For the network 300 shown in fig. 3, the network 300 may be trained using the training method described with reference to fig. 1. Specifically, first, the reconstructed network portion 310 is trained using images without local anomalies and a loss function L. The loss function L may be as described with reference to fig. 1. The twin network portion 320 is then trained using the images with and without local anomalies and the Loss function Loss. The Loss function Loss may be as described with reference to fig. 1.
After the training is completed, the trained network may be used to detect local anomalies in the radiation image. Specifically, an image to be measured including a local radiation image of a vehicle body is substituted into a reconstruction network part in a network to generate a reconstruction image; then, the image to be measured and the reconstructed image are substituted into the twin network portion to determine whether the image to be measured contains an abnormality. For example, the twin network portion may output a probability score indicating that the image under test contains anomalies.
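Putting the test-time flow together (reconstruct, then compare), a high-level sketch follows; the `reconstruct` and `siamese_features` callables stand in for the trained conditional variational network and twin network branches, which are assumptions for illustration only:

```python
import numpy as np

def detect_anomaly(img, reconstruct, siamese_features, margin=1.0):
    """Test-time flow: reconstruct the image, extract twin-network features
    from both, and flag an anomaly when their distance exceeds the margin."""
    recon = reconstruct(img)
    f_test, f_recon = siamese_features(img), siamese_features(recon)
    distance = float(np.linalg.norm(f_test - f_recon))
    return {"anomaly": distance > margin, "score": distance}

# Toy stand-ins: a reconstructor that only knows the normal appearance,
# and identity features, to show the decision logic.
normal = np.zeros(16)
concealed = np.zeros(16); concealed[4:8] = 5.0   # a local block of extra attenuation
reconstruct = lambda img: normal
features = lambda img: img
print(detect_anomaly(normal, reconstruct, features))     # no anomaly, score 0.0
print(detect_anomaly(concealed, reconstruct, features))  # anomaly, score 10.0
```

Because the reconstructor is trained only on anomaly-free images, a concealed object survives in the residual between the test image and its reconstruction, which is what the distance score captures.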
The present invention may also provide a computer storage medium having computer readable code embodied thereon which, when executed by any type of processor or processing device, implements a network or method such as described with reference to fig. 1 or 3.
The invention may also provide an electronic device comprising a memory and a processor, wherein the memory has computer readable code recorded thereon which, when executed by the processor, causes the processor to perform a network or method such as described with reference to fig. 1 or 3.
The invention predicts directly on the radiation image using a deep learning method, and can therefore simultaneously judge whether contraband is concealed in different vehicle body parts (such as fuel tanks, wheels, and transmissions). In addition, the invention uses multiple classes of suspicion-free images as training samples and trains them jointly, which avoids redundant use of computing resources and improves operating efficiency; it achieves high accuracy for concealment detection at different positions, and has the characteristics of simplicity, efficiency, and accuracy.
Although the present subject matter has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the disclosed subject matter. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The described embodiments are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The specification, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
Claims (12)
1. A network for detecting local anomalies in radiation images, the network comprising:
a reconstruction network part including a first encoding network E1, a generating network G, a discriminating network D and a second encoding network E2, the first encoding network E1 being used for generating the eigenvector z and the class c of the original image x from the original image x, the generating network G being used for generating a reconstructed image x̂ from the eigenvector z and the class c, and the second encoding network E2 being used for generating the eigenvector ẑ of the reconstructed image x̂ from the reconstructed image x̂; and
a twin network part for judging the similarity between the original image x and the reconstructed image x̂ based on the eigenvector z of the original image x and the eigenvector ẑ of the corresponding reconstructed image x̂, so as to determine whether a local anomaly exists.
2. The network of claim 1, wherein the models of the first encoded network E1 and the second encoded network E2 are the same.
3. The network according to claim 1, wherein said twin network portion generates a feature pair based on the eigenvector z of said original image x and the eigenvector ẑ of the corresponding reconstructed image x̂, and determines the presence or absence of a local anomaly from a dissimilarity measure of the feature pair.
4. The network of claim 1, wherein the reconstruction network portion is trained by a radiation image free of local anomalies.
5. The network of claim 4, wherein the twin network portion is trained by a radiation image with and without local anomalies after training of the reconstruction network portion is completed.
6. A training method for the network of claim 1, the method comprising:
training the reconstruction network portion using images free of local anomalies and a loss function L = L_Re + L_D + L_C + L_En,
wherein L_Re represents the L1 norm between the reconstructed image x̂ and the original image x, L_D represents the L2 norm of the difference between the intermediate features of the original image x and the reconstructed image x̂ obtained through the discrimination network D, L_C represents the classification function of the first encoding network E1, and L_En represents the L1 norm between the eigenvector z and the eigenvector ẑ.
8. The training method of claim 6, the method comprising:
the twin network portion is trained using images with and without local anomalies and a Loss function Loss,
Loss = (1 - y_{i,j}) · D(f_i, f_j) + y_{i,j} · max(0, m - D(f_i, f_j)),
wherein D(f_i, f_j) represents the distance between the feature pair extracted from the original image and the reconstructed image via said twin network portion; for images containing local anomalies, y_{i,j} = 1, for images without local anomalies, y_{i,j} = 0, and m is a predefined threshold.
9. A method for detecting local anomalies in a radiation image, comprising:
substituting an image to be detected, comprising a local radiation image of a vehicle body, into the reconstruction network part of a network according to any one of claims 1 to 5 to generate a reconstructed image;
and substituting the image to be detected and the reconstructed image into the twin network part to determine whether the image to be detected contains an anomaly.
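The two-step detection method of claim 9 can be sketched as follows. The `reconstruct` and `twin_features` callables and the threshold are hypothetical placeholders for the trained reconstruction network part and twin network part; only the control flow comes from the claim.

```python
import numpy as np

def detect_anomaly(x, reconstruct, twin_features, threshold):
    """Claim 9 as a pipeline (toy sketch): reconstruct the image, extract
    a feature pair with the twin network, and flag an anomaly when the
    feature dissimilarity exceeds a threshold."""
    x_rec = reconstruct(x)                  # step 1: reconstruction network part
    f_x = twin_features(x)                  # step 2: twin network features of both
    f_rec = twin_features(x_rec)
    score = np.linalg.norm(f_x - f_rec)     # dissimilarity measure of the pair
    return score > threshold, score

# Degenerate usage: a perfect reconstruction yields zero dissimilarity.
x = np.ones(64)
is_anom, score = detect_anomaly(x, reconstruct=lambda v: v,
                                twin_features=lambda v: v[:16],
                                threshold=0.5)
```

The intuition: the reconstruction network is trained only on anomaly-free images (claim 4), so an anomalous region reconstructs poorly, and the twin network's feature distance between input and reconstruction becomes large.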
10. The method of claim 9, wherein the network is obtained by the training method of claim 6.
11. A computer readable storage medium having stored thereon instructions which, when executed by a processor, cause the processor to implement the network of any one of claims 1 to 5 or to perform the method of any one of claims 6 to 10.
12. An electronic device, comprising:
a processor; and
a memory storing computer readable code which, when executed by the processor, causes the processor to implement the network of any one of claims 1 to 5 or to perform the method of any one of claims 6 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910315913.5A CN111861967B (en) | 2019-04-18 | 2019-04-18 | Network, method and apparatus for detecting local anomalies in radiation images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111861967A true CN111861967A (en) | 2020-10-30 |
CN111861967B CN111861967B (en) | 2024-03-15 |
Family
ID=72951865
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910315913.5A Active CN111861967B (en) | 2019-04-18 | 2019-04-18 | Network, method and apparatus for detecting local anomalies in radiation images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111861967B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10121104B1 (en) * | 2017-11-17 | 2018-11-06 | Aivitae LLC | System and method for anomaly detection via a multi-prediction-model architecture |
US20180374207A1 (en) * | 2017-06-27 | 2018-12-27 | Nec Laboratories America, Inc. | Reconstructor and contrastor for anomaly detection |
CN109461188A (en) * | 2019-01-30 | 2019-03-12 | 南京邮电大学 | A kind of two-dimensional x-ray cephalometry image anatomical features point automatic positioning method |
CN109584221A (en) * | 2018-11-16 | 2019-04-05 | 聚时科技(上海)有限公司 | A kind of abnormal image detection method generating confrontation network based on supervised |
Non-Patent Citations (2)
Title |
---|
Yu Bo; Fang Yequan; Liu Min; Dong Juntao: "Image Reconstruction Algorithm Based on Deep Convolutional Neural Networks", Computer Systems & Applications, no. 09 *
Lei Liying; Chen Huahua: "Video Anomaly Detection Technology Based on AlexNet", Journal of Hangzhou Dianzi University (Natural Sciences), no. 06 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110119753B (en) | Lithology recognition method by reconstructed texture | |
CN109886970B (en) | Detection segmentation method for target object in terahertz image and computer storage medium | |
CN104778692B (en) | A kind of fabric defect detection method optimized based on rarefaction representation coefficient | |
CN111833300A (en) | Composite material component defect detection method and device based on generation countermeasure learning | |
CN107577687A (en) | Image search method and device | |
CN111798409A (en) | Deep learning-based PCB defect data generation method | |
CN108122221A (en) | The dividing method and device of diffusion-weighted imaging image midbrain ischemic area | |
Haring et al. | Kohonen networks for multiscale image segmentation | |
CN113361646A (en) | Generalized zero sample image identification method and model based on semantic information retention | |
Liu et al. | Sagan: Skip-attention gan for anomaly detection | |
CN114037001A (en) | Mechanical pump small sample fault diagnosis method based on WGAN-GP-C and metric learning | |
CN114708518A (en) | Bolt defect detection method based on semi-supervised learning and priori knowledge embedding strategy | |
CN117197591B (en) | Data classification method based on machine learning | |
Zu et al. | Detection of common foreign objects on power grid lines based on Faster R-CNN algorithm and data augmentation method | |
Li et al. | Wafer crack detection based on yolov4 target detection method | |
CN113159046A (en) | Method and device for detecting foreign matters in ballastless track bed | |
CN107993193A (en) | The tunnel-liner image split-joint method of surf algorithms is equalized and improved based on illumination | |
CN115588178B (en) | Automatic extraction method for high-precision map elements | |
Sahasrabudhe et al. | Structured spatial domain image and data comparison metrics | |
CN111861967B (en) | Network, method and apparatus for detecting local anomalies in radiation images | |
CN112508862B (en) | Method for enhancing magneto-optical image of crack by improving GAN | |
CN113704073B (en) | Method for detecting abnormal data of automobile maintenance record library | |
CN115661543A (en) | Multi-scale industrial part defect detection method based on generation countermeasure network | |
CN115982566A (en) | Multi-channel fault diagnosis method for hydroelectric generating set | |
Lin et al. | Image denoising of printed circuit boards using conditional generative adversarial network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||