CN111445447B - CT image anomaly detection method and device - Google Patents

CT image anomaly detection method and device

Info

Publication number
CN111445447B
CN111445447B (application CN202010183389.3A)
Authority
CN
China
Prior art keywords
sinogram
image
partial
prediction
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010183389.3A
Other languages
Chinese (zh)
Other versions
CN111445447A (en)
Inventor
王飞翔
陈名亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Medical Systems Co Ltd
Original Assignee
Neusoft Medical Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Medical Systems Co Ltd
Priority to CN202010183389.3A
Publication of CN111445447A
Application granted
Publication of CN111445447B
Legal status: Active (current)
Anticipated expiration: legal status not yet determined


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The specification provides a CT image anomaly detection method and device. The method includes: acquiring a sinogram of a CT image to be detected, the sinogram comprising a first partial sinogram and a second partial sinogram; inputting the first partial sinogram into a prediction network, which generates a second prediction sinogram corresponding to the second partial sinogram according to the first partial sinogram, the prediction network having been obtained by training in advance with sinograms of normal CT images; and comparing the second partial sinogram with the second prediction sinogram and determining, from the magnitude of the difference, whether the CT image to be detected is abnormal. In this way, CT image anomalies can be detected using only sinograms of normal CT images for training.

Description

CT image anomaly detection method and device
Technical Field
The present disclosure relates to the field of medical devices, and in particular, to a method and apparatus for detecting abnormalities in CT images.
Background
In clinical CT (Computed Tomography) scans, two kinds of artifacts, motion artifacts and beam-hardening artifacts, tend to appear in the reconstructed CT images because of patient motion (global or local) and beam hardening of the X-rays. Detecting such artifacts promptly prevents them from interfering with diagnosis and improves the efficiency of clinical diagnosis and treatment.
At present, artifact detection is done by training a deep neural network with a large number of normal and abnormal (artifact-containing) CT images and using the trained network to decide whether a given CT image is abnormal, i.e., whether it contains artifacts. This way of detecting CT image abnormalities requires large numbers of both normal and abnormal CT images. However, most clinical CT images are normal, so it is difficult to collect enough abnormal CT images to meet the training requirements of the deep neural network and to reach sufficient sensitivity and specificity. Moreover, such a detection approach can only handle image anomalies of a particular category effectively; it cannot handle image anomalies of all categories.
Disclosure of Invention
At least one embodiment of the present specification provides a CT image anomaly detection method for detecting CT image anomalies using sinograms of normal CT images.
In a first aspect, a method for detecting an abnormality of a CT image is provided, the method comprising:
acquiring a sinogram of a CT image to be detected; the sinogram of the CT image to be detected comprises a first partial sinogram and a second partial sinogram;
inputting the first partial sinogram into a prediction network, and generating, by the prediction network, a second prediction sinogram corresponding to the second partial sinogram according to the first partial sinogram; the prediction network is obtained by training in advance with sinograms of normal CT images;
and comparing the difference between the second partial sinogram and the second prediction sinogram, and determining whether the CT image to be detected is abnormal according to the magnitude of the difference.
In a second aspect, there is provided a CT image anomaly detection apparatus, the apparatus comprising:
an acquisition module, used for acquiring a sinogram of the CT image to be detected; the sinogram of the CT image to be detected comprises a first partial sinogram and a second partial sinogram;
a prediction module, used for inputting the first partial sinogram into a prediction network, and generating, by the prediction network, a second prediction sinogram corresponding to the second partial sinogram according to the first partial sinogram; the prediction network is obtained by training in advance with sinograms of normal CT images;
and a comparison module, used for comparing the difference between the second partial sinogram and the second prediction sinogram and determining whether the CT image to be detected is abnormal according to the magnitude of the difference.
In a third aspect, a computer device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the CT image anomaly detection method according to any embodiment of the present specification.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor, implements the steps of the CT image anomaly detection method described in any one of the embodiments of the present specification.
According to the above technical solution, in at least one embodiment of the present specification, the first partial sinogram of the CT image to be detected is input into a prediction network, which predicts and generates a second prediction sinogram corresponding to the second partial sinogram, and whether the CT image to be detected is abnormal is determined by comparing the second partial sinogram with the generated second prediction sinogram. Because the prediction network is obtained by training a deep neural network in advance with normal CT images only, a large number of abnormal CT images need not be collected as training data; the training requirements can therefore be met and the network can reach sufficient sensitivity and specificity. In addition, this detection approach can detect image abnormalities of every category, i.e., anything that deviates from a normal CT image, rather than being tailored to one particular category of abnormality.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is a flowchart illustrating a method for CT image anomaly detection, in accordance with an exemplary embodiment;
FIG. 2 is a flowchart illustrating another CT image anomaly detection method in accordance with an exemplary embodiment;
FIG. 3 is a flowchart illustrating a predictive network training method, according to an exemplary embodiment;
FIG. 4 is a schematic diagram of a CT image anomaly detection device, according to an exemplary embodiment;
FIG. 5 is a schematic diagram of another CT image anomaly detection apparatus, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present specification; rather, they are merely examples of apparatus and methods consistent with some aspects of the specification, as recited in the appended claims.
The terminology used in the description presented herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present specification. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
At present, artifact detection is typically automated with the help of artificial intelligence. Specifically, a deep neural network is first trained with a large number of normal and abnormal (artifact-containing) CT images, for example by supervised or unsupervised learning; the trained deep neural network is then used to perform anomaly detection on the CT image to be detected.
In this artifact detection method, training the deep neural network, whether by supervised or unsupervised learning, requires large numbers of CT images of both types: not only many normal CT images but also many abnormal CT images. However, most clinical CT images are normal, so collecting a large number of abnormal CT images as training data is difficult; and because CT scanning involves radiation exposure, deliberately acquiring abnormal CT images is also impractical. As a result, the amounts of normal and abnormal CT image data actually available often cannot meet the training requirements of the deep neural network, and the trained network fails to achieve sufficient performance (sensitivity and specificity).
Furthermore, this artifact detection method can only solve anomaly detection for one category of images effectively; it cannot cover all categories. In a CT image, different parts of the patient move in different ways and correspondingly produce different types of artifacts, such as respiratory artifacts, cardiac-motion artifacts, and limb-movement artifacts; likewise, depending on what causes the beam hardening, the resulting artifacts also differ, for example metal artifacts, bone artifacts, and contrast-agent artifacts. Because artifact types are numerous and artifact severity varies widely, such a detection method can only detect abnormal CT images of a certain category effectively and is not suitable for detecting abnormal CT images of all categories. For example, a trained deep neural network may be dedicated to detecting head motion artifacts but perform poorly on beam-hardening artifacts or other motion artifacts.
In view of the above, the present specification provides a CT image anomaly detection method in which the first partial sinogram of the sinogram of a CT image to be detected is input into a prediction network, which predicts and generates a second prediction sinogram corresponding to the second partial sinogram; whether the CT image to be detected is abnormal is then determined by comparing the second partial sinogram with the generated second prediction sinogram. The prediction network is obtained by training a deep neural network in advance with normal CT images only, so a large number of abnormal CT images need not be collected as training data; the training requirements can be met and the network can attain sufficient sensitivity and specificity. In addition, this detection approach can detect image abnormalities of every category, i.e., anything that deviates from a normal CT image, rather than being tailored to one particular category of abnormality.
In addition, the CT image anomaly detection method performs anomaly detection directly on the raw data used for reconstruction, the sinogram, rather than on the reconstructed CT image. One pixel in the reconstructed CT image corresponds to one sinusoidal curve in the sinogram, and the CT image and the sinogram are mutually convertible, which makes obtaining the sinogram corresponding to a normal CT image flexible: the raw sinogram data can be taken directly from the CT scanning process, or the reconstructed CT image can be converted back into the corresponding sinogram. During CT data acquisition, if the scanned subject moves or beam hardening occurs, the shape or the gray-level intensity of the corresponding sinusoids in the sinogram changes directly; for example, the gray levels of the sinusoids corresponding to the moving part of the subject differ from those observed when the subject is still, or the shapes of those sinusoids change abruptly compared with the normal case and are no longer smooth curves over a period of time, which in turn changes the gray-level appearance of the corresponding region of the sinogram. Therefore, by performing anomaly detection on the raw sinogram data, the method provided in this specification can exploit prior knowledge about sinograms and abnormal artifacts directly, improving sensitivity and specificity.
In order to make the method for detecting abnormal CT images provided in the present specification clearer, the following detailed description of the implementation procedure of the scheme provided in the present specification is given with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, fig. 1 is a flowchart of a method for detecting an abnormality of a CT image according to an embodiment provided in the present specification. As shown in fig. 1, the process includes:
step 101, acquiring a sinogram of a CT image to be detected; the sinogram of the CT image to be detected comprises the following steps: a first partial sinogram and a second partial sinogram.
The sinogram of the CT image to be detected may be acquired as the raw sinogram used for reconstructing the CT image during CT scanning and data acquisition; alternatively, when only the reconstructed CT image is available, the reversibility between a CT image and its sinogram can be exploited to convert the CT image to be detected into the corresponding sinogram. These are merely two specific ways of acquiring the sinogram; the present specification does not limit how the sinogram of the CT image to be detected is obtained.
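As a hedged illustration of the second acquisition path, the sketch below converts a reconstructed CT slice into a sinogram with the Radon transform. It assumes NumPy and scikit-image are available and is an example only, not part of the patented method itself.

    import numpy as np
    from skimage.transform import radon

    def ct_slice_to_sinogram(ct_slice: np.ndarray, num_angles: int = 180) -> np.ndarray:
        """Forward-project a 2D CT slice into a sinogram (detector bins x projection angles)."""
        theta = np.linspace(0.0, 180.0, num_angles, endpoint=False)
        return radon(ct_slice, theta=theta, circle=True)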
The sinogram of the CT image to be detected comprises a first partial sinogram and a second partial sinogram. That is, the sinogram may consist of exactly these two parts, or it may also contain further parts. In other words, the sinogram of the CT image to be detected may be composed of two parts, for example a whole sinogram made up of an "upper half sinogram" and a "lower half sinogram"; or it may be composed of more parts, for example a whole sinogram made up of an "upper third sinogram", a "middle third sinogram", and a "lower third sinogram".
In one example, the sinogram of the CT image to be detected consists of the first partial sinogram and the second partial sinogram, i.e., the whole sinogram is composed of exactly two parts, such as an "upper half sinogram" and a "lower half sinogram", or an "upper quarter sinogram" and a "lower three-quarters sinogram". The dividing position and the proportions of the two parts within the whole sinogram can be arbitrary and are not limited by this specification; for example, the whole sinogram may consist of a "left two-fifths sinogram" and a "right three-fifths sinogram".
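For concreteness, a minimal sketch of the two-part division described above follows; the axis, cut position, and ratio are arbitrary illustrative choices, not values fixed by the patent.

    def split_sinogram(sinogram, ratio=0.5):
        """Divide a sinogram array into a first and second partial sinogram along the detector axis."""
        cut = int(sinogram.shape[0] * ratio)
        first_part = sinogram[:cut, :]   # e.g. the "upper half sinogram"
        second_part = sinogram[cut:, :]  # e.g. the "lower half sinogram"
        return first_part, second_part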
Step 102, inputting the first partial sinogram into a prediction network, and generating, by the prediction network, a second prediction sinogram corresponding to the second partial sinogram according to the first partial sinogram; the prediction network is obtained by training in advance with sinograms of normal CT images.
The prediction network is a network trained in advance with sinograms of normal CT images. For example, a large number of sinograms of normal CT images are used as input to a deep neural network, which is trained by supervised or unsupervised learning so that it learns the mapping relationships among the parts of the sinogram of a normal CT image; the trained network is the prediction network. In this step, the first partial sinogram is input into the prediction network, and the prediction network predicts and generates a second prediction sinogram corresponding to the second partial sinogram from the first partial sinogram.
As an illustration, if the sinogram of the CT image to be detected consists of two parts, an "upper half sinogram" (the first partial sinogram) and a "lower half sinogram" (the second partial sinogram), the "upper half sinogram" may be input into the prediction network, which generates a "lower half prediction sinogram" corresponding to the "lower half sinogram". The sinogram of the CT image to be detected may also contain other parts in addition to the first and second partial sinograms. Taking a sinogram composed of three parts, an "upper third sinogram", a "middle third sinogram", and a "lower third sinogram", as an example: the "upper third sinogram" can be input into the prediction network, which then predicts a "middle third prediction sinogram" corresponding to the "middle third sinogram", or a "lower third prediction sinogram" corresponding to the "lower third sinogram". These descriptions are merely exemplary; when the sinogram contains more parts, the process by which the prediction network generates the prediction sinogram of the corresponding part is similar and is not detailed here.
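A hedged inference sketch of this step follows; PyTorch is assumed, and "prediction_net" is a placeholder for the trained prediction network described in the training section below.

    import torch

    def predict_second_part(prediction_net, first_part):
        """Feed the known first partial sinogram (2D NumPy array) through the trained network."""
        x = torch.from_numpy(first_part).float().unsqueeze(0).unsqueeze(0)  # shape (1, 1, H, W)
        with torch.no_grad():
            second_part_pred = prediction_net(x)
        return second_part_pred.squeeze(0).squeeze(0).numpy()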
Step 103, comparing the difference between the second partial sinogram and the second predicted sinogram, and determining whether the CT image to be detected is abnormal according to the difference value of the difference.
Specifically, when comparing the second partial sinogram with the second prediction sinogram, the comparison may use the SSIM (structural similarity index) function. If the CT image to be detected is normal, the two differ little, their consistency is high, and the SSIM index approaches 1; if the CT image to be detected is abnormal, the difference is large, the consistency is poor, and the SSIM index approaches 0. In this example the SSIM index can serve as the difference value, and whether the CT image to be detected is abnormal is determined from the SSIM index, for instance by setting an SSIM threshold: when the SSIM index obtained from the comparison is below the preset threshold, the CT image to be detected is determined to be abnormal; otherwise it is determined to be normal.
It should be noted that, when comparing the second partial sinogram with the second prediction sinogram, other methods for measuring image difference or similarity may also be used, such as Peak Signal-to-Noise Ratio (PSNR), perceptual hashing, or local image feature matching, and even deep-neural-network-based measures of image difference or similarity. These are merely examples; the present specification does not limit the specific comparison method.
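The comparison step can be sketched as follows. The SSIM implementation from scikit-image is assumed, and the threshold value is a hypothetical example that would in practice be tuned on validation data.

    from skimage.metrics import structural_similarity as ssim

    def is_abnormal(second_part, second_part_pred, threshold=0.85):
        """Return True when the SSIM between the real and predicted partial sinograms is low."""
        data_range = second_part.max() - second_part.min()
        index = ssim(second_part, second_part_pred, data_range=data_range)
        # SSIM close to 1: consistent with a normal CT image; low SSIM: likely abnormal.
        return index < threshold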
According to the above CT image anomaly detection method, the first partial sinogram of the CT image to be detected is input into the prediction network, a second prediction sinogram corresponding to the second partial sinogram is generated by prediction, and whether the CT image to be detected is abnormal is determined by comparing the second partial sinogram with the generated second prediction sinogram. Because the prediction network is obtained by training a deep neural network in advance with normal CT images only, a large number of abnormal CT images need not be collected as training data; the training requirements can be met and the network can attain sufficient sensitivity and specificity. In addition, this detection approach can detect image abnormalities of every category, i.e., anything that deviates from a normal CT image, rather than being tailored to one particular category of abnormality.
In addition, the CT image anomaly detection method performs anomaly detection on the raw sinogram data used for reconstructing the CT image rather than on the CT image itself, so prior knowledge about sinograms and abnormal artifacts can be used directly. Obtaining the sinogram corresponding to a CT image is also flexible: the raw sinogram data can be taken directly from the CT scanning process, or the reconstructed CT image can be converted into the corresponding sinogram.
Referring to fig. 2, fig. 2 is a flowchart of another method for detecting an abnormality of a CT image according to an embodiment of the present disclosure. As shown in fig. 2, the process includes:
step 101, acquiring a sinogram of a CT image to be detected; the sinogram of the CT image to be detected comprises the following steps: a first partial sinogram and a second partial sinogram.
102, inputting the first partial sinogram into a prediction network, and generating a second prediction sinogram corresponding to the second partial sinogram by the prediction network according to the first partial sinogram; the prediction network is obtained by training a sinogram of a normal CT image in advance.
The steps 101 and 102 are identical to the step descriptions in the method for detecting abnormal CT image shown in fig. 1, and are not repeated here.
Step 201, inputting the second partial sinogram into the prediction network, and generating a first prediction sinogram corresponding to the first partial sinogram by the prediction network according to the second partial sinogram.
In step 102, the prediction network has already predicted and generated the second prediction sinogram corresponding to the second partial sinogram. In this step, the second partial sinogram is input into the prediction network, which predicts and generates a first prediction sinogram corresponding to the first partial sinogram. Thus, through steps 102 and 201, prediction sinograms of both parts are generated, namely a first prediction sinogram corresponding to the first partial sinogram and a second prediction sinogram corresponding to the second partial sinogram.
For example, assume that the sinogram of the CT image to be detected consists of three parts: an upper third sinogram, a middle third sinogram, and a lower third sinogram. After steps 102 and 201, a prediction image corresponding to the upper third sinogram and a prediction image corresponding to the middle third sinogram may be generated; or a prediction image corresponding to the upper third sinogram and one corresponding to the lower third sinogram; or a prediction image corresponding to the middle third sinogram and one corresponding to the lower third sinogram. When the sinogram of the CT image to be detected consists of two parts, or of more parts, the generation of the first prediction sinogram and the second prediction sinogram is similar and is not detailed here.
In one example, the sinogram of the CT image to be detected consists of the first partial sinogram and the second partial sinogram, i.e., it includes only these two parts. After steps 102 and 201, the prediction network has generated both the first prediction sinogram and the second prediction sinogram, and therefore a complete prediction sinogram of the CT image to be detected has been generated. Whether the CT image to be detected is abnormal can then be determined from the difference obtained by comparing the sinogram of the CT image to be detected with the complete prediction sinogram.
In this example, a complete prediction sinogram of the CT image to be detected can be generated, so the sinogram of the whole CT image to be detected serves as the object of comparison.
Step 202, comparing the difference between the sinogram of the CT image to be detected and the prediction image, and determining whether the CT image to be detected is abnormal according to the magnitude of the difference; the prediction image comprises the first prediction sinogram and the second prediction sinogram.
In this step, the first prediction sinogram and the second prediction sinogram may be taken as a whole and compared with the corresponding parts of the sinogram of the CT image to be detected, and whether the CT image is abnormal is determined from the resulting difference value. Alternatively, the first prediction sinogram may be compared with the corresponding first partial sinogram to obtain a first difference value, the second prediction sinogram compared with the corresponding second partial sinogram to obtain a second difference value, and whether the CT image to be detected is abnormal determined from the two difference values together.
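The variant of this embodiment can be sketched as below. Here "predict_fn" is a hypothetical wrapper around the trained prediction network that performs steps 102 and 201, and SSIM is again used only as one possible difference measure with an illustrative threshold.

    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def is_abnormal_bidirectional(predict_fn, first_part, second_part, threshold=0.85):
        """predict_fn(known_part, target) returns the predicted counterpart partial sinogram."""
        second_pred = predict_fn(first_part, target="second")   # step 102
        first_pred = predict_fn(second_part, target="first")    # step 201
        full_sinogram = np.concatenate([first_part, second_part], axis=0)
        full_prediction = np.concatenate([first_pred, second_pred], axis=0)
        data_range = full_sinogram.max() - full_sinogram.min()
        return ssim(full_sinogram, full_prediction, data_range=data_range) < threshold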
According to this CT image anomaly detection method, the first partial sinogram and the second partial sinogram are each fed through the prediction network, and a first prediction sinogram and a second prediction sinogram are generated correspondingly. The prediction image containing both prediction sinograms is then compared with the sinogram of the CT image to be detected, and whether the image is abnormal is determined from the difference value. Because this variant generates prediction sinograms for both parts and compares each against its counterpart, the comparison is more complete than when only one partial prediction sinogram is generated and compared, so whether the CT image to be detected is abnormal can be determined more accurately.
Referring to fig. 3, fig. 3 is a flowchart of a predictive network training method according to an embodiment of the present disclosure. As shown in fig. 3, the process includes:
step 301, acquiring a sinogram of a normal CT image; the sinogram of the normal CT image includes: a third partial sinogram and a fourth partial sinogram.
Before training the deep neural network to obtain the prediction network, a large amount of training data is needed; in this embodiment, a large number of sinograms of normal CT images must be acquired. Compared with other ways of training a deep neural network, only sinograms of normal CT images are needed, and there is no need to collect a large number of abnormal CT images as training data. Since most clinical CT images are normal, enough training data can be gathered to meet the training requirements, avoiding the difficulty of collecting many abnormal CT images. The present specification does not limit how the sinograms of normal CT images are obtained: the raw sinogram data used for reconstruction can be taken directly from the CT scanning and data acquisition process, or, after a normal CT image has been acquired, the CT image can be converted into the corresponding sinogram using the reversibility between the two.
Wherein, the sinogram of the normal CT image comprises: a third partial sinogram and a fourth partial sinogram. That is, the sinogram for the normal CT image as the deep neural network training data is composed of two parts (a third part sinogram and a fourth part sinogram), or is composed of more parts such as including the third part sinogram, the fourth part sinogram and the fifth part sinogram.
In one example, the sinogram of the normal CT image is comprised of the third partial sinogram and a fourth partial sinogram. That is, the sinogram for the normal CT image as training data is composed of a third partial sinogram and a fourth partial sinogram, in total.
Step 302, training the prediction network according to the third partial sinogram and the fourth partial sinogram, so that the prediction network learns the mapping relationship between the third partial sinogram and the fourth partial sinogram.
In this step, a pre-constructed deep neural network is trained with the third partial sinogram and the fourth partial sinogram, so that the resulting prediction network learns the mapping relationship between them. Taking a generative adversarial network as an example: the third partial sinogram and the fourth partial sinogram are input into a GAN (Generative Adversarial Network) for training, so that the trained prediction network learns the mapping relationship between the third and fourth partial sinograms. In the use stage after training, the third partial sinogram can be taken as input, and the prediction network predicts a "fourth partial prediction sinogram" corresponding to the fourth partial sinogram from the learned mapping between the third and fourth partial sinograms; alternatively, the fourth partial sinogram is taken as input, and the prediction network predicts a "third partial prediction sinogram" corresponding to the third partial sinogram.
The following description takes a GAN as the deep neural network to be trained. In the GAN, the generation network can consist of a CNN (Convolutional Neural Network)-based context encoder and decoder; the CNN adopts an AlexNet-style structure with the ReLU (Rectified Linear Unit) as activation function, and the decoder consists of five up-convolution (transposed convolution) layers with ReLU activation layers. The discrimination network in the GAN is similar to the encoder part of the generation network and likewise uses an AlexNet-style architecture.
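A sketch of such a generator and discriminator follows (PyTorch assumed). It is an assumption-laden illustration: channel widths, kernel sizes, and strides are placeholders, and "AlexNet-style" is approximated by a plain five-layer convolutional encoder, so this is not the exact architecture of the patent.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(                      # context encoder (AlexNet-style, five conv layers)
                nn.Conv2d(1, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(inplace=True),
                nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(256, 512, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(512, 512, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.decoder = nn.Sequential(                      # five up-convolution layers with ReLU
                nn.ConvTranspose2d(512, 512, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),   # predicted missing partial sinogram
            )

        def forward(self, known_part):
            return self.decoder(self.encoder(known_part))

    class Discriminator(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(                     # mirrors the encoder part of the generator
                nn.Conv2d(1, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(inplace=True),
                nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(256, 512, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(512, 1)                # scalar real-vs-fake (0/1) judgment

        def forward(self, part):
            h = self.features(part).flatten(1)
            return torch.sigmoid(self.classifier(h))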
The training data for the GAN are sinograms of a large number of normal CT images; here, the sinogram of each normal CT image is taken to consist of an "upper half sinogram" and a "lower half sinogram". The "upper half sinogram" of a normal CT image is cropped out and used as the input of the generation network, which analyzes the incomplete sinogram to obtain a structural representation of the missing "lower half" sinogram data. The input to the discrimination network is either the real "lower half sinogram" or the prediction sinogram generated by the generation network for the lower half; a five-layer CNN finally outputs a 0/1 value representing the discrimination network's judgment of whether the input image is real or fake. The generation network and the discrimination network are trained alternately and iteratively, so that the generation network acquires an accurate mapping from one part of the sinogram of a normal CT image to the other part.
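The alternating training can be sketched as a single step like the one below. PyTorch is assumed, and the L1 reconstruction term, the adversarial loss weight, and the optimizer choices are illustrative assumptions rather than details given in the patent.

    import torch
    import torch.nn.functional as F

    def train_step(gen, disc, opt_g, opt_d, upper_half, lower_half, adv_weight=0.001):
        # Discriminator update: real lower half vs. generated lower half.
        fake_lower = gen(upper_half).detach()
        d_real = disc(lower_half)
        d_fake = disc(fake_lower)
        d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
                 F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator update: reconstruct the missing half and fool the discriminator.
        fake_lower = gen(upper_half)
        pred_fake = disc(fake_lower)
        g_rec = F.l1_loss(fake_lower, lower_half)
        g_adv = F.binary_cross_entropy(pred_fake, torch.ones_like(pred_fake))
        g_loss = g_rec + adv_weight * g_adv
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()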
Using a GAN as the deep neural network to be trained is merely an example and not a limitation; other deep network architectures, trained with supervised or unsupervised learning, may also be used, for example a U-NET network.
As shown in fig. 4, the present specification provides a CT image anomaly detection apparatus that can perform the CT image anomaly detection method of any one of the embodiments of the present specification. The apparatus may include an acquisition module 401, a prediction module 402, and a comparison module 403. Wherein:
an acquisition module 401, configured to acquire a sinogram of a CT image to be detected; the sinogram of the CT image to be detected comprises a first partial sinogram and a second partial sinogram;
a prediction module 402, configured to input the first partial sinogram into a prediction network, and generate, by the prediction network, a second prediction sinogram corresponding to the second partial sinogram according to the first partial sinogram; the prediction network is obtained by training in advance with sinograms of normal CT images;
and a comparison module 403, configured to compare the difference between the second partial sinogram and the second prediction sinogram, and determine whether the CT image to be detected is abnormal according to the magnitude of the difference.
Optionally, the prediction module 402 is further configured to input the second partial sinogram into the prediction network, and generate, by the prediction network, a first prediction sinogram corresponding to the first partial sinogram according to the second partial sinogram; the comparison module 403 is further configured to compare the difference between the sinogram of the CT image to be detected and the prediction image, and determine whether the CT image to be detected is abnormal according to the magnitude of the difference; the prediction image comprises the first prediction sinogram and the second prediction sinogram.
Optionally, the sinogram of the CT image to be detected is composed of the first partial sinogram and the second partial sinogram.
Optionally, as shown in fig. 5, the apparatus further includes:
a normal image acquisition module 501, configured to acquire a sinogram of a normal CT image; the sinogram of the normal CT image includes: a third partial sinogram and a fourth partial sinogram;
and the training module 502 is configured to train the prediction network according to the third partial sinogram and the fourth partial sinogram, so that the prediction network learns the mapping relationship between the third partial sinogram and the fourth partial sinogram.
Optionally, the sinogram of the normal CT image is composed of the third partial sinogram and a fourth partial sinogram.
For the implementation of the functions and roles of each module in the above device, refer to the implementation of the corresponding steps in the above method; details are not repeated here.
Since the device embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The device embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of at least one embodiment of the present specification. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The present specification also provides a computer device including a memory, a processor and a computer program stored on the memory and executable on the processor, the processor being capable of implementing the CT image anomaly detection method of any embodiment of the present specification when executing the program.
The present specification also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, is capable of implementing the CT image anomaly detection method of any of the embodiments of the present specification.
Wherein the non-transitory computer readable storage medium may be a ROM, random-access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc., which is not limited in this application.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It is to be understood that the present description is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The foregoing description of the preferred embodiments is provided for the purpose of illustration only, and is not intended to limit the scope of the disclosure, since any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the disclosure are intended to be included within the scope of the disclosure.

Claims (8)

1. A method for detecting abnormalities in CT images, said method comprising:
acquiring a sinogram of a CT image to be detected; the sinogram of the CT image to be detected comprises a first partial sinogram and a second partial sinogram;
inputting the first partial sinogram into a prediction network, and generating, by the prediction network, a second prediction sinogram corresponding to the second partial sinogram according to the first partial sinogram; the prediction network is obtained by training in advance with sinograms of normal CT images;
comparing the difference between the second partial sinogram and the second prediction sinogram, and determining whether the CT image to be detected is abnormal according to the magnitude of the difference;
before the acquiring of the sinogram of the CT image to be detected, the method further comprises the following steps:
acquiring a sinogram of a normal CT image; the sinogram of the normal CT image consists of a third part sinogram and a fourth part sinogram;
and training the prediction network according to the third part of sinogram and the fourth part of sinogram, so that the prediction network learns the mapping relation between the third part of sinogram and the fourth part of sinogram.
2. The method of claim 1, wherein after the inputting of the first partial sinogram into the prediction network and the generating, by the prediction network, of the second prediction sinogram corresponding to the second partial sinogram according to the first partial sinogram, the method further comprises:
inputting the second partial sinogram into the prediction network, and generating a first prediction sinogram corresponding to the first partial sinogram by the prediction network according to the second partial sinogram;
comparing the difference between the sinogram of the CT image to be detected and a prediction image, and determining whether the CT image to be detected is abnormal according to the magnitude of the difference; the prediction image comprises the first prediction sinogram and the second prediction sinogram.
3. The method according to claim 1 or 2, wherein the sinogram of the CT image to be detected consists of the first partial sinogram and the second partial sinogram.
4. A CT image anomaly detection device, the device comprising:
an acquisition module, used for acquiring a sinogram of a CT image to be detected; the sinogram of the CT image to be detected comprises a first partial sinogram and a second partial sinogram;
a prediction module, used for inputting the first partial sinogram into a prediction network, and generating, by the prediction network, a second prediction sinogram corresponding to the second partial sinogram according to the first partial sinogram; the prediction network is obtained by training in advance with sinograms of normal CT images;
a comparison module, used for comparing the difference between the second partial sinogram and the second prediction sinogram and determining whether the CT image to be detected is abnormal according to the magnitude of the difference;
the normal image acquisition module is used for acquiring a sinogram of a normal CT image; the sinogram of the normal CT image consists of a third part sinogram and a fourth part sinogram;
and the training module is used for training the prediction network according to the third part of sinogram and the fourth part of sinogram, so that the prediction network learns the mapping relation between the third part of sinogram and the fourth part of sinogram.
5. The apparatus of claim 4, wherein:
the prediction module is further used for inputting the second partial sinogram into the prediction network, and generating a first prediction sinogram corresponding to the first partial sinogram according to the second partial sinogram by the prediction network;
the comparison module is further used for comparing the difference between the sinogram of the CT image to be detected and a prediction image and determining whether the CT image to be detected is abnormal according to the magnitude of the difference; the prediction image comprises the first prediction sinogram and the second prediction sinogram.
6. The apparatus according to claim 4 or 5, wherein the sinogram of the CT image to be detected consists of the first partial sinogram and the second partial sinogram.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1-3 when executing the program.
8. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method of any of claims 1-3.
CN202010183389.3A 2020-03-16 2020-03-16 CT image anomaly detection method and device Active CN111445447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010183389.3A CN111445447B (en) 2020-03-16 2020-03-16 CT image anomaly detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010183389.3A CN111445447B (en) 2020-03-16 2020-03-16 CT image anomaly detection method and device

Publications (2)

Publication Number Publication Date
CN111445447A CN111445447A (en) 2020-07-24
CN111445447B true CN111445447B (en) 2024-03-01

Family

ID=71627573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010183389.3A Active CN111445447B (en) 2020-03-16 2020-03-16 CT image anomaly detection method and device

Country Status (1)

Country Link
CN (1) CN111445447B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096117A (en) * 2021-04-29 2021-07-09 中南大学湘雅医院 Ectopic ossification CT image segmentation method, three-dimensional reconstruction method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402787A (en) * 2010-09-19 2012-04-04 上海西门子医疗器械有限公司 System and method for detecting strip artifact in image
CN109215014A (en) * 2018-07-02 2019-01-15 中国科学院深圳先进技术研究院 Training method, device, equipment and the storage medium of CT image prediction model
CN110288671A (en) * 2019-06-25 2019-09-27 南京邮电大学 The low dosage CBCT image rebuilding method of network is generated based on three-dimensional antagonism
CN110555474A (en) * 2019-08-28 2019-12-10 上海电力大学 photovoltaic panel fault detection method based on semi-supervised learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3451284A1 (en) * 2017-09-05 2019-03-06 Siemens Healthcare GmbH Method for automatically recognising artefacts in computed tomography image data
US10489907B2 (en) * 2017-11-13 2019-11-26 Siemens Healthcare Gmbh Artifact identification and/or correction for medical imaging

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402787A (en) * 2010-09-19 2012-04-04 上海西门子医疗器械有限公司 System and method for detecting strip artifact in image
CN109215014A (en) * 2018-07-02 2019-01-15 中国科学院深圳先进技术研究院 Training method, device, equipment and the storage medium of CT image prediction model
CN110288671A (en) * 2019-06-25 2019-09-27 南京邮电大学 The low dosage CBCT image rebuilding method of network is generated based on three-dimensional antagonism
CN110555474A (en) * 2019-08-28 2019-12-10 上海电力大学 photovoltaic panel fault detection method based on semi-supervised learning

Also Published As

Publication number Publication date
CN111445447A (en) 2020-07-24

Similar Documents

Publication Publication Date Title
US10691980B1 (en) Multi-task learning for chest X-ray abnormality classification
CN110796613B (en) Automatic identification method and device for image artifacts
JP2020064609A (en) Patient-specific deep learning image denoising methods and systems
US11205264B2 (en) Systems and methods for multi-label segmentation of cardiac computed tomography and angiography images using deep neural networks
CN111311704A (en) Image reconstruction method and device, computer equipment and storage medium
US11302094B2 (en) System and method for segmenting normal organ and/or tumor structure based on artificial intelligence for radiation treatment planning
CN112348908A (en) Shape-based generative countermeasure network for segmentation in medical imaging
US11475535B2 (en) PET-CT registration for medical imaging
CN113272869A (en) Three-dimensional shape reconstruction from topograms in medical imaging
KR102178803B1 (en) System and method for assisting chest medical images reading
CN115136192A (en) Out-of-distribution detection of input instances to a model
CN111445447B (en) CT image anomaly detection method and device
Mohebbian et al. Classifying MRI motion severity using a stacked ensemble approach
JP5329204B2 (en) X-ray CT system
US11823354B2 (en) System and method for utilizing a deep learning network to correct for a bad pixel in a computed tomography detector
CN115249279A (en) Medical image processing method, medical image processing device, computer equipment and storage medium
KR20100068184A (en) Method for detecting ground glass opacity using computed tomography of chest
CN115700740A (en) Medical image processing method, apparatus, computer device and storage medium
KR102580749B1 (en) Method and apparatus for image quality assessment of medical images
Zhao et al. Automated breast lesion segmentation from ultrasound images based on ppu-net
Ens et al. Automatic motion correction in cone-beam computed tomography
WO2018215357A1 (en) Device and method for pet image reconstruction
CN115631232B (en) Method for determining radial position of double-probe detector
US20230190182A1 (en) Tooth decay diagnostics using artificial intelligence
US20230106845A1 (en) Trained model generation program, image generation program, trained model generation device, image generation device, trained model generation method, and image generation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant