CN111445447A - CT image anomaly detection method and device - Google Patents

CT image anomaly detection method and device

Info

Publication number
CN111445447A
Authority
CN
China
Prior art keywords
sinogram
image
prediction
detected
normal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010183389.3A
Other languages
Chinese (zh)
Other versions
CN111445447B (en)
Inventor
王飞翔 (Wang Feixiang)
陈名亮 (Chen Mingliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Medical Systems Co Ltd
Original Assignee
Neusoft Medical Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Medical Systems Co Ltd filed Critical Neusoft Medical Systems Co Ltd
Priority to CN202010183389.3A priority Critical patent/CN111445447B/en
Publication of CN111445447A publication Critical patent/CN111445447A/en
Application granted granted Critical
Publication of CN111445447B publication Critical patent/CN111445447B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

This specification provides a method and a device for detecting abnormalities in CT images. The method comprises: acquiring a sinogram of a CT image to be detected, the sinogram comprising a first partial sinogram and a second partial sinogram; inputting the first partial sinogram into a prediction network, which generates from it a second predicted sinogram corresponding to the second partial sinogram, the prediction network being trained in advance using sinograms of normal CT images; and comparing the second partial sinogram with the second predicted sinogram and determining, according to the magnitude of the difference, whether the CT image to be detected is abnormal. In this way, CT image abnormalities can be detected using only sinograms of normal CT images.

Description

CT image anomaly detection method and device
Technical Field
The present disclosure relates to the field of medical devices, and in particular, to a method and an apparatus for detecting an abnormality in a CT image.
Background
In clinical CT (Computed Tomography) scanning, patient motion (global or local) and beam hardening tend to produce two kinds of artifacts in the reconstructed CT images: motion artifacts and beam-hardening artifacts. Detecting and preventing these artifacts in time keeps them from interfering with diagnosis and improves the efficiency of clinical diagnosis and treatment.
At present, artifact detection is performed by training a deep neural network with a large number of normal and abnormal (artifact-containing) CT images and then using the trained network to decide whether a CT image is abnormal, i.e., whether an artifact is present. This approach requires large numbers of both normal and abnormal CT images. However, most clinical CT images are normal; it is difficult to collect enough abnormal CT images to meet the training requirements of a deep neural network and thereby reach sufficient sensitivity and specificity. Moreover, such a detector can only handle abnormality detection for a particular category of artifact and cannot effectively cover all categories.
Disclosure of Invention
At least one embodiment of this specification provides a CT image anomaly detection method that detects CT image abnormalities using only sinograms of normal CT images.
In a first aspect, a method for detecting an abnormality in a CT image is provided, the method comprising:
acquiring a sinogram of a CT image to be detected, the sinogram comprising a first partial sinogram and a second partial sinogram;
inputting the first partial sinogram into a prediction network, which generates from the first partial sinogram a second predicted sinogram corresponding to the second partial sinogram, the prediction network being trained in advance using sinograms of normal CT images; and
comparing the second partial sinogram with the second predicted sinogram, and determining whether the CT image to be detected is abnormal according to the magnitude of the difference.
In a second aspect, a CT image anomaly detection apparatus is provided, the apparatus comprising:
an acquisition module configured to acquire a sinogram of a CT image to be detected, the sinogram comprising a first partial sinogram and a second partial sinogram;
a prediction module configured to input the first partial sinogram into a prediction network, which generates from the first partial sinogram a second predicted sinogram corresponding to the second partial sinogram, the prediction network being trained in advance using sinograms of normal CT images; and
a comparison module configured to compare the second partial sinogram with the second predicted sinogram and determine whether the CT image to be detected is abnormal according to the magnitude of the difference.
In a third aspect, a computer device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the CT image anomaly detection method according to any embodiment of this specification.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the CT image anomaly detection method according to any embodiment of this specification.
According to the above technical solution, in at least one embodiment of this specification, the first partial sinogram of a CT image to be detected is input into a prediction network, which generates a second predicted sinogram corresponding to the second partial sinogram; whether the CT image is abnormal is then determined by comparing the second partial sinogram with the predicted one. Because the prediction network is trained in advance using only normal CT images, no large collection of abnormal CT images is needed as training data, so the training requirements can be met and the network can reach sufficient sensitivity and specificity. Moreover, this detection scheme is effective for every category of image abnormality rather than being tailored to a single category.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
FIG. 1 is a flow chart illustrating a method for CT image anomaly detection according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating another CT image anomaly detection method according to an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a predictive network training method in accordance with an exemplary embodiment;
FIG. 4 is a schematic diagram of a CT image anomaly detection apparatus according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating another CT image anomaly detection apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The specific manner described in the following exemplary embodiments does not represent all aspects consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information and, similarly, the second information may also be referred to as first information, without departing from the scope of this specification. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
At present, artifact detection is performed automatically with the aid of artificial intelligence. Specifically: first, a deep neural network is trained, in a supervised or unsupervised manner, with a large number of normal and abnormal (artifact-containing) CT images; then, the trained network is used to detect abnormalities in the CT image under examination.
When training the deep neural network for this artifact detection approach, whether by supervised or unsupervised learning, a large number of CT images of both types is required as training data: not only many normal CT images but also many abnormal ones. However, most clinical CT images are normal, so it is difficult to collect a large number of abnormal CT images as training data; moreover, because of the radiation dose involved, abnormal CT images are difficult to acquire deliberately. The available numbers of normal and abnormal CT images are therefore often insufficient to train a deep neural network to sufficient performance (sensitivity and specificity).
In addition, such an artifact detector can only handle abnormality detection for a particular category and cannot cover all categories. In a CT image, different parts of the patient move in different ways and accordingly produce different types of artifacts, such as respiratory, cardiac, and motion artifacts; likewise, the artifacts produced by beam hardening vary with their cause, e.g., metal artifacts, bone artifacts, and contrast-agent artifacts. Because artifact types are numerous and artifact severity is complex, a given detector can effectively detect only one category of abnormal CT image and cannot be applied to all categories. For example, a trained deep neural network may be dedicated to detecting head motion artifacts but perform poorly on beam-hardening artifacts or on motion artifacts at other anatomical locations.
On this basis, this specification provides a CT image anomaly detection method in which the first partial sinogram of a CT image to be detected is input into a prediction network, which generates a second predicted sinogram corresponding to the second partial sinogram; whether the CT image is abnormal is determined by comparing the second partial sinogram with the predicted one. Because the prediction network is trained in advance using only normal CT images, no large collection of abnormal CT images is needed as training data; the training requirements can be met and the network can reach sufficient sensitivity and specificity. This detection scheme is effective for every category of image abnormality other than the normal CT image, rather than being tailored to a single category.
In addition, the method performs abnormality detection on the raw data used to reconstruct the CT image, the sinogram, rather than on the CT image itself. Each pixel of the reconstructed CT image corresponds to a sinusoidal trace in the sinogram, and the transformation between CT image and sinogram is invertible, so the sinogram corresponding to a normal CT image can be obtained flexibly: the raw sinogram data can be taken directly from the CT scan, or a reconstructed CT image can be converted into the corresponding sinogram. If the scanned subject moves or beam hardening occurs during data acquisition, the corresponding sinusoidal traces in the sinogram change directly in shape or brightness; for example, the traces corresponding to a moving part differ in brightness from those of the same part at rest, or several traces corresponding to the moving part undergo abrupt shape changes and are no longer smooth curves over a period of time, altering the gray-level pattern of the corresponding region of the sinogram. Detecting abnormalities on the sinogram therefore exploits this prior knowledge of how artifacts manifest in sinograms and achieves higher sensitivity and specificity.
To make the CT image anomaly detection method provided by this specification clearer, its implementation is described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to FIG. 1, FIG. 1 is a flowchart of a CT image anomaly detection method according to an embodiment of this specification. As shown in FIG. 1, the process includes:
step 101, acquiring a sinogram of a CT image to be detected; the sinogram of the CT image to be detected comprises: a first partial sinogram and a second partial sinogram.
The sinogram of the CT image to be detected may be acquired in several ways: the raw sinogram used for reconstruction may be obtained during CT data acquisition; or, if only the CT image itself is available, it may be converted into the corresponding sinogram by exploiting the invertibility between CT image and sinogram. These are only two specific ways; this specification does not limit how the sinogram is acquired, as long as a sinogram of the CT image to be detected is obtained.
The sinogram of the CT image to be detected comprises a first partial sinogram and a second partial sinogram. That is, the sinogram may consist of exactly these two parts, or it may include further parts. In other words, the whole sinogram may be composed of two parts, e.g., an "upper half sinogram" and a "lower half sinogram"; or of more parts, e.g., three parts: an "upper-third sinogram", a "middle-third sinogram", and a "lower-third sinogram".
In one example, the sinogram of the CT image to be detected consists of the first partial sinogram and the second partial sinogram, i.e., the whole sinogram is composed of exactly two parts, for example an "upper half" and a "lower half", or an "upper quarter" and a "lower three quarters". In this example, the dividing position and the proportions of the two parts may be arbitrary; this specification imposes no limit. For instance, the whole sinogram may consist of the "left two fifths" and the "right three fifths".
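The partitioning described above can be sketched in a few lines. Here rows are assumed to index projection views and columns detector bins, and `split_sinogram` is a hypothetical helper for illustration, not part of the patent:

```python
import numpy as np

def split_sinogram(sinogram: np.ndarray, split_row: int):
    """Split a sinogram (rows = projection views, columns = detector bins)
    into a first and a second partial sinogram at an arbitrary row."""
    if not 0 < split_row < sinogram.shape[0]:
        raise ValueError("split position must lie strictly inside the sinogram")
    return sinogram[:split_row], sinogram[split_row:]

# Example: a 720-view sinogram split into an "upper quarter" and a
# "lower three quarters", as in the arbitrary-proportion case above.
sino = np.random.rand(720, 512)
first, second = split_sinogram(sino, 180)
```

Concatenating the two parts along the view axis recovers the original sinogram, which is what allows the complete predicted sinogram to be reassembled later.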
Step 102, inputting the first partial sinogram into a prediction network, which generates from the first partial sinogram a second predicted sinogram corresponding to the second partial sinogram; the prediction network is trained in advance using sinograms of normal CT images.
The prediction network is trained in advance using sinograms of normal CT images. For example, a large number of such sinograms are fed to a deep neural network, which learns, through supervised or unsupervised training, the mapping between the parts of a normal sinogram, yielding the prediction network. In this step, the first partial sinogram is input into the prediction network, which predicts from it and generates the second predicted sinogram corresponding to the second partial sinogram.
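The patent trains a deep neural network; the toy sketch below only illustrates the idea of learning, from normal data alone, a mapping from the first part of a sinogram to the second. A linear least-squares predictor stands in for the network, and all names and synthetic data (`make_normal_sinogram`, `predict_second_part`) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_normal_sinogram(n_views=64, n_bins=32):
    """Toy 'normal' sinogram: one smooth sinusoidal trace with random
    amplitude plus a little noise (a stand-in for real scan data)."""
    t = np.linspace(0, 2 * np.pi, n_views)
    amp = rng.uniform(0.3, 0.45)
    center = (amp * np.sin(t) + 0.5) * (n_bins - 1)
    bins = np.arange(n_bins)
    sino = np.exp(-0.5 * ((bins[None, :] - center[:, None]) / 2.0) ** 2)
    return sino + rng.normal(0.0, 0.01, sino.shape)

# Training set: sinograms of normal CT images only; no abnormal data needed.
train = [make_normal_sinogram() for _ in range(50)]
split = 32  # first part = first 32 views, second part = remaining 32 views

X = np.stack([s[:split].ravel() for s in train])  # first-part inputs
Y = np.stack([s[split:].ravel() for s in train])  # second-part targets
W, *_ = np.linalg.lstsq(X, Y, rcond=None)         # learned linear mapping

def predict_second_part(first_part: np.ndarray) -> np.ndarray:
    """Predict the second partial sinogram from the first partial sinogram."""
    return (first_part.ravel() @ W).reshape(split, -1)
```

On a normal sinogram the prediction closely matches the measured second part, while a corrupted second part deviates strongly from the prediction, which is exactly the signal the comparison step exploits.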
For example, if the sinogram of the CT image to be detected consists of an upper half and a lower half, these are the first and second partial sinograms: the upper half may be input into the prediction network, which generates a "lower-half predicted sinogram" corresponding to the lower half. Now suppose the sinogram contains parts beyond the first and second partial sinograms; for example, it consists of an upper third, a middle third, and a lower third. The upper third may be input into the prediction network to generate a "middle-third predicted sinogram" corresponding to the middle third, or alternatively a "lower-third predicted sinogram" corresponding to the lower third. These are only examples; when the sinogram contains more parts, the prediction network generates predicted sinograms for the corresponding parts in the same way, which is not repeated here.
Step 103, comparing the second partial sinogram with the second predicted sinogram, and determining whether the CT image to be detected is abnormal according to the magnitude of the difference.
Specifically, the difference between the second partial sinogram and the second predicted sinogram may be measured with the SSIM (structural similarity) index. If the CT image to be detected is normal, the two are highly consistent and the SSIM index approaches 1; if it is abnormal, the two differ considerably and the SSIM index approaches 0. The SSIM index can therefore serve as the difference value from which abnormality is determined. An SSIM threshold may also be preset: when the measured SSIM index falls below the threshold, the CT image to be detected is determined to be abnormal; otherwise it is normal.
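As a rough sketch of this comparison step, the function below collapses the standard sliding-window SSIM into a single global statistic computed over the whole image; the 0.8 threshold is purely illustrative and not a value from the patent:

```python
import numpy as np

def global_ssim(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    """Simplified single-window SSIM over the whole image (no sliding window)."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM formula
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(
        ((2 * mu_a * mu_b + c1) * (2 * cov + c2))
        / ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))
    )

def is_abnormal(second_part: np.ndarray, second_pred: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Flag the CT image as abnormal when SSIM falls below the threshold."""
    return global_ssim(second_part, second_pred) < threshold
```

Identical images score exactly 1, and strongly dissimilar images score near 0 (or below), matching the behavior the paragraph above describes.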
It should be noted that other comparison methods may also be used to measure the difference or similarity between the second partial sinogram and the second predicted sinogram, such as the peak signal-to-noise ratio (PSNR), perceptual hashing, or local image features; even an algorithm based on a deep neural network may be used to compare the images.
In the CT image anomaly detection method of this embodiment, the first partial sinogram of the CT image to be detected is input into a prediction network, which generates a second predicted sinogram corresponding to the second partial sinogram; whether the CT image is abnormal is determined by comparing the second partial sinogram with the predicted one. Because the prediction network is trained in advance using only normal CT images, no large collection of abnormal CT images is needed as training data, so the training requirements can be met and the network can reach sufficient sensitivity and specificity. This detection scheme is effective for every category of image abnormality rather than being tailored to a single category.
In addition, this method performs abnormality detection on the sinogram, the raw data used to reconstruct the CT image, rather than on the CT image itself, so it can directly exploit prior knowledge about how artifacts manifest in sinograms. The sinogram is also flexible to acquire: the raw sinogram data can be taken directly from the CT scan, or the reconstructed CT image can be converted into the corresponding sinogram.
Referring to FIG. 2, FIG. 2 is a flowchart of another CT image anomaly detection method according to an embodiment of this specification. As shown in FIG. 2, the process includes:
step 101, acquiring a sinogram of a CT image to be detected; the sinogram of the CT image to be detected comprises: a first partial sinogram and a second partial sinogram.
Step 102, inputting the first partial sinogram into a prediction network, which generates from the first partial sinogram a second predicted sinogram corresponding to the second partial sinogram; the prediction network is trained in advance using sinograms of normal CT images.
Steps 101 and 102 above are the same as in the CT image anomaly detection method shown in FIG. 1 and are not repeated here.
Step 201, inputting the second part sinogram into the prediction network, and generating a first prediction sinogram corresponding to the first part sinogram by the prediction network according to the second part sinogram.
In step 102, the prediction network has already generated the second predicted sinogram corresponding to the second partial sinogram. In this step, the second partial sinogram is input into the prediction network, which generates the first predicted sinogram corresponding to the first partial sinogram. Thus, after steps 102 and 201, two partial predicted sinograms have been generated: a first predicted sinogram corresponding to the first partial sinogram, and a second predicted sinogram corresponding to the second partial sinogram.
For example, suppose the sinogram of the CT image to be detected consists of three parts: an upper third, a middle third, and a lower third. After steps 102 and 201, predicted sinograms can be generated for the upper and middle thirds; or for the upper and lower thirds; or for the middle and lower thirds. When the sinogram consists of two parts, or of more parts, the first and second predicted sinograms are generated similarly, which is not repeated here.
In one example, the sinogram of the CT image to be detected consists of only the first partial sinogram and the second partial sinogram. After steps 102 and 201, the prediction network has generated both the first and second predicted sinograms, i.e., a complete predicted sinogram of the CT image to be detected. The sinogram of the CT image can then be compared with this complete predicted sinogram, and whether the CT image is abnormal is determined from the resulting difference value.
In this example, a complete predicted sinogram is generated and the whole sinogram of the CT image to be detected serves as the comparison object. Compared with comparing only part of the sinogram, the resulting difference value is more accurate, so whether the CT image is abnormal can be determined more reliably.
Step 202, comparing the sinogram of the CT image to be detected with the predicted sinogram, which comprises the first predicted sinogram and the second predicted sinogram, and determining from the difference value whether the CT image to be detected is abnormal.
In this step, the first and second predicted sinograms may be treated as a whole and compared with the corresponding parts of the sinogram of the CT image to be detected, with abnormality determined from the resulting difference value. Alternatively, the first predicted sinogram may be compared with the first partial sinogram to obtain a first difference value, and the second predicted sinogram compared with the second partial sinogram to obtain a second difference value; whether the CT image is abnormal is then determined jointly from the magnitudes of the two difference values.
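One way to combine the two difference values is sketched below. Mean absolute error stands in for whichever similarity measure (e.g., SSIM) is actually used, and the max rule with a 0.05 threshold is an assumption for illustration, since the text leaves the exact combination open:

```python
import numpy as np

def mean_abs_diff(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute difference, used here as a simple difference value."""
    return float(np.mean(np.abs(a - b)))

def detect_bidirectional(first, second, first_pred, second_pred,
                         threshold: float = 0.05):
    """Flag the CT image as abnormal if either measured partial sinogram
    deviates too much from its predicted counterpart."""
    d1 = mean_abs_diff(first, first_pred)    # first difference value
    d2 = mean_abs_diff(second, second_pred)  # second difference value
    return max(d1, d2) > threshold, (d1, d2)
```

Taking the maximum means a single badly mispredicted part is enough to flag the image; averaging the two values would be a more lenient alternative.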
In the CT image anomaly detection method of this embodiment, the prediction network predicts from the first partial sinogram and the second partial sinogram separately, generating the corresponding first and second predicted sinograms. The prediction image containing both predicted sinograms is then compared with the sinogram of the CT image to be detected, and abnormality is determined from the resulting difference value. Because this method generates and compares two predicted sinogram parts rather than one, the difference comparison is more accurate, so whether the CT image to be detected is abnormal can be determined more reliably.
Referring to fig. 3, fig. 3 is a flowchart illustrating a predictive network training method according to an embodiment of the present disclosure. As shown in fig. 3, the process includes:
step 301, acquiring a sinogram of a normal CT image; the sinogram of the normal CT image includes: a third partial sinogram and a fourth partial sinogram.
Before the deep neural network is trained to obtain the prediction network, a large amount of training data is needed; in this embodiment, that means acquiring a large number of sinograms of normal CT images. Compared with other training approaches, this one only requires sinograms of normal CT images and does not require a large number of abnormal CT images as training data. Since most clinical CT images are normal, enough training data can be collected to meet the training requirement, avoiding the difficulty of gathering large numbers of abnormal CT images. The present specification does not limit the specific manner of obtaining the sinogram of a normal CT image. For example, the original sinogram data used to reconstruct the CT image may be taken from the raw CT scan acquisition process; alternatively, after a normal CT image is acquired, it can be converted into the corresponding sinogram by exploiting the reversible relationship between the two.
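As a sketch of the second route (converting an acquired CT image back into a sinogram), a minimal Radon transform can be written with numpy and scipy. The function name `radon_sinogram`, the angle sampling, and the toy phantom are illustrative assumptions; production code would normally use an optimized Radon implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_sinogram(image, n_angles=180):
    """Minimal Radon transform: rotate the image and integrate along columns.

    Each row of the result is the projection at one view angle, so the
    output is an (n_angles, width) sinogram of the input CT slice.
    """
    sino = np.empty((n_angles, image.shape[1]))
    for i, theta in enumerate(np.linspace(0.0, 180.0, n_angles, endpoint=False)):
        rotated = rotate(image, theta, reshape=False, order=1)
        sino[i] = rotated.sum(axis=0)   # line integrals for this view angle
    return sino

# A toy "CT slice": a bright disc on a dark background.
yy, xx = np.mgrid[:64, :64]
phantom = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
sinogram = radon_sinogram(phantom, n_angles=90)
```

Each projection sums the same total attenuation, which is why every row of the sinogram has roughly the same integral as the phantom itself.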
The sinogram of the normal CT image comprises a third partial sinogram and a fourth partial sinogram. That is, the sinogram of a normal CT image used as training data for the deep neural network is composed of two parts (the third and fourth partial sinograms), or of more parts, such as a third, a fourth, and a fifth partial sinogram.
In one example, the sinogram of the normal CT image consists of exactly the third and fourth partial sinograms; that is, the training sinogram is composed of two parts, the third partial sinogram and the fourth partial sinogram.
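In this simple case, building (third, fourth) training pairs amounts to slicing each normal sinogram in two. The half-and-half split along the view axis below is one possible choice, mirroring the upper/lower-half example used later; the function names are illustrative.

```python
import numpy as np

def split_sinogram(sino):
    """Split a sinogram into a third (upper) and fourth (lower) partial sinogram."""
    h = sino.shape[0] // 2
    return sino[:h], sino[h:]

def make_training_pairs(sinograms):
    """Turn a batch of normal-CT sinograms into (input, target) training pairs."""
    pairs = []
    for s in sinograms:
        third, fourth = split_sinogram(s)
        pairs.append((third, fourth))   # network input -> ground-truth output
    return pairs
```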
Step 302, training the prediction network according to the third part sinogram and the fourth part sinogram, so that the prediction network learns the mapping relationship between the third part sinogram and the fourth part sinogram.
In this step, a pre-constructed deep neural network is trained with the third and fourth partial sinograms, so that the resulting prediction network learns the mapping relationship between them. Taking a generative adversarial network as an example: the third and fourth partial sinograms are fed into a GAN (Generative Adversarial Network) for training, so that the trained prediction network learns the mapping between the third and fourth partial sinograms. In the use stage of the trained prediction network, the third partial sinogram can serve as input, and the prediction network generates a fourth-part predicted sinogram corresponding to the fourth partial sinogram from the learned mapping; alternatively, with the fourth partial sinogram as input, the prediction network generates a "third-part predicted sinogram" corresponding to the third partial sinogram.
The following description uses a GAN as the specific trained deep neural network. In this GAN, the generating network can consist of a context encoder and a decoder based on a CNN (Convolutional Neural Network): the encoder adopts an AlexNet structure with ReLU (Rectified Linear Unit) as the activation function, and the decoder consists of five convolutional layers with ReLU activation layers. The discriminating network in the GAN is similar to the encoder part of the generating network and likewise uses an AlexNet architecture.
The training data for the GAN is a large number of sinograms of normal CT images; as an example, each sinogram is divided into two parts, an upper-half sinogram and a lower-half sinogram. The upper-half sinogram is cut out and used as the generating network's input, and the generating network, by analyzing this incomplete sinogram, learns a structural representation of the missing lower-half data. The discriminating network's input is either a real lower-half sinogram or the corresponding lower-half predicted sinogram produced by the generating network; a five-layer CNN finally outputs a 0/1 value representing the discriminating network's judgment of whether its input is real. The generating and discriminating networks are trained in alternating iterations until the generating network can accurately map one part of a normal CT sinogram to the other.
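The alternating training loop can be sketched at toy scale. The linear "generator" and logistic-regression "discriminator" below are deliberate stand-ins for the AlexNet-style encoder/decoder and five-layer CNN described above, and all sizes, learning rates, and the adversarial weight are illustrative assumptions, not the patent's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, n = 8, 8, 64            # toy half-sinogram size and number of training sinograms
D = H * W

# Synthetic "normal" data: lower halves are a fixed linear function of upper halves.
A_true = rng.normal(0, 0.1, (D, D))
upper = rng.normal(size=(n, D))          # flattened upper-half sinograms (input)
lower = upper @ A_true.T                 # flattened lower-half sinograms (target)

G = np.zeros((D, D))                     # generator: upper half -> predicted lower half
w, b = rng.normal(0, 0.01, D), 0.0       # discriminator: logistic regression, real=1 / fake=0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

initial_err = np.mean(np.abs(lower))     # reconstruction error of the all-zero generator
lr_g, lr_d, adv_weight = 0.005, 0.1, 0.01
for step in range(300):
    fake = upper @ G.T
    # Discriminator step: push real lower halves toward 1, generated ones toward 0.
    s_real, s_fake = sigmoid(lower @ w + b), sigmoid(fake @ w + b)
    w -= lr_d * ((s_real - 1) @ lower + s_fake @ fake) / n
    b -= lr_d * (np.mean(s_real - 1) + np.mean(s_fake))
    # Generator step: reconstruction loss plus a small adversarial term to fool D.
    rec_grad = 2 * (fake - lower).T @ upper / n
    adv_grad = -np.outer(1 - sigmoid(fake @ w + b), w).T @ upper / n
    G -= lr_g * (rec_grad + adv_weight * adv_grad)

final_err = np.mean(np.abs(upper @ G.T - lower))
```

The structure is what matters: the discriminator and generator are updated in alternation, and the generator's objective mixes reconstructing the missing half with fooling the discriminator, which is the mapping capability the trained prediction network relies on.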
Using a GAN as the trained deep neural network is only exemplary, not limiting; other deep neural network architectures may also be used, such as supervised or unsupervised deep learning networks including U-NET.
Fig. 4 illustrates a CT image anomaly detection device according to the present specification, which can perform the CT image anomaly detection method according to any embodiment of the present specification. The apparatus may include an acquisition module 401, a prediction module 402, and a comparison module 403. Wherein:
an acquisition module 401, configured to acquire a sinogram of a CT image to be detected, the sinogram comprising a first partial sinogram and a second partial sinogram;
a prediction module 402, configured to input the first partial sinogram into a prediction network, which generates a second predicted sinogram corresponding to the second partial sinogram; the prediction network is trained in advance using sinograms of normal CT images;
a comparison module 403, configured to compare the second partial sinogram with the second predicted sinogram and determine whether the CT image to be detected is abnormal according to the resulting difference value.
Optionally, the prediction module 402 is further configured to input the second partial sinogram into the prediction network, which generates a first predicted sinogram corresponding to the first partial sinogram; and the comparison module 403 is further configured to compare the sinogram of the CT image to be detected with the prediction image, which comprises the first and second predicted sinograms, and determine whether the CT image to be detected is abnormal according to the resulting difference value.
Optionally, the sinogram of the CT image to be detected is composed of the first part sinogram and the second part sinogram.
Optionally, as shown in fig. 5, the apparatus further includes:
a normal image obtaining module 501, configured to obtain a sinogram of a normal CT image; the sinogram of the normal CT image includes: a third partial sinogram and a fourth partial sinogram;
a training module 502, configured to train the prediction network according to the third partial sinogram and the fourth partial sinogram, so that the prediction network learns a mapping relationship between the third partial sinogram and the fourth partial sinogram.
Optionally, the sinogram of the normal CT image is composed of the third and fourth partial sinograms.
The implementation of the functions of each module in the above apparatus is described in detail in the corresponding steps of the above method and is not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected as needed to achieve the purpose of at least one embodiment of this specification. Those of ordinary skill in the art can understand and implement this without inventive effort.
The present specification also provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor can implement the CT image anomaly detection method according to any embodiment of the present specification when executing the computer program.
The present specification also provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, is capable of implementing the CT image anomaly detection method according to any one of the embodiments of the present specification.
The non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc., which is not limited in this application.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (12)

1. A method for detecting abnormalities in a CT image, the method comprising:
acquiring a sinogram of a CT image to be detected; the sinogram of the CT image to be detected comprises: a first portion sinogram and a second portion sinogram;
inputting the first part sinogram into a prediction network, and generating a second prediction sinogram corresponding to the second part sinogram according to the first part sinogram by the prediction network; the prediction network is obtained by utilizing the sinogram training of a normal CT image in advance;
and comparing the difference between the second partial sinogram and the second predicted sinogram, and determining whether the CT image to be detected is abnormal or not according to the difference value of the difference.
2. The method according to claim 1, wherein after inputting the first part sinogram into the prediction network and generating, by the prediction network, the second predicted sinogram corresponding to the second part sinogram, the method further comprises:
inputting the second part sinogram into the prediction network, and generating a first prediction sinogram corresponding to the first part sinogram by the prediction network according to the second part sinogram;
comparing the sinogram of the CT image to be detected with the prediction image, and determining whether the CT image to be detected is abnormal according to the resulting difference value; the prediction image comprises: the first predicted sinogram and the second predicted sinogram.
3. The method according to claim 1 or 2, wherein the sinogram of the CT image to be detected is composed of the first part sinogram and the second part sinogram.
4. The method according to claim 1, wherein before acquiring the sinogram of the CT image to be detected, further comprising:
acquiring a sinogram of a normal CT image; the sinogram of the normal CT image includes: a third partial sinogram and a fourth partial sinogram;
and training the prediction network according to the third part sinogram and the fourth part sinogram, so that the prediction network learns the mapping relation between the third part sinogram and the fourth part sinogram.
5. The method of claim 4 wherein the sinogram of the normal CT image is comprised of the third and fourth partial sinograms.
6. A CT image abnormality detection apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a sinogram of a CT image to be detected; the sinogram of the CT image to be detected comprises: a first portion sinogram and a second portion sinogram;
a prediction module configured to input the first portion sinogram into a prediction network, and generate a second predicted sinogram corresponding to the second portion sinogram from the first portion sinogram by the prediction network; the prediction network is obtained by utilizing the sinogram training of a normal CT image in advance;
and the comparison module is used for comparing the difference between the second partial sinogram and the second predicted sinogram and determining whether the CT image to be detected is abnormal or not according to the difference value of the difference.
7. The apparatus of claim 6,
the prediction module is further configured to input the second part sinogram into the prediction network, and the prediction network generates a first predicted sinogram corresponding to the first part sinogram according to the second part sinogram;
the comparison module is further configured to compare the sinogram of the CT image to be detected with the prediction image and determine whether the CT image to be detected is abnormal according to the resulting difference value; the prediction image comprises: the first predicted sinogram and the second predicted sinogram.
8. The apparatus according to claim 6 or 7, characterized in that the sinogram of the CT image to be detected is composed of the first partial sinogram and the second partial sinogram.
9. The apparatus of claim 6, further comprising:
the normal image acquisition module is used for acquiring a sinogram of a normal CT image; the sinogram of the normal CT image includes: a third partial sinogram and a fourth partial sinogram;
and the training module is used for training the prediction network according to the third part sinogram and the fourth part sinogram so that the prediction network learns the mapping relation between the third part sinogram and the fourth part sinogram.
10. The apparatus of claim 9 wherein the sinogram of the normal CT image is comprised of the third and fourth partial sinograms.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-5 when executing the program.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN202010183389.3A 2020-03-16 2020-03-16 CT image anomaly detection method and device Active CN111445447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010183389.3A CN111445447B (en) 2020-03-16 2020-03-16 CT image anomaly detection method and device


Publications (2)

Publication Number Publication Date
CN111445447A true CN111445447A (en) 2020-07-24
CN111445447B CN111445447B (en) 2024-03-01

Family

ID=71627573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010183389.3A Active CN111445447B (en) 2020-03-16 2020-03-16 CT image anomaly detection method and device

Country Status (1)

Country Link
CN (1) CN111445447B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096117A (en) * 2021-04-29 2021-07-09 中南大学湘雅医院 Ectopic ossification CT image segmentation method, three-dimensional reconstruction method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402787A (en) * 2010-09-19 2012-04-04 上海西门子医疗器械有限公司 System and method for detecting strip artifact in image
CN109215014A (en) * 2018-07-02 2019-01-15 中国科学院深圳先进技术研究院 Training method, device, equipment and the storage medium of CT image prediction model
US20190073804A1 (en) * 2017-09-05 2019-03-07 Siemens Healthcare Gmbh Method for automatically recognizing artifacts in computed-tomography image data
US20190147588A1 (en) * 2017-11-13 2019-05-16 Siemens Healthcare Gmbh Artifact identification and/or correction for medical imaging
CN110288671A (en) * 2019-06-25 2019-09-27 南京邮电大学 The low dosage CBCT image rebuilding method of network is generated based on three-dimensional antagonism
CN110555474A (en) * 2019-08-28 2019-12-10 上海电力大学 photovoltaic panel fault detection method based on semi-supervised learning




Similar Documents

Publication Publication Date Title
US10691980B1 (en) Multi-task learning for chest X-ray abnormality classification
JP6837597B2 (en) Active learning systems and methods
US20200104984A1 (en) Methods and devices for reducing dimension of eigenvectors
JP2020064609A (en) Patient-specific deep learning image denoising methods and systems
EP4036931A1 (en) Training method for specializing artificial intelligence model in institution for deployment, and apparatus for training artificial intelligence model
US11263744B2 (en) Saliency mapping by feature reduction and perturbation modeling in medical imaging
WO2022051290A1 (en) Connected machine-learning models with joint training for lesion detection
CN111311704A (en) Image reconstruction method and device, computer equipment and storage medium
CN112348908A (en) Shape-based generative countermeasure network for segmentation in medical imaging
CN111612756B (en) Coronary artery specificity calcification detection method and device
WO2021165053A1 (en) Out-of-distribution detection of input instances to a model
US20210319539A1 (en) Systems and methods for background aware reconstruction using deep learning
KR20170117324A (en) Image processing apparatus, image processing method, and storage medium
CN115249279A (en) Medical image processing method, medical image processing device, computer equipment and storage medium
Mohebbian et al. Classifying MRI motion severity using a stacked ensemble approach
CN117671463B (en) Multi-mode medical data quality calibration method
CN111179277A (en) Unsupervised self-adaptive mammary gland lesion segmentation method
CN111445447B (en) CT image anomaly detection method and device
CN111339993A (en) X-ray image metal detection method and system
CN114612484B (en) Retina OCT image segmentation method based on unsupervised learning
Basu Analyzing Alzheimer's disease progression from sequential magnetic resonance imaging scans using deep convolutional neural networks
CN115700740A (en) Medical image processing method, apparatus, computer device and storage medium
Arega et al. Automatic Quality Assessment of Cardiac MR Images with Motion Artefacts Using Multi-task Learning and K-Space Motion Artefact Augmentation
KR102580749B1 (en) Method and apparatus for image quality assessment of medical images
US20240312007A1 (en) Pet image analysis and reconstruction by machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant