CN111325737A - Low-dose CT image processing method and device and computer equipment - Google Patents

Low-dose CT image processing method and device and computer equipment

Info

Publication number
CN111325737A
Authority
CN
China
Prior art keywords
image
noise
dose
deconvolution
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010129313.2A
Other languages
Chinese (zh)
Other versions
CN111325737B (en)
Inventor
盛斌
戴超
贺加原
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhitang Health Technology Co ltd
Original Assignee
Shanghai Zhitang Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhitang Health Technology Co ltd filed Critical Shanghai Zhitang Health Technology Co ltd
Priority to CN202010129313.2A priority Critical patent/CN111325737B/en
Publication of CN111325737A publication Critical patent/CN111325737A/en
Application granted granted Critical
Publication of CN111325737B publication Critical patent/CN111325737B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a low-dose CT image processing method, a low-dose CT image processing device, a computer device and a storage medium. The method comprises the following steps: acquiring a low-dose CT image and a pre-trained target extraction model; the target extraction model comprises a convolution chain and a deconvolution chain; respectively determining the arrangement sequence of each noise removal convolutional layer in a convolution chain and the arrangement sequence of each deconvolution layer in a deconvolution chain; determining the noise-removal convolutional layers and the deconvolution layers with the same arrangement sequence as a pair of related layers; extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removal convolution layer; and respectively inputting the characteristic images into corresponding deconvolution layers based on the relation of the association layers, so that the deconvolution layers extract the target organs in the low-dose CT image according to the characteristic images. By adopting the method, the target organ can be extracted from the low-dose CT image.

Description

Low-dose CT image processing method and device and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a low-dose CT image processing method and apparatus, and a computer device.
Background
With the development of science and technology, extracting clear and accurate target regions from CT (computed tomography) images has become a key step in computer-assisted therapy; for example, extracting the liver from CT images is a key step in the assisted treatment of liver diseases. Because medical CT is so widely used, the total radiation dose poses a potential cancer risk to patients, and low-dose CT has therefore emerged. Low-dose CT reduces the radiation dose by shortening the exposure time of the X-ray tube. However, shortening the exposure time of the X-ray tube introduces additional noise and streak artifacts into the low-dose CT image, making it particularly difficult to extract a clear and accurate target region from it. There is therefore an urgent need for a method capable of extracting a clear and accurate target region from low-dose CT images.
Disclosure of Invention
In view of the above, it is necessary to provide a low-dose CT image processing method, apparatus, computer device and storage medium capable of extracting a target organ from a low-dose CT image in order to solve the above-mentioned technical problems.
A low dose CT image processing method, the method comprising:
acquiring a low-dose CT image and a pre-trained target extraction model; the low-dose CT image contains a target organ; the target extraction model comprises a convolution chain consisting of a plurality of noise-removal convolutional layers and a deconvolution chain consisting of a plurality of deconvolution layers; the number of noise-removal convolutional layers is the same as the number of deconvolution layers;
respectively determining the arrangement sequence of each noise-removal convolutional layer in the convolutional chain and the arrangement sequence of each deconvolution layer in the deconvolution chain;
determining the noise-removal convolutional layers and the deconvolution layers with the same arrangement sequence as a pair of related layers;
extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removal convolution layer;
and respectively inputting the characteristic images into corresponding deconvolution layers based on the relation of the association layers, so that the deconvolution layers extract the target organs in the low-dose CT image according to the characteristic images.
In one embodiment, the extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removed convolution layer includes:
determining a noise removal convolutional layer of a current sequence;
taking a low-dose CT image as an input of a noise-removal convolutional layer in a current sequence, so that the noise-removal convolutional layer in the current sequence extracts a characteristic image from the low-dose CT image;
and taking the noise-removal convolutional layer in the next sequence as the noise-removal convolutional layer in the current sequence, taking the extracted feature image as a low-dose CT image, and returning to the step of taking the low-dose CT image as the input of the noise-removal convolutional layer in the current sequence until the whole convolutional chain is traversed.
In one embodiment, the inputting the feature images into corresponding deconvolution layers respectively based on the relationship between the associated layers, so that the deconvolution layer extracts the target organ in the low-dose CT image from the feature images includes:
extracting a characteristic image corresponding to the noise removal convolutional layer positioned at the tail end in the convolutional chain, and recording the characteristic image as a first characteristic image;
performing feature reconstruction on the first feature image based on a deconvolution layer located at the tail of the deconvolution chain to obtain a first intermediate result;
extracting a feature image corresponding to a noise-removal convolutional layer other than the one at the end of the convolution chain, and recording it as a second feature image;
and inputting the first intermediate result and the second characteristic image into a corresponding deconvolution layer so that the target organ in the low-dose CT image is extracted by the deconvolution layer.
In one embodiment, the inputting the first intermediate result and the second feature image into a corresponding deconvolution layer, so that the deconvolution layer extracts a target organ in the low-dose CT image includes:
determining a noise-removal convolutional layer and a deconvolution layer of a current sequence; the noise-removal convolutional layer of the current sequence is not the noise-removal convolutional layer located at the end of the convolution chain; the deconvolution layer of the current sequence is not the deconvolution layer located at the end of the deconvolution chain;
extracting a second characteristic image corresponding to the noise-removed convolutional layer of the current sequence;
performing pixel superposition on the second characteristic image and the first intermediate result based on the deconvolution layer of the current sequence to obtain a second intermediate result;
and taking the noise-removal convolutional layer and the deconvolution layer in the next sequence as the noise-removal convolutional layer and the deconvolution layer in the current sequence, taking the second intermediate result as the first intermediate result, returning to the step of extracting the second characteristic image corresponding to the noise-removal convolutional layer in the current sequence until the complete deconvolution chain is traversed.
In one embodiment, the training step of the target extraction model includes:
acquiring a training image group; the training image set comprises a low-dose CT image and a high-dose CT image;
extracting a first target image block from a low-dose CT image in the training image group, and extracting a second target image block from a high-dose CT image; the position and the size of the first target image block are the same as those of the second target image block;
determining a total error between the first target image block and the second target image block based on the target extraction model;
and adjusting model parameters in the target extraction model until the total error between the first target image block and the second target image block is minimum.
In one embodiment, the determining the total error between the first target image block and the second target image block based on the target extraction model comprises:
acquiring a first total error between the first target image block and the second target image block based on the convolution chain;
adjusting model parameters in the target extraction model until the first total error is minimum;
acquiring a second total error between the first target image block and the second target image block based on the deconvolution chain;
adjusting model parameters in the target extraction model until the second total error is minimal.
A low dose CT image processing apparatus, the apparatus comprising:
the model acquisition module is used for acquiring a low-dose CT image and a pre-trained target extraction model; the low-dose CT image contains a target organ; the target extraction model comprises a convolution chain consisting of a plurality of noise-removal convolutional layers and a deconvolution chain consisting of a plurality of deconvolution layers; the number of noise-removal convolutional layers is the same as the number of deconvolution layers;
a correlation layer determining module, configured to determine an arrangement order of each noise-removal convolutional layer in the convolutional chain and an arrangement order of each deconvolution layer in the deconvolution chain, respectively; determining the noise-removal convolutional layers and the deconvolution layers with the same arrangement sequence as a pair of related layers;
the target organ extraction module is used for extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removal convolution layer; and respectively inputting the characteristic images into corresponding deconvolution layers based on the relation of the association layers, so that the deconvolution layers extract the target organs in the low-dose CT image according to the characteristic images.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a low-dose CT image and a pre-trained target extraction model; the low-dose CT image contains a target organ; the target extraction model comprises a convolution chain consisting of a plurality of noise-removal convolutional layers and a deconvolution chain consisting of a plurality of deconvolution layers; the number of noise-removal convolutional layers is the same as the number of deconvolution layers;
respectively determining the arrangement sequence of each noise-removal convolutional layer in the convolutional chain and the arrangement sequence of each deconvolution layer in the deconvolution chain;
determining the noise-removal convolutional layers and the deconvolution layers with the same arrangement sequence as a pair of related layers;
extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removal convolution layer;
and respectively inputting the characteristic images into corresponding deconvolution layers based on the relation of the association layers, so that the deconvolution layers extract the target organs in the low-dose CT image according to the characteristic images.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a low-dose CT image and a pre-trained target extraction model; the low-dose CT image contains a target organ; the target extraction model comprises a convolution chain consisting of a plurality of noise-removal convolutional layers and a deconvolution chain consisting of a plurality of deconvolution layers; the number of noise-removal convolutional layers is the same as the number of deconvolution layers;
respectively determining the arrangement sequence of each noise-removal convolutional layer in the convolutional chain and the arrangement sequence of each deconvolution layer in the deconvolution chain;
determining the noise-removal convolutional layers and the deconvolution layers with the same arrangement sequence as a pair of related layers;
extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removal convolution layer;
and respectively inputting the characteristic images into corresponding deconvolution layers based on the relation of the association layers, so that the deconvolution layers extract the target organs in the low-dose CT image according to the characteristic images.
According to the low-dose CT image processing method, the low-dose CT image processing device, the computer equipment and the storage medium, the convolution chain and the deconvolution chain in the target extraction model can be determined by acquiring the low-dose CT image and the pre-trained target extraction model; determining the associated layers based on the order of arrangement by determining the order of arrangement of each noise-removed convolutional layer in the convolutional chain and the order of arrangement of each deconvolution layer in the deconvolution chain; performing feature extraction on the low-dose CT image through a convolution chain to obtain a feature image corresponding to each noise-removal convolution layer; through the relation of the associated layers, the characteristic images can be respectively input into the corresponding deconvolution layers, so that the deconvolution layers extract the target organs in the low-dose CT image according to the characteristic images.
Drawings
FIG. 1 is a diagram illustrating an exemplary embodiment of a low-dose CT image processing method;
FIG. 2 is a schematic flow chart diagram illustrating a method for processing low-dose CT images in one embodiment;
FIG. 3 is a diagram of a convolution chain and a deconvolution chain in one embodiment;
FIG. 4 is a diagram illustrating the structure of an object extraction model in one embodiment;
FIG. 5 is a schematic flow chart diagram illustrating a method for training a target extraction model in one embodiment;
FIG. 6 is a block diagram of a low-dose CT image processing apparatus according to an exemplary embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, the low-dose CT image processing method provided by the present application is applied to a low-dose CT image processing system. The low-dose CT image processing system includes a terminal 110 and a server 120. The low-dose CT image processing method may be performed at the terminal 110 or at the server 120. When a target organ needs to be identified from a low-dose CT image, the terminal 110 may identify and extract the target organ from the low-dose CT image, or may transmit the low-dose CT image to the server 120, and the server 120 identifies the target organ in the low-dose CT image. The terminal 110 and the server 120 are connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, a low-dose CT image processing method is provided, which is exemplified by the application of the method to the server in fig. 1, and includes the following steps:
S202, acquiring a low-dose CT image and a pre-trained target extraction model.
A low-dose CT image is a CT image acquired with the radiation dose reduced to half, or even one tenth, of the conventional dose. The pre-trained target extraction model is a trained machine learning model. The target extraction model comprises a convolution chain and a deconvolution chain; the convolution chain is a sub-model formed by connecting a plurality of noise-removal convolutional layers, and the deconvolution chain is a sub-model formed by connecting a plurality of deconvolution layers.
Specifically, an image acquisition device may be deployed in a preset acquisition area. The device scans the real scene within its field of view in real time at a low radiation dose and generates live image frames at a preset frequency, and the generated live image frames may be cached locally on the device. For example, a CT scan of a patient may be performed with a CT device to obtain live image frames, which are transmitted to a computer device. The computer device crops the live image frames to obtain a low-dose CT image. A target extraction model is pre-stored in the computer device, and after the low-dose CT image is obtained, the target organ in the low-dose CT image is identified and extracted based on the target extraction model. In the scenario of liver extraction, the target organ may be the liver.
S204, respectively determining the arrangement sequence of each noise-removal convolutional layer in the convolutional chain and the arrangement sequence of each deconvolution layer in the deconvolution chain.
S206, the noise-removal convolutional layer and the deconvolution layer with the same arrangement sequence are determined as a pair of related layers.
Wherein, the noise removal convolution layer is a convolution neural network; the deconvolution layer is a deconvolution neural network.
Specifically, as shown in fig. 3, the target extraction model includes a convolution chain and a deconvolution chain. The noise-removal convolutional layers in the convolution chain are connected in a chain structure, so that the output of a noise-removal convolutional layer of one order can serve as the input of the noise-removal convolutional layer of the next order. Similarly, the deconvolution layers in the deconvolution chain are also connected in a chain structure, so that the low-dose CT image is up-sampled based on the chain-structured deconvolution chain. The computer device determines the arrangement order of each noise-removal convolutional layer in the convolution chain and the arrangement order of each deconvolution layer in the deconvolution chain, respectively, and determines the noise-removal convolutional layer and the deconvolution layer with the same arrangement order as a pair of associated layers. For example, the noise-removal convolutional layer at the start position and the deconvolution layer at the start position are determined as a pair of associated layers. FIG. 3 is a diagram illustrating a convolution chain and a deconvolution chain in one embodiment.
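As an illustration only, the chain construction and associated-layer pairing described above might be organized as in the following Python (PyTorch) sketch; the layer count, channel progression, kernel size and names (conv_chain, deconv_chain, associated_layers) are assumptions made for the example and are not fixed by this application.

```python
# Illustrative sketch (assumed layer count, channels and kernel size).
import torch.nn as nn

NUM_LAYERS = 5                       # assumed: same count for both chains
CHANNELS = [1, 32, 32, 32, 32, 32]   # assumed channel progression

# Convolution chain: one noise-removal convolutional layer per order.
conv_chain = nn.ModuleList(
    nn.Conv2d(CHANNELS[j], CHANNELS[j + 1], kernel_size=3, padding=1)
    for j in range(NUM_LAYERS)
)

# Deconvolution chain: the same number of deconvolution layers.
deconv_chain = nn.ModuleList(
    nn.ConvTranspose2d(CHANNELS[j + 1], CHANNELS[j], kernel_size=3, padding=1)
    for j in range(NUM_LAYERS)
)

# A noise-removal convolutional layer and the deconvolution layer with the
# same arrangement order form a pair of associated layers.
associated_layers = [(conv_chain[j], deconv_chain[j]) for j in range(NUM_LAYERS)]
```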
S208, extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removal convolutional layer.
Specifically, the computer device inputs a low-dose CT image of fixed size into the noise-removal convolutional layer at the start position; this layer down-samples the low-dose CT image to obtain the corresponding feature image, and the extracted feature image is then used as the input of the noise-removal convolutional layer of the next order, which in turn performs feature extraction on it. This continues until the whole convolution chain has been traversed and the feature image corresponding to each noise-removal convolutional layer has been obtained. For example, the target extraction model may extract low-level features with the noise-removal convolutional layer at the start position; low-level features are low-dimensional features extracted from the edges and corners of the target organ. The noise-removal convolutional layer of the next order then uses the low-level features to extract the simple contour features of the target organ. Finally, the noise-removal convolutional layer at the end uses the simple contour features to distinguish higher-level features, for example extracting the liver texture and the complete contour of the target organ from the simple contour features.
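The traversal of the convolution chain described above can be sketched as follows, continuing the hypothetical conv_chain from the previous snippet (the ReLU activation anticipates the layer formula given below; the 64 × 64 input size is an assumption):

```python
import torch

def traverse_convolution_chain(conv_chain, low_dose_ct):
    """Run the low-dose CT image through every noise-removal convolutional
    layer in order and keep the feature image produced by each layer."""
    feature_images = []
    current_input = low_dose_ct               # "low-dose CT image" of the current order
    for conv_layer in conv_chain:             # noise-removal conv layer of the current order
        current_input = torch.relu(conv_layer(current_input))
        feature_images.append(current_input)  # feature image of this layer
    return feature_images

# Example call with a single-channel 64 x 64 image block.
features = traverse_convolution_chain(conv_chain, torch.randn(1, 1, 64, 64))
```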
It should be noted that, unlike conventional convolutional networks, the present application uses a chain of symmetrically connected noise-removal convolutional layers as its stacked components. Streak artifacts in the low-dose CT image are smoothed from the noise-removal convolutional layer at the start position to the noise-removal convolutional layer at the end position, so as to preserve the important information in the plaque. A streak artifact refers to non-random interference of any type appearing in the image during CT imaging, i.e. streak-like structures in the image corresponding to tissues or lesions that do not actually exist in the subject. Plaque refers to a focal (lesion) region in the CT image.
In one embodiment, each noise-removal convolutional layer is followed by a rectified linear unit ReLU(e) = max(0, e), so the specific noise-removal convolutional layer $C_j$ can be expressed as:

$C_j(e_j) = \mathrm{ReLU}(k_j * e_j + m_j), \quad j = 0, 1, 2, \ldots, m$

where $j$ denotes the order of appearance of the noise-removal convolutional layer in the convolution chain, $k_j$ represents the weights in the noise-removal convolutional layer, $m_j$ its bias, $*$ denotes the convolution operation, $e_0$ represents the image block extracted from the input low-dose CT image, $e_j$ ($j > 0$) is the feature image output by the previous noise-removal convolutional layer, and ReLU(e) = max(0, e) is the activation function. An image block arises because, when a convolution operation is performed on the low-dose CT image, the image is usually divided into a plurality of image regions and the convolutional layer performs the convolution operation on each region; each divided region is an image block.
S210, inputting the feature images into the corresponding deconvolution layers based on the associated-layer relationship, so that the deconvolution layers extract the target organ in the low-dose CT image from the feature images.
Specifically, as shown in fig. 4, the target extraction model acquires a feature image output from the noise-removed convolution layer located at the end in the convolution chain. For the convenience of the following description, the feature image output from the noise-removed convolution layer located at the end of the convolution chain is referred to as a first feature image. And the target extraction model determines a deconvolution layer positioned at the tail in the deconvolution chain, and takes the first characteristic image as the input of the deconvolution layer, so that the deconvolution layer positioned at the tail in the deconvolution chain carries out characteristic reconstruction on the first characteristic image, and a first intermediate result is obtained. FIG. 4 is a diagram illustrating a structure of an object extraction model according to an embodiment.
Further, the target extraction model extracts the feature images corresponding to the noise-removal convolutional layers in the convolution chain other than the layer located at the end; for convenience of description below, these feature images are referred to as second feature images. Based on the associated-layer relationship between the noise-removal convolutional layers and the deconvolution layers, the target extraction model inputs the first intermediate result and the second feature images into the corresponding deconvolution layers, so that the deconvolution layers extract the target organ in the low-dose CT image from the feature images.
In one embodiment, each deconvolution layer is followed by a rectified linear unit ReLU(e) = max(0, e), so the specific deconvolution layer $D_j$ can be expressed as:

$D_j(g_j) = \mathrm{ReLU}(k'_j \circledast g_j + m'_j), \quad j = 0, 1, 2, \ldots, m$

where $j$ denotes the order of appearance of the deconvolution layer in the deconvolution chain, $k'_j$ represents the weights in the deconvolution layer, $m'_j$ its bias, $\circledast$ denotes the deconvolution operation, $g_m = e$ denotes the first feature image, $g_j$ ($m > j > 0$) denotes the feature image output by the deconvolution layer of the previous order, and $g_0$ refers to the image block extracted from the low-dose CT image.
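To make the correspondence concrete, one deconvolution layer of this form is sketched below under the same assumptions as the earlier snippets (deconv_chain and features are the hypothetical objects defined there); it shows the deconvolution layer at the end of the chain performing feature reconstruction on the first feature image, as in the description of fig. 4 above.

```python
import torch

# D_j(g_j) = ReLU(k'_j (deconv) g_j + m'_j): one deconvolution layer followed by ReLU.
first_feature_image = features[-1]        # output of the last noise-removal conv layer
tail_deconv = deconv_chain[-1]            # deconv layer at the end of the deconvolution chain
first_intermediate_result = torch.relu(tail_deconv(first_feature_image))
```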
In the low-dose CT image processing method, the convolution chain and the deconvolution chain in the target extraction model can be determined by acquiring the low-dose CT image and the pre-trained target extraction model; determining the associated layers based on the order of arrangement by determining the order of arrangement of each noise-removed convolutional layer in the convolutional chain and the order of arrangement of each deconvolution layer in the deconvolution chain; performing feature extraction on the low-dose CT image through a convolution chain to obtain a feature image corresponding to each noise-removal convolution layer; through the relation of the associated layers, the characteristic images can be respectively input into the corresponding deconvolution layers, so that the deconvolution layers extract the target organs in the low-dose CT image according to the characteristic images.
In one embodiment, extracting image features from the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removed convolution layer includes: determining a noise removal convolutional layer of a current sequence; taking the CT image with low dose as the input of the noise-removal convolutional layer in the current sequence, so that the noise-removal convolutional layer in the current sequence extracts a characteristic image from the CT image with low dose; and taking the noise-removal convolutional layer in the next sequence as the noise-removal convolutional layer in the current sequence, taking the extracted feature image as a low-dose CT image, and returning to the step of taking the low-dose CT image as the input of the noise-removal convolutional layer in the current sequence until the whole convolutional chain is traversed.
Specifically, as shown in fig. 3, the convolution chain has a plurality of sequentially connected noise-removal convolutional layers. The computer device determines the noise-removal convolutional layer at the start position and inputs the low-dose CT image into it, so that the noise-removal convolutional layer at the start position extracts the image features in the low-dose CT image and the feature image corresponding to that layer is obtained.
Further, the computer device inputs the feature image corresponding to the noise-removal convolutional layer at the start position into the noise-removal convolutional layer of the next order, obtaining the feature image corresponding to that layer, and so on in order until the whole convolution chain has been traversed.
In this embodiment, the low-dose CT image is processed based on the convolution chain, so that low-level features in the low-dose CT image can be extracted first, then, simple contour features of the target organ are extracted by using the low-level features, and then, features of higher levels are distinguished based on the simple contour features, so that liver texture and a complete contour of the target organ are obtained.
In one embodiment, inputting the feature images into the corresponding deconvolution layers based on the associated-layer relationship, so that the deconvolution layers extract the target organ in the low-dose CT image from the feature images, includes: extracting the feature image corresponding to the noise-removal convolutional layer located at the end of the convolution chain, and recording it as a first feature image; performing feature reconstruction on the first feature image based on the deconvolution layer located at the end of the deconvolution chain to obtain a first intermediate result; extracting the feature images corresponding to the noise-removal convolutional layers other than the one at the end of the convolution chain, and recording them as second feature images; and inputting the first intermediate result and the second feature images into the corresponding deconvolution layers, so that the deconvolution layers extract the target organ in the low-dose CT image.
Specifically, the computer device takes the output of the noise-removal convolutional layer located at the end of the convolution chain as the input of the deconvolution chain. The first feature image is input into the deconvolution layer located at the end of the deconvolution chain, so that this deconvolution layer performs feature reconstruction on the first feature image to obtain a first intermediate result. The computer device also extracts the feature images corresponding to the noise-removal convolutional layers in the convolution chain other than the layer located at the end; for convenience of description, these are referred to as second feature images hereinafter. The computer device then inputs the first intermediate result and the second feature images into the corresponding deconvolution layers, so that the deconvolution layers extract the target organ in the low-dose CT image from the feature images.
In this embodiment, the target organ can be extracted from the low-dose CT image step by performing chain processing on the first feature image.
In another embodiment, inputting the first intermediate result and the second feature image into a corresponding deconvolution layer, so that the deconvolution layer extracts the target organ in the low-dose CT image, includes: determining a noise-removal convolutional layer and a deconvolution layer of a current sequence, where the noise-removal convolutional layer of the current sequence is not the noise-removal convolutional layer located at the end of the convolution chain, and the deconvolution layer of the current sequence is not the deconvolution layer located at the end of the deconvolution chain; extracting a second feature image corresponding to the noise-removal convolutional layer of the current sequence; performing pixel superposition of the second feature image and the first intermediate result based on the deconvolution layer of the current sequence to obtain a second intermediate result; and taking the noise-removal convolutional layer and the deconvolution layer of the next sequence as the noise-removal convolutional layer and the deconvolution layer of the current sequence, taking the second intermediate result as the first intermediate result, and returning to the step of extracting the second feature image corresponding to the noise-removal convolutional layer of the current sequence until the complete deconvolution chain has been traversed.
Specifically, as shown in fig. 4, the corresponding noise-removal convolutional layer and deconvolution layer are determined according to the associated-layer relationship, and the second feature image corresponding to the noise-removal convolutional layer is input into the corresponding deconvolution layer. The computer device determines the deconvolution layer of the penultimate order as the deconvolution layer of the current sequence, inputs the first intermediate result into that deconvolution layer, and the deconvolution layer of the current sequence performs pixel superposition of the second feature image and the first intermediate result to obtain a second intermediate result. This continues until the whole deconvolution chain has been traversed.
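The backward traversal with pixel superposition described above might look like the following sketch, again under the assumptions of the earlier snippets; the element-wise addition stands in for the described pixel superposition, and the function name is an assumption.

```python
import torch

def traverse_deconvolution_chain(deconv_chain, feature_images):
    """Walk the deconvolution chain from its end towards its start, adding the
    second feature image of the associated noise-removal conv layer at each step."""
    num_layers = len(deconv_chain)
    # First intermediate result: the end deconv layer reconstructs the first feature image.
    intermediate = torch.relu(deconv_chain[-1](feature_images[-1]))
    # Remaining orders, from the penultimate layer back to the start position.
    for j in range(num_layers - 2, -1, -1):
        second_feature_image = feature_images[j]           # associated conv layer's output
        fused = intermediate + second_feature_image        # pixel superposition
        intermediate = torch.relu(deconv_chain[j](fused))  # next intermediate result
    return intermediate                                    # extraction result

# Example (using `deconv_chain` and `features` from the earlier sketches):
output = traverse_deconvolution_chain(deconv_chain, features)
```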
In this embodiment, a feature processing method based on residual compensation is established, that is, residual mapping is used in the noise-removal convolutional layers and the deconvolution layers instead of the purely linear input and output shown in fig. 3. This reduces the probability of vanishing gradients and thus optimizes the training of the target extraction model when the network is deep. Moreover, the residual-compensation-based feature processing method preserves more of the structure and contrast details of the low-dose CT image and significantly improves the accuracy of extracting the target region from it: by feeding the output of a noise-removal convolutional layer directly into the corresponding deconvolution layer, high-resolution features are obtained and the target extraction process for the low-dose CT image is improved.
In one embodiment, as shown in fig. 5, the training step of the target extraction model includes:
s302, acquiring a training image group.
S304, extracting a first target image block from the low-dose CT image in the training image group, and extracting a second target image block from the high-dose CT image.
S306, determining a total error between the first target image block and the second target image block based on the target extraction model.
S308, adjusting the model parameters in the target extraction model until the total error between the first target image block and the second target image block is minimum.
Wherein the training image group comprises a low-dose CT image and a high-dose CT image; the position and the size of the first target image block are the same as those of the second target image block.
Specifically, a developer of the target extraction model may collect a large number of training sample images for the same target organ, where the training sample images include low-dose CT images and high-dose CT images. The developer divides the collected training sample images into a plurality of training image groups, and crops the low-dose CT image and the high-dose CT image in each training image group to obtain CT image blocks of fixed size; for example, the cropped size may be 64 × 64. For the first training pass, the target extraction model randomly initializes the training parameter set σ (K, F, S, P), then obtains the classification probability of the target organ by forward propagation, and estimates σ by minimizing the total error L(c, σ) between the low-dose CT images and the high-dose CT images over the image block pairs (y_1, z_1), (y_2, z_2), ..., (y_W, z_W). An image block pair consists of a first target image block and a second target image block at the same position and of the same size within the same training image group. The total error function can be expressed as:

$L(c, \sigma) = \frac{1}{W} \sum_{p=1}^{W} \left\lVert c(y_p; \sigma) - z_p \right\rVert^2$

where $y_p$ and $z_p$ represent the first target image block and the second target image block of one training image group, and $W$ represents the total number of training image groups. Back propagation is then used to compute the gradient of L with respect to all the weights in the target extraction model, all parameter values are updated using gradient descent so as to minimize the output error L(c, σ), and the total error of the last-order noise-removal convolutional layer is taken as the loss function.
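A minimal training sketch corresponding to the above is given below, continuing the earlier hypothetical snippets (conv_chain, deconv_chain and the two traversal helpers); the mean-squared-error loss, plain stochastic gradient descent and the learning rate are assumptions for illustration, not values fixed by this application.

```python
import itertools
import torch

# Gather all trainable parameters of the convolution and deconvolution chains.
params = itertools.chain(conv_chain.parameters(), deconv_chain.parameters())
optimizer = torch.optim.SGD(params, lr=1e-3)   # assumed learning rate
loss_fn = torch.nn.MSELoss()                   # assumed form of the total error L(c, sigma)

def forward(low_dose_block):
    feats = traverse_convolution_chain(conv_chain, low_dose_block)
    return traverse_deconvolution_chain(deconv_chain, feats)

# training_pairs: list of (first_target_block, second_target_block) tensors, i.e.
# 64 x 64 blocks cropped at the same position from the low-dose and high-dose CT
# images of one training image group (data loading not shown).
def train_epoch(training_pairs):
    for low_dose_block, high_dose_block in training_pairs:
        optimizer.zero_grad()
        prediction = forward(low_dose_block)
        total_error = loss_fn(prediction, high_dose_block)
        total_error.backward()                 # back propagation
        optimizer.step()                       # gradient-descent update
```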
In this embodiment, a convolutional-neural-network-based target extraction model requires a large number of data samples during training; however, in real clinical practice it is often difficult to acquire a sufficient number of low-dose CT images. Training the target extraction model on low-dose CT images and high-dose CT images together therefore effectively reduces the risk of training failure caused by an insufficient number of low-dose CT images. In addition, the image-block-based training strategy makes it possible to effectively detect regions of the target organ that have low contrast relative to the background.
In another embodiment, determining the total error between the first target image block and the second target image block based on the target extraction model comprises: acquiring a first total error between a first target image block and a second target image block based on a convolution chain; adjusting model parameters in the target extraction model until the first total error is minimum; acquiring a second total error between the first target image block and the second target image block based on a deconvolution chain; and adjusting the model parameters in the target extraction model until the second total error is minimum.
During back propagation in the training process, the filter λ connecting the five M × M noise-removal convolutional layers is of size n × n, so the noise-removal convolutional layer at the start of the convolution chain produces an output of size (M − n + 1) × (M − n + 1). A unit of a noise-removal convolutional layer can therefore be computed as the weighted sum of the filter components applied to the previous layer:

$x_{ij}^{\ell} = \sum_{a=0}^{n-1} \sum_{b=0}^{n-1} \lambda_{ab}\, y_{(i+a)(j+b)}^{\ell-1}$

At the same time, the noise-removal convolutional layer applies the non-linearity to $x_{ij}^{\ell}$:

$y_{ij}^{\ell} = \mathrm{ReLU}\left(x_{ij}^{\ell}\right)$

To optimize all the weights and parameters in the target extraction model so that the target organ is correctly classified and detected from the low-dose CT training set, the first total error in the convolution chain is calculated during training by applying the chain rule. Taking the partial derivative of the total error E with respect to the output of each noise-removal convolutional layer, and applying the chain rule to the gradient component of each weight in the noise-removal convolutional layer, i.e. summing the contributions of all expressions in which that variable appears, gives

$\frac{\partial E}{\partial \lambda_{ab}} = \sum_{i=0}^{M-n} \sum_{j=0}^{M-n} \frac{\partial E}{\partial x_{ij}^{\ell}}\, y_{(i+a)(j+b)}^{\ell-1}$

so that the first total error propagated from the k-th of the n noise-removal convolutional layers is

$\frac{\partial E}{\partial y_{ij}^{k}} = \sum_{a=0}^{n-1} \sum_{b=0}^{n-1} \frac{\partial E}{\partial x_{(i-a)(j-b)}^{k+1}}\, \lambda_{ab}, \qquad k < n$

where $\partial E / \partial x^{k+1}$ is the total-error value of the previous layer. The model parameters in the target extraction model are then adjusted until this first total error is minimal. Similarly, the second total error of the deconvolution chain can be derived on the same principle, and the model parameters in the target extraction model are adjusted until the second total error is minimal.
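One possible reading of this staged adjustment, continuing the previous sketches (loss_fn, the two chains and the traversal helpers), is to let the first total error drive updates of the convolution chain's parameters and the second total error drive updates of the deconvolution chain's parameters; this split, like the optimizer and learning rate, is an assumption made only for illustration.

```python
import torch

# Stage 1: adjust parameters against the first total error (convolution chain).
opt_conv = torch.optim.SGD(conv_chain.parameters(), lr=1e-3)
# Stage 2: adjust parameters against the second total error (deconvolution chain).
opt_deconv = torch.optim.SGD(deconv_chain.parameters(), lr=1e-3)

def two_stage_step(low_dose_block, high_dose_block):
    # First total error: optimize the convolution chain.
    opt_conv.zero_grad()
    feats = traverse_convolution_chain(conv_chain, low_dose_block)
    first_error = loss_fn(traverse_deconvolution_chain(deconv_chain, feats),
                          high_dose_block)
    first_error.backward()
    opt_conv.step()

    # Second total error: optimize the deconvolution chain.
    opt_deconv.zero_grad()
    feats = traverse_convolution_chain(conv_chain, low_dose_block)
    second_error = loss_fn(traverse_deconvolution_chain(deconv_chain, feats),
                           high_dose_block)
    second_error.backward()
    opt_deconv.step()
```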
In this embodiment, by determining the total error between the first target image block and the second target image block, the target extraction model can be trained based on that total error, so that the trained target extraction model can accurately extract the target organ from the low-dose CT image. In addition, since the second target image block contains less noise, once the total error between the first target image block and the second target image block is minimized, noise can be removed from the first target image block based on the trained target extraction model.
It should be understood that, although the steps in the flowcharts of fig. 2 and fig. 5 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 and fig. 5 may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a low-dose CT image processing apparatus 600 comprising: a model acquisition module 602, an association layer determination module 604, and a target organ extraction module 606, wherein:
a model acquisition module 602, configured to acquire a low-dose CT image and a pre-trained target extraction model; a target organ is in a low-dose CT image; the target extraction model comprises a convolution chain consisting of a plurality of noise removal convolution layers and a deconvolution chain consisting of a plurality of deconvolution layers; the number of noise-removal convolutional layers and deconvolution layers is the same.
An associated layer determining module 604, configured to determine an arrangement order of each noise-removal convolutional layer in the convolutional chain and an arrangement order of each deconvolution layer in the deconvolution chain, respectively; the noise-removed convolutional layer and the deconvolution layer in the same order are determined as a pair of related layers.
A target organ extraction module 606, configured to extract image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removed convolution layer; and respectively inputting the characteristic images into corresponding deconvolution layers based on the relation of the association layers, so that the deconvolution layers extract the target organs in the low-dose CT image according to the characteristic images.
In one embodiment, the target organ extraction module 606 further includes a convolution feature extraction module 6061 for determining a current order of noise-removed convolution layers; taking the CT image with low dose as the input of the noise-removal convolutional layer in the current sequence, so that the noise-removal convolutional layer in the current sequence extracts a characteristic image from the CT image with low dose; and taking the noise-removal convolutional layer in the next sequence as the noise-removal convolutional layer in the current sequence, taking the extracted feature image as a low-dose CT image, and returning to the step of taking the low-dose CT image as the input of the noise-removal convolutional layer in the current sequence until the whole convolutional chain is traversed.
In one embodiment, the target organ extraction module 606 further includes a deconvolution feature extraction module 6062, configured to extract a feature image corresponding to the last noise-removed convolution layer in the convolution chain, and record the feature image as a first feature image;
performing feature reconstruction on the first feature image based on the deconvolution layer located at the end of the deconvolution chain to obtain a first intermediate result; extracting a feature image corresponding to a noise-removal convolutional layer other than the one at the end of the convolution chain, and recording it as a second feature image; and inputting the first intermediate result and the second feature image into the corresponding deconvolution layer, so that the deconvolution layer extracts the target organ in the low-dose CT image.
In one embodiment, the deconvolution feature extraction module 6062 is further configured to determine a noise-removal convolutional layer and a deconvolution layer of a current sequence, where the noise-removal convolutional layer of the current sequence is not the noise-removal convolutional layer located at the end of the convolution chain, and the deconvolution layer of the current sequence is not the deconvolution layer located at the end of the deconvolution chain; extract a second feature image corresponding to the noise-removal convolutional layer of the current sequence; perform pixel superposition of the second feature image and the first intermediate result based on the deconvolution layer of the current sequence to obtain a second intermediate result; and take the noise-removal convolutional layer and the deconvolution layer of the next sequence as the noise-removal convolutional layer and the deconvolution layer of the current sequence, take the second intermediate result as the first intermediate result, and return to the step of extracting the second feature image corresponding to the noise-removal convolutional layer of the current sequence until the complete deconvolution chain has been traversed.
In one embodiment, the low-dose CT image processing apparatus 600 is further configured to acquire a training image set; the training image group comprises a low-dose CT image and a high-dose CT image; extracting a first target image block from a low-dose CT image in a training image group, and extracting a second target image block from a high-dose CT image; the position and the size of the first target image block are the same as those of the second target image block; determining a total error between the first target image block and the second target image block based on the target extraction model; and adjusting the model parameters in the target extraction model until the total error between the first target image block and the second target image block is minimum.
In one embodiment, the low-dose CT image processing apparatus 600 is further configured to obtain a first total error between the first target image block and the second target image block based on a convolution chain; adjusting model parameters in the target extraction model until the first total error is minimum; acquiring a second total error between the first target image block and the second target image block based on the deconvolution chain; and adjusting the model parameters in the target extraction model until the second total error is minimum.
For specific limitations of the low-dose CT image processing method, reference may be made to the above limitations of the low-dose CT image processing method, which are not described herein again. The modules in the low-dose CT image processing apparatus can be implemented in whole or in part by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing low dose CT image processing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a low dose CT image processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a low-dose CT image and a pre-trained target extraction model; a target organ is in a low-dose CT image; the target extraction model comprises a convolution chain consisting of a plurality of noise removal convolution layers and a deconvolution chain consisting of a plurality of deconvolution layers; the number of the noise removal convolution layers is the same as that of the deconvolution layers;
respectively determining the arrangement sequence of each noise removal convolutional layer in a convolution chain and the arrangement sequence of each deconvolution layer in a deconvolution chain;
determining the noise-removal convolutional layers and the deconvolution layers with the same arrangement sequence as a pair of related layers;
extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removal convolution layer;
and respectively inputting the characteristic images into corresponding deconvolution layers based on the relation of the associated layers so that the target organs in the low-dose CT image are extracted by the deconvolution layers.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining a noise removal convolutional layer of a current sequence;
taking the CT image with low dose as the input of the noise-removal convolutional layer in the current sequence, so that the noise-removal convolutional layer in the current sequence extracts a characteristic image from the CT image with low dose;
and taking the noise-removal convolutional layer in the next sequence as the noise-removal convolutional layer in the current sequence, taking the extracted feature image as a low-dose CT image, and returning to the step of taking the low-dose CT image as the input of the noise-removal convolutional layer in the current sequence until the whole convolutional chain is traversed.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
extracting the feature image corresponding to the noise-removal convolutional layer located at the end of the convolution chain, and recording it as a first feature image;
performing feature reconstruction on the first feature image based on the deconvolution layer located at the tail of the deconvolution chain to obtain a first intermediate result;
extracting the feature image corresponding to the noise-removal convolutional layer at the end of the convolution chain, and recording it as a second feature image;
and inputting the first intermediate result and the second feature image into the corresponding deconvolution layers, so that the deconvolution layers extract the target organ in the low-dose CT image according to the feature images (a minimal sketch follows).
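A sketch of the start of decoding under the same assumptions as above: the feature image of the last noise-removal layer (the "first feature image") is reconstructed by the deconvolution layer at the tail of the deconvolution chain, giving the first intermediate result. `deconv_chain` and `feature_images` follow the sketches above.

```python
# Sketch of the first decoding step of the deconvolution chain.
import torch


def start_deconvolution(deconv_chain, feature_images):
    first_feature_image = feature_images[-1]                     # first feature image
    first_intermediate = torch.relu(deconv_chain[-1](first_feature_image))
    return first_intermediate
```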
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining a noise-removal convolutional layer and a deconvolution layer in the current sequence; the noise-removal convolutional layer in the current sequence is not the noise-removal convolutional layer located at the end of the convolution chain; the deconvolution layer in the current sequence is not the deconvolution layer at the tail of the deconvolution chain;
extracting the second feature image corresponding to the noise-removal convolutional layer in the current sequence;
performing pixel superposition on the second feature image and the first intermediate result based on the deconvolution layer in the current sequence to obtain a second intermediate result;
and taking the noise-removal convolutional layer and the deconvolution layer in the next sequence as the noise-removal convolutional layer and the deconvolution layer in the current sequence, taking the second intermediate result as the first intermediate result, and returning to the step of extracting the second feature image corresponding to the noise-removal convolutional layer in the current sequence until the whole deconvolution chain has been traversed (a minimal sketch follows).
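A sketch of traversing the rest of the deconvolution chain under the same assumptions: each remaining deconvolution layer pixel-wise adds the feature image of its associated, non-final noise-removal layer to the current intermediate result before deconvolving.

```python
# Sketch of the remaining deconvolution steps with pixel superposition.
import torch


def finish_deconvolution(deconv_chain, feature_images, first_intermediate):
    intermediate = first_intermediate
    for i in range(len(deconv_chain) - 2, -1, -1):          # the tail layer was already used
        superposed = intermediate + feature_images[i]        # pixel superposition
        intermediate = torch.relu(deconv_chain[i](superposed))
    return intermediate
```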
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a training image group; the training image group comprises a low-dose CT image and a high-dose CT image;
extracting a first target image block from a low-dose CT image in a training image group, and extracting a second target image block from a high-dose CT image; the position and the size of the first target image block are the same as those of the second target image block;
determining a total error between the first target image block and the second target image block based on the target extraction model;
and adjusting the model parameters in the target extraction model until the total error between the first target image block and the second target image block is minimum.
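A minimal training sketch for the patch-pair scheme described above. The patch size, the optimizer, and the use of mean squared error as a stand-in for the "total error" are assumptions made for illustration, not choices taken from this application.

```python
# Sketch: extract same-position blocks from a low-dose / high-dose pair and
# adjust the model to reduce the error between them.
import torch
import torch.nn.functional as F


def train_step(model, optimizer, low_dose_ct, high_dose_ct, patch=64):
    _, _, h, w = low_dose_ct.shape
    top = torch.randint(0, h - patch + 1, (1,)).item()
    left = torch.randint(0, w - patch + 1, (1,)).item()
    first_block = low_dose_ct[:, :, top:top + patch, left:left + patch]
    # Second target image block: same position and size in the high-dose image.
    second_block = high_dose_ct[:, :, top:top + patch, left:left + patch]
    optimizer.zero_grad()
    loss = F.mse_loss(model(first_block), second_block)      # stand-in for the total error
    loss.backward()
    optimizer.step()
    return loss.item()
```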
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a first total error between a first target image block and a second target image block based on a convolution chain;
adjusting model parameters in the target extraction model until the first total error is minimum;
acquiring a second total error between the first target image block and the second target image block based on the deconvolution chain;
and adjusting the model parameters in the target extraction model until the second total error is minimum.
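One hedged reading of this two-stage adjustment is sketched below: a first error is minimized while only the convolution chain's parameters are updated, then a second error is minimized while only the deconvolution chain's parameters are updated. Restricting the optimizer to one sub-chain per stage, and the learning rate and step count, are assumptions for illustration; the application does not specify these details.

```python
# Sketch of per-chain parameter adjustment; `model` has `conv_chain` and
# `deconv_chain` sub-modules as in the first sketch.
import torch
import torch.nn.functional as F


def adjust_stage(model, sub_chain, low_block, high_block, lr=1e-4, steps=100):
    optimizer = torch.optim.Adam(sub_chain.parameters(), lr=lr)  # only this chain is updated
    loss = None
    for _ in range(steps):
        model.zero_grad()
        loss = F.mse_loss(model(low_block), high_block)
        loss.backward()
        optimizer.step()
    return loss


# Stage 1: "first total error", attributed here to the convolution chain.
# adjust_stage(model, model.conv_chain, first_block, second_block)
# Stage 2: "second total error", attributed here to the deconvolution chain.
# adjust_stage(model, model.deconv_chain, first_block, second_block)
```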
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a low-dose CT image and a pre-trained target extraction model; the low-dose CT image contains a target organ; the target extraction model comprises a convolution chain consisting of a plurality of noise-removal convolution layers and a deconvolution chain consisting of a plurality of deconvolution layers; the number of the noise-removal convolution layers is the same as the number of the deconvolution layers;
respectively determining the arrangement sequence of each noise removal convolutional layer in a convolution chain and the arrangement sequence of each deconvolution layer in a deconvolution chain;
determining the noise-removal convolutional layers and the deconvolution layers with the same arrangement sequence as pairs of associated layers;
extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removal convolution layer;
and respectively inputting the feature images into the corresponding deconvolution layers based on the relationship of the associated layers, so that the deconvolution layers extract the target organ in the low-dose CT image according to the feature images.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a noise removal convolutional layer of a current sequence;
taking the low-dose CT image as the input of the noise-removal convolutional layer in the current sequence, so that the noise-removal convolutional layer in the current sequence extracts a feature image from the low-dose CT image;
and taking the noise-removal convolutional layer in the next sequence as the noise-removal convolutional layer in the current sequence, taking the extracted feature image as the low-dose CT image, and returning to the step of taking the low-dose CT image as the input of the noise-removal convolutional layer in the current sequence until the whole convolution chain has been traversed.
In one embodiment, the computer program when executed by the processor further performs the steps of:
extracting the feature image corresponding to the noise-removal convolutional layer located at the end of the convolution chain, and recording it as a first feature image;
performing feature reconstruction on the first feature image based on the deconvolution layer located at the tail of the deconvolution chain to obtain a first intermediate result;
extracting the feature image corresponding to the noise-removal convolutional layer at the end of the convolution chain, and recording it as a second feature image;
and inputting the first intermediate result and the second feature image into the corresponding deconvolution layer, so that the deconvolution layer extracts the target organ in the low-dose CT image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a noise-removal convolutional layer and a deconvolution layer in the current sequence; the noise-removal convolutional layer in the current sequence is not the noise-removal convolutional layer located at the end of the convolution chain; the deconvolution layer in the current sequence is not the deconvolution layer at the tail of the deconvolution chain;
extracting the second feature image corresponding to the noise-removal convolutional layer in the current sequence;
performing pixel superposition on the second feature image and the first intermediate result based on the deconvolution layer in the current sequence to obtain a second intermediate result;
and taking the noise-removal convolutional layer and the deconvolution layer in the next sequence as the noise-removal convolutional layer and the deconvolution layer in the current sequence, taking the second intermediate result as the first intermediate result, and returning to the step of extracting the second feature image corresponding to the noise-removal convolutional layer in the current sequence until the whole deconvolution chain has been traversed.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a training image group; the training image group comprises a low-dose CT image and a high-dose CT image;
extracting a first target image block from a low-dose CT image in a training image group, and extracting a second target image block from a high-dose CT image; the position and the size of the first target image block are the same as those of the second target image block;
determining a total error between the first target image block and the second target image block based on the target extraction model;
and adjusting the model parameters in the target extraction model until the total error between the first target image block and the second target image block is minimum.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a first total error between a first target image block and a second target image block based on a convolution chain;
adjusting model parameters in the target extraction model until the first total error is minimum;
acquiring a second total error between the first target image block and the second target image block based on the deconvolution chain;
and adjusting the model parameters in the target extraction model until the second total error is minimum.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as such combinations of technical features are not contradictory, they should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A low-dose CT image processing method, characterized in that the method comprises:
acquiring a low-dose CT image and a pre-trained target extraction model; the low-dose CT image has a target organ; the target extraction model comprises a convolution chain consisting of a plurality of noise-removal convolution layers and a deconvolution chain consisting of a plurality of deconvolution layers; the number of the noise-removal convolution layers is the same as the number of the deconvolution layers;
respectively determining the arrangement sequence of each noise-removal convolutional layer in the convolutional chain and the arrangement sequence of each deconvolution layer in the deconvolution chain;
determining the noise-removal convolutional layers and the deconvolution layers with the same arrangement sequence as pairs of associated layers;
extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removal convolution layer;
and respectively inputting the feature images into the corresponding deconvolution layers based on the relationship of the associated layers, so that the deconvolution layers extract the target organ in the low-dose CT image according to the feature images.
2. The method of claim 1, wherein the extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removed convolution layer comprises:
determining a noise removal convolutional layer of a current sequence;
taking a low-dose CT image as the input of the noise-removal convolutional layer in the current sequence, so that the noise-removal convolutional layer in the current sequence extracts a feature image from the low-dose CT image;
and taking the noise-removal convolutional layer in the next sequence as the noise-removal convolutional layer in the current sequence, taking the extracted feature image as a low-dose CT image, and returning to the step of taking the low-dose CT image as the input of the noise-removal convolutional layer in the current sequence until the whole convolutional chain is traversed.
3. The method according to claim 1, wherein the inputting the feature images into corresponding deconvolution layers respectively based on the relationship of the associated layers, so that the deconvolution layers extract the target organ in the low-dose CT image from the feature images comprises:
extracting the feature image corresponding to the noise-removal convolutional layer located at the end of the convolution chain, and recording it as a first feature image;
performing feature reconstruction on the first feature image based on a deconvolution layer located at the tail of the deconvolution chain to obtain a first intermediate result;
extracting the feature image corresponding to the noise-removal convolutional layer at the end of the convolution chain, and recording it as a second feature image;
and inputting the first intermediate result and the second feature image into the corresponding deconvolution layer, so that the deconvolution layer extracts the target organ in the low-dose CT image.
4. The method of claim 3, wherein inputting the first intermediate result and the second feature image into a corresponding deconvolution layer, so that the deconvolution layer extracts a target organ in the low-dose CT image comprises:
determining a noise-removal convolutional layer and a deconvolution layer of the current sequence; the noise-removal convolutional layer of the current sequence is not the noise-removal convolutional layer located at the end of the convolution chain; the deconvolution layer of the current sequence is not the deconvolution layer at the tail of the deconvolution chain;
extracting a second feature image corresponding to the noise-removal convolutional layer of the current sequence;
performing pixel superposition on the second feature image and the first intermediate result based on the deconvolution layer of the current sequence to obtain a second intermediate result;
and taking the noise-removal convolutional layer and the deconvolution layer of the next sequence as the noise-removal convolutional layer and the deconvolution layer of the current sequence, taking the second intermediate result as the first intermediate result, and returning to the step of extracting the second feature image corresponding to the noise-removal convolutional layer of the current sequence until the whole deconvolution chain has been traversed.
5. The method of claim 1, wherein the step of training the target extraction model comprises:
acquiring a training image group; the training image set comprises a low-dose CT image and a high-dose CT image;
extracting a first target image block from a low-dose CT image in the training image group, and extracting a second target image block from a high-dose CT image; the position and the size of the first target image block are the same as those of the second target image block;
determining a total error between the first target image block and the second target image block based on the target extraction model;
and adjusting model parameters in the target extraction model until the total error between the first target image block and the second target image block is minimum.
6. The method of claim 5, wherein the determining the total error between the first target image block and the second target image block based on the target extraction model comprises:
acquiring a first total error between the first target image block and the second target image block based on the convolution chain;
adjusting model parameters in the target extraction model until the first total error is minimum;
acquiring a second total error between the first target image block and the second target image block based on the deconvolution chain;
adjusting model parameters in the target extraction model until the second total error is minimal.
7. A low dose CT image processing apparatus, characterized in that the apparatus comprises:
the model acquisition module is used for acquiring a low-dose CT image and a pre-trained target extraction model; the low-dose CT image has a target organ; the target extraction model comprises a convolution chain consisting of a plurality of noise-removal convolution layers and a deconvolution chain consisting of a plurality of deconvolution layers; the number of the noise-removal convolution layers is the same as the number of the deconvolution layers;
an associated layer determining module, configured to respectively determine the arrangement sequence of each noise-removal convolutional layer in the convolution chain and the arrangement sequence of each deconvolution layer in the deconvolution chain, and to determine the noise-removal convolutional layers and the deconvolution layers with the same arrangement sequence as pairs of associated layers;
the target organ extraction module is used for extracting image features in the low-dose CT image based on the convolution chain to obtain a feature image corresponding to each noise-removal convolution layer, and for respectively inputting the feature images into the corresponding deconvolution layers based on the relationship of the associated layers, so that the deconvolution layers extract the target organ in the low-dose CT image according to the feature images.
8. The apparatus of claim 7, further comprising:
the convolution feature extraction module is used for determining the noise-removal convolutional layer in the current sequence; taking a low-dose CT image as the input of the noise-removal convolutional layer in the current sequence, so that the noise-removal convolutional layer in the current sequence extracts a feature image from the low-dose CT image; and taking the noise-removal convolutional layer in the next sequence as the noise-removal convolutional layer in the current sequence, taking the extracted feature image as the low-dose CT image, and returning to the step of taking the low-dose CT image as the input of the noise-removal convolutional layer in the current sequence until the whole convolution chain has been traversed.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented by the processor when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN202010129313.2A 2020-02-28 2020-02-28 Low-dose CT image processing method, device and computer equipment Active CN111325737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010129313.2A CN111325737B (en) 2020-02-28 2020-02-28 Low-dose CT image processing method, device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010129313.2A CN111325737B (en) 2020-02-28 2020-02-28 Low-dose CT image processing method, device and computer equipment

Publications (2)

Publication Number Publication Date
CN111325737A true CN111325737A (en) 2020-06-23
CN111325737B CN111325737B (en) 2024-03-15

Family

ID=71172987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010129313.2A Active CN111325737B (en) 2020-02-28 2020-02-28 Low-dose CT image processing method, device and computer equipment

Country Status (1)

Country Link
CN (1) CN111325737B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564553A (en) * 2018-05-07 2018-09-21 南方医科大学 Low-dose CT image noise suppression method based on convolutional neural networks
US20190005657A1 (en) * 2017-06-30 2019-01-03 Baidu Online Network Technology (Beijing) Co., Ltd . Multiple targets-tracking method and apparatus, device and storage medium
CN109166161A (en) * 2018-07-04 2019-01-08 东南大学 A kind of low-dose CT image processing system inhibiting convolutional neural networks based on noise artifacts
CN110223352A (en) * 2019-06-14 2019-09-10 浙江明峰智能医疗科技有限公司 A kind of medical image scanning automatic positioning method based on deep learning
CN110223255A (en) * 2019-06-11 2019-09-10 太原科技大学 A kind of shallow-layer residual error encoding and decoding Recursive Networks for low-dose CT image denoising
US20190378247A1 (en) * 2018-06-07 2019-12-12 Beijing Kuangshi Technology Co., Ltd. Image processing method, electronic device and non-transitory computer-readable recording medium
CN110570394A (en) * 2019-08-01 2019-12-13 深圳先进技术研究院 medical image segmentation method, device, equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190005657A1 (en) * 2017-06-30 2019-01-03 Baidu Online Network Technology (Beijing) Co., Ltd . Multiple targets-tracking method and apparatus, device and storage medium
CN108564553A (en) * 2018-05-07 2018-09-21 南方医科大学 Low-dose CT image noise suppression method based on convolutional neural networks
US20190378247A1 (en) * 2018-06-07 2019-12-12 Beijing Kuangshi Technology Co., Ltd. Image processing method, electronic device and non-transitory computer-readable recording medium
CN109166161A (en) * 2018-07-04 2019-01-08 东南大学 A kind of low-dose CT image processing system inhibiting convolutional neural networks based on noise artifacts
CN110223255A (en) * 2019-06-11 2019-09-10 太原科技大学 A kind of shallow-layer residual error encoding and decoding Recursive Networks for low-dose CT image denoising
CN110223352A (en) * 2019-06-14 2019-09-10 浙江明峰智能医疗科技有限公司 A kind of medical image scanning automatic positioning method based on deep learning
CN110570394A (en) * 2019-08-01 2019-12-13 深圳先进技术研究院 medical image segmentation method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Yuncheng; Xu Tongyu; Deng Hanbing; Miao Teng: "Tomato key organ recognition method based on dual convolution chain Fast R-CNN" *
Gao Jingzhi; Liu; Zhang Quan; Gui Zhiguo: "LDCT image estimation with an improved deep residual convolutional neural network" *

Also Published As

Publication number Publication date
CN111325737B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN109993726B (en) Medical image detection method, device, equipment and storage medium
US11478212B2 (en) Method for controlling scanner by estimating patient internal anatomical structures from surface data using body-surface and organ-surface latent variables
US11514573B2 (en) Estimating object thickness with neural networks
CN112258528B (en) Image processing method and device and electronic equipment
WO2019214052A1 (en) Method for assessing bone age using x-ray image of hand, device, computer apparatus, and storage medium
CN111179372B (en) Image attenuation correction method, image attenuation correction device, computer equipment and storage medium
CN110084868B (en) Image correction method, apparatus, computer device, and readable storage medium
KR102053527B1 (en) Method for image processing
CN107862665B (en) CT image sequence enhancement method and device
JP4274400B2 (en) Image registration method and apparatus
CN108038840B (en) Image processing method and device, image processing equipment and storage medium
CN111243052A (en) Image reconstruction method and device, computer equipment and storage medium
WO2023216720A1 (en) Image reconstruction model training method and apparatus, device, medium, and program product
CN108961161B (en) Image data processing method, device and computer storage medium
CN111325737B (en) Low-dose CT image processing method, device and computer equipment
KR102477991B1 (en) Medical image processing method and apparatus
JP4127537B2 (en) Image processing method, apparatus, and program
EP4292042A1 (en) Generalizable image-based training framework for artificial intelligence-based noise and artifact reduction in medical images
CN111091504B (en) Image offset field correction method, computer device, and storage medium
KR20220169134A (en) Apparauts, system, method and program for deciphering tomography image of common bile duct stone using artificial intelligence
CN113643394A (en) Scattering correction method, device, computer equipment and storage medium
EP4343680A1 (en) De-noising data
US20230079164A1 (en) Image registration
US20230169659A1 (en) Image segmentation and tracking based on statistical shape model
US20240135502A1 (en) Generalizable Image-Based Training Framework for Artificial Intelligence-Based Noise and Artifact Reduction in Medical Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant