CN111833251A - Three-dimensional medical image super-resolution reconstruction method and device - Google Patents

Three-dimensional medical image super-resolution reconstruction method and device

Info

Publication number
CN111833251A
CN111833251A
Authority
CN
China
Prior art keywords
original
slice
slices
generated
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010669466.6A
Other languages
Chinese (zh)
Inventor
吴振洲
徐奕宁
Current Assignee
Beijing Ande Yizhi Technology Co ltd
Original Assignee
Beijing Ande Yizhi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Ande Yizhi Technology Co., Ltd.
Priority to CN202010669466.6A
Publication of CN111833251A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks

Abstract

The disclosure relates to a three-dimensional medical image super-resolution reconstruction method and device, the method comprising: acquiring an original slice sequence from a three-dimensional medical image to be reconstructed; inputting original slices from the original slice sequence into a deep learning model to obtain generated slice sequences between the original slices; and obtaining a reconstructed three-dimensional medical image from the generated slice sequences and the original slice sequence, where the resolution of the reconstructed three-dimensional medical image is higher than that of the three-dimensional medical image to be reconstructed. The method effectively saves acquisition time and cost during image acquisition and lowers the hardware requirements on acquisition equipment, and a high-resolution three-dimensional medical image of satisfactory quality can still be reconstructed from an acquired image with low inter-plane resolution. Meanwhile, doctors can obtain more information from the generated slice sequences, better assisting clinical medical diagnosis.

Description

Three-dimensional medical image super-resolution reconstruction method and device
Technical Field
The disclosure relates to the technical field of computer image processing, in particular to a three-dimensional medical image super-resolution reconstruction method and device.
Background
Medical images are widely used in computer-aided clinical diagnosis. Limited by acquisition time, hardware conditions, and other factors, the inter-plane resolution of an acquired three-dimensional medical image is generally lower than its in-plane resolution. Super-resolution reconstruction is therefore needed to improve the resolution of three-dimensional medical images.
In the related art, interpolation-based up-sampling methods bring no additional information to the reconstructed image, while supervised-learning-based methods reconstruct three-dimensional medical images with low inter-plane resolution poorly.
Disclosure of Invention
In view of the above, the present disclosure provides a method, an apparatus, and a storage medium for super-resolution reconstruction of three-dimensional medical images.
According to an aspect of the present disclosure, there is provided a three-dimensional medical image super-resolution reconstruction method, the method including:
acquiring an original slice sequence in a three-dimensional medical image to be reconstructed;
inputting original slices in the original slice sequence into a deep learning model to obtain a generated slice sequence among the original slices;
and obtaining a reconstructed three-dimensional medical image according to the generated slice sequence and the original slice sequence, wherein the resolution of the reconstructed three-dimensional medical image is higher than that of the three-dimensional medical image to be reconstructed.
In one possible implementation, inputting original slices in the original slice sequence into a deep learning model to obtain a generated slice sequence between the original slices, including:
inputting the original slices in the original slice sequence into the deep learning model one by one and, for each original slice, repeatedly inputting the slice most recently generated by the deep learning model back into the model until a preset first iteration number is reached;
and obtaining a generated slice sequence among the original slices according to all the generated slices output by the deep learning model.
In one possible implementation, inputting original slices in the original slice sequence into a deep learning model to obtain a generated slice sequence between the original slices, including:
inputting the original slices in the original slice sequence into the deep learning model one by one and, for each original slice, repeatedly inputting both the model's input image from a first preset number of steps earlier and the slice most recently generated by the model back into the model until a preset second iteration number is reached;
and obtaining a generated slice sequence among the original slices according to all the generated slices output by the deep learning model.
In a possible implementation manner, the deep learning model is obtained by adversarially training a generative adversarial network comprising a generator and a discriminator, using a three-dimensional medical image training sample sequence.
In one possible implementation, the three-dimensional medical image training sample sequence includes slices of the three-dimensional medical image with the same directional resolution.
In one possible implementation, the method further includes:
for adjacent training samples in the training sample sequence, inputting one of the adjacent training samples into the generator, and repeatedly inputting the slice most recently generated by the generator back into the generator until a preset third iteration number is reached;
inputting the generated slice last output by the generator, together with the other of the adjacent training samples, into the discriminator to obtain the similarity between that last generated slice and the other training sample;
obtaining a loss function value from the similarity, back-propagating according to the loss function value to train the generator and the discriminator until convergence is reached, and taking the generator obtained at convergence as the deep learning model.
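The disclosure does not specify the form of the loss function derived from the discriminator's similarity score. As a minimal sketch, assuming a standard non-saturating GAN loss in which the discriminator outputs a similarity d in (0, 1) for the final generated slice versus the real neighbouring slice, the loss values used for back-propagation could look like:

```python
import numpy as np

def gan_losses(similarity):
    """Sketch of the loss step only (network updates are framework-specific).

    similarity: discriminator output D in (0, 1) comparing the last
    generated slice with the real adjacent training sample.
    """
    eps = 1e-7
    d = np.clip(similarity, eps, 1 - eps)
    loss_g = -np.log(d)           # generator wants D to call its slice "real"
    loss_d_fake = -np.log(1 - d)  # discriminator wants to reject the fake
    return loss_g, loss_d_fake
```

In an actual training loop the generator and discriminator would be updated alternately from these values; the non-saturating form is an assumption here, not something the patent states.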
In one possible implementation, the method further includes:
for adjacent training samples in the training sample sequence, inputting one of the adjacent training samples into the generator, and repeatedly inputting both the generator's input image from a second preset number of steps earlier and the slice most recently generated by the generator back into the generator until a preset fourth iteration number is reached;
inputting the generated slice last output by the generator, together with the other of the adjacent training samples, into the discriminator to obtain the similarity between that last generated slice and the other training sample;
obtaining a loss function value from the similarity, back-propagating according to the loss function value to train the generator and the discriminator until convergence is reached, and taking the generator obtained at convergence as the deep learning model.
According to another aspect of the present disclosure, there is provided a three-dimensional medical image super-resolution reconstruction apparatus, the apparatus including:
the original slice sequence acquisition module is used for acquiring an original slice sequence in a three-dimensional medical image to be reconstructed;
the generated slice sequence acquisition module is used for inputting the original slices in the original slice sequence into a deep learning model to obtain a generated slice sequence among the original slices;
and the three-dimensional medical image reconstruction module is used for obtaining a reconstructed three-dimensional medical image according to the generated slice sequence and the original slice sequence, wherein the resolution of the reconstructed three-dimensional medical image is higher than that of the three-dimensional medical image to be reconstructed.
In a possible implementation manner, the deep learning model is obtained by adversarially training a generative adversarial network comprising a generator and a discriminator, using a three-dimensional medical image training sample sequence.
According to another aspect of the present disclosure, there is provided a three-dimensional medical image super-resolution reconstruction apparatus, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, original slices of a three-dimensional medical image to be reconstructed are input into a trained deep learning model, which outputs generated slice sequences between the original slices, yielding the super-resolution-reconstructed three-dimensional medical image. During image acquisition, this effectively saves acquisition time and cost and lowers the hardware requirements on acquisition equipment, while a high-resolution three-dimensional medical image of satisfactory quality can still be reconstructed from an acquired image with low inter-plane resolution. Meanwhile, doctors can obtain more information from the generated slice sequences, better assisting clinical medical diagnosis.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow chart of a method for super-resolution reconstruction of three-dimensional medical images according to an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a deep learning model iteration, according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a deep learning model iteration, according to an embodiment of the present disclosure;
FIG. 4 shows a flow chart of a method for super-resolution reconstruction of three-dimensional medical images according to an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of training a deep learning model according to an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of training a deep learning model according to an embodiment of the present disclosure;
FIG. 7 shows a flow chart of a method for super-resolution reconstruction of three-dimensional medical images according to an embodiment of the present disclosure;
FIG. 8 illustrates a schematic diagram of training a deep learning model according to an embodiment of the present disclosure;
FIG. 9 shows a block diagram of a three-dimensional medical image super-resolution reconstruction apparatus according to an embodiment of the present disclosure;
FIG. 10 shows a block diagram of a three-dimensional medical image super-resolution reconstruction apparatus according to an embodiment of the present disclosure;
FIG. 11 shows a block diagram of an apparatus for three-dimensional medical image super-resolution reconstruction according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
FIG. 1 shows a flowchart of a method for super-resolution reconstruction of three-dimensional medical images according to an embodiment of the present disclosure. As shown in FIG. 1, the method may include:
101, acquiring an original slice sequence in a three-dimensional medical image to be reconstructed;
102, inputting original slices in the original slice sequence into a deep learning model to obtain a generated slice sequence among the original slices;
and 103, obtaining a reconstructed three-dimensional medical image according to the generated slice sequence and the original slice sequence, wherein the resolution of the reconstructed three-dimensional medical image is higher than that of the three-dimensional medical image to be reconstructed.
The three-dimensional medical image to be reconstructed may include MR (Magnetic Resonance), CT (Computed Tomography), and other three-dimensional medical images. The original slice sequence of the three-dimensional medical image may include original slice sequences in different directions, such as the sagittal, coronal, and transverse-axis directions, and the number of original slices in a sequence in a given direction can be set according to actual working requirements. When one original slice is input into the trained deep learning model, the one or more generated slices output by the model form a generated slice sequence between that original slice and the next adjacent original slice. The number of generated slices in the generated slice sequence depends on the target resolution of the three-dimensional medical image to be reconstructed: specifically, if the target resolution is H times the original resolution (H an integer greater than 1), the generated slice sequence contains H-1 generated slices. All original slices in the original slice sequence are input into the trained deep learning model one by one, and the resulting generated slice sequences between the original slices are combined with the original slice sequence to obtain the reconstructed three-dimensional medical image.
Illustratively, an original slice sequence in the coronal direction of a three-dimensional MR image is acquired; the sequence comprises a plurality of original slices arranged from front to back. Of two adjacent original slices, the front slice is input into the trained deep learning model, which outputs N generated slices (N an integer greater than 1); these N generated slices constitute the generated slice sequence between the two adjacent original slices. Similarly, the other original slices in the sequence are input into the trained model one by one, so that a generated slice sequence of N slices is produced between any two adjacent original slices. All the generated slice sequences are combined with the original slice sequence in order to obtain the reconstructed three-dimensional MR image, whose number of coronal slices is N+1 times that of the original three-dimensional MR image; that is, the coronal resolution of the three-dimensional MR image is improved by a factor of N+1.
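The interleaving of original and generated slices described above can be sketched as follows. Here `generate_between` is a hypothetical wrapper around the trained deep learning model that returns the N generated slices for one gap; the name and signature are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def reconstruct_volume(original_slices, generate_between, n_generated):
    """Interleave N generated slices between each pair of adjacent originals.

    original_slices: list of 2D arrays (the original slice sequence).
    generate_between: hypothetical callable; one original slice in,
        a list of n_generated intermediate slices out.
    """
    out = []
    last = len(original_slices) - 1
    for i, s in enumerate(original_slices):
        out.append(s)
        if i < last:                       # no gap after the final slice
            gen = generate_between(s)
            assert len(gen) == n_generated
            out.extend(gen)
    return np.stack(out)                   # reconstructed volume along axis 0
```

With M original slices this yields M + (M-1)*N slices, i.e. roughly the (N+1)-fold increase in slice count the example describes.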
Three-dimensional medical image super-resolution reconstruction is the process of inferring a high-resolution three-dimensional medical image from a low-resolution one, thereby improving its resolution. For example, for MR images, trade-offs among resolution, image acquisition speed, noise, and other factors mean that the inter-plane resolution is generally inferior to the in-plane resolution, so a super-resolution method is needed to improve the resolution of three-dimensional MR images. In the related art, up-sampling methods such as linear interpolation and bicubic interpolation are used to improve image resolution; the improvement achieved by such interpolation-based methods depends entirely on the content of the image and brings no additional information. Instead, they often introduce side effects such as noise amplification and image blurring, which may cause artifacts in some regions of the image and thereby affect subsequent processing and image analysis. Alternatively, image resolution is improved through supervised learning; this depends on the performance of a super-resolution network, and the reconstruction is poor for three-dimensional medical images with low inter-plane resolution.
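For contrast, the interpolation-based up-sampling criticized here can be sketched in a few lines of numpy; each inserted slice is a pure weighted blend of its two neighbours, which is why no new anatomical information appears:

```python
import numpy as np

def interp_upsample(volume, n_new):
    """Insert n_new linearly interpolated slices between adjacent slices.

    volume: 3D array, slices along axis 0 (the low-resolution direction).
    """
    out = [volume[0]]
    for a, b in zip(volume[:-1], volume[1:]):
        for k in range(1, n_new + 1):
            w = k / (n_new + 1)
            out.append((1 - w) * a + w * b)  # pure blend of the neighbours
        out.append(b)
    return np.stack(out)
```

This is a generic sketch of slice-axis linear interpolation, not a method from the patent.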
In the embodiment of the disclosure, original slices of a three-dimensional medical image to be reconstructed are input into a trained deep learning model, which outputs generated slice sequences between the original slices, yielding the super-resolution-reconstructed three-dimensional medical image. During image acquisition, this effectively saves acquisition time and cost and lowers the hardware requirements on acquisition equipment, while a high-resolution three-dimensional medical image of satisfactory quality can still be reconstructed from an acquired image with low inter-plane resolution. Meanwhile, each generated slice sequence comprises a plurality of generated slices, from which doctors can obtain more information, better assisting clinical medical diagnosis.
In one possible implementation, inputting original slices in the original slice sequence into a deep learning model to obtain generated slice sequences between the original slices includes: inputting the original slices into the deep learning model one by one and, for each original slice, repeatedly inputting the slice most recently generated by the model back into the model until a preset first iteration number is reached; and obtaining the generated slice sequences between the original slices from all the generated slices output by the deep learning model.
In the embodiment of the disclosure, the original slices in the original slice sequence of the three-dimensional medical image to be reconstructed are input into the trained deep learning model one by one, and the model is iterated multiple times, producing one new generated slice per iteration. In each iteration, the generated slice currently output by the deep learning model serves as the model's input for the next iteration, and this is repeated until the first iteration number is reached. The first iteration number may be determined by the preset number of generated slices in the generated slice sequence for each original slice, or by the target resolution of the three-dimensional medical image to be reconstructed. For example, if the preset number of generated slices per original slice is N, the first iteration number is N: for each original slice, N iterations of the deep learning model produce N generated slices, after which the iteration ends. Alternatively, if the target resolution is H times the original resolution, the first iteration number is H-1: for each original slice, H-1 iterations produce H-1 generated slices, after which the iteration ends. Thus, once the iterations are completed for all original slices, all generated slices can be combined in order to obtain the generated slice sequences. In this process, each iteration uses a slice generated by the deep learning model to predict the subsequent slice, which helps ensure the accuracy of subsequently generated slices.
Illustratively, the three-dimensional medical image to be reconstructed may comprise a plurality of CT or MR three-dimensional medical images from different hospitals with different inter-layer spacings. For each three-dimensional medical image, the original slice sequence comprises M original slices, each denoted m_t (1 ≤ t ≤ M, t an integer), and the preset generated slice sequence for each original slice contains K generated slices, i.e., the first iteration number is K. FIG. 2 shows a schematic diagram of a deep learning model iteration according to an embodiment of the disclosure. As shown in FIG. 2, each original slice m_t is input into the deep learning model, which outputs a generated slice n_1; n_1 is input back into the model, which outputs n_2; this iteration is repeated K times, and the model finally outputs n_K. The K generated slices (n_1, n_2, ..., n_K) output over the K iterations are combined to obtain the generated slice sequence between this original slice and the next. After the iteration is performed for all original slices (m_1, m_2, ..., m_M), the full set of generated slice sequences is obtained.
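The single-input iteration of FIG. 2 reduces to a simple feedback loop. Below, `model` stands in for the trained deep learning model; it is a hypothetical callable (one slice in, one generated slice out), assumed for illustration:

```python
import numpy as np

def generate_sequence(original_slice, model, k):
    """FIG. 2-style iteration: feed each output back in as the next input.

    Returns the K generated slices for the gap after original_slice.
    """
    generated = []
    current = original_slice
    for _ in range(k):
        current = model(current)   # n_1 = G(m_t), n_2 = G(n_1), ...
        generated.append(current)
    return generated
```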
In one possible implementation, inputting original slices in the original slice sequence into a deep learning model to obtain generated slice sequences between the original slices includes: inputting the original slices into the deep learning model one by one and, for each original slice, repeatedly inputting both the model's input image from a first preset number of steps earlier and the slice most recently generated by the model back into the model until a preset second iteration number is reached; and obtaining the generated slice sequences between the original slices from all the generated slices output by the deep learning model.
In the embodiment of the disclosure, the original slices in the original slice sequence of the three-dimensional medical image to be reconstructed are input into the trained deep learning model one by one, and the model is iterated multiple times; in each iteration, several slices are input into the model and one new generated slice is obtained. Specifically, in each iteration, the generated slice currently output by the model, together with the model's input image from a first preset number of iterations earlier, is used as the model's input for the next iteration, and this is repeated until the second iteration number is reached. The first preset number can be set according to actual working needs, and its value is smaller than the preset number of generated slices in the generated slice sequence for each original slice. For example, if the first preset number is 1, then in each iteration two slices, namely the model's input image from the previous iteration and the generated slice it output, are input into the model simultaneously to obtain one generated slice for the current iteration. If the first preset number is 2, then in each iteration three slices, namely the generated slice output in the previous iteration and the model's input images from the two preceding iterations, are input simultaneously to obtain one generated slice for the current iteration.
It should be noted that if the model's input image or output generated slice from one or more earlier iterations is missing in the current iteration, random noise is used in its place, or another generated slice is selected. The second iteration number may be determined by the preset number of generated slices per original slice, or by the target resolution of the three-dimensional medical image to be reconstructed. For example, if the preset number of generated slices per original slice is N, the second iteration number is N: for each original slice, N iterations of the deep learning model produce N generated slices, after which the iteration ends. Alternatively, if the target resolution is H times the original resolution, the second iteration number is H-1: for each original slice, H-1 iterations produce H-1 generated slices, after which the iteration ends. Thus, once the iterations are completed for all original slices, all generated slices can be combined in order to obtain the generated slice sequences. In this process, multiple slices are input into the deep learning model in each iteration, so that several known slices are used to predict the subsequent slice, yielding more accurate generated slices.
Illustratively, the original slice sequence includes M original slices, each denoted m_t (1 ≤ t ≤ M, t an integer); the first preset number is 1; and the generated slice sequence for each original slice contains K generated slices, so the second iteration number is K. FIG. 3 shows a schematic diagram of a deep learning model iteration according to an embodiment of the disclosure. As shown in FIG. 3, each original slice m_t is input into the deep learning model together with random noise, and the model outputs a generated slice n_1; then n_1 and the original slice m_t are input into the model, which outputs n_2; then n_2 and n_1 are input into the model; this iteration is repeated K times, and the model finally outputs n_K. The K generated slices (n_1, n_2, ..., n_K) output over the K iterations are combined to obtain the generated slice sequence between this original slice and the next. After the iteration is performed for all original slices (m_1, m_2, ..., m_M), the full set of generated slice sequences is obtained.
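The two-input iteration of FIG. 3 (first preset number = 1) can be sketched likewise. `model` is again a hypothetical callable, here taking (earlier input, latest generated slice); random noise stands in for the missing earlier input at the first step, as the text specifies:

```python
import numpy as np

def generate_sequence_2in(original_slice, model, k, seed=0):
    """FIG. 3-style iteration with two inputs per step.

    Step 1: (noise, m_t) -> n_1; step 2: (m_t, n_1) -> n_2;
    step 3: (n_1, n_2) -> n_3; and so on for K steps.
    """
    rng = np.random.default_rng(seed)
    generated = []
    prev_input = rng.standard_normal(original_slice.shape)  # noise stand-in
    latest = original_slice
    for _ in range(k):
        new = model(prev_input, latest)
        generated.append(new)
        prev_input, latest = latest, new   # slide the two-slice window
    return generated
```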
Based on this, the embodiment of the present disclosure further provides a training process of the deep learning model in the above embodiment.
In a possible implementation manner, the deep learning model is obtained by adversarially training a generative adversarial network (GAN) comprising a generator and a discriminator, using a three-dimensional medical image training sample sequence. The training sample sequence may include sequences in different directions, such as the sagittal, coronal, and transverse-axis directions, of MR, CT, or other three-dimensional medical images, each sequence containing a plurality of training samples. The generative adversarial network can be any existing GAN comprising a generator and a discriminator; the generator and the discriminator play a game against each other, each producing the output the other requires, and when training reaches convergence, the resulting generator is used as the deep learning model for super-resolution reconstruction of the three-dimensional medical image.
In one possible implementation, the three-dimensional medical image training sample sequence includes slices of the three-dimensional medical image that share the same direction and resolution. That is, in the process of training the generative adversarial network, the training samples used may be slices taken in the same direction and at the same resolution; for example, the training samples may be slices in the sagittal direction of an MR image, with every slice at the same resolution.
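As a minimal illustration (the array shapes are assumptions for this example, not values from the disclosure), a sequence of same-direction, same-resolution training samples can be read straight off one axis of a 3D volume:

```python
import numpy as np

# A 3D MR volume indexed as (sagittal, coronal, axial); every sagittal
# slice volume[t] has the same in-plane resolution, so the slice sequence
# itself serves as training data, with no high-resolution counterpart needed.
volume = np.zeros((32, 256, 256))   # 32 sagittal slices of 256 x 256 pixels
training_samples = [volume[t] for t in range(volume.shape[0])]
```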
In the related art, a method for image super-resolution reconstruction based on supervised learning is disclosed. In its training process, each training sample consists of an original axial slice and a blurred version of that slice, i.e., the training sample comprises low-resolution and high-resolution image blocks. An advanced super-resolution network is trained on these samples to learn the mapping from low-resolution image blocks to high-resolution image blocks; the trained network is then applied to input images with lower resolution in different directions, and the estimated high-resolution images are stitched together to obtain a three-dimensional image with isotropic resolution. This method requires joint training on low-resolution and high-resolution image blocks; moreover, for a three-dimensional image with a resolution of 1mm × 1mm × Dmm, when D is large (for example, D > 8), reaching isotropic resolution greatly increases the difficulty of training and learning, and an ideal effect cannot be achieved.
Considering that obtaining high-resolution training samples places demanding requirements on hardware, time, cost and other conditions (for example, a hospital without high-end imaging equipment lacks the high-resolution images corresponding to its low-resolution images), the supervised learning method in the related art is often not applicable. According to the training method in the embodiment of the disclosure, the three-dimensional medical image training samples are slices with the same direction and resolution; these slices can be low-resolution or high-resolution images, and no higher-resolution counterpart of a slice needs to be acquired. In other words, training can be completed regardless of the original resolution of a slice and regardless of whether a high-resolution version of the slice exists, which greatly widens the range of application: hospitals without high-end imaging equipment can still train with low-resolution medical images.
Fig. 4 shows a flowchart of a method for super-resolution reconstruction of three-dimensional medical images according to an embodiment of the present disclosure. As shown in fig. 4, the method may include:
step 201, inputting one training sample of adjacent training samples in the training sample sequence into the generator, and repeatedly executing the operation of inputting the latest generated slice output by the generator into the generator until reaching a preset third iteration number;
step 202, inputting the generated slice output by the generator for the last time and another training sample in the adjacent training samples into the discriminator to obtain the similarity between the generated slice output by the generator for the last time and the another training sample;
step 203, obtaining a loss function value according to the similarity; and carrying out back propagation according to the loss function value, training the generator and the discriminator until convergence is achieved, and taking the generator obtained in the convergence as the deep learning model.
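Steps 201 to 203 can be sketched as follows. Here `generator` and `discriminator` are stand-in callables, and the loss that combines the mean-absolute-error similarity with an adversarial term is an illustrative choice, not the disclosure's exact loss function:

```python
import numpy as np

def training_step(generator, discriminator, s_t, s_next, K):
    """One pass of steps 201-203: iterate the generator K times starting
    from s_t, then compare the last generated slice with the adjacent
    training sample s_next and form a loss value."""
    g = s_t
    for _ in range(K):
        g = generator(g)                      # step 201: feed the latest output back in
    similarity = np.mean(np.abs(g - s_next))  # step 202: mean absolute error
    fake_score = discriminator(g)             # discriminator's real/fake estimate
    loss = similarity - np.log(fake_score + 1e-8)  # step 203: illustrative loss value
    return g, loss
```

The returned loss would then be back-propagated to update both networks until convergence.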
In the embodiment of the disclosure, for any two adjacent training samples in the training sample sequence, one of the adjacent training samples is input into the generator, and the generator is iterated multiple times, each iteration producing a new generated slice. In each iteration, the generated slice currently output by the generator is used as the generator's input in the next iteration, and the iteration repeats until the third iteration number is reached. If the target resolution of a training sample is K times the original resolution, the third iteration number is K; for each training sample, the generator performs K iteration operations to obtain K generated slices, after which the iteration ends. The K-th generated slice and the other training sample of the adjacent pair are input into the discriminator, and the similarity between them is measured, for example by mean absolute error and/or root mean square error; a loss function value is then calculated from the similarity. Back propagation is performed according to the loss function value, and the generator and the discriminator are trained simultaneously until convergence is reached; the generator obtained at convergence is used as the deep learning model for performing super-resolution reconstruction on the three-dimensional medical image. The loss function can be any existing loss function.
In the training process, at each iteration the generator uses its own previously output generated slices to predict subsequent slices, which helps ensure the accuracy of subsequently output generated slices. Meanwhile, a plurality of generated slices are obtained with a single generator; all generated slices should be as realistic as possible, so that they can deceive the discriminator, which is responsible for estimating the probability that a generated slice comes from the original training sample sequence. For the generator to maximize the probability of the discriminator being wrong, the generated slice output in the last iteration should have high similarity to the other training sample. At the same time, the discriminator is trained so that, to the greatest extent possible, it assigns correct labels both to training samples and to the generated slices output in the last iteration.
Illustratively, for an MR three-dimensional medical image with a resolution of 1mm × 1mm × Kmm (K being an integer greater than 1), the training objective is to obtain isotropic resolution, i.e., a corresponding MR three-dimensional medical image with a resolution of 1mm × 1mm × 1mm. For this MR three-dimensional medical image, the transverse-axis direction has a high resolution of 1mm × 1mm, while the sagittal and coronal directions have a low resolution of Kmm × 1mm. Sagittal slices are selected as training samples; the training sample sequence comprises S training samples arranged in order, each denoted s_t (1 ≤ t ≤ S, t an integer). Since the target resolution in the sagittal direction is K times the original resolution, i.e., the sagittal direction is to reach the high resolution of 1mm × 1mm, the third iteration number is K; for any two adjacent slices in the sagittal direction, an additional K-1 slices are generated between them, so that the number of high-resolution sagittal slices is K times the number of low-resolution sagittal slices. FIG. 5 is a schematic diagram of training a deep learning model according to an embodiment of the disclosure. As shown in FIG. 5, for two adjacent training samples s_t and s_{t+1}, the training sample s_t is input into the generator, which outputs the generated slice g_1; the generated slice g_1 is input into the generator, which outputs the generated slice g_2; the iteration is repeated K times in this way, each newly output generated slice g_i serving as the generator's input to obtain the next generated slice g_{i+1}, where i = 1, 2, …, K-1, until the generated slice g_K is finally output. The generated slice g_K and the training sample s_{t+1} are input into the discriminator; the mean absolute error is used to calculate the similarity between the generated slice g_K and the training sample s_{t+1}, and the discriminator classifies the generated slice g_K as a real or fake image. A loss function value is obtained from the similarity, back propagation is performed according to the loss function value, and the generator and the discriminator are trained, adjusting their parameters until convergence is reached.
Further, in the training process, the generator and the discriminator may be optimized through multiple feedbacks. Specifically, a plurality of training samples arranged in order in the training sample sequence are input into the generator one by one, and the above iteration operation is performed; for each training sample, the generated slice last output by the generator in its iteration process and the next training sample adjacent to it are input into the discriminator, obtaining the similarity between that last generated slice and the adjacent next training sample. In this way, by processing a plurality of training samples, a similarity is obtained for each; the loss function value is calculated from the similarities of all the training samples, back propagation is performed according to the loss function value, and the generator and the discriminator are trained simultaneously until convergence is reached; the generator obtained at convergence is used as the deep learning model for performing super-resolution reconstruction on the three-dimensional medical image.
Illustratively, again taking the MR three-dimensional medical image with a resolution of 1mm × 1mm × Kmm (K being an integer greater than 1) as an example, the training sample sequence includes S training samples arranged in order, grouped according to the number of training samples included, each group expressed as I_t = {s_t, …, s_{t+Q}}, where 1 ≤ t ≤ S-Q and Q > 1; for example, when Q = 2, each group includes the three training samples s_t, s_{t+1}, s_{t+2}. FIG. 6 is a schematic diagram of a deep learning model training method according to an embodiment of the disclosure. As shown in FIG. 6, for each group I_t, the group's training sample s_t is input into the generator, which outputs the generated slice g_{t,1}; the generated slice g_{t,1} is input into the generator, which outputs the generated slice g_{t,2}; the iteration is repeated K times in this way, each newly output generated slice g_{t,i} serving as the generator's input to obtain the next generated slice g_{t,i+1}, where i = 1, 2, …, K-1, until the generated slice g_{t,K} is finally output. The generated slice g_{t,K} and the training sample s_{t+1} are input into the discriminator to calculate their similarity. For the other training samples s_{t+1}, …, s_{t+Q-1} in I_t, the above iteration process is repeated, and the similarity between each finally output generated slice g_{t+1,K}, …, g_{t+Q-1,K} and the corresponding training sample s_{t+2}, …, s_{t+Q} is calculated. Performing the operation Q times in this way requires the generated slices g_{t,K}, …, g_{t+Q-1,K} to have high similarity to the corresponding training samples s_{t+1}, …, s_{t+Q}. The generated slices and training samples are input into the discriminator, which classifies the generated slices g_{t,K}, …, g_{t+Q-1,K} as real or fake images, and a loss function value is obtained from all the similarities; back propagation is performed according to the loss function value, and the generator and the discriminator are trained, adjusting their parameters until convergence is reached.
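The grouped scheme of FIG. 6 can be sketched as follows, again with a stand-in `generator`; returning the per-pair similarities as a list reflects that the disclosure derives one loss value from all of them, without specifying how they are combined:

```python
import numpy as np

def group_similarities(generator, group, K):
    """For one group I_t = [s_t, ..., s_{t+Q}], iterate the generator K
    times from each of the first Q samples and compare the final generated
    slice with the next sample in the group."""
    sims = []
    for s_a, s_b in zip(group[:-1], group[1:]):
        g = s_a
        for _ in range(K):
            g = generator(g)                       # K generator iterations
        sims.append(float(np.mean(np.abs(g - s_b))))  # similarity per adjacent pair
    return sims  # Q similarities, all fed into one loss value
```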
Fig. 7 shows a flowchart of a method for super-resolution reconstruction of three-dimensional medical images according to an embodiment of the present disclosure. As shown in fig. 7, the method may include:
step 301, inputting one training sample of the adjacent training samples in the training sample sequence into the generator, and repeatedly executing the operation of inputting, into the generator, the generator's input images from the preceding second preset number of times together with the generated slice most recently output by the generator, until a preset fourth iteration number is reached;
step 302, inputting the generated slice output by the generator for the last time and another training sample in the adjacent training samples into the discriminator to obtain the similarity between the generated slice output by the generator for the last time and the another training sample;
step 303, obtaining a loss function value according to the similarity; and carrying out back propagation according to the loss function value, training the generator and the discriminator until convergence is achieved, and taking the generator obtained in the convergence as the deep learning model.
In the embodiment of the present disclosure, considering that the intermediate generated slices are not directly evaluated by the similarity index during iteration, yet their accuracy affects the similarity between the finally output generated slice and the corresponding training sample, a plurality of slices are input into the generator in each iteration to improve the accuracy of the intermediate generated slices, with one generated slice output each time. Specifically, in each iteration, the generated slice currently output by the generator, together with the generator's input images from the preceding second preset number of iterations, is used as the generator's input in the next iteration, and iteration continues until the fourth iteration number is reached. If the target resolution of the training sample is K times the original resolution, the fourth iteration number is K; for each training sample, K iteration operations are performed by the generator to obtain K generated slices, after which iteration ends. The second preset number can be set according to actual working needs, and its value is smaller than K. For example, if the second preset number is 1, then in each iteration two slices, namely the generator's input image and its output generated slice from the previous iteration, are input into the generator simultaneously to obtain one generated slice for the current iteration; if the second preset number is 2, then in each iteration three slices, namely the generated slice output by the generator in the previous iteration and the generator's input images from the two iterations before that, are input into the generator simultaneously to obtain one generated slice for the current iteration.
It should be noted that if, in the current iteration, an input image or output generated slice of the generator from one or more previous iterations is missing, it is set to random noise or another slice is selected instead. For example, if P slices are to be input into the generator, then in the corresponding 1st, …, (P-1)-th iterations the generator's input will have missing slices, which may be set to random noise or chosen as the corresponding slices from the previous training sample's iteration process. The K-th generated slice and the other training sample of the adjacent pair are input into the discriminator, and the similarity between them is measured, for example by mean absolute error and/or root mean square error; a loss function value is then calculated from the similarity. Back propagation is performed according to the loss function value, the generator and the discriminator are trained simultaneously until convergence is reached, and the generator obtained at convergence is used as the deep learning model for performing super-resolution reconstruction on the three-dimensional medical image.
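The two similarity measures named above can be written directly; this is standard numpy, not code from the disclosure:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two slices."""
    return float(np.mean(np.abs(a - b)))

def rmse(a, b):
    """Root mean square error between two slices."""
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Both return 0 for identical slices and grow as the generated slice drifts away from the adjacent training sample.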
Illustratively, taking the MR three-dimensional medical image with a resolution of 1mm × 1mm × Kmm (K being an integer greater than 1) as an example, sagittal slices are selected as training samples; the training sample sequence includes S training samples arranged in order, each denoted s_t (1 ≤ t ≤ S, t an integer), and the second preset number is 1. Since the target resolution in the sagittal direction is K times the original resolution, the fourth iteration number is K; for any two adjacent slices in the sagittal direction, an additional K-1 slices are generated between them, so that the number of high-resolution sagittal slices is K times the number of low-resolution sagittal slices. FIG. 8 is a schematic diagram of training a deep learning model according to an embodiment of the disclosure. As shown in FIG. 8, for two adjacent training samples s_t and s_{t+1}, the training sample s_t and random noise are input into the generator, which outputs the generated slice g_{t,1}; the generated slice g_{t,1} and the training sample s_t are input into the generator, which outputs the generated slice g_{t,2}; the generated slice g_{t,2} and the generated slice g_{t,1} are input into the generator, and the iteration is repeated K times in this way, each newly output generated slice g_{t,i} together with the previously generated slice g_{t,i-1} serving as the generator's input to obtain the next generated slice g_{t,i+1}, where i = 1, 2, …, K-1, until the generated slice g_{t,K} is finally output. The generated slice g_{t,K} should have very high similarity to the training sample s_{t+1}; the generated slice g_{t,K} and the training sample s_{t+1} are input into the discriminator, the mean absolute error is used to calculate the similarity between the generated slice g_{t,K} and the training sample s_{t+1}, and the discriminator classifies the generated slice g_{t,K} as a real or fake image. A loss function value is obtained from the similarity, back propagation is performed according to the loss function value, and the generator and the discriminator are trained, adjusting their parameters until convergence is reached.
In the training process of the generative adversarial network, for each training sample of the low-resolution three-dimensional medical image, multiple iterations are performed with the same generator, yielding multiple generated slices and thus a high-resolution three-dimensional medical image (for example, a three-dimensional medical image with isotropic resolution). Since the generator's own output is used as input to predict subsequent slices, the generative adversarial network can be trained regardless of whether the training samples are images of high or isotropic resolution, which makes a much wider range of data sets usable for training. In the image acquisition process, the trained deep learning model can generate high-resolution medical images in all directions without extra time or cost: image acquisition equipment such as an MR machine can scan at low inter-plane resolution, effectively saving scanning time and cost, and still produce images of high inter-plane resolution through the trained deep learning model. An ideal super-resolution reconstruction effect can also be obtained when the gap between the target high resolution and the original low resolution is large (for example, when their ratio is greater than 8); for instance, if the target resolution of the three-dimensional medical image to be reconstructed is 8 times the original resolution, a generator trained with the third or fourth iteration number equal to 8 is selected as the deep learning model, further improving the super-resolution reconstruction effect. Meanwhile, doctors in hospitals can use conventional imaging equipment and examine the reconstructed high-resolution images to assist clinical medical diagnosis.
It should be noted that, although the above embodiment is taken as an example to describe a three-dimensional medical image super-resolution reconstruction method, those skilled in the art can understand that the disclosure should not be limited thereto. In fact, the user can flexibly set each implementation mode according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
In this way, in the embodiment of the present disclosure, the original slices in the three-dimensional medical image to be reconstructed are input into the trained deep learning model, and the generated slice sequence between the original slices is output, so as to obtain the three-dimensional medical image after super-resolution reconstruction. Therefore, in the image acquisition process, the acquisition time and cost can be effectively saved, the hardware requirement of acquisition equipment is reduced, and the three-dimensional medical image with ideal effect and high resolution can still be reconstructed for the acquired image with lower resolution between planes; meanwhile, doctors can obtain more information from the obtained generated slice sequence, thereby better assisting clinical medical diagnosis.
Fig. 9 shows a block diagram of a three-dimensional medical image super-resolution reconstruction apparatus according to an embodiment of the present disclosure. As shown in fig. 9, the apparatus may include: an original slice sequence obtaining module 401, configured to obtain an original slice sequence in a three-dimensional medical image to be reconstructed; a generated slice sequence obtaining module 402, configured to input original slices in the original slice sequence into a deep learning model, so as to obtain a generated slice sequence between the original slices; a three-dimensional medical image reconstruction module 403, configured to obtain a reconstructed three-dimensional medical image according to the generated slice sequence and the original slice sequence, where a resolution of the reconstructed three-dimensional medical image is higher than a resolution of the three-dimensional medical image to be reconstructed.
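A minimal sketch of what module 403 assembles, interleaving the original slice sequence with the generated slice sequences to form the denser reconstructed sequence (the ordering, each original slice followed by the slices generated from it, matches FIG. 3; the function name is hypothetical):

```python
def assemble_sequence(original_slices, generated_sequences):
    """Interleave each original slice with the generated slice sequence
    produced from it, yielding the reconstructed slice sequence of the
    higher-resolution three-dimensional medical image."""
    reconstructed = []
    for original, generated in zip(original_slices, generated_sequences):
        reconstructed.append(original)
        reconstructed.extend(generated)
    return reconstructed
```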
In a possible implementation manner, the generated slice sequence obtaining module 402 is specifically configured to: inputting original slices in the original slice sequence into the deep learning model one by one, and repeatedly executing the operation of inputting the latest output generated slice of the deep learning model into the deep learning model aiming at each original slice until reaching a preset first iteration number; and obtaining a generated slice sequence among the original slices according to all the generated slices output by the deep learning model.
In a possible implementation manner, the generated slice sequence obtaining module 402 is specifically configured to: input original slices in the original slice sequence into the deep learning model one by one and, for each original slice, repeatedly execute the operation of inputting, into the deep learning model, the model's input images from the preceding first preset number of times together with the generated slice most recently output by the model, until a preset second iteration number is reached; and obtain a generated slice sequence among the original slices according to all the generated slices output by the deep learning model.
In a possible implementation manner, the deep learning model is obtained by performing adversarial training, using a three-dimensional medical image training sample sequence, on a generative adversarial network that includes a generator and a discriminator.
In one possible implementation, the three-dimensional medical image training sample sequence includes slices of the three-dimensional medical image that share the same direction and resolution.
Fig. 10 shows a block diagram of a three-dimensional medical image super-resolution reconstruction apparatus according to an embodiment of the present disclosure. As shown in fig. 10, the apparatus includes: the original slice sequence acquisition module 401, the generated slice sequence acquisition module 402, and the three-dimensional medical image reconstruction module 403 may further include: a training module 404, configured to, for adjacent training samples in the training sample sequence, input one of the adjacent training samples into the generator, and repeatedly perform an operation of inputting a latest generated slice output by the generator into the generator until a preset third iteration number is reached; inputting the generated slice output by the generator for the last time and another training sample in the adjacent training samples into the discriminator to obtain the similarity between the generated slice output by the generator for the last time and the another training sample; obtaining a loss function value according to the similarity; and carrying out back propagation according to the loss function value, training the generator and the discriminator until convergence is achieved, and taking the generator obtained in the convergence as the deep learning model.
In a possible implementation manner, the training module 404 may be further configured to: for adjacent training samples in the training sample sequence, inputting one training sample in the adjacent training samples into the generator, and repeatedly executing the operation of inputting the input image of the generator for a second preset number of times and the latest output generated slice of the generator into the generator until reaching a preset fourth iteration number; inputting the generated slice output by the generator for the last time and another training sample in the adjacent training samples into the discriminator to obtain the similarity between the generated slice output by the generator for the last time and the another training sample; obtaining a loss function value according to the similarity; and carrying out back propagation according to the loss function value, training the generator and the discriminator until convergence is achieved, and taking the generator obtained in the convergence as the deep learning model.
It should be noted that, although the above embodiment is taken as an example to describe the three-dimensional medical image super-resolution reconstruction apparatus, those skilled in the art can understand that the disclosure should not be limited thereto. In fact, the user can flexibly set each implementation mode according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
In this way, in the embodiment of the present disclosure, the original slices in the three-dimensional medical image to be reconstructed are input into the trained deep learning model, and the generated slice sequence between the original slices is output, so as to obtain the three-dimensional medical image after super-resolution reconstruction. Therefore, in the image acquisition process, the acquisition time and cost can be effectively saved, the hardware requirement of acquisition equipment is reduced, and the three-dimensional medical image with ideal effect and high resolution can still be reconstructed for the acquired image with lower resolution between planes; meanwhile, doctors can obtain more information from the obtained generated slice sequence, thereby better assisting clinical medical diagnosis.
The embodiment of the present disclosure further provides a three-dimensional medical image super-resolution reconstruction apparatus, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The disclosed embodiments also provide a non-transitory computer-readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the above-described method.
Fig. 11 shows a block diagram of an apparatus 1900 for three-dimensional medical image super-resolution reconstruction according to an embodiment of the present disclosure. For example, the apparatus 1900 may be provided as a server. Referring to FIG. 11, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A three-dimensional medical image super-resolution reconstruction method is characterized by comprising the following steps:
acquiring an original slice sequence in a three-dimensional medical image to be reconstructed;
inputting original slices in the original slice sequence into a deep learning model to obtain a generated slice sequence between the original slices;
and obtaining a reconstructed three-dimensional medical image according to the generated slice sequence and the original slice sequence, wherein the resolution of the reconstructed three-dimensional medical image is higher than that of the three-dimensional medical image to be reconstructed.
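The reconstruction pipeline of claim 1 can be summarized as a minimal, hypothetical Python sketch; the function name `reconstruct_volume`, the slice-to-slice `model` callable, and the `n_generated` parameter are illustrative assumptions, not terms from the patent:

```python
import numpy as np

def reconstruct_volume(original_slices, model, n_generated=1):
    """Interleave model-generated slices between consecutive original
    slices to build a higher-resolution volume (hypothetical sketch of
    claim 1; `model` maps one 2-D slice to the next generated slice)."""
    output = []
    for i, slice_2d in enumerate(original_slices):
        output.append(slice_2d)
        if i < len(original_slices) - 1:
            current = slice_2d
            for _ in range(n_generated):
                current = model(current)   # generated slice between originals
                output.append(current)
    # stacking along axis 0 raises the through-plane (inter-slice) resolution
    return np.stack(output, axis=0)
```

With `n_generated` slices inserted into each gap, a sequence of `k` original slices yields a volume of `k + (k - 1) * n_generated` slices, which is why the reconstructed image has a higher through-plane resolution than the input.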
2. The method of claim 1, wherein inputting the original slices in the sequence of original slices into a deep learning model, resulting in a sequence of generated slices between original slices, comprises:
inputting the original slices in the original slice sequence into the deep learning model one by one, and, for each original slice, repeatedly performing the operation of inputting the generated slice most recently output by the deep learning model back into the deep learning model until a preset first iteration count is reached;
and obtaining a generated slice sequence between the original slices according to all the generated slices output by the deep learning model.
3. The method of claim 1, wherein inputting the original slices in the sequence of original slices into a deep learning model, resulting in a sequence of generated slices between original slices, comprises:
inputting the original slices in the original slice sequence into the deep learning model one by one, and, for each original slice, repeatedly performing the operation of inputting both the image that was input to the deep learning model a first preset number of iterations earlier and the generated slice most recently output by the deep learning model into the deep learning model, until a preset second iteration count is reached;
and obtaining a generated slice sequence between the original slices according to all the generated slices output by the deep learning model.
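Read together, claims 2 and 3 describe two feedback schemes for producing the intermediate slices. The following is a hypothetical sketch under that reading; the helper names and the two-argument model interface in the second scheme are assumptions, not definitions from the patent:

```python
def generate_between(original, model, n_iter):
    """Claim-2-style scheme: feed each newly generated slice back
    into the model until the preset iteration count is reached."""
    generated, current = [], original
    for _ in range(n_iter):
        current = model(current)            # latest output becomes next input
        generated.append(current)
    return generated

def generate_between_conditioned(original, model, n_iter):
    """Claim-3-style scheme (as we read it): the model also re-receives
    an earlier input image alongside the latest generated slice."""
    generated, current = [], original
    for _ in range(n_iter):
        current = model(original, current)  # conditioned on the earlier input
        generated.append(current)
    return generated
```

The second scheme keeps each generated slice anchored to a real input image, which plausibly limits the drift that can accumulate when a model is iterated on its own outputs.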
4. The method of claim 1, wherein the deep learning model is generated by performing generative adversarial training, using a three-dimensional medical image training sample sequence, on a generative adversarial network comprising a generator and a discriminator.
5. The method of claim 4, wherein the three-dimensional medical image training sample sequence comprises slices of the three-dimensional medical image with the same directional resolution.
6. The method according to claim 4 or 5, characterized in that the method further comprises:
for adjacent training samples in the training sample sequence, inputting one of the adjacent training samples into the generator, and repeatedly performing the operation of inputting the generated slice most recently output by the generator back into the generator until a preset third iteration count is reached;
inputting the generated slice last output by the generator, together with the other of the adjacent training samples, into the discriminator to obtain a similarity between that generated slice and the other training sample;
obtaining a loss function value according to the similarity; and performing back propagation according to the loss function value to train the generator and the discriminator until convergence, taking the generator obtained at convergence as the deep learning model.
7. The method according to claim 4 or 5, characterized in that the method further comprises:
for adjacent training samples in the training sample sequence, inputting one of the adjacent training samples into the generator, and repeatedly performing the operation of inputting both the image that was input to the generator a second preset number of iterations earlier and the generated slice most recently output by the generator into the generator, until a preset fourth iteration count is reached;
inputting the generated slice last output by the generator, together with the other of the adjacent training samples, into the discriminator to obtain a similarity between that generated slice and the other training sample;
obtaining a loss function value according to the similarity; and performing back propagation according to the loss function value to train the generator and the discriminator until convergence, taking the generator obtained at convergence as the deep learning model.
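The adversarial training loop of claims 6 and 7 can be summarized as a framework-free sketch; `gen`, `disc`, and the `1 - similarity` loss are placeholder assumptions standing in for a real generator network, discriminator network, and GAN loss:

```python
def gan_training_step(gen, disc, sample_a, sample_b, n_iter):
    """One step of the claimed training scheme: iterate the generator
    from one of two adjacent training samples, then score the final
    generated slice against the other sample with the discriminator."""
    current = sample_a
    for _ in range(n_iter):
        current = gen(current)            # feed the latest generated slice back
    similarity = disc(current, sample_b)  # similarity to the adjacent sample
    loss = 1.0 - similarity               # small when the generated slice matches
    return loss                           # would be backpropagated to update both
```

In practice `gen` and `disc` would be neural networks, the loss would drive back-propagation until convergence, and the converged generator alone would then serve as the deep learning model of claim 1.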
8. A three-dimensional medical image super-resolution reconstruction apparatus, characterized in that the apparatus comprises:
the original slice sequence acquisition module is used for acquiring an original slice sequence in a three-dimensional medical image to be reconstructed;
the generated slice sequence acquisition module is used for inputting the original slices in the original slice sequence into a deep learning model to obtain a generated slice sequence between the original slices;
and the three-dimensional medical image reconstruction module is used for obtaining a reconstructed three-dimensional medical image according to the generated slice sequence and the original slice sequence, wherein the resolution of the reconstructed three-dimensional medical image is higher than that of the three-dimensional medical image to be reconstructed.
9. The apparatus of claim 8, wherein the deep learning model is generated by performing generative adversarial training, using a three-dimensional medical image training sample sequence, on a generative adversarial network comprising a generator and a discriminator.
10. A three-dimensional medical image super-resolution reconstruction apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of claims 1 to 7 when executing the memory-stored executable instructions.
CN202010669466.6A 2020-07-13 2020-07-13 Three-dimensional medical image super-resolution reconstruction method and device Pending CN111833251A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010669466.6A CN111833251A (en) 2020-07-13 2020-07-13 Three-dimensional medical image super-resolution reconstruction method and device


Publications (1)

Publication Number Publication Date
CN111833251A true CN111833251A (en) 2020-10-27

Family

ID=72922703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010669466.6A Pending CN111833251A (en) 2020-07-13 2020-07-13 Three-dimensional medical image super-resolution reconstruction method and device

Country Status (1)

Country Link
CN (1) CN111833251A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140133780A1 (en) * 2011-12-14 2014-05-15 Peking University Nonlocality based super resolution reconstruction method and device
CN109255755A (en) * 2018-10-24 2019-01-22 上海大学 Image super-resolution rebuilding method based on multiple row convolutional neural networks
CN109584164A (en) * 2018-12-18 2019-04-05 华中科技大学 Medical image super-resolution three-dimensional rebuilding method based on bidimensional image transfer learning
CN110136063A (en) * 2019-05-13 2019-08-16 南京信息工程大学 A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition
CN110503699A (en) * 2019-07-01 2019-11-26 天津大学 A kind of CT projection path reduce in the case of CT image rebuilding method


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113108792A (en) * 2021-03-16 2021-07-13 中山大学 Wi-Fi fingerprint map reconstruction method and device, terminal equipment and medium
CN113034642A (en) * 2021-03-30 2021-06-25 推想医疗科技股份有限公司 Image reconstruction method and device and training method and device of image reconstruction model
CN113034642B (en) * 2021-03-30 2022-05-27 推想医疗科技股份有限公司 Image reconstruction method and device and training method and device of image reconstruction model
CN113256754A (en) * 2021-07-16 2021-08-13 南京信息工程大学 Stacking projection reconstruction method for segmented small-area tumor mass
CN113256754B (en) * 2021-07-16 2021-09-28 南京信息工程大学 Stacking projection reconstruction method for segmented small-area tumor mass
CN113989349A (en) * 2021-10-25 2022-01-28 北京百度网讯科技有限公司 Image generation method, training method of image processing model, and image processing method
CN114742807A (en) * 2022-04-24 2022-07-12 北京医准智能科技有限公司 Chest radiography identification method and device based on X-ray image, electronic equipment and medium
CN116797457A (en) * 2023-05-20 2023-09-22 北京大学 Method and system for simultaneously realizing super-resolution and artifact removal of magnetic resonance image

Similar Documents

Publication Publication Date Title
CN111833251A (en) Three-dimensional medical image super-resolution reconstruction method and device
US10984535B2 (en) Systems and methods for anatomic structure segmentation in image analysis
EP3511942A2 (en) Cross-domain image analysis and cross-domain image synthesis using deep image-to-image networks and adversarial networks
US11810301B2 (en) System and method for image segmentation using a joint deep learning model
Morís et al. Data augmentation approaches using cycle-consistent adversarial networks for improving COVID-19 screening in portable chest X-ray images
CN111540025B (en) Predicting images for image processing
CN111161269B (en) Image segmentation method, computer device, and readable storage medium
JP7106307B2 (en) Medical image diagnostic apparatus, medical signal restoration method, medical signal restoration program, model learning method, model learning program, and magnetic resonance imaging apparatus
Liu et al. Learning MRI artefact removal with unpaired data
CN112669247A (en) Priori guidance type network for multitask medical image synthesis
JP6772123B2 (en) Image processing equipment, image processing methods, image processing systems and programs
CN111091575B (en) Medical image segmentation method based on reinforcement learning method
CN111210431A (en) Blood vessel segmentation method, device, equipment and storage medium
CN115564897A (en) Intelligent magnetic resonance holographic imaging method and system
JP2019159914A (en) Signal restoration apparatus, signal restoration method, signal restoration program, model learning method, and model learning program
CN111724884A (en) Generation of a resulting image
CN114787864A (en) Mesh topology adaptation
JP7462925B2 (en) BLOOD FLOW FIELD ESTIMATION DEVICE, LEARNING DEVICE, BLOOD FLOW FIELD ESTIMATION METHOD, AND PROGRAM
CN115272363B (en) Method, device and storage medium for reconstructing carotid three-dimensional image
US20220343634A1 (en) Pseudo data generation apparatus, pseudo data generation method, and non-transitory storage medium
JP7433913B2 (en) Route determination method, medical image processing device, model learning method, and model learning device
CN116740217B (en) Arterial spin marking method, device and storage medium based on artificial intelligence technology
EP4343680A1 (en) De-noising data
WO2023032438A1 (en) Regression estimation device and method, program, and trained model generation method
Bakaev et al. Feasibility of Spine Segmentation in ML-Based Recognition of Vertebrae in X-Ray Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination