CN112907691A - Neural network-based CT image reconstruction method, device, equipment and storage medium - Google Patents


Info

Publication number
CN112907691A
Authority
CN
China
Prior art keywords
projection, image, data, neural network, repairing
Prior art date
Legal status
Pending
Application number
CN202110329642.6A
Other languages
Chinese (zh)
Inventor
曾凯
冯亚崇
郭桐
Current Assignee
Shenzhen Anke High Tech Co ltd
Original Assignee
Shenzhen Anke High Tech Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Anke High Tech Co ltd filed Critical Shenzhen Anke High Tech Co ltd
Priority to CN202110329642.6A priority Critical patent/CN112907691A/en
Publication of CN112907691A publication Critical patent/CN112907691A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G06T 11/006: Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods


Abstract

The embodiments of the present application provide a neural network-based CT image reconstruction method, apparatus, device and storage medium, relating to the technical field of image processing. The neural network-based CT image reconstruction method is applied to an offset detector CT system and comprises the following steps: acquiring projection data collected by the offset detector CT system; inputting the projection data into a projection domain repair neural network and repairing the missing part of the projection data along the sinogram direction to obtain projection repair data, wherein the sinogram corresponds to the projection data and has mirror symmetry; obtaining a first reconstructed image from the projection data and the projection repair data; inputting the first reconstructed image into an image domain repairing neural network to obtain a second reconstructed image; and forward projecting the second reconstructed image to obtain a CT reconstructed image. The neural network-based CT image reconstruction method can achieve the technical effects of improving image quality and increasing reconstruction speed.

Description

Neural network-based CT image reconstruction method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for reconstructing a CT image based on a neural network.
Background
Currently, CT (Computed Tomography) medical imaging systems have advanced significantly since their invention in the 1970s, with scan speeds improving from several minutes at the beginning to 0.2 seconds at present. The number of detector rows has likewise grown from the initial single row, to dual rows, to the present 64, 128 or even 256 rows. These changes reflect not only upgrades of system hardware but also revolutionary advances in the systems' image reconstruction technology. To meet the requirements of different working scenarios, CT systems have developed toward diversification, such as mobile CT, oral CT, intraoperative CT, and the like.
In the prior art, intraoperative CT can provide image quality comparable to that of conventional CT, offers the flexible mobility of a C-arm, and, through seamless connection with a navigation system, can complete intraoperative detection and image verification of screw positions. Intraoperative CT generally carries a flat panel detector, whose size is currently limited by manufacturing process and cost. When the scanned object is large, it exceeds the scanning field of view of the CT system, the projection data are truncated on both sides, and images reconstructed by the FBP (filtered back projection) algorithm exhibit severe artifacts that affect diagnosis. The offset detector CT scan mode is a practical solution. Reconstruction methods have been developed for this scan mode, but images reconstructed by the conventional method and these related methods still exhibit severe artifacts, and the related methods require intensive computation, making it difficult to meet clinical speed requirements.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, a device and a storage medium for reconstructing a CT image based on a neural network, which can achieve the technical effects of improving image quality and improving reconstruction speed.
In a first aspect, an embodiment of the present application provides a neural network-based CT image reconstruction method applied to an offset detector CT system, including:
acquiring projection data acquired by the offset detector CT system;
inputting the projection data into a projection domain repair neural network, and repairing the missing part of the projection data in the direction of a sinogram to obtain projection repair data, wherein the sinogram corresponds to the projection data and has mirror symmetry;
obtaining a first reconstructed image according to the projection data and the projection repair data;
inputting the first reconstructed image into an image domain repairing neural network to obtain a second reconstructed image;
and forward projecting the second reconstructed image to obtain a CT reconstructed image.
In the implementation process, the neural network-based CT image reconstruction method combines the inherent symmetry of CT scanning with artificial intelligence. The projection data are passed through the projection domain repair neural network to obtain the projection repair data, so that the network repairs the missing part of the projection data, and the first reconstructed image is obtained from the projection data and the projection repair data. The first reconstructed image is then repaired by the image domain repairing neural network, which improves robustness, and the forward projection operation can further repair the missing part of the projection data, yielding the CT reconstructed image. The method therefore requires the projection data of only a single scan and, by combining the inherent symmetry of CT scanning with artificial intelligence, realizes large-FOV (field of view) scanning that markedly improves image quality, achieving the technical effects of improving image quality and increasing reconstruction speed.
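The flow described above can be sketched as follows; a minimal, runnable illustration in NumPy, where `projection_domain_repair`, `reconstruct`, `image_domain_repair` and `forward_project` are hypothetical placeholder stand-ins for the trained networks and tomographic operators described in the text, not the actual implementations.

```python
import numpy as np

# Hypothetical stand-ins for the trained networks and tomographic operators
# described in the text; a real system would use the trained repair networks
# and a cone-beam projector/back-projector instead.
def projection_domain_repair(projection):
    # S200: fill the missing (here: zeroed) sinogram region; this placeholder
    # copies mirror-symmetric values rather than running a trained network.
    repaired = projection.copy()
    missing = repaired == 0
    repaired[missing] = np.flip(projection, axis=0)[missing]
    return repaired

def reconstruct(projection):
    # S300: placeholder for an FBP-type reconstruction.
    return projection.mean(axis=1, keepdims=True) * np.ones_like(projection)

def image_domain_repair(image):
    # S400: placeholder for the image domain repairing neural network.
    return image

def forward_project(image):
    # S500: placeholder for the forward projector.
    return image

def ct_reconstruct(projection, n_iters=2):
    """Steps S100-S500, iterated n_iters times (the text reports N = 2)."""
    data = projection
    for _ in range(n_iters):
        repair = projection_domain_repair(data)          # S200
        first_image = reconstruct(repair)                # S300
        second_image = image_domain_repair(first_image)  # S400
        data = forward_project(second_image)             # S500
    return second_image
```

The value of the loop is that each forward projection refills the missing region before the next repair pass, which is exactly the supplementary-repair role the text assigns to S500.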
Further, before the step of inputting the projection data into a projection domain repair neural network, repairing the missing part of the projection data in the direction of the sinogram, and obtaining projection repair data, the method further includes:
acquiring projection training data and projection test data;
constructing the projection domain repairing neural network, wherein the projection domain repairing neural network comprises a generating network and a loss network, and the loss network is used for correcting the generating network;
and inputting the projection training data into the generation network, inputting the projection test data into the loss network, and training to obtain parameters of the generation network.
Further, the loss network employs a mixed loss function comprising a content loss function and a style loss function. The mixed loss function is:
L_total = α·L_content + β·L_style
The content loss function is:
L_content = (1 / (W_l·H_l·C_l)) · Σ_{i,j,k} ( φ^l_{ijk}(G) − φ^l_{ijk}(C) )²
The style loss function, built on the Gram matrix of the layer-l feature maps, is:
Gram^l_{kk′}(x) = (1 / (W_l·H_l)) · Σ_{i,j} φ^l_{ijk}(x)·φ^l_{ijk′}(x)
L_style = Σ_{k,k′} ( Gram^l_{kk′}(G) − Gram^l_{kk′}(S) )²
where C is the content target image, S is the style target image, G is the generated image, φ^l_{ijk}(x) is the output of image x at the (i, j, k)-th position of the l-th layer of the loss network, with (i, j, k) indexing height, width and channel, and W_l, H_l and C_l are the width, height and channel number of the output of the l-th layer of the loss network.
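Under the standard perceptual-loss reading of these definitions, where the content loss is a mean squared feature difference and the style loss compares Gram matrices, the mixed loss can be sketched in NumPy as follows; the feature maps would come from the VGG-16 loss network, which is omitted here, so the functions below operate on precomputed `[H, W, C]` feature arrays.

```python
import numpy as np

def gram_matrix(feat):
    """Gram (channel covariance) matrix of a feature map feat[H, W, C],
    normalized by the number of spatial positions H*W."""
    h, w, c = feat.shape
    f = feat.reshape(h * w, c)
    return f.T @ f / (h * w)

def content_loss(feat_gen, feat_content):
    """Mean squared feature difference (the L2 / content loss)."""
    return np.mean((feat_gen - feat_content) ** 2)

def style_loss(feat_gen, feat_style):
    """Squared Frobenius norm of the Gram-matrix difference (style loss)."""
    diff = gram_matrix(feat_gen) - gram_matrix(feat_style)
    return np.sum(diff ** 2)

def total_loss(feat_gen, feat_content, feat_style, alpha=1.0, beta=1.0):
    """L_total = alpha * L_content + beta * L_style."""
    return (alpha * content_loss(feat_gen, feat_content)
            + beta * style_loss(feat_gen, feat_style))
```

In a full training loop, `feat_gen`, `feat_content` and `feat_style` would each be the loss network's activations for the generated projection, the content target P_target, and the style target (region A of P_target), summed over the chosen layers l.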
Further, before the step of inputting the first reconstructed image into an image domain repairing neural network and obtaining a second reconstructed image, the method further includes:
reconstructing the projection training data and the projection test data to obtain a projection training image and a projection test image;
constructing the image domain repairing neural network;
inputting the projection training image into the image domain repairing neural network to obtain a projection prediction image;
and processing the projection prediction image and the projection test image according to a mean square error loss function to obtain parameters of an image domain repairing neural network.
Further, the mean square error loss function is:
L_MSE = (1/N) · Σ_{n=1}^{N} (p_n − g_n)²
where p_n is the value of pixel n in the predicted image, g_n is the value of pixel n in the real image, and N is the total number of pixels in the image.
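The mean square error loss is a one-liner in NumPy; shown for reference, with `pred` and `target` standing for the predicted and real images.

```python
import numpy as np

def mse_loss(pred, target):
    """Mean square error over all N pixels: (1/N) * sum((p_n - g_n)^2)."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return np.mean((pred - target) ** 2)
```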
Further, after the step of acquiring projection training data and projection test data, the method further includes:
processing the projection training data and the projection test data according to an affine transformation and an elastic transformation.
In the implementation process, applying data augmentation to the projection training data and the projection test data effectively increases the amount of data available for neural network training, improving the training quality and precision of the neural networks.
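A minimal sketch of the two augmentations named above, in pure NumPy; the sampling is nearest-neighbour and the elastic displacement field is smoothed crudely by neighbour averaging, so this only illustrates the idea, not a production augmentation pipeline.

```python
import numpy as np

def affine_augment(img, angle_deg):
    """Rotate img about its centre (nearest-neighbour sampling), a simple
    example of an affine transform used for data augmentation."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse rotation: find the source coordinate for each output pixel
    sy = cy + (ys - cy) * np.cos(a) - (xs - cx) * np.sin(a)
    sx = cx + (ys - cy) * np.sin(a) + (xs - cx) * np.cos(a)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    return img[sy, sx]

def elastic_augment(img, alpha, rng):
    """Elastic deformation: displace pixels by a smoothed random field
    scaled by alpha (alpha=0 leaves the image unchanged)."""
    h, w = img.shape
    def smooth(field):
        # crude smoothing by repeated neighbour averaging
        for _ in range(4):
            field = (field + np.roll(field, 1, 0) + np.roll(field, -1, 0)
                     + np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 5.0
        return field
    dy = smooth(rng.uniform(-1, 1, (h, w))) * alpha
    dx = smooth(rng.uniform(-1, 1, (h, w))) * alpha
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.rint(ys + dy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(xs + dx), 0, w - 1).astype(int)
    return img[sy, sx]
```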
In a second aspect, an embodiment of the present application provides a neural network-based CT image reconstruction apparatus, including:
the acquisition module is used for acquiring projection data acquired by the offset detector CT system;
the projection domain repairing module is used for inputting the projection data into a projection domain repairing neural network, repairing the missing part of the projection data in the direction of a sinogram, and obtaining projection repairing data, wherein the sinogram corresponds to the projection data and has mirror symmetry;
the first reconstruction module is used for obtaining a first reconstructed image according to the projection data and the projection repairing data;
the image domain repairing module is used for inputting the first reconstructed image into an image domain repairing neural network to obtain a second reconstructed image;
and the forward projection module is used for forward projecting the second reconstructed image to obtain a CT reconstructed image.
Further, the apparatus further comprises:
the device comprises a collecting module, a data processing module and a data processing module, wherein the collecting module is used for collecting projection training data scanned by a first offset detector and projection test data scanned by a second offset detector, and the size of the first offset detector is smaller than that of the second offset detector;
the projection domain building module is used for building the projection domain repairing neural network, the projection domain repairing neural network comprises a generating network and a loss network, and the loss network is used for correcting the generating network;
and the training module is used for inputting the projection training data into the generation network, inputting the projection testing data into the loss network, and training to obtain parameters of the generation network.
Further, the apparatus further comprises:
the second reconstruction module is used for acquiring the projection training image and the projection test image from the projection training data and the projection test data;
the image domain building module is used for building the image domain repairing neural network;
the prediction module is used for inputting the projection training image into the image domain repairing neural network to obtain a projection prediction image;
and the processing module is used for processing the projection prediction image and the projection test image according to a mean square error loss function to obtain parameters of an image domain repairing neural network.
Further, the apparatus further comprises:
and the data augmentation module is used for processing the projection training data and the projection test data according to affine transformation and elastic transformation.
In a third aspect, an electronic device provided in an embodiment of the present application includes: memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any of the first aspect when executing the computer program.
In a fourth aspect, a storage medium is provided in an embodiment of the present application, where the storage medium has instructions stored thereon, and when the instructions are executed on a computer, the instructions cause the computer to perform the method according to any one of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes the computer to perform the method according to any one of the first aspect.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the above-described techniques.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic flowchart of a CT image reconstruction method based on a neural network according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a CT scan provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an offset detector CT system according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of training a projection domain repairing neural network according to an embodiment of the present application;
FIG. 5 is a diagram illustrating a projection domain repair neural network provided by an embodiment of the present application;
FIG. 6 is a schematic flow chart of a training image domain repairing neural network according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an image domain repairing neural network provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a CT image reconstruction apparatus based on a neural network according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of another CT image reconstruction apparatus based on a neural network according to an embodiment of the present application;
fig. 10 is a block diagram of a device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
The embodiments of the present application provide a neural network-based CT image reconstruction method, apparatus, device and storage medium, which can be applied to an offset detector CT system to realize CT image reconstruction. The method combines the inherent symmetry of CT scanning with artificial intelligence: the projection data are passed through the projection domain repair neural network to obtain the projection repair data, so that the network repairs the missing part of the projection data, and the first reconstructed image is obtained from the projection data and the projection repair data. The first reconstructed image is then repaired by the image domain repairing neural network, which improves robustness, and the forward projection operation can further repair the missing part of the projection data, yielding the CT reconstructed image. The method therefore requires the projection data of only a single scan and realizes large-FOV scanning that markedly improves image quality, achieving the technical effects of improving image quality and increasing reconstruction speed.
Referring to fig. 1, fig. 1 is a schematic flowchart of a neural network-based CT image reconstruction method according to an embodiment of the present application. The method is applied to an offset detector CT system and includes the following steps:
s100: projection data acquired by an offset detector CT system is acquired.
For example, offset detector CT scanning is one mode of intraoperative CT systems. Intraoperative CT generally carries a flat panel detector, whose size is currently limited by manufacturing process and cost. When the scanned object is large, it exceeds the scanning field of view of the CT system, the projection data are truncated on both sides, and images reconstructed by the FBP reconstruction algorithm exhibit severe artifacts that affect diagnosis. The offset detector CT scan mode is a practical solution.
Referring to fig. 2 and fig. 3, fig. 2 is a schematic diagram of a CT scan provided in an embodiment of the present application, and fig. 3 is a schematic structural diagram of an offset detector CT system provided in an embodiment of the present application.
For example, fig. 2 is a schematic diagram of a CT scan, and a projection image can be obtained by scanning an object under different angles, which is represented by P (channel, row, angle), where channel represents a column of projection data, row represents a row of projection data, and angle represents an angle of projection data.
For example, processing based on a single projection, i.e., simple extrapolation along the channel direction of P, rarely achieves the desired effect. The embodiment of the present application instead makes full use of the inherent characteristics of the CT scan in the sinogram domain, because this data field exhibits stronger data symmetry, as shown in fig. 3.
Illustratively, the offset detector CT system includes an X-ray tube 11, a detector 12 and a detection object 13. The X-ray tube 11 and the detector 12 rotate circularly around the scanned detection object 13 and move synchronously, and the data 20 acquired during scanning are the projection data. According to the angle of the projection data, the projection data are divided into region A, region B and region C; region A and region B are the effective acquisition regions of the detector 12, while region C is the missing part, i.e., the region that needs to be repaired, and region A and region C exhibit data symmetry, namely mirror symmetry.
S200: inputting the projection data into a projection domain repairing neural network, repairing the missing part of the projection data in the direction of the sinogram, and obtaining projection repairing data, wherein the sinogram corresponds to the projection data and has mirror symmetry.
Illustratively, repairing the missing part of the projection data along the sinogram direction means repairing the missing region C of the projection data, and the projection repair data obtained are the image data of region C. The sinogram corresponds to the projection data and has mirror symmetry, i.e., region A and region C exhibit data symmetry, namely mirror symmetry.
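The mirror symmetry between regions A and C can be made concrete with a small NumPy sketch: for an idealized full 360-degree parallel-beam sinogram, the conjugate-ray relation p(s, θ) = p(−s, θ + 180°) lets the truncated detector columns be filled from the opposing views. This is an illustrative idealization only; the offset-detector geometry in the application is cone-beam, and there the repair is learned by the network rather than copied directly.

```python
import numpy as np

def complete_sinogram(sino, n_missing):
    """Fill the n_missing truncated detector channels (region C) of a full
    360-degree parallel-beam sinogram sino[n_channels, n_angles] using the
    conjugate-ray symmetry p(s, theta) = p(-s, theta + 180 deg).
    Assumes region A (the mirror of region C) was fully measured."""
    n_ch, n_ang = sino.shape
    assert n_ang % 2 == 0, "needs a full 360-degree scan"
    out = sino.copy()
    opposite = np.roll(sino, n_ang // 2, axis=1)    # views 180 deg away
    mirrored = opposite[::-1, :]                    # s -> -s
    out[-n_missing:, :] = mirrored[-n_missing:, :]  # repair region C
    return out
```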
Illustratively, Neural Networks (NNs), also called Artificial Neural Networks (ans) or Connection models (Connection models), are an algorithmic mathematical Model that models animal Neural network behavior characteristics for distributed parallel information processing. The network achieves the aim of processing information by adjusting the mutual connection relationship among a large number of nodes in the network depending on the complexity of the system.
S300: a first reconstructed image is obtained from the projection data and the projection repair data.
Exemplarily, the image reconstruction is performed according to the projection data and the projection repair data, and the reconstruction method may adopt a conventional FBP (filtered back projection) type reconstruction method; it should be understood that the reconstruction method is presented here by way of example and not limitation, and that other types of reconstruction methods may be used.
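As an illustration of the FBP-type reconstruction mentioned here, the following is a minimal parallel-beam filtered back-projection in NumPy: a textbook sketch (Ram-Lak ramp filter plus nearest-neighbour back-projection), not the weighted fan/cone-beam FBP a clinical system would use.

```python
import numpy as np

def fbp_reconstruct(sino, thetas):
    """Minimal parallel-beam filtered back-projection.
    sino[n_det, n_angles]; thetas in radians. Returns a square image of
    side n_det. Idealized sketch: clinical FBP adds apodization windows,
    fan/cone-beam weighting, and interpolation refinements."""
    n_det, n_ang = sino.shape
    # Ram-Lak (ramp) filter applied per view in the Fourier domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=0) * ramp[:, None],
                                   axis=0))
    # back-project each filtered view along its ray direction
    mid = (n_det - 1) / 2.0
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n_det, n_det))
    for a, th in enumerate(thetas):
        s = X * np.cos(th) + Y * np.sin(th) + mid  # detector coordinate
        idx = np.clip(np.rint(s), 0, n_det - 1).astype(int)
        image += filtered[idx, a]
    return image * np.pi / len(thetas)
```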
S400: and inputting the first reconstructed image into an image domain repairing neural network to obtain a second reconstructed image.
Illustratively, the robustness of the method may be increased.
S500: and forward projecting the second reconstructed image to obtain a CT reconstructed image.
Illustratively, the forward projection operation can also supplement the repair of region C in the projection data.
In some embodiments, the steps in the neural network-based CT image reconstruction method may be iterated, that is, S100 to S500 are repeated for a set number of iterations N; an empirical value of N = 2 achieves a satisfactory effect.
In some implementation scenarios, the neural network-based CT image reconstruction method combines the inherent symmetry of CT scanning with artificial intelligence. The projection data are passed through the projection domain repair neural network to obtain the projection repair data, so that the network repairs the missing part of the projection data, and the first reconstructed image is obtained from the projection data and the projection repair data. The first reconstructed image is then repaired by the image domain repairing neural network, which improves robustness, and the forward projection operation can further repair the missing part of the projection data, yielding the CT reconstructed image. The method therefore requires the projection data of only a single scan and realizes large-FOV scanning that markedly improves image quality, achieving the technical effects of improving image quality and increasing reconstruction speed.
Referring to fig. 4, fig. 4 is a schematic flowchart of training a projection domain repairing neural network according to an embodiment of the present application.
Exemplarily, S200: inputting the projection data into a projection domain repairing neural network, repairing the missing part of the projection data in the direction of the sinogram, and before the step of obtaining the projection repairing data, the method further comprises the following steps:
s210: projection training data and projection test data are acquired.
Illustratively, a training set is generated through data preprocessing: projection data scanned by m small-size offset detectors and m large-size offset detectors are collected, where the small-size offset detectors collect the projection training data P_data and the large-size offset detectors collect the projection test data P_target. To keep the input and output dimensions consistent, the projection training data P_data are extrapolated (filling, linear interpolation, etc.) so that the dimensions of P_data and P_target remain consistent.
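The dimension-matching extrapolation described here (filling, linear interpolation, etc.) might look like the following NumPy sketch; `pad_to_target` is a hypothetical helper for illustration, not code from the application.

```python
import numpy as np

def pad_to_target(p_data, target_channels, mode="linear"):
    """Extend truncated projection data p_data[n_ch, n_ang] to
    target_channels channels so its dimensions match the full-size target:
    zero fill, edge fill, or linear extrapolation of the last two channels."""
    n_ch, n_ang = p_data.shape
    extra = target_channels - n_ch
    if extra <= 0:
        return p_data.copy()
    if mode == "zero":
        tail = np.zeros((extra, n_ang))
    elif mode == "edge":
        tail = np.repeat(p_data[-1:, :], extra, axis=0)
    else:  # linear extrapolation from the last two channels
        slope = p_data[-1, :] - p_data[-2, :]
        steps = np.arange(1, extra + 1)[:, None]
        tail = p_data[-1, :] + steps * slope
    return np.concatenate([p_data, tail], axis=0)
```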
S220: and constructing a projection domain repairing neural network, wherein the projection domain repairing neural network comprises a generating network and a loss network, and the loss network is used for correcting the generating network.
Referring to fig. 5, fig. 5 is a schematic diagram of a projection domain repairing neural network according to an embodiment of the present application.
Illustratively, constructing the projection domain repairing neural network means designing the network structure and the loss function. The neural network shown in fig. 5 (U-Net + VGG-16) is constructed as the projection domain repair neural network, and the network structure comprises two parts: a generation network (transformation network) and a loss network. The generation network accepts the projection training data P_data as input, and its output is also a projection (the result after projection domain repair). As shown in fig. 5, the left side is the generation network and the right side is the loss network. In the training phase, region A of the projection test data P_target is first selected as the style target image, and the projection test data P_target as the content target image; the purpose of training is to enable the generation network to generate images effectively, and the target is defined by the loss network. In the execution phase, the given projection data are input into the generation network, which outputs the projection-repaired result. The loss network employs a mixed loss function comprising a content loss (L2 loss) and a style loss. The content loss is obtained by subtracting the prediction image from the target image and squaring, which drives the prediction closer to the real image and preserves content information. The covariance (Gram) matrix of the feature maps obtained after an image passes through the convolutional layers represents the texture features of the image well, and the style loss uses this matrix to transfer texture information to the image requiring style transfer. The mixed loss function is:
L_total = α·L_content + β·L_style
The content loss function is:
L_content = (1 / (W_l·H_l·C_l)) · Σ_{i,j,k} ( φ^l_{ijk}(G) − φ^l_{ijk}(C) )²
The style loss function, built on the Gram matrix of the layer-l feature maps, is:
Gram^l_{kk′}(x) = (1 / (W_l·H_l)) · Σ_{i,j} φ^l_{ijk}(x)·φ^l_{ijk′}(x)
L_style = Σ_{k,k′} ( Gram^l_{kk′}(G) − Gram^l_{kk′}(S) )²
where C is the content target image, S is the style target image, G is the generated image, φ^l_{ijk}(x) is the output of image x at the (i, j, k)-th position of the l-th layer of the loss network, with (i, j, k) indexing height, width and channel, and W_l, H_l and C_l are the width, height and channel number of the output of the l-th layer of the loss network.
S230: and inputting projection training data into the generation network, inputting projection test data into the loss network, and training to obtain parameters of the generation network.
In some embodiments, the projection domain repair neural network is trained with the projection training data and the projection test data to obtain the parameters of the generation network. Optionally, the projection training data and the projection test data are input into the projection domain repair neural network, which is trained using an Adam optimizer with an initial learning rate of 0.001; during training, the loss network is initialized from a pre-trained model and does not participate in training, affine transformation and elastic transformation are used for data augmentation, and finally, after training is completed, the parameters of the generation network are obtained.
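For reference, the Adam update rule used for training (with the stated initial learning rate of 0.001) can be written out as follows; a self-contained NumPy sketch of one optimizer step on a toy loss, not the application's training code.

```python
import numpy as np

def adam_step(theta, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; lr=0.001 matches the initial learning rate in the
    text. state carries the running first/second moments and step counter."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])   # bias correction
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)
```

A quick sanity check on f(x) = x²: starting from x = 1 with gradient 2x, the bias-corrected first step moves x by almost exactly the learning rate, and repeated steps drive x toward the minimum at 0.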
Optionally, the projection training data and the projection test data may also be augmented in other manners, which is not limited herein.
Referring to fig. 6, fig. 6 is a schematic flow chart of a training image domain repairing neural network according to an embodiment of the present application.
Exemplarily, S400: inputting the first reconstructed image into an image domain repairing neural network, and before the step of obtaining the second reconstructed image, the method further comprises the following steps:
s410: and reconstructing projection training data and projection test data to obtain a projection training image and a projection test image.
Illustratively, the projection training image and the projection test image are acquired by reconstructing the projection training data P_data and the projection test data P_target, thereby completing the data set preparation; these images serve as the input and output of the image domain repairing neural network.
S420: and constructing an image domain repairing neural network.
Referring to fig. 7, fig. 7 is a schematic diagram of an image domain repairing neural network according to an embodiment of the present application.
Illustratively, the image domain repairing neural network is constructed and designed as follows: the U-Net neural network shown in FIG. 7 serves as the image domain repairing neural network. An image sequence is input, and downsampling encoding yields a set of features smaller than the original image, which is equivalent to compression; decoding then follows, so that in the ideal case the original image can be restored. The input and output of the network are the images reconstructed from the projection training data P_data and the projection test data P_target, respectively. The network adopts a Mean Square Error (MSE) loss, which evaluates the degree of deviation between real values and predicted values; the smaller the MSE value, the more accurately the prediction model describes the experimental data. The mean square error loss function is:
L_MSE = (1/N) · Σ_{n=1}^{N} (p_n − g_n)²

wherein p_n is the value of pixel n in the predicted image, g_n is the value of pixel n in the real image, and N is the total number of pixels in the image.
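The mean square error above is straightforward to express in NumPy. A minimal sketch (the function name is illustrative, not taken from the application):

```python
import numpy as np

def mse_loss(predicted, real):
    """Mean square error over all N pixels: (1/N) * sum_n (p_n - g_n)**2."""
    p = np.asarray(predicted, dtype=float)
    g = np.asarray(real, dtype=float)
    return np.mean((p - g) ** 2)

pred = np.array([[1.0, 2.0], [3.0, 4.0]])
real = np.array([[1.0, 2.0], [3.0, 2.0]])
print(mse_loss(pred, real))  # → 1.0
```

Only one pixel differs (by 2), so the squared differences are (0, 0, 0, 4) and their mean over N = 4 pixels is 1.0.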
S430: and inputting the projection training image into an image domain repairing neural network to obtain a projection prediction image.
S440: and processing the projection prediction image and the projection test image according to the mean square error loss function to obtain parameters of the image domain repairing neural network.
Illustratively, during the training of the image domain repairing neural network, the projection training image and the projection test image are input into the image domain repairing neural network, which is trained using the Adam optimizer with an initial learning rate of 0.001. Affine transformation and elastic transformation are used for data augmentation during training. After training is completed, the parameters of the image domain repairing neural network are obtained.
Illustratively, after step S210 (acquiring the projection training data and the projection test data), the method further includes the following step:
the projection training data and the projection test data are processed according to the affine transformation and the elastic transformation.
Illustratively, data augmentation is performed on the projection training data and the projection test data, which effectively increases the amount of data available for neural network training and improves the training quality and accuracy of the neural network.
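One concrete way to apply such an affine augmentation is inverse mapping with nearest-neighbour sampling: for each output pixel, look up where it came from under the inverse transform. This is a minimal NumPy sketch with hypothetical parameter choices; the application does not fix a particular implementation, and the elastic transformation it also uses is not shown here.

```python
import numpy as np

def affine_augment(image, matrix, offset=(0.0, 0.0), fill=0.0):
    """Apply a 2x2 affine matrix plus a (row, col) offset to a 2-D array by
    inverse mapping with nearest-neighbour sampling; pixels that map outside
    the source image are set to `fill`."""
    h, w = image.shape
    inv = np.linalg.inv(np.asarray(matrix, dtype=float))
    rows, cols = np.indices((h, w))
    coords = np.stack([rows.ravel(), cols.ravel()]).astype(float)
    # Source coordinate of each output pixel: x = M^-1 (y - t).
    src = inv @ (coords - np.asarray(offset, dtype=float)[:, None])
    src = np.rint(src).astype(int)
    valid = (src[0] >= 0) & (src[0] < h) & (src[1] >= 0) & (src[1] < w)
    out = np.full(h * w, fill, dtype=image.dtype)
    out[valid] = image[src[0][valid], src[1][valid]]
    return out.reshape(h, w)

img = np.arange(16.0).reshape(4, 4)
# The identity matrix leaves the image unchanged; an offset translates it.
assert np.array_equal(affine_augment(img, np.eye(2)), img)
shifted = affine_augment(img, np.eye(2), offset=(1.0, 0.0))
```

In practice a library routine such as `scipy.ndimage.affine_transform` with randomly sampled rotation, scale and shear parameters would typically be used instead of hand-rolled resampling.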
Referring to fig. 8, fig. 8 is a schematic structural diagram of a neural network-based CT image reconstruction apparatus according to an embodiment of the present application, where the neural network-based CT image reconstruction apparatus includes:
an obtaining module 100, configured to obtain projection data acquired by a biased detector CT system;
the projection domain restoration module 200 is configured to input projection data into a projection domain restoration neural network, and restore a missing part of the projection data in a direction of a sinogram to obtain projection restoration data, where the sinogram corresponds to the projection data and has mirror symmetry;
a first reconstruction module 300 configured to obtain a first reconstructed image according to the projection data and the projection repair data;
an image domain restoration module 400, configured to input the first reconstructed image into an image domain restoration neural network to obtain a second reconstructed image;
and a forward projection module 500, configured to forward project the second reconstructed image to obtain a CT reconstructed image.
Please refer to fig. 9, fig. 9 is a schematic structural diagram of another CT image reconstruction apparatus based on a neural network according to an embodiment of the present application.
Illustratively, the neural network-based CT image reconstruction apparatus further includes:
a collecting module 210, configured to collect projection training data scanned by a first offset detector and projection test data scanned by a second offset detector, where a size of the first offset detector is smaller than that of the second offset detector;
the projection domain building module 220 is configured to build a projection domain repairing neural network, where the projection domain repairing neural network includes a generating network and a loss network, and the loss network is used to modify the generating network;
and the training module 230 is configured to input the projection training data to the generation network, input the projection test data to the loss network, and train to obtain parameters of the generation network.
Illustratively, the neural network-based CT image reconstruction apparatus further includes:
a second reconstruction module 410, configured to reconstruct the projection training data and the projection test data to obtain a projection training image and a projection test image;
an image domain construction module 420, configured to construct an image domain repairing neural network;
the prediction module 430 is configured to input the projection training image to the image domain repairing neural network to obtain a projection prediction image;
and the processing module 440 is configured to process the projection prediction image and the projection test image according to a mean square error loss function to obtain parameters of the image domain repairing neural network.
Illustratively, the neural network-based CT image reconstruction apparatus further includes:
and the data amplification module 211 is configured to process the projection training data and the projection testing data according to affine transformation and elastic transformation.
It should be understood that the embodiments of the neural network based CT image reconstruction apparatus shown in fig. 8 and 9 correspond to the embodiments of the neural network based CT image reconstruction method shown in fig. 1 to 7, and are not repeated herein to avoid repetition.
Fig. 10 shows a block diagram of an electronic device according to an embodiment of the present application. The electronic device may include a processor 510, a communication interface 520, a memory 530, and at least one communication bus 540, wherein the communication bus 540 is used for realizing direct connection communication among these components. In this embodiment, the communication interface 520 of the electronic device is used for performing signaling or data communication with other node devices. The processor 510 may be an integrated circuit chip having signal processing capabilities.
The processor 510 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor 510 may be any conventional processor or the like.
The memory 530 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), and the like. The memory 530 stores computer readable instructions which, when executed by the processor 510, enable the electronic device to perform the steps involved in the method embodiments of FIGS. 1-7.
Optionally, the electronic device may further include a memory controller, an input output unit.
The memory 530, the memory controller, the processor 510, the peripheral interface, and the input/output unit are electrically connected to each other directly or indirectly, so as to implement data transmission or interaction. For example, these elements may be electrically coupled to each other via one or more communication buses 540. The processor 510 is used to execute executable modules stored in the memory 530, such as software functional modules or computer programs included in the electronic device.
The input/output unit is used to allow a user to create a task and to set an optional time period or a preset execution time for the created task, thereby realizing interaction between the user and the server. The input/output unit may be, but is not limited to, a mouse, a keyboard, and the like.
It will be appreciated that the configuration shown in fig. 10 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 10 or have a different configuration than shown in fig. 10. The components shown in fig. 10 may be implemented in hardware, software, or a combination thereof.
The embodiment of the present application further provides a storage medium storing instructions which, when run on a computer and executed by a processor, implement the method in the method embodiments; to avoid repetition, details are not repeated here.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the method of the method embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an example of the present application and is not intended to limit its scope; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall fall within its protection scope. It should be noted that like reference numbers and letters refer to like items in the figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A CT image reconstruction method based on a neural network is applied to a bias detector CT system and is characterized by comprising the following steps:
acquiring projection data acquired by the offset detector CT system;
inputting the projection data into a projection domain repair neural network, and repairing the missing part of the projection data in the direction of a sinogram to obtain projection repair data, wherein the sinogram corresponds to the projection data and has mirror symmetry;
obtaining a first reconstructed image according to the projection data and the projection repair data;
inputting the first reconstructed image into an image domain repairing neural network to obtain a second reconstructed image;
and forward projecting the second reconstructed image to obtain a CT reconstructed image.
2. The neural network-based CT image reconstruction method according to claim 1, wherein before the step of inputting the projection data into a projection domain repairing neural network, repairing the missing part of the projection data in the direction of the sinogram, and obtaining the projection repairing data, further comprising:
acquiring projection training data and projection test data;
constructing the projection domain repairing neural network, wherein the projection domain repairing neural network comprises a generating network and a loss network, and the loss network is used for correcting the generating network;
and inputting the projection training data into the generation network, inputting the projection test data into the loss network, and training to obtain parameters of the generation network.
3. The neural network-based CT image reconstruction method according to claim 2, wherein the loss network uses a mixture loss function comprising a content loss function and a style loss function, the mixture loss function being:

L_total = α·L_content + β·L_style

the content loss function being:

L_content = (1/2) · Σ_{i,j,k} (φ^l_{i,j,k}(G) − φ^l_{i,j,k}(C))²

and the style loss function being:

L_style = Σ_l w_l · E_l

E_l = (1 / (4 · W_l² · H_l² · C_l²)) · Σ_{k,k'} (A^l_{k,k'}(G) − A^l_{k,k'}(S))²

A^l_{k,k'}(X) = Σ_{i,j} φ^l_{i,j,k}(X) · φ^l_{i,j,k'}(X)

wherein C is the content image, S is the style image, G is the target image, φ^l_{i,j,k}(X) is the output of image X at the (i, j, k)-th position of the l-th layer in the loss network, where (i, j, k) corresponds to height, width and channel, and W_l, H_l and C_l are the width, height and channel number of the output of the l-th layer in the loss network.
4. The neural network-based CT image reconstruction method according to claim 2, wherein before the step of inputting the first reconstructed image into the image domain repairing neural network to obtain the second reconstructed image, the method further comprises:
reconstructing the projection training data and the projection test data to obtain a projection training image and a projection test image;
constructing the image domain repairing neural network;
inputting the projection training image into the image domain repairing neural network to obtain a projection prediction image;
and processing the projection prediction image and the projection test image according to a mean square error loss function to obtain parameters of an image domain repairing neural network.
5. The neural network-based CT image reconstruction method of claim 4, wherein the mean square error loss function is:

L_MSE = (1/N) · Σ_{n=1}^{N} (p_n − g_n)²

wherein p_n is the value of pixel n in the predicted image, g_n is the value of pixel n in the real image, and N is the total number of pixels in the image.
6. The neural network-based CT image reconstruction method of claim 2, further comprising, after the step of acquiring projection training data and projection test data:
processing the projection training data and the projection test data according to an affine transformation and an elastic transformation.
7. A neural network-based CT image reconstruction apparatus, comprising:
the acquisition module is used for acquiring projection data acquired by the offset detector CT system;
the projection domain repairing module is used for inputting the projection data into a projection domain repairing neural network, repairing the missing part of the projection data in the direction of a sinogram, and obtaining projection repairing data, wherein the sinogram corresponds to the projection data and has mirror symmetry;
the first reconstruction module is used for obtaining a first reconstructed image according to the projection data and the projection repairing data;
the image domain repairing module is used for inputting the first reconstructed image into an image domain repairing neural network to obtain a second reconstructed image;
and the forward projection module is used for forward projecting the second reconstructed image to obtain a CT reconstructed image.
8. The neural network-based CT image reconstruction apparatus according to claim 7, further comprising:
the device comprises a collecting module, a data processing module and a data processing module, wherein the collecting module is used for collecting projection training data scanned by a first offset detector and projection test data scanned by a second offset detector, and the size of the first offset detector is smaller than that of the second offset detector;
the projection domain building module is used for building the projection domain repairing neural network, the projection domain repairing neural network comprises a generating network and a loss network, and the loss network is used for correcting the generating network;
and the training module is used for inputting the projection training data into the generation network, inputting the projection testing data into the loss network, and training to obtain parameters of the generation network.
9. An apparatus, comprising: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the neural network based CT image reconstruction method according to any one of claims 1 to 6 when executing the computer program.
10. A storage medium having stored thereon instructions which, when run on a computer, cause the computer to perform a neural network-based CT image reconstruction method as claimed in any one of claims 1 to 6.
CN202110329642.6A 2021-03-26 2021-03-26 Neural network-based CT image reconstruction method, device, equipment and storage medium Pending CN112907691A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110329642.6A CN112907691A (en) 2021-03-26 2021-03-26 Neural network-based CT image reconstruction method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112907691A true CN112907691A (en) 2021-06-04

Family

ID=76109305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110329642.6A Pending CN112907691A (en) 2021-03-26 2021-03-26 Neural network-based CT image reconstruction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112907691A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190206095A1 (en) * 2017-12-29 2019-07-04 Tsinghua University Image processing method, image processing device and storage medium
CN110047113A (en) * 2017-12-29 2019-07-23 清华大学 Neural network training method and equipment, image processing method and equipment and storage medium
CN110599530A (en) * 2019-09-03 2019-12-20 西安电子科技大学 MVCT image texture enhancement method based on double regular constraints
CN110728729A (en) * 2019-09-29 2020-01-24 天津大学 Unsupervised CT projection domain data recovery method based on attention mechanism

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JUSTIN JOHNSON ET AL: "Perceptual Losses for Real-Time Style Transfer and Super-Resolution", 《COMPUTER VISION -ECCV 2016》 *
LEON A. GATYS: "Image Style Transfer Using Convolutional Neural Networks", 《COMPUTER VISION -ECCV 2016》 *
XU QIANG: "Research on a Post-Processing Algorithm for Three-Dimensional Sparse-Angle CT Images Based on Multi-Scale U-Net", 《CHINA MASTER'S THESES FULL-TEXT DATABASE》 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210604