WO2021189383A1 - Training and generation methods, device, and storage medium for a high-energy CT image generation model - Google Patents

Training and generation methods, device, and storage medium for a high-energy CT image generation model

Info

Publication number
WO2021189383A1
WO2021189383A1 PCT/CN2020/081504 CN2020081504W WO2021189383A1 WO 2021189383 A1 WO2021189383 A1 WO 2021189383A1 CN 2020081504 W CN2020081504 W CN 2020081504W WO 2021189383 A1 WO2021189383 A1 WO 2021189383A1
Authority
WO
WIPO (PCT)
Prior art keywords
energy
image
low
training
loss
Application number
PCT/CN2020/081504
Other languages
English (en)
French (fr)
Inventor
胡战利
梁栋
陈其航
杨永峰
刘新
郑海荣
Original Assignee
深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology)
Application filed by 深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology)
Priority to PCT/CN2020/081504
Publication of WO2021189383A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

This application discloses a training method, a generation method, a device, and a storage medium for a high-energy CT image generation model. The training method includes: inputting a low-energy CT image for training into a recurrent network to train the transformation of a first low-energy CT image into a first high-energy CT image and then from the first high-energy CT image into a second low-energy CT image, obtaining a loss function; and using the loss function to construct a first high-energy CT image generation model. The generation method includes inputting a low-energy CT image to be processed into the above first high-energy CT image generation model. Through the above methods, this application can obtain high-quality high-energy CT images.

Description

Training and generation methods, device, and storage medium for a high-energy CT image generation model

[Technical Field]

This application relates to the technical field of computed tomography, and in particular to a training method, a generation method, a device, and a storage medium for a high-energy CT image generation model.

[Background Art]

Computed tomography (CT) can provide the main anatomical and pathological information of the human body and has greatly improved the level of medical diagnosis; it is now widely used in clinical medicine. Using a high dose of X-rays in a CT scan yields clear CT images, but X-rays damage the patient's body and may even induce cancer, and the higher the X-ray dose, the greater the risk of injury to the patient, so low-dose CT is commonly used for clinical examinations. However, when there is a metal implant in the imaged region, the high attenuation of X-rays by metal introduces severe measurement errors into the data received by the detector, and the reconstructed CT image contains obvious metal artifacts. Metal artifacts degrade CT image quality and make tissue structures difficult to judge, which may lead to misdiagnosis.

[Summary of the Invention]

The main technical problem addressed by this application is to provide a training method for a high-energy CT image generation model, a method for generating high-energy CT images, a computer device, and a storage medium, with the aim of generating high-energy CT images from low-energy CT images, thereby improving CT image quality without increasing the radiation dose received by the patient.

To solve the above technical problem, the first technical solution adopted by this application is to provide a training method for a high-energy CT image generation model, including: inputting a low-energy CT image for training into a recurrent network to train the transformation of a first low-energy CT image into a first high-energy CT image and then from the first high-energy CT image into a second low-energy CT image, obtaining a loss function; and using the loss function to construct a first high-energy CT image generation model.

To solve the above technical problem, the second technical solution adopted by this application is to provide a method for generating high-energy CT images, including: inputting a low-energy CT image to be processed into a first high-energy CT image generation model, where the first high-energy CT image generation model is a model trained by the above training method for a high-energy CT image generation model.

To solve the above technical problem, the third technical solution adopted by this application is to provide a computer device including a memory and a processor, where the memory is used to store a computer program, and the processor is used to execute the computer program and, when executing it, to implement the above training method for a high-energy CT image generation model or the above method for generating high-energy CT images.

To solve the above technical problem, the fourth technical solution adopted by this application is to provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the above training method for a high-energy CT image generation model or the above method for generating high-energy CT images.

The beneficial effects of this application are as follows. Compared with the prior art, this application constructs a recurrent network and trains the transformation of a first low-energy CT image into a first high-energy CT image and then into a second low-energy CT image, obtaining a loss function, and uses the loss function to construct a first high-energy CT image generation model. Because the recurrent network continuously reduces the value of the loss function during training until it lies within a preset error range, the high-energy CT image generation network and the low-energy CT image generation network of the recurrent network are continuously optimized during training; at the same time, because of the cyclic structure of the network, the two networks optimize each other, improving the performance of both. The resulting high-energy CT image generation model can therefore extract the deep feature information of an image more accurately while preserving the details of the tissue structure around metal objects, generating high-quality high-energy CT images.
[Brief Description of the Drawings]

FIG. 1 is a schematic flowchart of a first embodiment of the training method for a high-energy CT image generation model of this application;
FIG. 2 is a schematic structural diagram of the high-energy CT image discriminator network of this application;
FIG. 3 is a schematic flowchart of a second embodiment of the training method for a high-energy CT image generation model of this application;
FIG. 4 is a schematic flowchart of a method for generating high-energy CT images according to this application;
FIG. 5 is a schematic flowchart of a specific application scenario of the method for generating high-energy CT images of this application;
FIG. 6 is a schematic structural diagram of the feature extraction layer in FIG. 5;
FIG. 7 is a schematic structural diagram of the first attention mechanism and the second attention mechanism in FIG. 5 and FIG. 6;
FIG. 8 is a low-energy CT image to be processed obtained by dual-energy CT scanning;
FIG. 9 is the corresponding high-energy CT image obtained by dual-energy CT scanning;
FIG. 10 is a high-energy CT image generated by the first high-energy CT image generation model of this application;
FIG. 11 is a schematic structural block diagram of a first embodiment of a computer device according to this application;
FIG. 12 is a schematic structural block diagram of a second embodiment of a computer device according to this application;
FIG. 13 is a schematic structural block diagram of a computer-readable storage medium of this application.

[Detailed Description]

The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of this application.

Through long-term research, the inventors of this application found the following. Computed tomography (CT) is widely used in clinical medicine; CT imaging uses X-rays to obtain the main anatomical and pathological information of the human body. High-energy CT yields clear CT images, but its high radiation dose causes serious radiation damage to the human body, so low-energy CT tends to be used clinically to reduce radiation. However, when metal implants are present in human tissue, the images reconstructed from low-energy CT scans contain metal artifacts, degrading CT image quality, making tissue structures difficult to judge, and possibly leading to misdiagnosis. CT image quality and radiation dose are thus in conflict; to resolve this conflict, this application proposes at least the following embodiments.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of the first embodiment of the training method for a high-energy CT image generation model of this application. As shown in FIG. 1, the first embodiment includes:

Step S10: Input the low-energy CT image for training into the recurrent network to train the transformation of the first low-energy CT image into the first high-energy CT image and then from the first high-energy CT image into the second low-energy CT image, obtaining the loss function.

In this embodiment, the recurrent network may include a pair of generator networks. One generator network is a high-energy CT image generator network, used to transform an input low-energy CT image into a high-energy CT image. The other is a low-energy CT image generator network, used to transform an input high-energy CT image into a low-energy CT image. Together, the high-energy and low-energy CT image generator networks form a cycle that transforms the input low-energy CT image into a high-energy CT image and then into a new low-energy CT image.

Further, the recurrent network may also include a pair of discriminator networks. One is a high-energy CT image discriminator network, corresponding to the high-energy CT image generator network; it discriminates the outputs of the high-energy CT image generator network and is trained adversarially against it, so that the high-energy CT images generated by the generator network come closer to real images. The other is a low-energy CT image discriminator network, corresponding to the low-energy CT image generator network; it discriminates the outputs of the low-energy CT image generator network and is trained adversarially against it, so that the generated low-energy CT images come closer to real images.

Referring to FIG. 2, FIG. 2 is a schematic structural diagram of the high-energy CT image discriminator network of this application. As shown in FIG. 2, the discriminator network includes multiple convolutional layers, for example five. Through these layers it extracts feature information from its input image in order to discriminate it, measures the gap between the high-energy CT images generated by the high-energy CT image generator network and real high-energy CT images, and feeds this gap back to the generator network of the recurrent network to optimize the parameters of each of its layers, so that high-quality high-energy CT images are ultimately generated.
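The patent discloses no source code; purely as an illustration, a minimal sketch of such a five-convolutional-layer discriminator is given below in PyTorch (the framework, channel widths, strides, and LeakyReLU activations are all assumptions, not disclosed details).

```python
import torch
import torch.nn as nn

class HighEnergyDiscriminator(nn.Module):
    """Five-layer convolutional discriminator D_h (hypothetical sketch).

    Maps a single-channel CT image to a score map; values near 1 indicate
    "real high-energy CT", values near 0 indicate "generated".
    """
    def __init__(self, in_channels: int = 1, width: int = 64):
        super().__init__()
        layers = []
        channels = [in_channels, width, width * 2, width * 4, width * 8]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        # Fifth convolutional layer produces the one-channel score map.
        layers.append(nn.Conv2d(channels[-1], 1, kernel_size=4, stride=1, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```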
The low-energy CT image for training may be a low-energy CT image from a dual-energy CT image group. A dual-energy CT image group may be a pair of matched low-energy and high-energy CT images obtained by dual-energy CT scanning.

In this embodiment, a generator network and its corresponding discriminator network yield an adversarial loss function during adversarial training; the adversarial loss function includes a generator loss and a discriminator loss. Further, each pass through the cycle regenerates a new input image. For example, when the low-energy CT image for training is input into the recurrent network, one cycle generates a first high-energy CT image, from which a second low-energy CT image, i.e. a new low-energy CT image, is generated. The loss function may therefore also include a reconstruction loss and a cycle loss, where the reconstruction loss is the image loss between the first high-energy CT image and the real high-energy CT image, and the cycle loss is the image loss between the second low-energy CT image and the low-energy CT image for training.

In this embodiment, the processing flow of the low-energy CT image for training is as follows. The low-energy CT image for training is first input into the recurrent network; the high-energy CT image generator network processes this input, i.e. the first low-energy CT image, and outputs the first high-energy CT image. The low-energy CT image generator network then processes the first high-energy CT image and outputs the second low-energy CT image, completing one cycle. During this process the generator networks and their corresponding discriminator networks produce the adversarial losses, and the recurrent network produces the reconstruction loss and the cycle loss after completing one cycle, yielding the loss function.
The adversarial loss includes the generator network loss and the discriminator network loss. Taking the high-energy CT image generator network as an example, its loss can be written in least-squares form (consistent with the real and generated labels of 1 and 0) as:

$$\mathcal{L}_{G_{l \to h}} = \mathbb{E}_{x \sim P_{data}(x)}\left[\left(D_h(G_{l \to h}(x)) - 1\right)^2\right]$$

where $G_{l \to h}$ is the high-energy CT image generator, $D_h$ is the high-energy CT image discriminator, $x$ is an image from the low-energy CT image data set $P_{data}(x)$, $y$ is an image from the high-energy CT image data set $P_{data}(y)$, and 1 is the label of a real image.
The loss of the high-energy CT image discriminator network is:

$$\mathcal{L}_{D_h} = \mathbb{E}_{y \sim P_{data}(y)}\left[\left(D_h(y) - 1\right)^2\right] + \mathbb{E}_{x \sim P_{data}(x)}\left[\left(D_h(G_{l \to h}(x)) - 0\right)^2\right]$$

where 0 is the label of a generated image.

The high-energy CT image generator network and the high-energy CT image discriminator network of the recurrent network compete against each other, and the performance of both improves in this mutual competition, so that the first high-energy CT image generated by the generator network comes ever closer to a real high-energy CT image.

The losses of the low-energy CT image generator network and the low-energy CT image discriminator network of the recurrent network have the corresponding form and are not repeated here.

The reconstruction loss may be written, for example with an L1 norm, as:

$$\mathcal{L}_{rec} = \mathbb{E}\left[\left\lVert G_{l \to h}(x) - y \right\rVert_1\right]$$

The cycle loss is:

$$\mathcal{L}_{cyc} = \mathbb{E}\left[\left\lVert x_{cyc} - x \right\rVert_1\right], \qquad x_{cyc} = G_{h \to l}(G_{l \to h}(x))$$

where $x_{cyc}$ is the cyclically generated low-energy CT image.
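Under the same assumptions (PyTorch, the least-squares adversarial form above, and L1 reconstruction and cycle terms), the losses of one low-to-high-to-low cycle could be sketched as follows; G_lh and G_hl denote the two generators, D_h the high-energy discriminator, and the loss weights are illustrative.

```python
import torch
import torch.nn.functional as F

def cycle_losses(G_lh, G_hl, D_h, x_low, y_high, lambda_rec=10.0, lambda_cyc=10.0):
    """One low->high->low pass: adversarial, reconstruction, and cycle losses.

    x_low / y_high are a paired low-/high-energy dual-energy CT image group.
    Least-squares labels: 1 = real, 0 = generated (an assumed GAN formulation).
    """
    y_fake = G_lh(x_low)            # first high-energy CT image
    x_cyc = G_hl(y_fake)            # second (cyclically generated) low-energy image

    # Generator adversarial loss: push D_h(y_fake) toward the "real" label 1.
    d_fake = D_h(y_fake)
    loss_gan = F.mse_loss(d_fake, torch.ones_like(d_fake))

    # Reconstruction loss: generated high-energy image vs. the paired real one.
    loss_rec = F.l1_loss(y_fake, y_high)

    # Cycle loss: the regenerated low-energy image vs. the original input.
    loss_cyc = F.l1_loss(x_cyc, x_low)

    return loss_gan + lambda_rec * loss_rec + lambda_cyc * loss_cyc
```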
Further, step S10 includes:

Step S101: Down-sample the low-energy CT image for training to obtain a first feature map.

The low-energy CT image for training is input into the high-energy CT image generator network of the recurrent network. The generator network includes a down-sampling layer, in which the low-energy CT image for training is down-sampled to obtain the first feature map.

The down-sampling layer includes multiple convolutional layers. When the low-energy CT image for training passes through the down-sampling layer, the first convolutional layer performs shallow feature extraction, extracting the low-level, simple features of the image and producing a shallow feature map. The remaining convolutional layers of the down-sampling layer each convolve the feature map produced by the previous layer, further shrinking its size and further extracting image feature information. Finally, the last convolutional layer of the down-sampling layer outputs the first feature map.
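As a hedged illustration of such a down-sampling stack (the 7x7 shallow-feature layer followed by two stride-2 3x3 layers follows the application scenario described later; the channel widths are assumptions):

```python
import torch.nn as nn

class DownSampling(nn.Module):
    """7x7 shallow-feature convolution followed by two 3x3 stride-2
    convolutions, each halving the spatial size (hypothetical widths)."""
    def __init__(self, in_channels: int = 1, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, width, kernel_size=7, padding=3),           # shallow features
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width * 2, kernel_size=3, stride=2, padding=1),   # 1/2 size
            nn.ReLU(inplace=True),
            nn.Conv2d(width * 2, width * 4, kernel_size=3, stride=2, padding=1),  # 1/4 size
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)  # first feature map
```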
In some embodiments, the recurrent network also includes two input layers, a low-energy CT image input layer and a high-energy CT image input layer. The low-energy CT image for training is input into the high-energy CT image generator network of the recurrent network, for example by receiving it through the low-energy CT image input layer. The input layer may also standardize the input image before passing it to the down-sampling layer, to improve the learning efficiency and performance of the recurrent network.
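The standardization might, for example, rescale each raw CT image to a fixed intensity range before the down-sampling layer; a minimal sketch, since the patent does not specify the normalization actually used:

```python
import torch

def standardize(ct: torch.Tensor) -> torch.Tensor:
    """Rescale a CT image to [-1, 1] per sample (an assumed normalization)."""
    lo = ct.amin(dim=(-2, -1), keepdim=True)
    hi = ct.amax(dim=(-2, -1), keepdim=True)
    return (ct - lo) / (hi - lo + 1e-8) * 2.0 - 1.0
```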
Step S102: Perform feature extraction on the first feature map multiple times to obtain a second feature map.

In this embodiment, the high-energy CT image generator network of the recurrent network includes multiple feature extraction layers, each containing multiple convolutional layers, which perform deep feature extraction on the first feature map, accumulate more feature information on top of it, and map its image data to other distribution spaces, isolating the data distribution of the metal artifacts and removing it, yielding a second feature map from which metal artifacts are removed relatively thoroughly.

In this embodiment, using multiple feature extraction layers enables multi-level extraction of image feature information, forming a richer description of the image features and yielding a second feature map with a better metal artifact removal effect.

Further, step S102 includes:

Step S1021: Input the first feature map into multiple residual networks and introduce an attention mechanism into the residual networks.

In this embodiment, a feature extraction layer includes multiple residual networks; that is, a shortcut is added across every two convolutional layers of the feature extraction layer to form a residual block, and multiple residual blocks are connected to form a residual network. A residual network can highlight small changes. The process of generating the second feature map may lose image information; for example, for the input low-energy CT image, feature extraction, and in particular the processing of metal artifacts, may lose some tissue information of the scanned subject in the region near the metal object, mainly bone information. Through the residual network, the original features of the image can be reintroduced to prevent this loss, for example the loss of some tissue information near the metal object. Introducing an attention mechanism into the multiple residual networks may mean introducing one into each residual network, so that multiple residual networks carry multiple attention mechanisms. The attention mechanism helps the recurrent network assign different weights to each part of its input and pick out the more critical and important information in the image, so that the network can judge the importance of the features of different image parts more accurately; moreover, introducing the attention mechanism imposes no extra storage or computation burden on the recurrent network.

In this embodiment, each residual network introduces two attention mechanisms, a first attention mechanism and a second attention mechanism. The first attention mechanism includes one average pooling layer and two convolutional layers; the second includes two convolutional layers. The purpose of the attention mechanisms is to strengthen the expressive power of the recurrent network through the interdependence between the channels of the convolutional layers: a weight is assigned to the channels of each convolutional layer of the residual network and applied to the corresponding image features, highlighting the parts of the feature map that deserve more attention and allocating more processing resources to them, while also highlighting the more critical image parts so that subsequent image generation focuses on the key parts of the feature map.
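A sketch of one such residual block with its two attention mechanisms follows; the sigmoid gating and the channel-reduction ratio are assumptions added to make the sketch concrete.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Attention mechanism: (optional) average pooling plus two convolutional
    layers that produce channel weights (sigmoid gating is an assumption)."""
    def __init__(self, channels: int, reduction: int = 4, pool: bool = True):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1) if pool else nn.Identity()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(self.pool(x))   # per-channel weights
        return x * w                # weight the corresponding image features

class AttentionResidualBlock(nn.Module):
    """Two 3x3 convolutions with a shortcut, followed by the first attention
    mechanism (avg pool + 2 convs) and the second (2 convs)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn1 = ChannelAttention(channels, pool=True)
        self.attn2 = ChannelAttention(channels, pool=False)

    def forward(self, x):
        y = x + self.body(x)        # shortcut reintroduces the original features
        return self.attn2(self.attn1(y))
```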
Step S1022: Use a convolutional layer to concatenate the output maps after the multiple feature extractions.

The outputs of the multiple feature extraction layers are skip-connected to a convolutional layer, which concatenates them; this avoids losing image feature information while also reducing the dimensionality of the image data and simplifying the computational complexity of the network.

Specifically, in one application example, a convolutional layer with a 3x3 kernel concatenates the outputs of the multiple feature extraction layers; that is, the outputs of the multiple feature extraction layers are skip-connected to one 3x3 convolutional layer, which fuses them and performs dimensionality reduction to simplify network complexity.

Step S1023: Apply an attention mechanism to the concatenated new output map to obtain the second feature map.

Here the attention mechanism includes the first attention mechanism and the second attention mechanism: the first includes one average pooling layer and two convolutional layers, and the second includes two convolutional layers. The concatenated new output map passes first through the first attention mechanism and then through the second; the purpose is to highlight the image feature information of greater interest in the new output map, yielding the second feature map.

Step S103: Combine the first feature map with the second feature map and input the result into the up-sampling layer for up-sampling, obtaining the first high-energy CT image.

In this embodiment, combining the first and second feature maps before up-sampling reduces the information lost during feature extraction and prevents missing information during deep feature extraction from causing a large deviation between the generated image and the target image.

Further, the feature map input to the up-sampling layer is up-sampled there. In this embodiment, the feature map is up-sampled by bilinear interpolation; in other embodiments, one of deconvolution and unpooling, or any combination thereof, may also be used to obtain the first high-energy CT image.

Further, step S103 includes:

Step S1031: Concatenate feature maps of the same size from the up-sampling layer and the down-sampling layer.

In this embodiment, the up-sampling layer includes multiple convolutional layers, all with 3x3 kernels, to extract richer image feature information. The feature map input to the up-sampling layer passes through these convolutional layers to produce feature maps of several sizes, and feature maps of the same scale from the up-sampling layer and the down-sampling layer are concatenated. In this way, the shallow features of the feature maps obtained by the down-sampling layer can be fused with the high-level features extracted in the up-sampling layer, preserving detail such as the tissue information of the image; at the same time, the concatenated fusion of the up-sampling and down-sampling layers is more conducive to training the network.

Step S1032: Up-sample the concatenated new feature map.

In this embodiment, after feature maps of the same size from the up-sampling and down-sampling layers are concatenated, the concatenated new feature map is up-sampled again.

Specifically, in one application example, bilinear interpolation first doubles the size of the feature map input to the up-sampling layer, and the doubled feature map is concatenated with the feature map of the same size from the down-sampling layer; bilinear interpolation then doubles the size of the concatenated new feature map again, and the result is again concatenated with the feature map of the same size from the down-sampling layer. Arranged this way, the recurrent network contains the concatenated fusion of multiple up-sampling and down-sampling layers, so that the shallow and high-level features of the feature maps can be fused more fully, avoiding the loss of important feature map information.
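One such up-sampling step might be sketched as follows, assuming bilinear interpolation and channel-wise concatenation with the same-size down-sampling feature map (channel counts illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpSamplingStep(nn.Module):
    """Bilinear x2 enlargement, a 3x3 convolution, then concatenation with the
    same-size feature map from the down-sampling layer and a fusing 3x3 conv."""
    def __init__(self, in_channels: int, out_channels: int, skip_channels: int):
        super().__init__()
        self.conv_up = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.conv_fuse = nn.Conv2d(out_channels + skip_channels, out_channels, 3, padding=1)

    def forward(self, x, skip):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        x = F.relu(self.conv_up(x))          # convolution after the x2 enlargement
        x = torch.cat([x, skip], dim=1)      # concatenate the down-sampling map
        return F.relu(self.conv_fuse(x))     # convolution fusing the two
```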
Step S1033: Perform feature fusion on the output of the up-sampling layer to obtain the first high-energy CT image.

The high-energy CT image generator network of the recurrent network contains a fusion output layer, which performs feature fusion on the output of the up-sampling layer and outputs the first high-energy CT image fusing the image information of all feature maps.

Specifically, in one application example, the fusion output layer is a single convolutional layer that performs feature fusion on the feature map output by the up-sampling layer to obtain the first high-energy CT image.

Further, step S10 also includes:

Step S104: The network structure that transforms the first high-energy CT image into the second low-energy CT image is the same as the network structure that transforms the first low-energy CT image into the first high-energy CT image.

The recurrent network includes a high-energy CT image generator network and a low-energy CT image generator network. The two have the same structure but opposite functions: the high-energy CT image generator network transforms input low-energy CT images into high-energy CT images, while the low-energy CT image generator network transforms input high-energy CT images into low-energy CT images.

In this embodiment, the network structure transforming the first high-energy CT image into the second low-energy CT image is the same as that transforming the first low-energy CT image into the first high-energy CT image. The two generator networks of the recurrent network can therefore be trained simultaneously to optimize the network parameters and structure, building a high-energy CT image generation model with a better image generation effect.

Step S11: Use the loss function to construct the first high-energy CT image generation model.

In this embodiment, while the value of the loss function is outside the preset error range, the recurrent network continues iterative training; once the value of the loss function is within the preset error range, training stops and the first high-energy CT image generation model is obtained. In addition, after the recurrent network is trained, a first high-energy CT image discrimination model, a low-energy CT image generation model, and a low-energy CT image discrimination model are also obtained.

The purpose of the iterative training of the recurrent network is to reduce the value of the loss function, i.e. to reduce the adversarial, reconstruction, and cycle losses, so that the images generated by the two generator networks come closer to real images.

In some embodiments, the value of the loss function includes a high-energy CT image generation loss, a high-energy CT image discrimination loss, a low-energy CT image generation loss, a low-energy CT image discrimination loss, a high-energy CT image reconstruction loss, and a low-energy CT image cycle loss.

Further, step S11 includes:

Step S110: Use the Adam algorithm to iteratively optimize the parameters of each layer of the recurrent network.

In this embodiment, the Adam algorithm adjusts and optimizes the parameters of each layer of the recurrent network; its role is to bring the value of the loss function within the preset error range.

Specifically, the Adam algorithm adjusts and optimizes the parameters of the recurrent network model, including the convolution kernel feature values and weights of the down-sampling layers, feature extraction layers, up-sampling layers, and fusion output layer, as well as the kernel feature values and weights of other parts, until the value of the loss function is within the preset error range, the model converges, and the first high-energy CT image generation model is obtained.

Step S111: Train the recurrent network with a sample set to obtain the first high-energy CT image generation model.

In this embodiment, the sample set includes dual-energy CT image groups; a dual-energy CT image group may be data obtained by dual-energy CT scanning, consisting of paired low-energy and high-energy CT images.

Specifically, in one application example, the recurrent network is trained with the dual-energy CT image groups. The low-energy CT image for training, i.e. the low-energy CT image from a dual-energy CT image group, which is the first low-energy CT image, is input into the high-energy CT image generator network of the recurrent network; the generated first high-energy CT image is discriminated against the high-energy CT image of the dual-energy CT image group, and the generated second low-energy CT image is discriminated against the low-energy CT image of the group. This yields the value of the adversarial loss, together with the values of the reconstruction loss and the cycle loss, i.e. the value of the loss function. It is then judged whether the value of the loss function is within the preset error range; if not, the Adam algorithm adjusts and optimizes the parameters of each layer of the recurrent network until it is. After the recurrent network is trained many times with the sample set, the model converges and the first high-energy CT image generation model is obtained.
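Putting these pieces together, one training iteration might look roughly like the following, reusing the cycle_losses sketch above; the models, data loader, optimizer hyper-parameters, and the alternation between generator and discriminator updates are all assumptions rather than disclosed values.

```python
import itertools
import torch
import torch.nn.functional as F

# G_lh, G_hl (generators) and D_h, D_l (discriminators) are assumed to be
# built already, e.g. from the network sketches in this document; `loader`
# yields paired dual-energy CT image groups (x_low, y_high).
opt_G = torch.optim.Adam(itertools.chain(G_lh.parameters(), G_hl.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(itertools.chain(D_h.parameters(), D_l.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))

for x_low, y_high in loader:
    # Generator update: adversarial + reconstruction + cycle losses.
    opt_G.zero_grad()
    cycle_losses(G_lh, G_hl, D_h, x_low, y_high).backward()
    opt_G.step()

    # Discriminator update: real images toward label 1, generated toward 0.
    # (The symmetric updates for the low-energy direction, using G_hl and
    # D_l, are omitted here for brevity.)
    opt_D.zero_grad()
    d_real, d_fake = D_h(y_high), D_h(G_lh(x_low).detach())
    (F.mse_loss(d_real, torch.ones_like(d_real)) +
     F.mse_loss(d_fake, torch.zeros_like(d_fake))).backward()
    opt_D.step()
```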
In this embodiment, a recurrent network is constructed and trained to transform the first low-energy CT image into the first high-energy CT image and then into the second low-energy CT image, obtaining the loss function, which is used to construct the first high-energy CT image generation model. Through the cyclic network structure, the adversarial, reconstruction, and cycle losses are obtained. Because the recurrent network continuously reduces the value of the loss function during training until it lies within the preset error range, the high-energy and low-energy CT image generation networks are continuously optimized during training and promote each other, raising the network performance of both, so that the resulting high-energy CT image generation model can extract the deep feature information of an image more accurately while preserving the details of the tissue structure around metal objects, generating high-quality high-energy CT images.

In addition, the recurrent network of this embodiment has an "end-to-end" structure: during training it operates end to end directly on the low-energy CT images for training, without involving projection-data calculations or other complex computations, which simplifies the calculation and operation flow when the recurrent network is used.

Referring to FIG. 3, FIG. 3 is a schematic flowchart of the second embodiment of the training method for a high-energy CT image generation model of this application. The second embodiment is a further elaboration of the first embodiment, so steps identical to those of the first embodiment are not repeated here. This embodiment includes:

Step S20: While the low-energy CT image for training is input into the recurrent network to train the transformation of the first low-energy CT image into the first high-energy CT image and then from the first high-energy CT image into the second low-energy CT image, the high-energy CT image for training is input into the recurrent network to train the transformation of the second high-energy CT image into a third low-energy CT image and then from the third low-energy CT image into a third high-energy CT image, obtaining the loss function.

In this embodiment, the recurrent network includes a high-energy CT image generator network and a low-energy CT image generator network with the same structure and opposite functions. During training of the recurrent network, the two generator networks are trained simultaneously: while the low-energy CT image for training is input into the high-energy CT image generator network, the second high-energy CT image is input into the low-energy CT image generator network. Training the two networks simultaneously further optimizes the parameters of each layer and improves the generated images.

Further, the low-energy CT image discriminator network has the same structure as the high-energy CT image discriminator network and is trained adversarially against the low-energy CT image generator network, optimizing the performance of both and ultimately improving the overall performance of the recurrent network.

In this embodiment, the high-energy and low-energy CT image generator networks are trained simultaneously, i.e. the low-energy CT image for training and the second high-energy CT image are input at the same time, and the loss function is obtained when training is complete. The loss function includes a high-energy CT image generation loss, a high-energy CT image discrimination loss, a low-energy CT image generation loss, a low-energy CT image discrimination loss, a high-energy CT image reconstruction loss, a high-energy CT image cycle loss, a low-energy CT image reconstruction loss, and a low-energy CT image cycle loss.

Further, the reconstruction loss for this direction may be written, for example with an L1 norm, as:

$$\mathcal{L}_{rec}' = \mathbb{E}\left[\left\lVert G_{h \to l}(y) - x \right\rVert_1\right]$$

where $G_{h \to l}$ is the low-energy CT image generator network.

The cycle loss is:

$$\mathcal{L}_{cyc}' = \mathbb{E}\left[\left\lVert y_{cyc} - y \right\rVert_1\right], \qquad y_{cyc} = G_{l \to h}(G_{h \to l}(y))$$

where $y_{cyc}$ is the cyclically generated high-energy CT image.
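Combining both directions, the overall generator objective of this embodiment can be summarized with assumed weighting coefficients $\lambda_{rec}$ and $\lambda_{cyc}$ (the discriminator losses $\mathcal{L}_{D_h}$ and $\mathcal{L}_{D_l}$ are minimized separately by their respective discriminators):

$$\mathcal{L}_G = \mathcal{L}_{G_{l \to h}} + \mathcal{L}_{G_{h \to l}} + \lambda_{rec}\left(\mathcal{L}_{rec} + \mathcal{L}_{rec}'\right) + \lambda_{cyc}\left(\mathcal{L}_{cyc} + \mathcal{L}_{cyc}'\right)$$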
Step S21: constructing the first high-energy CT image generation model using the loss function.
In the training method of this embodiment, low-energy and high-energy CT images are input simultaneously, so the two generator networks of the recurrent network are trained at the same time, further improving the network performance of the recurrent network. Moreover, because high-energy CT images are also input for training, the adjustment and optimization of the two generator networks' parameters according to the resulting loss values are more accurate. The resulting first high-energy CT image generation model therefore performs better: metal artifacts in the generated first high-energy CT image are removed more thoroughly, the tissue information around metal objects is preserved better, the structures are clearer, and the image approaches a real high-energy CT image more closely.
In addition, with the training method for the first high-energy CT image generation model provided in this application, the network structure of the recurrent network can be adjusted to the processing needs of different images, for example by changing the number of feature extraction layers or the upsampling method, or by adding to or removing from the network structure, for example removing the attention mechanisms or changing their number or position. The structurally adjusted recurrent network is then trained with a corresponding sample set, so that the model it produces meets the corresponding image-processing needs, for example improving the image quality of positron emission tomography (PET) or single-photon emission computed tomography (SPECT).
Referring to FIG. 4, FIG. 4 is a schematic flowchart of a method for generating a high-energy CT image according to the present application. As shown in FIG. 4, the method includes:
Step S31: inputting a low-energy CT image to be processed into the first high-energy CT image generation model;
In this embodiment, the first high-energy CT image generation model is the model trained by the training method for a first high-energy CT image generation model provided in any of the foregoing embodiments of this application.
Further, before step S31, the method may also include:
Step S30: acquiring the low-energy CT image to be processed;
In this embodiment, the device running the first high-energy CT image generation model may receive the low-energy CT image to be processed directly from the CT scanning device, or may send an acquisition command to a CT image database server. The acquisition command includes patient information, examination time, and the like. Upon receiving the command, the CT image database server retrieves the corresponding low-energy CT image to be processed according to the patient information, examination time, and other information, and sends the retrieved image to the device running the first high-energy CT image generation model, so that the model can acquire the image and run the generation procedure. Of course, the low-energy CT image to be processed may also be input into the recurrent network manually or in other ways.
In this embodiment, by inputting the low-energy CT image to be processed into the first high-energy CT image generation model, a high-quality high-energy CT image is obtained in which metal artifacts are removed more thoroughly and the tissue structures near metal objects are preserved more completely.
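Generation itself then reduces to a single forward pass; a minimal sketch, assuming PyTorch, the placeholder Generator class from the training sketches, and a hypothetical checkpoint file name:

```python
import torch

model = Generator()                                    # first high-energy CT image generation model
model.load_state_dict(torch.load("g_lh_trained.pth"))  # hypothetical checkpoint name
model.eval()

low_ct = torch.randn(1, 1, 512, 512)  # stand-in for the acquired low-energy CT image
with torch.no_grad():                 # a single forward pass; no gradients needed
    high_ct = model(low_ct)           # the generated high-energy CT image
```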
Referring jointly to FIG. 5, FIG. 6 and FIG. 7: FIG. 5 is a schematic flowchart of a specific application scenario of the method for generating a high-energy CT image of the present application, FIG. 6 is a schematic structural diagram of the feature extraction layer in FIG. 5, and FIG. 7 is a schematic structural diagram of the first and second attention mechanisms in FIG. 5 and FIG. 6.
In this application scenario, the device hosting the first high-energy CT image generation model receives the low-energy CT image to be processed from the CT scanning device and instructs the model to execute the generation procedure. The downsampling layer of the first high-energy CT image generation model includes three convolution layers: convolution layer 1, convolution layer 2, and convolution layer 3. Convolution layer 1 is a 7*7 convolution layer for shallow feature extraction on the input image; convolution layers 2 and 3 are 3*3 convolution layers, each halving the size of the feature map convolved by the previous layer while further extracting image feature information, and the layer outputs the first feature map.
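This downsampling layer maps directly onto three convolutions; in the following sketch the channel widths are assumptions:

```python
import torch.nn as nn

downsampling = nn.Sequential(
    # Convolution layer 1: 7*7, shallow feature extraction on the input image
    nn.Conv2d(1, 64, kernel_size=7, padding=3), nn.ReLU(),
    # Convolution layers 2 and 3: 3*3, each halving the feature-map size
    # while further extracting image feature information
    nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)  # the output is the first feature map
```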
Further, the first feature map is input into the feature extraction layers of the first high-energy CT image generation model. The model includes three feature extraction layers, each comprising three residual networks and one 3*3 convolution layer. Each residual network includes two 3*3 convolution layers and two attention mechanisms; the two 3*3 convolution layers form a residual block via a skip connection. The two attention mechanisms are a first attention mechanism, comprising an average pooling layer and two 3*3 convolution layers, and a second attention mechanism, comprising two 3*3 convolution layers. The residual block is connected to the first attention mechanism, which is in turn connected to the second attention mechanism.
Further, both attention mechanisms may use cascaded skip connections, so as to avoid losing feature information such as the tissue structure of the image. In each feature extraction layer, the input image first passes through the three residual networks and is then connected to a 3*3 convolution layer that reduces the dimensionality of the image. In the high-energy CT image generator network, the three feature extraction layers are skip-connected to a 3*3 convolution layer, which reduces the dimensionality of the feature maps that have passed through the feature extraction layers. The reduced feature map passes through the first attention mechanism and then through the second, highlighting the feature information of the regions of interest in the feature map and yielding the second feature map. The second feature map is combined with the first feature map and input into the upsampling layer.
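A sketch of one such residual network with its two attention mechanisms follows; the exact wiring of the cascaded skip connections, and the reading of the first mechanism as channel attention and the second as spatial attention, are assumptions rather than details from the application:

```python
import torch
import torch.nn as nn

class AttentionResBlock(nn.Module):
    """Residual block (two 3*3 convolutions with a skip connection), followed by
    a first attention mechanism (average pooling + two 3*3 convolutions) and a
    second attention mechanism (two 3*3 convolutions)."""

    def __init__(self, ch: int):
        super().__init__()
        self.res = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1))
        self.attn1 = nn.Sequential(  # first attention: pooled channel weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.Sigmoid())
        self.attn2 = nn.Sequential(  # second attention: spatial weights
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x + self.res(x)       # the skip connection forms the residual block
        y = y * self.attn1(y)     # highlight the channels of interest
        return y * self.attn2(y)  # highlight the spatial regions of interest
```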
In this application scenario, the upsampling layer includes four 3*3 convolution layers: convolution layer 1, convolution layer 2, convolution layer 3, and convolution layer 4. Before convolution layers 1 and 3 convolve the feature map input to the upsampling layer, bilinear interpolation first enlarges the feature map two-fold; that is, the feature maps preceding convolution layers 1 and 3 are each doubled in size by bilinear interpolation and then convolved by those layers. Convolution layers 2 and 4 are cascaded with convolution layers 2 and 1 of the downsampling layer, respectively, to fuse the feature-map information of the downsampling layer.
Finally, in this application scenario, the output of the upsampling layer passes through the fusion output layer, which outputs the first high-energy CT image. The fusion output layer includes one 7*7 convolution layer that fuses the feature information of all feature maps and outputs the first high-energy CT image.
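The fusion output layer thus amounts to a single 7*7 convolution; a sketch, in which the input channel count, the single-channel CT output, and the absence of a final activation are assumptions:

```python
import torch.nn as nn

# Fuses the feature information of all feature maps from the upsampling layer
# and outputs the first high-energy CT image (single-channel output assumed).
fusion_output = nn.Conv2d(64, 1, kernel_size=7, padding=3)
```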
Referring to FIG. 8, FIG. 9 and FIG. 10: FIG. 8 is a low-energy CT image to be processed, obtained by dual-energy CT scanning; FIG. 9 is the high-energy CT image corresponding to FIG. 8, obtained by dual-energy CT scanning; and FIG. 10 is a high-energy CT image generated by the first high-energy CT image generation model of the present application. It can be seen that in the high-energy CT image generated by the model, metal artifacts are removed fairly thoroughly and the tissue structures near the metal objects are well preserved.
Referring to Table 1, Table 1 lists the image-quality evaluation parameters of the above low-energy CT image to be processed and of the high-energy CT image generated by this application (the reconstructed high-energy CT image). As shown in Table 1, compared with the low-energy CT image to be processed, the first high-energy CT image obtained by the method of this application shows a clear improvement in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and a smaller normalized mean squared error (NMSE), demonstrating that the image obtained by the method of this application is closer to a real high-energy CT image and thereby verifying the effectiveness of the method.
Table 1: Image-quality evaluation parameters of the low-energy CT image to be processed and the reconstructed high-energy CT image
Evaluation parameter                  PSNR     SSIM     NMSE
Low-energy CT image                   24.71    0.6052   0.1007
Reconstructed high-energy CT image    27.22    0.6452   0.0552
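The three evaluation parameters of Table 1 can be reproduced with standard routines; a sketch assuming scikit-image and an illustrative data range:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference: np.ndarray, generated: np.ndarray, data_range: float = 1.0):
    """PSNR, SSIM, and NMSE between a real high-energy CT image and a
    generated (reconstructed) one, as reported in Table 1."""
    psnr = peak_signal_noise_ratio(reference, generated, data_range=data_range)
    ssim = structural_similarity(reference, generated, data_range=data_range)
    nmse = np.sum((reference - generated) ** 2) / np.sum(reference ** 2)
    return psnr, ssim, nmse
```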
In this embodiment, the first high-energy CT image generation model includes multiple feature extraction layers and multiple skip connections, enabling deep extraction of image feature information. The multiple feature extraction layers include multiple residual networks, which can remove metal artifacts while retaining the tissue information near metal objects, improving the quality of the processed image. At the same time, this embodiment reduces the complexity of the network by reducing the number of convolution layers in the residual networks, and the cascaded skip connections avoid distortion of the image after repeated feature extraction, so metal artifacts can be removed while the detailed tissue features near metal objects are better preserved.
Referring to FIG. 11, FIG. 11 is a schematic structural block diagram of a first embodiment of a computer device of the present application. The computer device 40 of this embodiment may be a server or a terminal. The server may be an independent server or a server cluster; the terminal may be an electronic device such as a mobile phone, a tablet computer, a laptop, a desktop computer, a personal digital assistant, or a wearable device.
As shown in FIG. 11, the computer device 40 of this embodiment includes a memory 41 and a processor 42 connected by a system bus. The memory 41 may store a computer program comprising program instructions which, when executed by the processor 42, cause the processor 42 to perform any training method for generating a high-energy CT image model of this application, or to perform the method for generating a high-energy CT image of this application.
The processor 42 provides computing and control capability and supports the operation of the entire computer device. The processor 42 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
For more functions and effects of the computer device 40 of this embodiment, reference may be made to the description of any of the above training methods for generating a high-energy CT image model of this application, or of the method for generating a high-energy CT image.
Referring to FIG. 12, FIG. 12 is a schematic structural block diagram of a second embodiment of a computer device of the present application.
As shown in FIG. 12, the computer device 50 of this embodiment includes a downsampling module 51, a feature extraction module 52, an image dimensionality-reduction module 53, a first attention module 54, a second attention module 55, an upsampling module 56, and a fusion output module 57.
The downsampling module 51 downsamples the input image to extract image feature information and reduce the image size, generating a first feature map. The feature extraction module 52 performs deep feature extraction on the first feature map to obtain feature maps carrying key information. The image dimensionality-reduction module 53 reduces the dimensionality of these feature maps to simplify computational complexity. The first attention module 54 and the second attention module 55 highlight important image feature information. The first feature map passes through the feature extraction module, the image dimensionality-reduction module, the first attention module, and the second attention module to generate a second feature map. The first and second feature maps are combined and input into the upsampling module 56, which upsamples the combined feature map to enlarge the image size. The fusion output module 57 fuses the image information of all feature maps and outputs the target image.
Modules described as separate components may or may not be physically separate; likewise, components shown as modules may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment.
In addition, the functional modules in this embodiment may be integrated into one processing unit, each unit may exist separately and physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
For more modules of the computer device 50, or more functions of each module, reference may be made to any of the above training methods for generating a high-energy CT image model of this application, or to the above method for generating a high-energy CT image of this application.
Referring to FIG. 13, FIG. 13 is a schematic structural block diagram of a computer-readable storage medium of the present application. As shown in FIG. 13, the computer-readable storage medium 60 provided by this application stores program data 61. The program data 61 may be stored as a computer program and can be executed by a processor to implement any training method for generating a high-energy CT image model of this application, or the method for generating a high-energy CT image of this application.
In the embodiments of this application, the computer-readable storage medium 60 may be an internal storage unit of any of the computer devices of the foregoing embodiments, for example the hard disk or memory of any such computer device. It may also be an external storage device of any of the computer devices of the foregoing embodiments, for example a plug-in hard disk, a smart media card, a secure digital card, or a flash card on the device.
The above is only an embodiment of the present application and does not thereby limit the patent scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (11)

  1. A training method for generating a high-energy CT image model, characterized in that the training method comprises the following steps:
    inputting a training low-energy CT image into a recurrent network for training that transforms a first low-energy CT image into a first high-energy CT image and then transforms the first high-energy CT image into a second low-energy CT image, to obtain a loss function;
    constructing a first high-energy CT image generation model using the loss function.
  2. The method according to claim 1, characterized in that,
    while inputting the training low-energy CT image into the recurrent network for the training that transforms the first low-energy CT image into the first high-energy CT image and then transforms the first high-energy CT image into the second low-energy CT image to obtain the loss function, the method comprises:
    inputting a training high-energy CT image into the recurrent network for training that transforms a second high-energy CT image into a third low-energy CT image and then transforms the third low-energy CT image into a third high-energy CT image, to obtain the loss function.
  3. The method according to claim 2, characterized in that
    the inputting a training low-energy CT image into a recurrent network for training that transforms a first low-energy CT image into a first high-energy CT image comprises:
    downsampling the training low-energy CT image to obtain a first feature map;
    performing feature extraction on the first feature map multiple times to obtain a second feature map;
    combining the first feature map with the second feature map and inputting the result into an upsampling layer for upsampling, to obtain the first high-energy CT image.
  4. The method according to claim 3, characterized in that
    the performing feature extraction on the first feature map multiple times to obtain a second feature map comprises:
    inputting the first feature map into a plurality of residual networks and introducing an attention mechanism into the plurality of residual networks;
    cascading, by a convolution layer, the output maps obtained after the multiple feature extractions;
    introducing an attention mechanism into the new, cascaded output map, to obtain the second feature map.
  5. The method according to claim 3, characterized in that
    the combining the first feature map with the second feature map and inputting the result into an upsampling layer for upsampling, to obtain the first high-energy CT image, comprises:
    cascading feature maps of the same size from the upsampling layer and the downsampling layer;
    upsampling the new, cascaded feature map;
    performing feature fusion on an output result of the upsampling layer, to obtain the first high-energy CT image.
  6. The method according to claim 2, characterized in that
    the then transforming the first high-energy CT image into a second low-energy CT image comprises:
    the network structure that transforms the first high-energy CT image into the second low-energy CT image being the same as the network structure that transforms the first low-energy CT image into the first high-energy CT image.
  7. The method according to claim 2, characterized in that
    the constructing a first high-energy CT image generation model using the loss function comprises:
    constructing the loss function using a generation loss of the first high-energy CT image and a generation loss of the second low-energy CT image, a discrimination loss of the first high-energy CT image and a discrimination loss of the second low-energy CT image, a reconstruction loss of the first high-energy CT image and a reconstruction loss of the second low-energy CT image, and a cycle loss of the first low-energy CT image; and/or,
    constructing the loss function using a generation loss of the third low-energy CT image and a generation loss of the third high-energy CT image, a discrimination loss of the third low-energy CT image and a discrimination loss of the third high-energy CT image, a reconstruction loss of the third low-energy CT image and a reconstruction loss of the third high-energy CT image, and a cycle loss of the third high-energy CT image.
  8. The method according to claim 2, characterized in that
    the constructing a first high-energy CT image generation model using the loss function comprises:
    iteratively optimizing the parameters of each layer of the recurrent network with the Adam algorithm;
    training the recurrent network with a sample set, to obtain the first high-energy CT image generation model.
  9. A method for generating a high-energy CT image, characterized in that the method comprises:
    inputting a low-energy CT image to be processed into a first high-energy CT image generation model;
    wherein the first high-energy CT image generation model is the first high-energy CT image generation model trained by the training method for generating a high-energy CT image model according to any one of claims 1-8.
  10. A computer device, characterized in that the computer device comprises a memory and a processor;
    the memory is configured to store a computer program;
    the processor is configured to execute the computer program and, when executing the computer program, to implement the training method for generating a high-energy CT image model according to any one of claims 1-8, or the method for generating a high-energy CT image according to claim 9.
  11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the training method for generating a high-energy CT image model according to any one of claims 1-8, or the method for generating a high-energy CT image according to claim 9.
PCT/CN2020/081504 2020-03-26 2020-03-26 Training and generation methods for a high-energy CT image generation model, device, and storage medium WO2021189383A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/081504 WO2021189383A1 (zh) 2020-03-26 2020-03-26 Training and generation methods for a high-energy CT image generation model, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/081504 WO2021189383A1 (zh) 2020-03-26 2020-03-26 Training and generation methods for a high-energy CT image generation model, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021189383A1 true WO2021189383A1 (zh) 2021-09-30

Family

ID=77889889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/081504 WO2021189383A1 (zh) 2020-03-26 2020-03-26 Training and generation methods for a high-energy CT image generation model, device, and storage medium

Country Status (1)

Country Link
WO (1) WO2021189383A1 (zh)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090208084A1 (en) * 2008-02-15 2009-08-20 Xin Liu System and method for quantitative imaging of chemical composition to decompose more than two materials
WO2019027641A1 (en) * 2017-08-01 2019-02-07 Varex Imaging Corporation DOUBLE LAYER DETECTOR FOR MOVING MOUSE TISSUE MOTION
CN109299342A (zh) * 2018-11-30 2019-02-01 武汉大学 Cross-modal retrieval method based on a cycle generative adversarial network
CN109949215A (zh) * 2019-03-29 2019-06-28 浙江明峰智能医疗科技有限公司 Low-dose CT image simulation method
CN110728727A (zh) * 2019-09-03 2020-01-24 天津大学 Restoration method for low-dose spectral CT projection data
CN110559009A (zh) * 2019-09-04 2019-12-13 中山大学 GAN-based method, system and medium for converting multi-modal low-dose CT into high-dose CT

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274418A (zh) * 2023-10-08 2023-12-22 北京长木谷医疗科技股份有限公司 CT image generation method, apparatus and device based on frontal and lateral X-ray images
CN117274418B (zh) * 2023-10-08 2024-04-02 北京长木谷医疗科技股份有限公司 CT image generation method, apparatus and device based on frontal and lateral X-ray images


Legal Events

Code Description
121  Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 20926856; Country of ref document: EP; Kind code of ref document: A1.
NENP Non-entry into the national phase. Ref country code: DE.
122  Ep: pct application non-entry in european phase. Ref document number: 20926856; Country of ref document: EP; Kind code of ref document: A1.
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.07.2023).