WO2022126588A1 - PET-MRI image denoising method and device based on a dual-coding fusion network model - Google Patents

PET-MRI image denoising method and device based on a dual-coding fusion network model Download PDF

Info

Publication number
WO2022126588A1
WO2022126588A1 (application PCT/CN2020/137567)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pet
mri
dual
network model
Prior art date
Application number
PCT/CN2020/137567
Other languages
English (en)
French (fr)
Inventor
张娜
郑海荣
刘新
胡战利
梁栋
杨永峰
张立沛
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院 (Shenzhen Institute of Advanced Technology)
Priority to PCT/CN2020/137567 priority Critical patent/WO2022126588A1/zh
Publication of WO2022126588A1 publication Critical patent/WO2022126588A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10104 Positron emission tomography [PET]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • the invention relates to the technical field of image processing, in particular to a PET-MRI image denoising method and device based on a dual coding fusion network model.
  • PET-MRI also known as PET/MRI
  • PET/MRI combines the molecular imaging capabilities of positron emission computed tomography (PET) with the excellent soft-tissue contrast of magnetic resonance imaging (MRI).
  • PET-MRI is used to image diseased cells spreading in tissues.
  • PET images and MRI images of diseased cells can be collected separately, which combines the advantages of PET for sensitive detection of lesions and the advantages of MRI for multi-sequence imaging. Compared with other methods, it has the advantages of high sensitivity and good accuracy, and has the value of early detection and early diagnosis of many diseases, such as tumors and the most common heart and brain diseases.
  • however, the radiation dose and imaging agent of PET may greatly increase the possibility of various diseases, affect human physiological functions, and damage or even endanger human tissues and organs.
  • relevant technical personnel advocate that the imaging agent and radiation dose should be reduced as much as possible under the condition that the doctor's clinical diagnosis requirements for PET images are met.
  • the use of low-dose imaging agents in PET imaging can easily lead to a large amount of quantum noise and blurred morphological features in reconstructed images, thereby reducing image quality.
  • the embodiment of the present invention provides a PET-MRI image denoising method based on a dual-coding fusion network model, which is used to solve the problem that the denoising ability of prior art PET-MRI image denoising methods is weak and the quality of the output image is not high.
  • Embodiments of the present invention further provide a PET-MRI image denoising device based on a dual-coding fusion network model, an electronic device, and a computer-readable storage medium.
  • a PET-MRI image denoising method based on a dual-coding fusion network model, comprising: acquiring a positron emission computed tomography PET image and a magnetic resonance imaging MRI image of a target object; and inputting the PET image and the MRI image into a pre-trained dual-coding fusion network model to obtain the PET-MRI image of the target object.
  • the densely connected recurrent convolutional network is used to extract the texture information of the image; the dilated convolutional network is used to extract the spatial information of the image.
  • a PET-MRI image denoising device based on a double-coding fusion network model comprising an acquisition module and an input module, wherein: the acquisition module is used for acquiring the positron emission tomography PET image and the nuclear magnetic resonance imaging MRI image of the target object
  • the input module is used to input the PET image and MRI image into the pre-trained dual-coding fusion network model to obtain the PET-MRI image of the target object; wherein the dual-coding fusion network model is trained based on the fused features of PET image samples and MRI image samples.
  • the dual-coding fusion network includes a densely connected recurrent convolutional network and a dilated convolutional network; the densely connected recurrent convolutional network is used to extract the texture information of the image, and the dilated convolutional network is used to extract the spatial information of the image.
  • An electronic device comprising: a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the computer program, when executed by the processor, implements the steps of the above-mentioned PET-MRI image denoising method based on a dual-coding fusion network model.
  • A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-mentioned PET-MRI image denoising method based on a dual-coding fusion network model.
  • FIG. 1 is a schematic diagram of the implementation flow of a PET-MRI image denoising method based on a dual-coding fusion network model provided by an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a densely connected cyclic convolutional network according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a recurrent convolutional network provided by an embodiment of the present invention.
  • FIG. 4(a) is a schematic diagram of the structure of the Inception V2 network
  • FIG. 4(b) is a schematic structural diagram of a dilated convolutional network provided by an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of a method for obtaining a dual-coding fusion network model by training based on the fused features of PET image samples and MRI image samples, provided by an embodiment of the present invention
  • FIG. 6 is a schematic diagram of an application process of a method provided by an embodiment of the present invention in practice
  • FIG. 7 is a schematic structural diagram of a PET-MRI image denoising device based on a dual-coding fusion network model according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the embodiment of the present invention provides a PET-MRI image denoising method based on a dual-coding fusion network model.
  • the execution body of the method may be various types of computing devices, or may be an application (APP) installed on a computing device.
  • the computing device for example, may be a user terminal such as a mobile phone, a tablet computer, a smart wearable device, or the like, or a server.
  • the embodiment of the present invention takes the execution body of the method as a server as an example to introduce the method.
  • the embodiments of the present invention take a server as an example to introduce the method, which is only an exemplary illustration, and does not limit the protection scope of the claims corresponding to the solution.
  • the implementation process of the method provided by the embodiment of the present invention is shown in FIG. 1 , and includes the following steps.
  • Step 11 acquiring the positron emission computed tomography PET image and the nuclear magnetic resonance imaging MRI image of the target object.
  • the embodiments of the present invention are proposed to solve the problems of weak denoising ability and low output image quality in the prior art PET-MRI image denoising method.
  • therefore, the target object can be understood as the object for which a PET-MRI image is to be generated; for example, the target object can include an internal organ of the human body or a diseased area in the human body, etc.
  • PET images can include images of the target object obtained by positron emission tomography under low radiation dose and low imaging agent conditions.
  • the PET image may include the image obtained by positron emission tomography technology in the malignant tumor area under the condition of low radiation dose and low imaging agent.
  • in PET imaging, a radioisotope-labeled tracer is used, such as fluorodeoxyglucose labeled with an isotope such as 11C, 13N, 15O, or 18F.
  • PET projection data can be obtained after the PET detector detects the gamma photon pairs, and the PET image can finally be reconstructed by computer from the PET projection data.
  • specifically, a radioisotope-labeled tracer can be injected into the human body so that it participates in the blood flow and metabolic processes of human tissues. After injection, the tracer joins the blood circulation; the unstable isotope decays and emits positrons, and after moving 1-3 mm in the human body, a positron annihilates with an electron, producing a pair of gamma photons with equal energy and opposite directions of motion.
  • the MRI image may include an image obtained by the magnetic resonance imaging technology for the target object.
  • a magnetic resonance phenomenon can be used to obtain an electromagnetic signal from a target object, and information of the target object can be reconstructed based on the obtained electromagnetic signal, thereby obtaining an MRI image.
  • the image size and image format of the acquired PET image and MRI image may be determined according to actual needs, which are not limited in this embodiment of the present invention.
  • the image size of the PET image and the MRI image may be 256*256, and the image format may be a grayscale image.
  • Step 12 Input the PET image and the MRI image into the pre-trained dual-coding fusion network model to obtain the PET-MRI image of the target object.
  • the dual-coding fusion network model is trained based on the fusion features of PET image samples and MRI image samples.
  • the dual-coding fusion network includes a densely connected recurrent convolutional network and a dilated convolutional network.
  • the densely connected recurrent convolutional network is used to extract the texture information of the image; the dilated convolutional network is used to extract the spatial information of the image.
  • before describing step 12 in detail, the dual-coding fusion network model involved in the embodiment of the present invention is first introduced.
  • the dual-coding fusion network model can be understood as a network model that includes two coding networks for generating PET-MRI images.
  • the high-dimensional information of the image is usually obtained by continuously stacking convolution operations.
  • although the high-dimensional information of the image can be obtained by continuously stacking convolution operations, doing so also increases the number of parameters and makes back-propagation of the gradient more difficult. Therefore, in order to solve this problem, one of the encoding networks of the dual-coding fusion network model in the embodiment of the present invention may be, for example, a densely connected recurrent convolutional network.
  • on the one hand, the recurrent convolution structure can ensure the accumulation and extraction of image features in different time domains while the number of parameters remains unchanged; on the other hand, the introduction of dense connection operations between the recurrent convolutional networks can keep gradient updates stable during backpropagation.
  • referring to FIG. 2, it is a schematic structural diagram of a densely connected recurrent convolutional network provided by an embodiment of the present invention.
  • the densely connected recurrent convolutional network may include recurrent convolutional networks (Recurrent Conv in FIG. 2) with dense connections (Concatenate in FIG. 2) between them. Each recurrent convolutional network can include at least two convolution kernels whose parameters are the same, and the outputs of the recurrent convolutional networks are densely connected along the channel dimension.
  • considering that the dense connection mechanism increases the number of channels of the recurrent convolutional network, and thereby the consumption of computing resources, in this embodiment of the present invention a 1×1 convolution kernel and a ReLU activation function can also be applied after the dense connection to restore the original number of channels. In this way, while increasing the nonlinear transformation of representation learning, deep supervision is also strengthened, thereby speeding up convergence.
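  • the channel-restoring step above can be sketched in plain numpy (a minimal illustration under assumed shapes, not the patent's actual implementation): a 1×1 convolution is simply a per-pixel linear map over channels, so densely concatenated features can be mixed back down to the original channel count.

```python
import numpy as np

def conv1x1_relu(x, w):
    """Apply a 1x1 convolution (a per-pixel linear map over channels)
    followed by ReLU. x: (C_in, H, W), w: (C_out, C_in)."""
    y = np.einsum('oc,chw->ohw', w, x)   # 1x1 conv = pure channel mixing
    return np.maximum(y, 0.0)            # ReLU activation

rng = np.random.default_rng(0)
feat_a = rng.standard_normal((32, 8, 8))          # recurrent-conv output
feat_b = rng.standard_normal((32, 8, 8))          # densely connected skip feature
dense = np.concatenate([feat_a, feat_b], axis=0)  # concatenation doubles channels to 64
w = rng.standard_normal((32, 64)) * 0.1           # 1x1 kernel restores 32 channels
restored = conv1x1_relu(dense, w)
print(dense.shape, restored.shape)   # (64, 8, 8) (32, 8, 8)
```

The 1×1 kernel touches no spatial neighbourhood, so restoring the channel count this way is cheap compared with a full 3×3 convolution.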
  • referring to FIG. 3, it is a schematic structural diagram of a recurrent convolutional network provided by an embodiment of the present invention.
  • the recurrent convolutional network may include at least two convolution kernels; in order to add convolution operations without adding parameters, the parameters of the at least two convolution kernels are the same. Taking the three 3*3 convolution kernels (3*3 conv in the figure) in FIG. 3 as an example, the input of the recurrent convolutional network undergoes three convolution operations whose kernel parameters are shared (Shared weights in the figure). After each convolution, the output is added pixel-wise to the initial input (Input in the figure) to accumulate image feature information, and then enters the next shared-parameter convolution. In this way, during gradient back-propagation it is equivalent to a single convolution kernel being updated in different time domains, which makes the training process easier.
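  • the shared-weight accumulation described above can be illustrated with a 1-D toy example (a sketch with an assumed kernel and input; the actual network uses 3×3 two-dimensional kernels):

```python
import numpy as np

def recurrent_conv(x, kernel, steps=3):
    """Apply the same convolution kernel several times; after each pass,
    add the original input back pixel-wise, so features accumulate over
    'time steps' while the parameter count stays that of a single kernel."""
    h = x
    for _ in range(steps):
        h = np.convolve(h, kernel, mode='same')  # shared-weight convolution
        h = h + x                                # pixel-wise re-addition of input
    return h

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
kernel = np.array([0.25, 0.5, 0.25])   # one 3-tap kernel, reused at every step
out = recurrent_conv(x, kernel)
print(out.shape)
```

Because the same kernel is reused at every step, the number of trainable parameters does not grow with the number of recurrent steps.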
  • the other encoding network of the dual-coding fusion network model can be, for example, a dilated convolutional network, used to extract the spatial information of the image and encode the image information; with dilated convolutions it can better capture image features of different shapes and sizes, increasing the network width and the ability to extract spatial information.
  • the dilated convolutional network may include a target convolution adjusted based on a preset dilation ratio; the preset dilation ratio is determined according to the information capture capability required by target objects of different shapes and sizes in the image; the target convolution is used for Capture the information of the target object in the image.
  • referring to FIG. 4(b), it is a schematic structural diagram of a dilated convolutional network provided by an embodiment of the present invention.
  • the dilated convolutional network is based on the Inception V2 structure (see (a) in FIG. 4), adjusted using a preset dilation ratio. Specifically, dilated convolution is introduced into the third and fourth operation branches of Inception V2, so that image features of different shapes and sizes in the image can be grasped better.
  • the effective size of the dilated convolution kernel can be determined based on the following equation:
  • height (after dilation) = width (after dilation) = (dilation ratio - 1) × (convolution kernel size - 1) + convolution kernel size.
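  • the equation above can be transcribed directly into code (function and variable names are mine, not from the patent):

```python
def dilated_kernel_extent(kernel_size: int, dilation: int) -> int:
    """Effective height/width of a convolution kernel after dilation:
    (dilation - 1) * (kernel_size - 1) + kernel_size."""
    return (dilation - 1) * (kernel_size - 1) + kernel_size

# A 3x3 kernel with dilation ratio 1 stays 3x3; with ratio 2 it covers a
# 5x5 receptive field, and with ratio 3 a 7x7 field, all without adding
# any parameters.
print(dilated_kernel_extent(3, 1), dilated_kernel_extent(3, 2), dilated_kernel_extent(3, 3))
```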
  • the dilated convolutional network can also combine some 1×1 convolution operations (Conv1*1 in FIG. 4(b)) and global pooling operations (AvgPool3*3 in FIG. 4(b)) in order to increase the network width and the spatial information extraction ability. The number of feature maps on each operation branch is 1/4 of the final total output, and all feature maps are finally concatenated to restore the specified number of output feature maps.
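  • the 1/4-per-branch bookkeeping can be sketched as follows (a shape-only illustration; the real branches perform the convolutions and pooling named above, here replaced by placeholders):

```python
import numpy as np

def inception_style_concat(x, out_channels=64):
    """Each of the four branches produces out_channels // 4 feature maps;
    concatenating along the channel axis restores the full output width."""
    per_branch = out_channels // 4
    # Stand-ins for the four branches (1x1 conv, two dilated-conv branches,
    # and 3x3 average pooling + 1x1 conv); each just emits per_branch maps
    # at the input's spatial size.
    branches = [np.zeros((per_branch,) + x.shape[1:]) for _ in range(4)]
    return np.concatenate(branches, axis=0)

x = np.ones((32, 16, 16))
y = inception_style_concat(x, out_channels=64)
print(y.shape)   # (64, 16, 16)
```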
  • encoding and decoding operations can be performed based on the dual-coding fusion network model, and finally the PET-MRI image is recovered.
  • specifically, the image features of the PET image and the image features of the MRI image can be extracted respectively based on the densely connected recurrent convolutional network of the dual-coding fusion network model, wherein the image features include the texture information of the images.
  • then the two features can be fused, that is, the pixels of the PET image features and the pixels of the MRI image features are added together, and the fused image features are used as the input of the densely connected recurrent convolutional network and the dilated convolutional network, respectively, to obtain the encoding results.
  • since both the densely connected recurrent convolutional network and the dilated convolutional network include at least two encoding layers, the encoding result of the first encoding layer is used as the input of the second encoding layer to obtain the encoding result of the second encoding layer, and so on, until the last encoding layer of the densely connected recurrent convolutional network and the dilated convolutional network is reached and the final encoding result is obtained.
  • a decoding operation may be performed based on the final encoding result to restore the PET-MRI image of the target object.
  • specifically, based on the obtained final encoding result, a recurrent convolution operation and an upsampling operation can be performed through the densely connected recurrent convolutional network, the result is then concatenated with the features obtained in the encoding stage, and finally the PET-MRI image of the target object is obtained through a convolution decoding operation.
  • the upsampling operation is performed based on the obtained final encoding result in order to improve the resolution of the image.
  • in the embodiment of the present invention, the dual-coding fusion network model is trained based on the fused features of PET image samples and MRI image samples, and the dual-coding fusion network includes a densely connected recurrent convolutional network and a dilated convolutional network; the densely connected recurrent convolutional network is used to extract the texture information of the image, and the dilated convolutional network is used to extract the spatial information of the image. When the acquired PET and MRI images are input into the pre-trained dual-coding fusion network model, the texture information and spatial information of the image can be captured at the same time, which enhances the ability to grasp texture and spatial information and can eliminate the noise and artifacts caused by the low dose; furthermore, fusing the image features of the PET image with those of the MRI image preserves the correlation between the two modalities, thereby improving the quality of the PET-MRI image.
  • an embodiment of the present invention provides a method for obtaining a dual-coding fusion network model by training based on the fused features of PET image samples and MRI image samples. As shown in FIG. 5, the method includes the following steps.
  • Step 51 extract the first image feature of the PET image sample and the second image feature of the MRI image sample through the densely connected recurrent convolutional network of the dual-coding fusion network model.
  • the dual-coding fusion network model can be understood as a network model that includes two coding networks for generating PET-MRI images.
  • the densely connected recurrent convolutional network can be understood as one of the two encoding networks in the dual-coding fusion network model, which can be used to extract the texture information of the image and encode the image information.
  • the densely connected recurrent convolutional network may include at least two convolution kernels, and the parameters of the at least two convolution kernels are the same.
  • Step 52 fuse the first image feature and the second image feature to obtain a fused third image feature.
  • fusion may be performed, that is, the pixel points of the above two image features are added.
  • the method further includes: using a 1*1 convolution kernel and a ReLU activation function to perform convolution and activation operations on the first image feature and the second image feature, so as to restore the number of channels of the recurrent convolutional network.
  • the first specified operation includes the following steps.
  • the current input feature (initially the third image feature) is used as the input of the nth encoding layer of the PET encoder of the dual-coding fusion network model and the input of the nth feature extraction layer of the MRI feature extractor, to obtain the nth output result of the PET encoder and the nth extraction result of the MRI feature extractor, which are fused to obtain the nth fusion result.
  • here N represents the serial number of the last encoding layer of the PET encoder and of the last feature extraction layer of the MRI feature extractor. For n < N, the nth fusion result is used as the input of the (n+1)th encoding layer of the PET encoder and the input of the (n+1)th feature extraction layer of the MRI feature extractor, respectively, to obtain the (n+1)th output result of the PET encoder and the (n+1)th extraction result of the MRI feature extractor; the Nth output result of the PET encoder and the Nth extraction result of the MRI feature extractor are fused to obtain the fourth image feature of the PET-MRI image.
  • Step 53 will be described below with reference to an example.
  • first, the third image feature is used as the input of the first encoding layer of the PET encoder of the dual-coding fusion network model and the input of the first feature extraction layer of the MRI feature extractor, to obtain the first output result of the PET encoder and the first extraction result of the MRI feature extractor.
  • feature fusion is then performed on the first output result of the PET encoder and the first extraction result of the MRI feature extractor to obtain the first fusion result. The first fusion result is used as the input of the second encoding layer of the PET encoder and the input of the second feature extraction layer of the MRI feature extractor, to obtain the second output result of the PET encoder and the second extraction result of the MRI feature extractor.
  • since the second encoding layer and the second feature extraction layer are not the last layers of the PET encoder and the MRI feature extractor, the above operations continue in a loop: the second output result of the PET encoder is fused with the second extraction result of the MRI feature extractor to obtain the second fusion result; the second fusion result is used as the input of the third encoding layer of the PET encoder and the input of the third feature extraction layer of the MRI feature extractor, to obtain the third output result of the PET encoder and the third extraction result of the MRI feature extractor.
  • since the third encoding layer and the third feature extraction layer are the last layers of the PET encoder and the MRI feature extractor, respectively, the third output result of the PET encoder and the third extraction result of the MRI feature extractor can be fused to obtain the fourth image feature of the PET-MRI image.
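  • the looped fusion described above can be sketched as follows (placeholder layers stand in for the PET encoder and MRI feature extractor stages; additive fusion as in the text):

```python
import numpy as np

def dual_encode(x, pet_layers, mri_layers):
    """At each stage, run the current feature through the PET encoder layer
    and the MRI feature-extractor layer, add the two results pixel-wise,
    and pass the fusion on; the last fusion is the 'fourth image feature'."""
    h = x
    for pet_layer, mri_layer in zip(pet_layers, mri_layers):
        h = pet_layer(h) + mri_layer(h)   # additive feature fusion per stage
    return h

# Stand-in layers: each branch halves the feature, so their sum preserves it.
pet_layers = [lambda h: h * 0.5 for _ in range(3)]
mri_layers = [lambda h: h * 0.5 for _ in range(3)]
fused = dual_encode(np.ones((4, 4)), pet_layers, mri_layers)
print(fused.shape)
```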
  • Step 54 perform a decoding operation based on the fourth image feature to obtain the dual-coding fusion network model.
  • a decoding operation is performed based on the fourth image feature to obtain a dual-coding fusion network model, which may specifically include the following steps 541 to 542 .
  • Step 541 Determine the fourth image feature as the first input of the decoder of the dual-coding fusion network model.
  • the second specifying operation includes the following steps.
  • a recurrent convolution operation and an upsampling operation are performed on the mth input of the decoder to obtain the mth processing result of the decoder; a convolution operation is performed on the corresponding extraction result of the MRI feature extractor to obtain a convolution result; the mth convolution decoding result of the decoder is obtained by concatenating the mth processing result with the convolution result, and the mth convolution decoding result is determined as the (m+1)th input of the decoder.
  • Step 54 will be described below with reference to an example.
  • assuming that the number of encoding layers of the PET encoder, the number of feature extraction layers of the MRI feature extractor, and the number of decoding layers of the decoder of the dual-coding fusion network model are all 3, then first, a recurrent convolution operation and an upsampling operation can be performed on the first input of the decoder to obtain the first processing result of the decoder; secondly, the second extraction result of the MRI feature extractor is obtained, and a convolution operation is performed on it based on the densely connected recurrent convolutional network to obtain a convolution result; finally, the first convolution decoding result of the decoder is obtained by concatenating the first processing result with the convolution result, and the first convolution decoding result is determined as the second input of the decoder.
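  • one decoder step can be sketched as follows (nearest-neighbour upsampling stands in for the recurrent-convolution-plus-upsampling block, the skip feature is assumed to be already convolved, and all shapes are illustrative):

```python
import numpy as np

def upsample2x(h):
    """Nearest-neighbour upsampling: doubles height and width."""
    return np.repeat(np.repeat(h, 2, axis=1), 2, axis=2)

def decoder_step(dec_in, skip):
    """One decoding step: upsample the decoder input, then concatenate the
    (already convolved) extractor skip feature along the channel axis; the
    result becomes the next decoder input."""
    processed = upsample2x(dec_in)
    assert processed.shape[1:] == skip.shape[1:], "skip must match spatial size"
    return np.concatenate([processed, skip], axis=0)

dec_in = np.ones((128, 8, 8))    # first input of the decoder
skip = np.ones((64, 16, 16))     # convolved extraction result from the encoder stage
nxt = decoder_step(dec_in, skip)
print(nxt.shape)   # (192, 16, 16)
```

In the real network a convolution after the concatenation would reduce the channel count again; here only the splice itself is shown.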
  • by repeating the above operations, the dual-coding fusion network model can be obtained.
  • during training, the parameters of the preset dual-coding fusion network model are updated according to the standard PET-MRI image and a preset loss function, until the dual-coding fusion network model converges to a preset range.
  • the preset loss function is established at least according to the mean absolute error function for calibrating the noise distribution of the dual-coding fusion network model and the function for preventing overfitting of the dual-coding fusion network model.
  • the embodiment of the present invention can use the L1 loss function, which can better adapt to the real noise distribution.
  • a total variation (TV) regularizer can also be used to prevent overfitting.
  • the equation is as follows:
  • Loss = (1/N) Σ_i |G(x_i) - y_i| + λ_TV Σ_i sqrt((∇_h G(x)_i)^2 + (∇_v G(x)_i)^2)
  • N represents the number of pixels in the denoised PET-MRI image; G(x_i) represents the denoised PET-MRI image; y_i represents the standard PET-MRI image; ∇_h and ∇_v represent the horizontal and vertical gradients of the denoised PET-MRI image, respectively.
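  • the combined loss can be sketched in numpy (a minimal illustration; lam_tv is an assumed hyperparameter whose value the text does not specify):

```python
import numpy as np

def denoise_loss(pred, target, lam_tv=1e-4):
    """Mean absolute error (L1) to match the real noise distribution, plus a
    total-variation (TV) regularizer to prevent overfitting. lam_tv is an
    assumed weighting, not a value given in the text."""
    l1 = np.mean(np.abs(pred - target))
    dh = pred[:, 1:] - pred[:, :-1]      # horizontal gradient
    dv = pred[1:, :] - pred[:-1, :]      # vertical gradient
    # Isotropic TV over the pixels where both gradients are defined.
    tv = np.sum(np.sqrt(dh[:-1, :] ** 2 + dv[:, :-1] ** 2))
    return l1 + lam_tv * tv

pred = np.array([[0.0, 1.0], [1.0, 0.0]])
target = np.array([[0.0, 1.0], [1.0, 1.0]])
loss = denoise_loss(pred, target)
print(loss)
```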
  • referring to FIG. 6, it is a schematic diagram of a practical application process of the method provided by the embodiment of the present invention.
  • the network model can include three parts: the first part is the encoder for PET features (PET Feature Encoder in the figure), the second part is the feature extractor for MRI (Inception Extractor for MRI in the figure), and the third part is the denoising decoder (PET Denoise Decoder in the figure).
  • 1 represents image feature extraction based on the densely connected recurrent convolutional network
  • 2 represents the recurrent convolution operation and max pooling operation performed, based on the densely connected recurrent convolutional network, on the output result of the encoding layer
  • 3 represents the feature extraction operation performed by the dilated convolutional network
  • 4 indicates that the densely connected recurrent convolutional network performs recurrent convolution operations and upsampling operations on the input of the decoder network
  • 5 indicates that the recurrent convolution operation and the activation function operation are performed on the output results of the decoder network.
  • in practice, 256×256 PET grayscale images and MRI grayscale images can be used as the input of the dual-coding fusion network model, so that feature extraction operations (1 in the figure) can be performed on the PET and MRI grayscale images based on the densely connected recurrent convolutional network.
  • after the feature extraction, additive fusion can be performed, and the fused features are used as the inputs of the next stage of the PET feature encoder and of the MRI feature extractor, respectively.
  • When the fused feature is used as the input of the next stage of the PET feature encoder, it can also undergo recurrent convolution and max pooling based on the densely connected recurrent convolutional network (2 in the figure).
  • When the fused features are used as the input of the next stage of the MRI feature extractor, feature extraction can also be performed on the fused features based on the dilated convolutional network (3 in the figure).
  • the additively fused features can be preserved and spliced with the corresponding layers in the decoding and denoising stage.
  • The recurrent convolution operation and the upsampling operation (4 in the figure) can be performed through the densely connected recurrent convolutional network to raise the resolution.
  • The output of the decoder is spliced with the feature map from the encoder stage, and further convolutional decoding and activation operations (5 in the figure) are performed to output a 256×256 grayscale image, where the number of feature maps follows the trend 32→64→128→256→256→256→128→64→32.
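As a quick sanity check of the feature-map trend above, a short script (illustrative only; the split into an encoding half and a decoding half is an assumption) can verify that the channel counts mirror around the bottleneck:

```python
# Channel counts stated above: encoder ramps 32 -> 256, decoder back down.
trend = [32, 64, 128, 256, 256, 256, 128, 64, 32]

# The sequence is symmetric: decoder channel counts mirror the encoder's.
assert trend == trend[::-1]

# Each step either doubles, halves, or holds the number of feature maps.
assert all(b in (a, 2 * a, a // 2) for a, b in zip(trend, trend[1:]))
print("feature-map trend OK:", trend)
```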
  • The dual-coding fusion network model is obtained by training on the fused features of PET image samples and MRI image samples, and the dual-coding fusion network includes a densely connected recurrent convolutional network and a dilated convolutional network.
  • the densely connected recurrent convolutional network is used to extract the texture information of the image; the dilated convolutional network is used to extract the spatial information of the image.
  • the acquired PET images and MRI images are input into the pre-trained dual-coding fusion network model.
  • Based on the densely connected recurrent convolutional network and the dilated convolutional network, the texture information and spatial information of the image can be captured at the same time, which strengthens the grasp of both kinds of information and can eliminate the noise and artifacts caused by the low dose; moreover, the image features of the PET image and of the MRI image can be fused, preserving the correlation between the PET image and the MRI image and thereby improving the quality of the PET-MRI image.
  • The PET-MRI image denoising methods in the prior art suffer from weak denoising ability and low output image quality.
  • An embodiment of the present invention provides a PET-MRI image denoising device based on a dual-coding fusion network model; a schematic diagram of the specific structure of the device 70 is shown in FIG. 7, including an acquisition module 71 and an input module 72.
  • the functions of each module are as follows.
  • the acquiring module 71 is configured to acquire the positron emission computed tomography PET image and the nuclear magnetic resonance imaging MRI image of the target object.
  • The input module 72 is used to input the PET image and the MRI image into the pre-trained dual-coding fusion network model to obtain the PET-MRI image of the target object; the dual-coding fusion network model is trained on the fused features of PET image samples and MRI image samples, and the dual-coding fusion network includes a densely connected recurrent convolutional network, used to extract the texture information of the image, and a dilated convolutional network, used to extract the spatial information of the image.
  • the apparatus may further include a training module for obtaining a dual-coding fusion network model by training based on the fusion features of the PET image samples and the MRI image samples.
  • The training module can feed features to the nth coding layer of the PET encoder and the nth feature extraction layer of the MRI feature extractor of the dual-coding fusion network model, obtaining the nth output result of the PET encoder and the nth extraction result of the MRI feature extractor; the number of coding layers of the PET encoder is the same as the number of feature extraction layers of the MRI feature extractor. Feature fusion is performed on the nth output result of the PET encoder and the nth extraction result of the MRI feature extractor to obtain the nth fusion result; n ranges over [1, 2, …, N], where N denotes the serial number of the last coding layer of the PET encoder and of the last feature extraction layer of the MRI feature extractor. The nth fusion result is used as the input of the (n+1)th layer of the PET encoder and of the MRI feature extractor respectively, yielding the (n+1)th output result of the PET encoder and the (n+1)th extraction result of the MRI feature extractor, which are fused to obtain the fourth image feature of the PET-MRI image; a decoding unit is used to perform a decoding operation based on the fourth image feature to obtain the dual-coding fusion network model.
  • The second specified operation includes: sequentially performing a recurrent convolution operation and an upsampling operation on the mth input of the decoder of the dual-coding fusion network model to obtain the mth processing result of the decoder, m ranging over [1, 2, …, M]; performing a convolution operation on the (N−1)th extraction result based on the densely connected recurrent convolutional network to obtain a convolution result; and splicing the mth processing result with the convolution result to obtain the mth convolutional decoding result of the decoder, which is determined as the (m+1)th input of the decoder.
  • the device further includes an update module for: updating the parameters of the preset dual-coding fusion network model based on the standard PET-MRI image and the preset loss function, until the dual-coding fusion network model converges to the preset range;
  • the preset loss function is established at least according to the mean absolute error function for calibrating the noise distribution of the dual-coding fusion network model and the function for preventing overfitting of the dual-coding fusion network model.
  • the function for preventing overfitting of the dual-coding fusion network model is established at least according to the gradient value of the PET-MRI image in the horizontal direction and the gradient value of the PET-MRI image in the vertical direction obtained by the dual-coding fusion network model.
  • The device further includes a processing module, which, before the first image feature and the second image feature are fused to obtain the fused third image feature, processes the output of the densely connected recurrent convolutional network through a 1×1 convolution kernel and a ReLU activation function, so as to restore the number of channels of the recurrent convolutional network.
  • the densely connected recurrent convolutional network includes at least two convolution kernels, and the parameters of the at least two convolution kernels are the same.
  • The dilated convolutional network includes a target convolution adjusted by a preset dilation ratio; the preset dilation ratio is determined according to the information-capture capability required for target objects of different shapes and sizes in the image, and the target convolution is used to capture the information of the target object in the image.
  • The dual-coding fusion network model is trained on the fused features of PET image samples and MRI image samples, and the dual-coding fusion network includes a densely connected recurrent convolutional network, used to extract the texture information of the image, and a dilated convolutional network, used to extract the spatial information of the image. In this way, after the input module inputs the PET image and MRI image obtained by the acquisition module into the pre-trained dual-coding fusion network model, the texture information and spatial information of the image can be captured simultaneously, which strengthens the grasp of both kinds of information; the noise and artifacts caused by the low dose can be eliminated to a large extent, and the image features of the PET image and of the MRI image can be fused, preserving the correlation between the PET image and the MRI image and thereby improving the quality of the PET-MRI image.
  • the electronic device 800 includes but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, and a display unit 806 , a user input unit 807 , an interface unit 808 , a memory 809 , a processor 810 , and a power supply 811 and other components.
  • the electronic device includes but is not limited to a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like
  • The processor 810 is used to obtain the positron emission computed tomography (PET) image and the magnetic resonance imaging (MRI) image of the target object, and to input the PET image and the MRI image into the pre-trained dual-coding fusion network model to obtain the PET-MRI image of the target object; the dual-coding fusion network model is trained on the fused features of PET image samples and MRI image samples, and the dual-coding fusion network includes a densely connected recurrent convolutional network, used to extract the texture information of the image, and a dilated convolutional network, used to extract the spatial information of the image.
  • The memory 809 is used to store a computer program that can be executed on the processor 810; when the computer program is executed by the processor 810, the above-mentioned functions of the processor 810 are implemented.
  • The radio frequency unit 801 can be used for receiving and sending signals during the transmission and reception of information or during a call. Specifically, downlink data received from the base station is passed to the processor 810 for processing, and uplink data is sent to the base station.
  • the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 801 can also communicate with the network and other devices through a wireless communication system.
  • the electronic device provides the user with wireless broadband Internet access through the network module 802, such as helping the user to send and receive emails, browse web pages, access streaming media, and the like.
  • the audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into audio signals and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the electronic device 800 (eg, call signal reception sound, message reception sound, etc.).
  • the audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 804 is used to receive audio or video signals.
  • The input unit 804 may include a graphics processor (Graphics Processing Unit, GPU) 8041 and a microphone 8042; the graphics processor 8041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frames may be displayed on the display unit 806 .
  • the image frames processed by the graphics processor 8041 may be stored in the memory 809 (or other storage medium) or transmitted via the radio frequency unit 801 or the network module 802 .
  • the microphone 8042 can receive sound and can process such sound into audio data.
  • the processed audio data can be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 801 for output in the case of a telephone call mode.
  • the electronic device 800 also includes at least one sensor 805, such as a light sensor, a motion sensor, and other sensors.
  • The light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 8061 according to the ambient light, and the proximity sensor can turn off the display panel 8061 and/or the backlight when the electronic device 800 is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes), and can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of electronic devices (such as horizontal and vertical screen switching, related games , magnetometer attitude calibration), vibration recognition related functions (such as pedometer, tapping), etc.; the sensor 805 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, Infrared sensors, etc., are not repeated here.
  • the display unit 806 is used to display information input by the user or information provided to the user.
  • the display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
  • the user input unit 807 may be used to receive input numerical or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the user input unit 807 includes a touch panel 8071 and other input devices 8072 .
  • The touch panel 8071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user's finger, a stylus, or any other suitable object or accessory on or near the touch panel 8071).
  • the touch panel 8071 may include two parts, a touch detection device and a touch controller.
  • The touch detection device detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 810.
  • the touch panel 8071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the user input unit 807 may also include other input devices 8072 .
  • other input devices 8072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the touch panel 8071 can be covered on the display panel 8061.
  • After the touch panel 8071 detects a touch operation on or near it, it transmits the operation to the processor 810 to determine the type of the touch event, and the processor 810 then provides a corresponding visual output on the display panel 8061 according to the type of the touch event. Although the touch panel 8071 and the display panel 8061 are described as two independent components realizing the input and output functions of the electronic device, in some embodiments the touch panel 8071 and the display panel 8061 may be integrated to realize those functions; this is not specifically limited here.
  • the interface unit 808 is an interface for connecting an external device to the electronic device 800 .
  • external devices may include wired or wireless headset ports, external power (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting devices with identification modules, audio input/output (I/O) ports, video I/O ports, headphone ports, and more.
  • The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic device 800, or may be used to transfer data between the electronic device 800 and external devices.
  • the memory 809 may be used to store software programs as well as various data.
  • The memory 809 may mainly include a stored program area and a stored data area; the stored program area may store an operating system, an application program required for at least one function (such as a sound playback function or an image playback function), and the like, and the stored data area may store data created by the use of the mobile phone (such as audio data, a phone book, etc.).
  • The memory 809 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • The processor 810 is the control center of the electronic device; it connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 809 and calling the data stored in the memory 809, thereby monitoring the electronic device as a whole.
  • The processor 810 may include one or more processing units; preferably, the processor 810 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 810.
  • the electronic device 800 may also include a power supply 811 (such as a battery) for supplying power to various components.
  • The power supply 811 may be logically connected to the processor 810 through a power management system, so as to manage charging, discharging, power consumption, and other functions through the power management system.
  • the electronic device 800 includes some functional modules not shown, which will not be repeated here.
  • An embodiment of the present invention further provides an electronic device, including a processor 810, a memory 809, and a computer program stored in the memory 809 and runnable on the processor 810; when the computer program is executed by the processor 810, the processes of the above PET-MRI image denoising method based on a dual-coding fusion network model are implemented.
  • Embodiments of the present invention further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the above-mentioned PET-MRI image denoising method based on a dual-coding fusion network model is implemented. In order to avoid repetition, the details are not repeated here.
  • the computer-readable storage medium such as read-only memory (Read-Only Memory, referred to as ROM), random access memory (Random Access Memory, referred to as RAM), magnetic disk or optical disk and so on.
  • an embodiment of the present invention also provides a computer program product, the computer program product includes a computer program stored on a non-transitory computer-readable storage medium, the computer program includes program instructions, when the program instructions are When executed by a computer, the computer is caused to execute the method in any of the above method embodiments.
  • embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include forms of non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer readable media, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, and information storage may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A PET-MRI image denoising method and device based on a dual-coding fusion network model, an electronic device, and a computer-readable storage medium. The method includes: acquiring a positron emission tomography (PET) image and a magnetic resonance imaging (MRI) image of a target object; and inputting the PET image and the MRI image into a pre-trained dual-coding fusion network model to obtain a PET-MRI image of the target object. The dual-coding fusion network model is trained on the fused features of PET image samples and MRI image samples, and the dual-coding fusion network includes a densely connected recurrent convolutional network, used to extract the texture information of the image, and a dilated convolutional network, used to extract the spatial information of the image.

Description

PET-MRI image denoising method and device based on a dual-coding fusion network model. Technical Field
The present invention relates to the technical field of image processing, and in particular to a PET-MRI image denoising method and device based on a dual-coding fusion network model.
Background Art
PET-MRI, also written PET/MRI, is a new technology that combines the molecular imaging capability of positron emission tomography (PET) with the excellent soft-tissue contrast of magnetic resonance imaging (MRI); it includes integrated (same-machine) PET-MRI and fused (separate-machine) PET-MRI. PET-MRI is used to image diseased cells spreading through tissue: specifically, PET images and MRI images of the diseased cells can be collected separately, combining the sensitivity of PET in detecting lesions with the multi-sequence imaging advantages of MRI. Compared with other methods it offers high sensitivity and good accuracy, and it is valuable for the early detection and early diagnosis of many diseases, such as tumors and the most common cardiovascular and cerebrovascular diseases.
In practice, during the collection of PET images of diseased cells, the radiation dose and the tracer used for PET may considerably increase the likelihood of various diseases, affect human physiological functions, damage tissues and organs, and even endanger the patient's life. Practitioners therefore advocate reducing the tracer and radiation dose as much as possible while still meeting physicians' clinical diagnostic requirements for PET images. However, using a low dose of tracer during PET imaging usually causes the reconstructed image to contain a large amount of quantum noise and blurred morphological features, degrading image quality.
To reduce the noise of the reconstructed image and improve image quality, the related art provides the following two methods:
Method 1: For example, Yang Lei et al. published the article "Whole-body PET estimation from low count statistics using cycle-consistent generative adversarial networks" in Physics in Medicine & Biology in 2019, which proposes introducing a residual convolution module into the generator network to improve denoising accuracy.
Method 2: For example, Kevin T. Chen et al. published the article "Ultra-Low-Dose 18F-Florbetaben Amyloid PET Imaging Using Deep Learning with Multi-Contrast MRI Inputs" in Radiology in 2019, which proposes introducing the MRI image as prior knowledge into the PET image denoising task through an input channel. The denoising network uses a U-Net structure with PET and MRI as inputs, and the single feature map output by the network is added pixel-wise to the MRI to obtain the final output.
However, although Method 1 can improve denoising accuracy, the generative network introduces too many hyperparameters, making the training process complex and difficult to tune. Although Method 2 can reduce image noise, it does not capture or associate the spatial information of the PET image and the MRI image, so its removal of noise and artifacts is poor.
Summary of the Invention
Embodiments of the present invention provide a PET-MRI image denoising method based on a dual-coding fusion network model, to solve the problems of weak denoising ability and low output image quality in prior-art PET-MRI image denoising methods.
Embodiments of the present invention also provide a PET-MRI image denoising device based on a dual-coding fusion network model, an electronic device, and a computer-readable storage medium.
Embodiments of the present invention adopt the following technical solutions.
A PET-MRI image denoising method based on a dual-coding fusion network model includes: acquiring a positron emission tomography (PET) image and a magnetic resonance imaging (MRI) image of a target object; and inputting the PET image and the MRI image into a pre-trained dual-coding fusion network model to obtain a PET-MRI image of the target object. The dual-coding fusion network model is trained on the fused features of PET image samples and MRI image samples, and the dual-coding fusion network includes a densely connected recurrent convolutional network, used to extract the texture information of the image, and a dilated convolutional network, used to extract the spatial information of the image.
A PET-MRI image denoising device based on a dual-coding fusion network model includes an acquisition module and an input module. The acquisition module is used to acquire a positron emission tomography (PET) image and a magnetic resonance imaging (MRI) image of a target object; the input module is used to input the PET image and the MRI image into a pre-trained dual-coding fusion network model to obtain a PET-MRI image of the target object. The dual-coding fusion network model is trained on the fused features of PET image samples and MRI image samples, and the dual-coding fusion network includes a densely connected recurrent convolutional network, used to extract the texture information of the image, and a dilated convolutional network, used to extract the spatial information of the image.
An electronic device includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the computer program is executed by the processor, the steps of the above PET-MRI image denoising method based on a dual-coding fusion network model are implemented.
A computer-readable storage medium stores a computer program; when the computer program is executed by a processor, the steps of the above PET-MRI image denoising method based on a dual-coding fusion network model are implemented.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present invention and constitute a part of it; the illustrative embodiments of the present invention and their description are used to explain the invention and do not unduly limit it. In the drawings:
FIG. 1 is a schematic flowchart of a PET-MRI image denoising method based on a dual-coding fusion network model provided by an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a densely connected recurrent convolutional network provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a recurrent convolutional network provided by an embodiment of the present invention;
FIG. 4(a) is a schematic structural diagram of the Inception V2 network;
FIG. 4(b) is a schematic structural diagram of a dilated convolutional network provided by an embodiment of the present invention;
FIG. 5 is a schematic flowchart of a method for training a dual-coding fusion network model on the fused features of PET image samples and MRI image samples, provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a practical application flow of the method provided by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a PET-MRI image denoising device based on a dual-coding fusion network model provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below in combination with specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
The technical solutions provided by the embodiments of the present invention are described in detail below with reference to the drawings.
Embodiment 1
To solve the problems of weak denoising ability and low output image quality in prior-art PET-MRI image denoising methods, an embodiment of the present invention provides a PET-MRI image denoising method based on a dual-coding fusion network model.
The execution subject of the method may be any type of computing device, or an application (APP) installed on a computing device. The computing device may be a user terminal such as a mobile phone, a tablet computer, or a smart wearable device, or may be a server.
For ease of description, the method is introduced below with a server as the execution subject. Those skilled in the art will understand that taking a server as an example is only an illustrative description and does not limit the protection scope of the corresponding claims. Specifically, the implementation flow of the method provided by the embodiment of the present invention is shown in FIG. 1 and includes the following steps.
Step 11: acquire a positron emission tomography (PET) image and a magnetic resonance imaging (MRI) image of a target object.
As follows from the technical problem addressed by the embodiments of the present invention, the target object can be understood as the object for which a PET-MRI image is to be generated; for example, the target object may include an internal organ of a human body or a diseased region inside the body.
The PET image may include an image of the target object obtained by positron emission tomography under low-radiation, low-tracer conditions. For example, taking a malignant tumor region inside a human body as the target object, the PET image may include an image of that region obtained by positron emission tomography under low-radiation, low-tracer conditions.
In the embodiment of the present invention, a radioisotope-labeled tracer, such as fluorodeoxyglucose labeled with 11C, 13N, 15O, or 18F, can be injected into the target object so that it participates in the tissue blood flow and metabolism of the target object. Because the isotope is unstable in the natural environment, it decays and emits positrons; after traveling 1-3 mm in the human body, a positron meets an electron and annihilates, producing a pair of gamma photons of equal energy moving in opposite directions. The gamma photon pairs are detected by the PET detector to obtain PET projection data, and finally the PET image is obtained from the projection data through computer reconstruction.
For example, taking a malignant tumor region in a human body as the target object, a radioisotope-labeled tracer can be injected into the body to participate in tissue blood flow and metabolism. Because different tissues metabolize at different rates, and malignant tumor tissue is abnormally metabolically active, the labeled tracer enters the blood circulation after injection; the unstable isotope decays and emits positrons, each of which annihilates with an electron after traveling 1-3 mm, producing a pair of gamma photons of equal energy moving in opposite directions. The PET image of the malignant tumor region can then be obtained through the PET detector and computer reconstruction.
It should be noted that the above method of acquiring a PET image is only an illustrative description of the embodiment of the present invention and does not limit it in any way.
The MRI image may include an image of the target object obtained by magnetic resonance imaging. In the embodiment of the present invention, for example, the magnetic resonance phenomenon can be used to obtain electromagnetic signals from the target object, and information about the target object can be reconstructed from those signals to obtain the MRI image.
It should be noted that the size and format of the acquired PET and MRI images can be determined as needed and are not limited in the embodiments of the present invention. For example, in an optional embodiment, the PET and MRI images may be 256×256 grayscale images.
Step 12: input the PET image and the MRI image into the pre-trained dual-coding fusion network model to obtain the PET-MRI image of the target object.
The dual-coding fusion network model is trained on the fused features of PET image samples and MRI image samples; the dual-coding fusion network includes a densely connected recurrent convolutional network, used to extract the texture information of the image, and a dilated convolutional network, used to extract the spatial information of the image.
To describe step 12 more clearly, the dual-coding fusion network model involved in the embodiment of the present invention is introduced first.
The dual-coding fusion network model can be understood as a network model containing two coding networks and used to generate PET-MRI images. Optionally, in the related art, convolution operations are usually stacked repeatedly to obtain high-dimensional image information; although this obtains the high-dimensional information, it also increases the number of parameters and makes gradient backpropagation more difficult. To solve this problem, one of the coding networks of the dual-coding fusion network model in the embodiment of the present invention may be a densely connected recurrent convolutional network, used to capture the texture information of the image and to encode the image information; the densely connected recurrent convolutional network can include both recurrent convolution and dense connections between the recurrent convolutions. On the one hand, the recurrent convolution structure can accumulate and extract image features over different time steps while keeping the number of parameters unchanged; on the other hand, introducing dense connections between recurrent convolutions stabilizes the gradients updated by backpropagation.
For example, FIG. 2 is a schematic structural diagram of a densely connected recurrent convolutional network provided by an embodiment of the present invention. The network can include recurrent convolutional networks (Recurrent Conv in FIG. 2) with dense connections between them (Concatenate in FIG. 2); it can include at least two convolution kernels with identical parameters, and each recurrent convolutional network has a dense-connection mechanism to the output, based on channel-wise connections.
Optionally, since the dense-connection mechanism increases the number of channels of the recurrent convolutional network and therefore the computational cost, in the embodiment of the present invention a 1×1 convolution kernel and a ReLU activation function can be used after the dense-connection mechanism to restore the original number of channels. This adds a nonlinear transformation to representation learning while also strengthening deep supervision and accelerating convergence.
FIG. 3 is a schematic structural diagram of a recurrent convolutional network provided by an embodiment of the present invention. The recurrent convolutional network can include at least two convolution kernels, and, to add convolution operations without increasing the number of parameters, the kernels share the same parameters. FIG. 3 takes three 3×3 convolution kernels (3*3 conv in the figure) as an example: the input passes through three convolution operations whose kernel parameters are shared (Shared weights in the figure). After each convolution, the output is added pixel-wise to the initial input (input in the figure) to accumulate image feature information before entering the next shared-parameter convolution. In the backpropagation of gradient updates, this is equivalent to a single convolution kernel being updated over different time steps, which makes training easier.
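The shared-weight recurrent unit described above can be sketched in a few lines. This is a hypothetical 1-D, 'same'-padded simplification (the actual units operate with 3×3 kernels on 2-D feature maps), showing the same kernel applied repeatedly with the original input added back after every convolution:

```python
import numpy as np

def recurrent_conv1d(x, kernel, steps=3):
    """One recurrent-convolution unit (sketch): the SAME kernel is applied
    `steps` times, and after every convolution the original input is added
    back, accumulating features without adding parameters."""
    h = x
    for _ in range(steps):
        h = np.convolve(h, kernel, mode="same")  # shared weights each step
        h = h + x                                # pixel-wise add of the input
    return h

# With an identity kernel the input is simply accumulated: x -> 2x -> 3x -> 4x.
print(recurrent_conv1d(np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 0.0])))
```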
In addition, considering that the related art does not capture and associate the spatial information of the PET image and the MRI image during denoising, so that its removal of noise and artifacts is poor, in the embodiment of the present invention the other coding network of the dual-coding fusion network model may be a dilated convolutional network used to extract the spatial information of the image and to encode the image information. Because dilated convolution is introduced, the network can better capture image features of different shapes and sizes, increasing the network width and the ability to extract spatial information.
The dilated convolutional network can include a target convolution adjusted by a preset dilation ratio; the preset dilation ratio is determined according to the information-capture capability required for target objects of different shapes and sizes in the image, and the target convolution is used to capture the information of the target object in the image.
For example, FIG. 4(b) is a schematic structural diagram of a dilated convolutional network provided by an embodiment of the present invention. It is based on the Inception V2 structure (see FIG. 4(a)) with specific adjustments using preset dilation ratios; specifically, dilated convolution is introduced into the third and fourth branches of Inception V2, giving better capture of image features of different shapes and sizes.
The dilated convolution can be determined by the following equation:
height (after dilation) / width (after dilation) = (dilation rate − 1) × (kernel size − 1) + kernel size.
The dilation rate may be, for example, Dilation rate = (1, 2) or Dilation rate = (3, 1).
For example, taking the third branch in FIG. 4(a) (i.e., the second item from the left in the second layer from the bottom of Inception V2), the convolution kernel is 3×3; with a dilation rate of (1, 2) the dilated kernel size is 3×5, and with a dilation rate of (3, 1) the dilated kernel size is 7×3.
It should be noted that the dilation rates and dilated kernel sizes given above are only illustrative and do not limit the embodiments of the present invention in any way.
Optionally, the dilated convolutional network can also incorporate some 1×1 convolution operations (Conv1*1 in FIG. 4(b)) and a global pooling operation (AvgPool3*3 in FIG. 4(b)) to increase the network width and the spatial information extraction capability. A convolution stride of 2 (stride=2 in FIG. 4(b)) can also be used to reduce the dimensionality of the feature maps; each branch produces 1/4 of the total output feature maps, and finally all feature maps are concatenated to restore the specified number of output feature maps.
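The dilated-kernel size equation above can be captured directly in code; the function below is a straightforward per-axis implementation and reproduces the 3×5 and 7×3 examples:

```python
def effective_kernel_size(kernel_size, dilation_rate):
    """Effective (height, width) of a dilated kernel, per axis:
    size = (rate - 1) * (k - 1) + k."""
    return tuple((r - 1) * (k - 1) + k
                 for k, r in zip(kernel_size, dilation_rate))

# A 3x3 kernel dilated by rate (1, 2) covers a 3x5 receptive field,
# and by rate (3, 1) a 7x3 field, matching the examples above.
print(effective_kernel_size((3, 3), (1, 2)))
print(effective_kernel_size((3, 3), (3, 1)))
```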
The above is the introduction of the dual-coding fusion network model; step 12 is described in detail below in combination with it.
In the embodiment of the present invention, after the PET image and the MRI image are input into the pre-trained dual-coding fusion network model, coding and decoding operations can be performed based on the model, finally recovering the PET-MRI image.
Specifically, after the PET image and the MRI image are input into the pre-trained dual-coding fusion network model, the image features of the PET image and of the MRI image, which include the texture information of the image, can first be extracted separately by the densely connected recurrent convolutional network of the model. After the two kinds of features are obtained, they can be fused, i.e., the pixels of the PET image features and of the MRI image features are added, and the fused image features are used as the input of the two coding networks, the densely connected recurrent convolutional network and the dilated convolutional network, to obtain the coding result.
Alternatively, in an optional embodiment, if the densely connected recurrent convolutional network and the dilated convolutional network each include at least two coding layers, after the coding result of the first coding layer is obtained, it can be used as the input of the second coding layer to obtain the second layer's coding result, and so on, until the final coding result is obtained through the last coding layer of both networks.
After the final coding result is obtained, a decoding operation can be performed on it to recover the PET-MRI image of the target object. Specifically, the final coding result can first undergo recurrent convolution and upsampling through the densely connected recurrent convolutional network, then be spliced with the features obtained in the coding stage, and finally pass through a convolutional decoding operation to obtain the PET-MRI image of the target object.
The upsampling operation on the final coding result is performed to raise the resolution of the image.
With the method provided by the embodiment of the present invention, since the dual-coding fusion network model is trained on the fused features of PET image samples and MRI image samples, and the dual-coding fusion network includes a densely connected recurrent convolutional network for extracting the texture information of the image and a dilated convolutional network for extracting its spatial information, after the acquired PET and MRI images are input into the pre-trained model, the texture information and spatial information of the image can be captured simultaneously. This strengthens the grasp of both kinds of information and largely eliminates the noise and artifacts caused by the low dose; moreover, the image features of the PET image and of the MRI image can be fused, preserving the correlation between the PET image and the MRI image and thereby improving the quality of the PET-MRI image.
Embodiment 2
Usually, before step 12 of Embodiment 1 is performed, the dual-coding fusion network model must be obtained in advance. Accordingly, an embodiment of the present invention provides a method for training a dual-coding fusion network model on the fused features of PET image samples and MRI image samples; as shown in FIG. 5a, the method includes the following steps.
Step 51: extract the first image feature of the PET image sample and the second image feature of the MRI image sample through the densely connected recurrent convolutional network of the dual-coding fusion network model.
The dual-coding fusion network model can be understood as a network model containing two coding networks and used to generate PET-MRI images.
The densely connected recurrent convolutional network can be understood as one of the two coding networks of the dual-coding fusion network model, used to extract the texture information of the image and to encode the image information.
In the embodiment of the present invention, the densely connected recurrent convolutional network can include at least two convolution kernels whose parameters are identical.
Step 52: fuse the first image feature and the second image feature to obtain a fused third image feature.
In the embodiment of the present invention, after the first image feature and the second image feature are obtained, they can be fused, i.e., the pixels of the two image features are added.
Optionally, before the first image feature and the second image feature are fused to obtain the fused third image feature, the method further includes: performing convolution and activation on the first image feature and the second image feature through a 1×1 convolution kernel and a ReLU activation function, to restore the number of channels of the recurrent convolutional network.
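A minimal numpy sketch of the additive fusion in step 52 and the 1×1-convolution-plus-ReLU channel restoration; the weight matrix and feature shapes are illustrative assumptions, not the trained parameters:

```python
import numpy as np

def fuse_features(pet_feat, mri_feat):
    """Additive (pixel-wise) fusion of PET and MRI feature maps, as in
    step 52; both inputs share shape (C, H, W)."""
    assert pet_feat.shape == mri_feat.shape
    return pet_feat + mri_feat

def conv1x1_relu(feat, weights):
    """1x1 convolution + ReLU, used to restore the channel count after
    dense concatenation; `weights` has shape (C_out, C_in) and mixes
    channels only, leaving the spatial dimensions untouched."""
    out = np.tensordot(weights, feat, axes=([1], [0]))  # (C_out, H, W)
    return np.maximum(out, 0.0)                          # ReLU
```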
步骤53,循环执行第一指定操作,直至n+1=N时终止循环执行第一指定操作,并将得到的融合结果确定为PET-MRI图像的第四图像特征。
第一指定操作包括以下步骤。
将第三图像特征分别作为双编码融合网络模型的PET编码器的第n层编码层的输入和双编码融合网络模型的MRI特征提取器的第n层特征提取层的输入,得到PET编码器的第n个输出结果和MRI特征提取器的第n个提取结果;其中,PET编码器的编码层的数目与MRI特征提取器的特征提取层的数目相同;基于PET编码器的第n个输出结果和MRI特征提取器的第n个提取结果进行特征融合,得到第n个融合结果;n的取值范围为[1,2,…N],N表示PET编码器的最后一层编码层的序号和MRI特征提取器的最后一层特征提取层的序号;将第n个融合结果分别作为PET编码器的第n+1层编码层的输入和MRI特征提取器的第n+1层特征提取层的输入,得到PET编码器的第n+1个输出结果和MRI特征提取器的第n+1个提取结果;将PET编码器的第n+1个输出结果和MRI特征提取器的第n+1个提取结果进行融合,得到PET-MRI图像的第四图像特征。
以下结合实例,对步骤53进行相关说明。
例如,假设双编码融合网络模型的PET编码器的编码层的数目与MRI特征提取器的特征提取层的数目均为3,则在得到融合后的第三图像特征之后,首先,可以将第三图像特征分别作为双编码融合网络模型的PET编码器的第1层编码层的输入和双编码融合网络模型的MRI特征提取器的第1层特征提取层的输入,得到PET编码器的第1个输出结果和MRI特征提取器的第1个提取结果。
其次，基于PET编码器的第1个输出结果和MRI特征提取器的第1个提取结果进行特征融合，得到第1个融合结果。然后，将第1个融合结果分别作为PET编码器的第2层编码层的输入和MRI特征提取器的第2层特征提取层的输入，得到PET编码器的第2个输出结果和MRI特征提取器的第2个提取结果。
此时，由于第2层编码层以及第2层特征提取层并不是PET编码器和MRI特征提取器的最后一层，因此，还需要继续循环执行上述操作，也即将PET编码器的第2个输出结果和MRI特征提取器的第2个提取结果进行特征融合，得到第2个融合结果；并将第2个融合结果分别作为PET编码器的第3层编码层的输入和MRI特征提取器的第3层特征提取层的输入，得到PET编码器的第3个输出结果和MRI特征提取器的第3个提取结果。此时，由于第3层编码层和第3层特征提取层分别是PET编码器和MRI特征提取器的最后一层，因此，可以将PET编码器的第3个输出结果和MRI特征提取器的第3个提取结果进行融合，得到PET-MRI图像的第四图像特征。
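上述逐层编码、逐层融合的控制流程可概括为如下示意代码（encode_pet、extract_mri、fuse均为占位函数，仅示意数据流向，并非实际网络实现）：

```python
def dual_encode(x3, N, encode_pet, extract_mri, fuse):
    """双编码循环（示意）：第 n 个融合结果作为第 n+1 层两条支路的共同输入，
    循环至最后一层（第 N 层），融合两条支路的输出得到第四图像特征。"""
    inp = x3
    for n in range(1, N):                 # n = 1, 2, ..., N-1
        out_pet = encode_pet(n, inp)      # PET 编码器第 n 层输出
        out_mri = extract_mri(n, inp)     # MRI 特征提取器第 n 层输出
        inp = fuse(out_pet, out_mri)      # 第 n 个融合结果
    out_pet = encode_pet(N, inp)          # 第 N（最后一）层
    out_mri = extract_mri(N, inp)
    return fuse(out_pet, out_mri)         # 第四图像特征

# 用字符串记录数据流向，验证 N=3 时的执行顺序
trace = dual_encode(
    "x3", 3,
    encode_pet=lambda n, x: f"P{n}({x})",
    extract_mri=lambda n, x: f"M{n}({x})",
    fuse=lambda a, b: f"[{a}+{b}]",
)
```

以N=3运行时，trace记录的数据流与上例描述一致：第三图像特征先进入第1层两条支路，融合结果再进入第2层，最后一层两支路输出融合为第四图像特征。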
步骤54,基于第四图像特征进行解码操作,得到双编码融合网络模型。
其中,如图5b所示,基于第四图像特征进行解码操作,得到双编码融合网络模型,具体可以包括以下步骤541~步骤542。
步骤541,将第四图像特征确定为双编码融合网络模型的解码器的第1个输入。
步骤542,循环执行第二指定操作,直至m+1=M时终止循环执行第二指定操作,将第m+1个输入对应的卷积解码结果确定为解码去噪后的PET-MRI图像,以及将双编码融合网络模型确定为预先训练得到的双编码融合网络模型。
可选的,第二指定操作包括以下步骤。
依次将双编码融合网络模型的解码器的第m个输入进行循环卷积操作和上采样操作，得到解码器的第m个处理结果；m的取值范围为[1,2,…M]，M表示解码器的最后一层解码层的序号，且M=N；获取MRI特征提取器的第N-1个提取结果，并基于密集连接的循环卷积网络对第N-1个提取结果进行卷积操作，得到卷积操作结果；将第m个处理结果与卷积操作结果进行拼接得到解码器的第m个卷积解码结果，并将第m个卷积解码结果确定为解码器的第m+1个输入。
以下结合实例,对步骤54进行相关说明。
沿用上例,假设双编码融合网络模型的PET编码器的编码层的数目、MRI特征提取器的特征提取层的数目以及双编码融合网络模型的解码器的解码层的数目均为3,则首先可以将双编码融合网络模型的解码器的第1个输入进行循环卷积操作和上采样操作,得到解码器的第1个处理结果;其次,获取MRI特征提取器的第2个提取结果,并基于密集连接的循环卷积网络对第2个提取结果进行卷积操作,得到卷积操作结果;最后,将第1个处理结果与卷积操作结果进行拼接得到解码器的第1个卷积解码结果,并将第1个卷积解码结果确定为解码器的第2个输入。
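解码阶段"循环卷积+上采样，再与编码特征拼接"的控制流程可概括为如下示意代码（up、skip_conv、concat为占位函数，仅示意数据流向，并非实际网络实现）：

```python
def decode(x4, M, up, skip_conv, concat):
    """解码循环（示意）：第 m 个输入先经循环卷积+上采样（up），
    再与编码阶段保留的特征（经 skip_conv 卷积后）拼接，
    拼接结果作为第 m+1 个输入，循环至最后一层。"""
    inp = x4
    for m in range(1, M):
        processed = up(m, inp)         # 第 m 个处理结果
        skip = skip_conv(m)            # 编码阶段对应特征经卷积
        inp = concat(processed, skip)  # 第 m 个卷积解码结果 = 第 m+1 个输入
    return inp                         # 对应解码去噪后的 PET-MRI 图像

# 用字符串记录数据流向，验证 M=3 时的执行顺序
trace = decode(
    "x4", 3,
    up=lambda m, x: f"U{m}({x})",
    skip_conv=lambda m: f"S{m}",
    concat=lambda a, b: f"[{a}|{b}]",
)
```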
执行完上述步骤54之后，则可以得到双编码融合网络模型。在一种可选的实施方式中，为了保证双编码融合网络模型生成的PET-MRI图像尽可能地与标准PET-MRI图像相似，本发明实施例中，还可以基于标准PET-MRI图像和预设损失函数，对双编码融合网络模型的参数进行更新，直至双编码融合网络模型收敛于预设范围。
其中,预设损失函数至少根据用于校准双编码融合网络模型的噪声分布的平均绝对误差函数和用于防止双编码融合网络模型过拟合的函数建立。
例如，考虑到相关技术中采用L2损失函数时，虽然可以在一定程度上提升图像的峰值信噪比，但容易使得PET-MRI图像过于平滑、丢失细节，因此，本发明实施例可以采用L1损失函数，以更好地适应真实噪声分布。
可选的，还可以使用全变分（total variation，TV）正则项，以防止过拟合。其等式如下。
$$L=\frac{1}{N}\sum_{i=1}^{N}\left|G(x_i)-y_i\right|+\left\|\nabla_{h}G(x)\right\|_{2}^{2}+\left\|\nabla_{v}G(x)\right\|_{2}^{2}$$
其中，N表示去噪后的PET-MRI图像的像素点的数量，G(x_i)表示去噪后的PET-MRI图像的第i个像素点的值，y_i表示标准PET-MRI图像的第i个像素点的值；后两项分别表示去噪后的PET-MRI图像在水平方向的梯度的二阶范数的平方，以及在垂直方向的梯度的二阶范数的平方。
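该损失函数的计算过程可用如下纯Python示意代码说明（weight为假设的正则项权重，实际取值以训练配置为准）：

```python
def l1_tv_loss(pred, target, weight=1.0):
    """L1 损失 + 全变分（TV）正则项（示意实现）。
    L1 项用于校准噪声分布；TV 项为水平、垂直方向梯度的二阶范数平方，
    用于抑制过拟合。weight 为假设的正则项权重。"""
    n = sum(len(row) for row in pred)  # 像素点总数 N
    l1 = sum(abs(p - t)
             for rp, rt in zip(pred, target)
             for p, t in zip(rp, rt)) / n
    tv_h = sum((row[j + 1] - row[j]) ** 2        # 水平方向梯度平方和
               for row in pred for j in range(len(row) - 1))
    tv_v = sum((pred[i + 1][j] - pred[i][j]) ** 2  # 垂直方向梯度平方和
               for i in range(len(pred) - 1) for j in range(len(pred[0])))
    return l1 + weight * (tv_h + tv_v)

# 示例：2×2 去噪结果与标准图像
pred = [[1.0, 2.0], [1.0, 1.0]]
target = [[1.0, 1.0], [1.0, 1.0]]
loss = l1_tv_loss(pred, target, weight=0.1)  # 0.25 + 0.1*(1+1) = 0.45
```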
采用本发明实施例提供的方法,由于双编码融合网络模型是基于PET图像样本和MRI图像样本的融合特征训练得到,而且双编码融合网络包括密集连接的循环卷积网络和膨胀化卷积网络,密集连接的循环卷积网络用于提取图像的纹理信息;膨胀化卷积网络用于提取图像的空间信息,这样,将获取的PET图像和MRI图像输入到预先训练得到的双编码融合网络模型后,则可以基于密集连接的循环卷积网络和膨胀化卷积网络同时对图像的纹理信息和空间信息进行抓取,增强了纹理信息和空间信息的抓取能力,能够在较大程度上消除低剂量带来的噪声和伪影;而且还可以将PET图像的图像特征和MRI图像的图像特征融合,保证PET图像和MRI图像关联,从而提高PET-MRI图像的质量。
实施例3
以下结合实际场景,说明本发明实施例提供的方法在实际中如何应用。
请参见图6,为本发明实施例提供的方法在实际中的一种应用流程的示意图。
以下在描述该方法之前,先对本发明实施例涉及的双编码融合网络模型进行相关介绍。
如图6所示,该网络模型可以包括三个部分,第一部分是对PET特征的编码器(图中的PET Feature Encoder),第二部分是对MRI的特征提取器(图中的Inception Extractor for MRI),第三部分是去噪解码器(图中的PET Denoise Decoder)。其中,①表示基于密集连接的循环卷积网络提取图像特征; ②表示基于密集连接的循环卷积网络对编码层的输出结果进行循环卷积操作以及最大池化操作;③表示基于膨胀化卷积网络进行特征提取操作;④表示基于密集连接的卷积循环网络对解码器网络的输入进行循环卷积操作以及上采样操作;⑤表示对解码器网络的输出结果进行密集连接的循环卷积操作和激活函数激活操作。
实际应用中,可以将256×256大小的PET灰度图和MRI灰度图作为输入,输入到双编码融合网络模型,以便基于密集连接的循环卷积对PET灰度图和MRI灰度图进行特征提取操作(图中的①)。
特征提取之后,可以进行相加融合,并将融合后的特征分别作为PET特征的编码器和MRI的特征提取器下一阶段各自编码器的输入。
需要说明的是,将融合后的特征作为PET特征的编码器的下一阶段的输入,输入PET特征的编码器之前,还可以基于密集连接的循环卷积网络对融合后的特征进行循环卷积和最大池化操作(图中的②)。同时,将融合后的特征作为MRI的特征提取器的下一阶段的输入,输入MRI的特征提取器之前,还可以基于膨胀化卷积网络对融合后的特征进行特征提取操作(图中的③)。
与此同时,相加融合后的特征可以进行保留,并在解码去噪阶段与相应的层进行拼接。如图6所示,在最底层PET特征的编码器和MRI的特征提取器特征融合后,可以经过密集连接循环卷积网络进行循环卷积操作以及上采样操作(图中的④),以提升分辨率。
最后，将解码器的输出与对应编码器阶段的特征图进行拼接，并进行进一步的卷积解码操作以及激活操作（图中的⑤），即可输出维度为256×256的灰度图，其中，特征图数量走向为：32→64→128→256→256→256→128→64→32。
采用本发明实施例提供的方法，由于双编码融合网络模型是基于PET图像样本和MRI图像样本的融合特征训练得到，而且双编码融合网络包括密集连接的循环卷积网络和膨胀化卷积网络，密集连接的循环卷积网络用于提取图像的纹理信息；膨胀化卷积网络用于提取图像的空间信息，这样，将获取的PET图像和MRI图像输入到预先训练得到的双编码融合网络模型后，则可以基于密集连接的循环卷积网络和膨胀化卷积网络同时对图像的纹理信息和空间信息进行抓取，增强了纹理信息和空间信息的抓取能力，能够在较大程度上消除低剂量带来的噪声和伪影；而且还可以将PET图像的图像特征和MRI图像的图像特征融合，保证PET图像和MRI图像关联，从而提高PET-MRI图像的质量。
实施例4
针对现有技术中PET-MRI图像去噪方法去噪能力较弱、输出图像质量不高的问题，本发明实施例提供一种基于双编码融合网络模型的PET-MRI图像去噪装置，该装置70的具体结构示意图如图7所示，包括获取模块71和输入模块72。各模块的功能如下。
获取模块71,用于获取目标对象的正电子发射计算机断层显像PET图像和核磁共振成像MRI图像。
输入模块72,用于将PET图像和MRI图像输入到预先训练得到的双编码融合网络模型中,得到目标对象的PET-MRI图像;其中,双编码融合网络模型基于PET图像样本和MRI图像样本的融合特征训练得到,双编码融合网络包括密集连接的循环卷积网络和膨胀化卷积网络,密集连接的循环卷积网络用于提取图像的纹理信息;膨胀化卷积网络用于提取图像的空间信息。
可选的,该装置还可以包括训练模块,用于基于PET图像样本和MRI图像样本的融合特征训练得到双编码融合网络模型。
其中，训练模块，包括：提取单元，用于通过双编码融合网络模型的密集连接的循环卷积网络提取PET图像样本的第一图像特征和MRI图像样本的第二图像特征；融合单元，用于将第一图像特征和第二图像特征进行融合，以得到融合后的第三图像特征；循环单元，用于循环执行第一指定操作，直至n+1=N时终止循环执行第一指定操作，并将得到的融合结果确定为PET-MRI图像的第四图像特征；第一指定操作包括：将第三图像特征分别作为双编码融合网络模型的PET编码器的第n层编码层的输入和双编码融合网络模型的MRI特征提取器的第n层特征提取层的输入，得到PET编码器的第n个输出结果和MRI特征提取器的第n个提取结果；其中，PET编码器的编码层的数目与MRI特征提取器的特征提取层的数目相同；基于PET编码器的第n个输出结果和MRI特征提取器的第n个提取结果进行特征融合，得到第n个融合结果；n的取值范围为[1,2,…N]，N表示PET编码器的最后一层编码层的序号和MRI特征提取器的最后一层特征提取层的序号；将第n个融合结果分别作为PET编码器的第n+1层编码层的输入和MRI特征提取器的第n+1层特征提取层的输入，得到PET编码器的第n+1个输出结果和MRI特征提取器的第n+1个提取结果；将PET编码器的第n+1个输出结果和MRI特征提取器的第n+1个提取结果进行融合，得到PET-MRI图像的第四图像特征；解码单元，用于基于第四图像特征进行解码操作，得到双编码融合网络模型。
可选的,解码单元,用于:将第四图像特征确定为双编码融合网络模型的解码器的第1个输入;循环执行第二指定操作,直至m+1=M时终止循环执行第二指定操作,将第m+1个输入对应的卷积解码结果确定为解码去噪后的PET-MRI图像,以及将双编码融合网络模型确定为预先训练得到的双编码融合网络模型。
可选的，第二指定操作包括：依次将双编码融合网络模型的解码器的第m个输入进行循环卷积操作和上采样操作，得到解码器的第m个处理结果；m的取值范围为[1,2,…M]，M表示解码器的最后一层解码层的序号，且M=N；获取MRI特征提取器的第N-1个提取结果，并基于密集连接的循环卷积网络对第N-1个提取结果进行卷积操作，得到卷积操作结果；将第m个处理结果与卷积操作结果进行拼接得到解码器的第m个卷积解码结果，并将第m个卷积解码结果确定为解码器的第m+1个输入。
可选的,装置还包括更新模块,用于:基于标准PET-MRI图像和预设损失函数,对预设双编码融合网络模型的参数进行更新,直至双编码融合网络模型收敛于预设范围;预设损失函数至少根据用于校准双编码融合网络模型的噪声分布的平均绝对误差函数和用于防止双编码融合网络模型过拟合的函数建立。
可选的,用于防止双编码融合网络模型过拟合的函数至少根据双编码融合网络模型得到的PET-MRI图像在水平方向的梯度值和PET-MRI图像在垂直方向的梯度值建立。
可选的,装置还包括处理模块,用于在将第一图像特征和第二图像特征进行融合,以得到融合后的第三图像特征之前,通过1*1的卷积核和激活函数ReLU,对密集连接的循环卷积网络的输出结果进行处理,以恢复循环卷积网络的通道数。
可选的,密集连接的循环卷积网络包括至少两个卷积核,至少两个卷积核的参数相同。
可选的,膨胀化卷积网络包括:基于预设膨胀比例调节后的目标卷积;预设膨胀比例根据图像中不同形态和不同尺寸的目标对象所需的信息抓取能力确定;目标卷积用于对图像中目标对象的信息进行抓取。
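膨胀化卷积扩大感受野的原理可用如下一维示意代码说明（仅为说明膨胀卷积原理的假设性实现，实际网络中为二维卷积）：

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """一维膨胀卷积（示意实现）：卷积核各抽头间隔 dilation 个采样点，
    在不增加参数量的情况下扩大感受野，便于抓取不同尺寸目标的空间信息。"""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # 膨胀后的感受野宽度
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * dilation] for j in range(k)))
    return out

x = [1, 2, 3, 4, 5, 6]
y1 = dilated_conv1d(x, [1, 1, 1], dilation=1)  # 普通卷积，感受野宽度 3
y2 = dilated_conv1d(x, [1, 1, 1], dilation=2)  # 膨胀率 2，感受野宽度扩大为 5
```

可见同一个3抽头卷积核，在膨胀率为2时覆盖了5个采样点的范围，对应上文所述"基于预设膨胀比例调节"的信息抓取能力。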
采用本发明实施例提供的装置，由于双编码融合网络模型是基于PET图像样本和MRI图像样本的融合特征训练得到，而且双编码融合网络包括密集连接的循环卷积网络和膨胀化卷积网络，密集连接的循环卷积网络用于提取图像的纹理信息；膨胀化卷积网络用于提取图像的空间信息，这样，输入模块将获取模块获取的PET图像和MRI图像输入到预先训练得到的双编码融合网络模型之后，则可以基于密集连接的循环卷积网络和膨胀化卷积网络同时对图像的纹理信息和空间信息进行抓取，增强了纹理信息和空间信息的抓取能力，能够在较大程度上消除低剂量带来的噪声和伪影；而且还可以将PET图像的图像特征和MRI图像的图像特征融合，保证PET图像和MRI图像关联，从而提高PET-MRI图像的质量。
实施例5
图8为实现本发明各个实施例的一种电子设备的硬件结构示意图,该电子设备800包括但不限于:射频单元801、网络模块802、音频输出单元803、输入单元804、传感器805、显示单元806、用户输入单元807、接口单元808、存储器809、处理器810、以及电源811等部件。本领域技术人员可以理解,图8中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本发明实施例中,电子设备包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、以及计步器等。
其中,处理器810,用于获取目标对象的正电子发射计算机断层显像PET图像和核磁共振成像MRI图像;将PET图像和MRI图像输入到预先训练得到的双编码融合网络模型中,得到目标对象的PET-MRI图像;其中,双编码融合网络模型基于PET图像样本和MRI图像样本的融合特征训练得到,双编码融合网络包括密集连接的循环卷积网络和膨胀化卷积网络,密集连接的循环卷积网络用于提取图像的纹理信息;膨胀化卷积网络用于提取图像的空间信息。
存储器809,用于存储可在处理器810上运行的计算机程序,该计算机程序被处理器810执行时,实现处理器810所实现的上述功能。
应理解的是,本发明实施例中,射频单元801可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理器810处理;另外,将上行的数据发送给基站。通常,射频单元801包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元801还可以通过无线通信系统与网络和其他设备通信。
电子设备通过网络模块802为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。
音频输出单元803可以将射频单元801或网络模块802接收的或者在存储器809中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元803还可以提供与电子设备800执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元803包括扬声器、蜂鸣器以及受话器等。
输入单元804用于接收音频或视频信号。输入单元804可以包括图形处理器（Graphics Processing Unit，GPU）8041和麦克风8042，图形处理器8041对在视频捕获模式或图像捕获模式中由图像捕获装置（如摄像头）获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元806上。经图形处理器8041处理后的图像帧可以存储在存储器809（或其它存储介质）中或者经由射频单元801或网络模块802进行发送。麦克风8042可以接收声音，并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元801发送到移动通信基站的格式输出。
电子设备800还包括至少一种传感器805,比如光传感器、运动传感器以及其他传感器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板8061的亮度,接近传感器可在电子设备800移动到耳边时,关闭显示面板8061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别电子设备姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器805还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元806用于显示由用户输入的信息或提供给用户的信息。显示单元806可包括显示面板8061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板8061。
用户输入单元807可用于接收输入的数字或字符信息,以及产生与电子设备的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元807包括触控面板8071以及其他输入设备8072。触控面板8071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板8071上或在触控面板8071附近的操作)。触控面板8071可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器810,接收处理器810发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板8071。除了触控面板8071,用户输入单元807还可以包括其他输入设备8072。具体地,其他输入设备8072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
进一步的,触控面板8071可覆盖在显示面板8061上,当触控面板8071检测到在其上或附近的触摸操作后,传送给处理器810以确定触摸事件的类型,随后处理器810根据触摸事件的类型在显示面板8061上提供相应的视觉输出。虽然在图8中,触控面板8071与显示面板8061是作为两个独立的部件来实现电子设备的输入和输出功能,但是在某些实施例中,可以将触控面板8071与显示面板8061集成而实现电子设备的输入和输出功能,具体此处不做限定。
接口单元808为外部装置与电子设备800连接的接口。例如，外部装置可以包括有线或无线头戴式耳机端口、外部电源（或电池充电器）端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出（I/O）端口、视频I/O端口、耳机端口等等。接口单元808可以用于接收来自外部装置的输入（例如，数据信息、电力等等）并且将接收到的输入传输到电子设备800内的一个或多个元件或者可以用于在电子设备800和外部装置之间传输数据。
存储器809可用于存储软件程序以及各种数据。存储器809可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器809可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器810是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器809内的软件程序和/或模块,以及调用存储在存储器809内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。处理器810可包括一个或多个处理单元;优选的,处理器810可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器810中。
电子设备800还可以包括给各个部件供电的电源811(比如电池),优选的,电源811可以通过电源管理系统与处理器810逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
另外,电子设备800包括一些未示出的功能模块,在此不再赘述。
优选的,本发明实施例还提供一种电子设备,包括处理器810,存储器809,存储在存储器809上并可在所述处理器810上运行的计算机程序,该计算机程序被处理器810执行时实现上述基于双编码融合网络模型的PET-MRI图像去噪方法实施例的各个过程,且能达到相同的技术效果,为避免重复, 这里不再赘述。
本发明实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述基于双编码融合网络模型的PET-MRI图像去噪方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。其中,所述的计算机可读存储介质,如只读存储器(Read-Only Memory,简称ROM)、随机存取存储器(Random Access Memory,简称RAM)、磁碟或者光盘等。
此外,本发明实施例还提供了一种计算机程序产品,所述计算机程序产品包括存储在非暂态计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,当所述程序指令被计算机执行时,使所述计算机执行上述任意方法实施例中的方法。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中，使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品，该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、 商品或者设备中还存在另外的相同要素。
以上所述仅为本发明的实施例而已,并不用于限制本发明。对于本领域技术人员来说,本发明可以有各种更改和变化。凡在本发明的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本发明的权利要求范围之内。

Claims (11)

  1. 一种基于双编码融合网络模型的PET-MRI图像去噪方法,其中,包括:
    获取目标对象的正电子发射计算机断层显像PET图像和核磁共振成像MRI图像;
    将所述PET图像和所述MRI图像输入到预先训练得到的双编码融合网络模型中,得到所述目标对象的PET-MRI图像;其中,所述双编码融合网络模型基于PET图像样本和MRI图像样本的融合特征训练得到,所述双编码融合网络包括密集连接的循环卷积网络和膨胀化卷积网络,所述密集连接的循环卷积网络用于提取图像的纹理信息;所述膨胀化卷积网络用于提取图像的空间信息。
  2. 如权利要求1所述的方法,其中,在将所述PET图像和所述MRI图像输入到预先训练得到的双编码融合网络模型中,得到所述目标对象的PET-MRI图像之前,所述方法还包括:基于PET图像样本和MRI图像样本的融合特征训练得到所述双编码融合网络模型;
    其中,基于PET图像样本和MRI图像样本的融合特征训练得到所述双编码融合网络模型,包括:
    通过双编码融合网络模型的密集连接的循环卷积网络提取PET图像样本的第一图像特征和MRI图像样本的第二图像特征;
    将所述第一图像特征和所述第二图像特征进行融合,以得到融合后的第三图像特征;
    循环执行第一指定操作,直至n+1=N时终止循环执行所述第一指定操作,并将得到的融合结果确定为PET-MRI图像的第四图像特征;
    所述第一指定操作包括:
    将所述第三图像特征分别作为所述双编码融合网络模型的PET编码器的第n层编码层的输入和所述双编码融合网络模型的MRI特征提取器的第n层特征提取层的输入，得到所述PET编码器的第n个输出结果和所述MRI特征提取器的第n个提取结果；其中，所述PET编码器的编码层的数目与所述MRI特征提取器的特征提取层的数目相同；
    基于所述PET编码器的第n个输出结果和所述MRI特征提取器的第n个提取结果进行特征融合,得到第n个融合结果;n的取值范围为[1,2,…N],N表示所述PET编码器的最后一层编码层的序号和所述MRI特征提取器的最后一层特征提取层的序号;
    将所述第n个融合结果分别作为所述PET编码器的第n+1层编码层的输入和MRI特征提取器的第n+1层特征提取层的输入,得到所述PET编码器的第n+1个输出结果和所述MRI特征提取器的第n+1个提取结果;
    将所述PET编码器的第n+1个输出结果和所述MRI特征提取器的第n+1个提取结果进行融合,得到PET-MRI图像的第四图像特征;
    基于所述第四图像特征进行解码操作,得到所述双编码融合网络模型。
  3. 如权利要求2所述的方法,其中,在将所述第一图像特征和所述第二图像特征进行融合,以得到融合后的第三图像特征之前,所述方法还包括:
    通过1*1的卷积核和激活函数ReLU,对所述第一图像特征和所述第二图像特征进行卷积处理和激活处理,以恢复所述循环卷积网络的通道数。
  4. 如权利要求2所述的方法,其中,基于所述第四图像特征进行解码操作,得到所述双编码融合网络模型,包括:
    将所述第四图像特征确定为所述双编码融合网络模型的解码器的第1个输入;
    循环执行第二指定操作,直至m+1=M时终止循环执行所述第二指定操作,将第m+1个输入对应的卷积解码结果确定为解码去噪后的PET-MRI图像,以及将所述双编码融合网络模型确定为所述预先训练得到的双编码融合网络模型;
    所述第二指定操作包括:
    依次将所述双编码融合网络模型的解码器的第m个输入进行循环卷积操作和上采样操作,得到所述解码器的第m个处理结果;m的取值范围为[1,2,…M],M表示所述解码器的最后一层解码层的序号,且M=N;
    获取所述MRI特征提取器的第N-1个提取结果,并基于所述密集连接的循环卷积网络对所述第N-1个提取结果进行卷积操作,得到卷积操作结果;
    将所述第m个处理结果与所述卷积操作结果进行拼接得到所述解码器的第m个卷积解码结果,并将所述第m个卷积解码结果确定为所述解码器的第m+1个输入。
  5. 如权利要求2所述的方法,其中,所述方法还包括:
    基于标准PET-MRI图像和预设损失函数,对所述预设双编码融合网络模型的参数进行更新,直至所述双编码融合网络模型收敛于预设范围;
    所述预设损失函数至少根据用于校准所述双编码融合网络模型的噪声分布的平均绝对误差函数和用于防止所述双编码融合网络模型过拟合的函数建立。
  6. 如权利要求5所述的方法,其中,所述用于防止所述双编码融合网络模型过拟合的函数至少根据所述双编码融合网络模型得到的PET-MRI图像在水平方向的梯度值和PET-MRI图像在垂直方向的梯度值建立。
  7. 如权利要求1所述的方法,其中,所述密集连接的循环卷积网络包括至少两个卷积核,所述至少两个卷积核的参数相同。
  8. 如权利要求1所述的方法,其中,所述膨胀化卷积网络包括:基于预设膨胀比例调节后的目标卷积;
    所述预设膨胀比例根据图像中不同形态和不同尺寸的目标对象所需的信息抓取能力确定;
    所述目标卷积用于对图像中所述目标对象的信息进行抓取。
  9. 一种基于双编码融合网络模型的PET-MRI图像去噪装置,其中,包括获取模块和输入模块,其中:
    获取模块,用于获取目标对象的正电子发射计算机断层显像PET图像和核磁共振成像MRI图像;
    输入模块,用于将所述PET图像和所述MRI图像输入到预先训练得到的双编码融合网络模型中,得到所述目标对象的PET-MRI图像;其中,所述双编码融合网络模型基于PET图像样本和MRI图像样本的融合特征训练得到,所述双编码融合网络包括密集连接的循环卷积网络和膨胀化卷积网络,所述密集连接的循环卷积网络用于提取图像的纹理信息;所述膨胀化卷积网络用于提取图像的空间信息。
  10. 一种电子设备,其中,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现如权利要求1至8中任一项所述的基于双编码融合网络模型的PET-MRI图像去噪方法的步骤。
  11. 一种计算机可读存储介质,其中,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1至8中任一项所述的基于双编码融合网络模型的PET-MRI图像去噪方法的步骤。
PCT/CN2020/137567 2020-12-18 2020-12-18 基于双编码融合网络模型的pet-mri图像去噪方法、装置 WO2022126588A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/137567 WO2022126588A1 (zh) 2020-12-18 2020-12-18 基于双编码融合网络模型的pet-mri图像去噪方法、装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/137567 WO2022126588A1 (zh) 2020-12-18 2020-12-18 基于双编码融合网络模型的pet-mri图像去噪方法、装置

Publications (1)

Publication Number Publication Date
WO2022126588A1 true WO2022126588A1 (zh) 2022-06-23

Family

ID=82059932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/137567 WO2022126588A1 (zh) 2020-12-18 2020-12-18 基于双编码融合网络模型的pet-mri图像去噪方法、装置

Country Status (1)

Country Link
WO (1) WO2022126588A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115331083A (zh) * 2022-10-13 2022-11-11 齐鲁工业大学 基于逐步密集特征融合去雨网络的图像去雨方法及系统
US11540798B2 (en) 2019-08-30 2023-01-03 The Research Foundation For The State University Of New York Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising
CN115914630A (zh) * 2023-01-06 2023-04-04 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) 一种图像压缩方法、装置、设备及存储介质
CN116757966A (zh) * 2023-08-17 2023-09-15 中科方寸知微(南京)科技有限公司 基于多层级曲率监督的图像增强方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984957A (zh) * 2014-05-04 2014-08-13 中国科学院深圳先进技术研究院 胶囊内窥镜图像可疑病变区域自动预警系统
CN109730704A (zh) * 2018-12-29 2019-05-10 上海联影智能医疗科技有限公司 一种控制医用诊疗设备曝光的方法及系统
US20190209867A1 (en) * 2017-11-08 2019-07-11 Shanghai United Imaging Healthcare Co., Ltd. System and method for diagnostic and treatment
CN110084772A (zh) * 2019-03-20 2019-08-02 浙江医院 基于弯曲波的mri/ct融合方法
CN111325714A (zh) * 2020-01-21 2020-06-23 上海联影智能医疗科技有限公司 感兴趣区域的处理方法、计算机设备和可读存储介质

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11540798B2 (en) 2019-08-30 2023-01-03 The Research Foundation For The State University Of New York Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising
CN115331083A (zh) * 2022-10-13 2022-11-11 齐鲁工业大学 基于逐步密集特征融合去雨网络的图像去雨方法及系统
CN115914630A (zh) * 2023-01-06 2023-04-04 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) 一种图像压缩方法、装置、设备及存储介质
CN115914630B (zh) * 2023-01-06 2023-05-30 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) 一种图像压缩方法、装置、设备及存储介质
CN116757966A (zh) * 2023-08-17 2023-09-15 中科方寸知微(南京)科技有限公司 基于多层级曲率监督的图像增强方法及系统

Similar Documents

Publication Publication Date Title
WO2022126588A1 (zh) 基于双编码融合网络模型的pet-mri图像去噪方法、装置
CN110163048B (zh) 手部关键点的识别模型训练方法、识别方法及设备
CN110149541B (zh) 视频推荐方法、装置、计算机设备及存储介质
CN112651890A (zh) 基于双编码融合网络模型的pet-mri图像去噪方法、装置
CN108549863B (zh) 人体姿态预测方法、装置、设备及存储介质
JP7085062B2 (ja) 画像セグメンテーション方法、装置、コンピュータ機器およびコンピュータプログラム
US20210343041A1 (en) Method and apparatus for obtaining position of target, computer device, and storage medium
US20220036135A1 (en) Method and apparatus for determining image to be labeled and model training method and apparatus
CN110414631B (zh) 基于医学图像的病灶检测方法、模型训练的方法及装置
CN111091166B (zh) 图像处理模型训练方法、图像处理方法、设备及存储介质
CN113470029B (zh) 训练方法及装置、图像处理方法、电子设备和存储介质
CN108288032B (zh) 动作特征获取方法、装置及存储介质
WO2022126480A1 (zh) 基于Wasserstein生成对抗网络模型的高能图像合成方法、装置
CN107833219A (zh) 图像识别方法及装置
WO2023202285A1 (zh) 图像处理方法、装置、计算机设备及存储介质
CN112990053B (zh) 图像处理方法、装置、设备及存储介质
CN111915481B (zh) 图像处理方法、装置、电子设备及介质
CN108304506A (zh) 检索方法、装置及设备
CN114281956A (zh) 文本处理方法、装置、计算机设备及存储介质
CN111598896A (zh) 图像检测方法、装置、设备及存储介质
CN113257412B (zh) 信息处理方法、装置、计算机设备及存储介质
CN112037305B (zh) 对图像中的树状组织进行重建的方法、设备及存储介质
CN112287070A (zh) 词语的上下位关系确定方法、装置、计算机设备及介质
CN116955983A (zh) 脑电信号分析模型的训练方法、装置、设备及存储介质
CN113569822B (zh) 图像分割方法、装置、计算机设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20965604

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20965604

Country of ref document: EP

Kind code of ref document: A1