CN113052840B - Processing method based on low signal-to-noise ratio PET image - Google Patents

Processing method based on low signal-to-noise ratio PET image

Info

Publication number
CN113052840B
CN113052840B CN202110484347.8A
Authority
CN
China
Prior art keywords
image
layer
noise ratio
output
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110484347.8A
Other languages
Chinese (zh)
Other versions
CN113052840A (en)
Inventor
李楠
王涛
史张珏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Sinogram Medical Technology Co ltd
Original Assignee
Jiangsu Sinogram Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Sinogram Medical Technology Co ltd filed Critical Jiangsu Sinogram Medical Technology Co ltd
Priority to CN202110484347.8A priority Critical patent/CN113052840B/en
Publication of CN113052840A publication Critical patent/CN113052840A/en
Application granted granted Critical
Publication of CN113052840B publication Critical patent/CN113052840B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/02Affine transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/20ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a processing method based on a low signal-to-noise ratio PET image, which comprises the following steps: acquiring a first image with a low signal-to-noise ratio to be processed; inputting the first image into a trained generator G, and outputting a second image consistent with a high signal-to-noise ratio image. The generator G includes: a synthesis network and a mapping network. The mapping network encodes the input first image to obtain a style expression of the first image; the synthesis network, by means of the affine transformation of the mapping network, turns the style expression into the style corresponding to each layer structure, so that each layer adaptively adjusts the convolution output of the input first image, obtaining the second image. The second image acquired by the method has richer texture information, can meet clinical analysis requirements, and provides better conditions for accurate segmentation.

Description

Processing method based on low signal-to-noise ratio PET image
Technical Field
The invention relates to the field of medical imaging, in particular to a processing method based on a low signal-to-noise ratio PET image.
Background
Using deep learning to segment important parts of whole-body PET images (such as the liver and bladder) has high clinical application value: it allows the metabolic condition of human organs and tissues to be examined for abnormality. Current segmentation techniques for PET images, however, focus mainly on images acquired with a full dose of tracer. The high signal-to-noise ratio of a full-dose image provides accurate metabolic information, but requires injecting a large amount of radiotracer into the patient and a long radioactive scan, both potentially harmful to the patient. A low-dose image reduces the radiation burden compared with a full-dose image, but its low signal-to-noise ratio and loss of high-frequency information hinder image segmentation and prevent it from providing precise metabolic information.
For this reason, a method is needed that can process the PET images corresponding to low-dose tracers.
Disclosure of Invention
(I) Technical problem to be solved
In view of the above-mentioned drawbacks and deficiencies of the prior art, the present invention provides a method for processing a PET image based on a low signal-to-noise ratio.
(II) technical scheme
In order to achieve the above purpose, the main technical scheme adopted by the invention is as follows:
According to a first aspect of the present invention, an embodiment of the present invention provides a processing method based on a low signal-to-noise ratio PET image, including:
acquiring a first image with a low signal-to-noise ratio to be processed;
inputting the first image into a trained generator G, and outputting a second image matching a high signal-to-noise ratio;
the generator G includes: a synthesis network and a mapping network;
the mapping network encodes the input first image to obtain a style expression of the first image;
the synthesis network, by means of the affine transformation of the mapping network, turns the style expression into the style corresponding to each layer structure, so that each layer adaptively adjusts the convolution output of the input first image, obtaining the second image.
Optionally, the method further comprises:
determining, based on a user's region of interest, boundary information of the region of interest in the second image;
and segmenting according to the boundary information to obtain segmentation information.
Optionally, the synthesis network includes:
a first convolution layer, four layers of base modules, and a last convolution layer;
the first convolution layer is used for receiving the input first image and performing convolution processing to obtain convolution features;
each base module includes: batch normalization, activation, convolution, a noise module and adaptive instance normalization AdaIN;
the first-layer base module processes the input convolution features sequentially through its batch normalization, activation, convolution and noise modules, and feeds the result, combined with the affine-transformed first-layer style, into AdaIN to obtain the output of the first layer;
the input of the second-layer base module is the convolution features, and its output is the output of the second layer;
the input of the third-layer base module is the convolution features and the output of the first layer, and its output is the output of the third layer;
the input of the fourth-layer base module is the convolution features, the output of the first layer and the output of the second layer, and its output is the output of the fourth layer;
the last convolution layer convolves the convolution features, the output of the first layer, the output of the second layer, the output of the third layer and the output of the fourth layer, and outputs the second image.
Optionally, the adaptive instance normalization AdaIN is:
AdaIN(x_i) = γ_i · (x_i − μ(x_i)) / σ(x_i) + β_i
wherein x_i represents the i-th feature map, (γ_i, β_i) is a set of style adjustment parameters, μ(x_i) is the mean of the i-th feature map over the channel dimension, σ(x_i) is the standard deviation of the i-th feature map over the channel dimension, and i denotes the channel index.
Optionally, the mapping network sequentially performs style expression encoding and affine transformation processing on the input first image;
the mapping network receives the input first image, generates a style expression code carrying the low-frequency information of the input first image, and maps the style expression code into the adjustment parameters corresponding to AdaIN in each layer of base module, wherein the adjustment parameters carry the feature information of the input image.
Optionally, in the training stage, the random noise of the noise module is random noise conforming to Gaussian normal distribution;
in the use stage, the random noise of the noise module is 0.
Optionally, before acquiring the first image with low signal-to-noise ratio to be processed, the method further comprises:
acquiring a third image used for training G, namely a PET image with a high signal-to-noise ratio, and a fourth image used for training G, namely a PET image with a low signal-to-noise ratio;
inputting the fourth image into the established G, and outputting a fifth image;
inputting the third image and the fifth image into a discriminator D, judging and adjusting the training parameters of G according to a loss function, and training G and D alternately until the PET image output by G matches the third image, so as to obtain the trained G;
wherein the loss function L combines the following terms (β is a weighting hyper-parameter):
L_GAN(G, D) is the generative adversarial loss:
L_GAN(G, D) = −E_{x,y}[D(x, y)] + E_x[D(x, G(x))];
L_L1(G) is a loss function for suppressing image noise and preserving low-frequency information:
L_L1(G) = E_{x,y}[‖y − G(x)‖_1];
−E_{x,y}[D(x, y)] denotes the expectation over paired samples of low signal-to-noise ratio and high signal-to-noise ratio images, D(x, y) denotes the discriminator's judgment of the truly measured high signal-to-noise ratio image, and D(x, G(x)) denotes the discriminator's judgment of the image output by G;
L_seg is the loss function used for segmentation, a smoothed Dice loss:
L_seg = 1 − (2·Σ_i t_i·y_i + ε) / (Σ_i t_i + Σ_i y_i + ε)
t_i is the segmentation target value, taking 0 or 1; y_i is the network prediction, taking a value in (0, 1); ε is a smoothing coefficient;
the third image is y; the fourth image is x;
the third image is a high signal-to-noise ratio PET image acquired and reconstructed by the PET acquisition device; the fourth image is a low signal-to-noise ratio PET image reconstructed from part of the raw data sequence of the third image, and serves as the input during training.
In a second aspect, an embodiment of the present invention further provides an electronic device, comprising: a memory, a processor and a bus, wherein the processor and the memory are connected through the bus;
the memory is used for storing a program, and the processor is used for running the program, wherein, when running, the program executes any of the processing methods based on a low signal-to-noise ratio PET image of the first aspect.
In a third aspect, embodiments of the present invention further provide a PET system, comprising: a PET image reconstruction device and the above electronic device, the PET image reconstructed by the PET image reconstruction device being processed via the electronic device.
(III) beneficial effects
The beneficial effects of the invention are as follows: the method processes the low signal-to-noise ratio PET image, i.e. the low-dose tracer image, with the trained G, and obtains a second image consistent with the full-dose tracer, i.e. high signal-to-noise ratio, PET image; the second image is a PET image with high contrast and a complete structure.
The method of this embodiment can simultaneously complete the translation of the PET image from low dose to full dose and the image segmentation. The translated high signal-to-noise ratio PET image can meet clinical requirements, has richer texture details, recovers the SUV values of the PET image, and can provide accurate metabolic information; meanwhile, the segmentation network segments organs such as the liver and bladder more accurately, providing convenience for further case analysis. The method enables pathological examination while the patient only needs to receive a low dose of radioactive tracer, providing convenience for patients and doctors.
Drawings
Fig. 1A and fig. 1B are schematic flow diagrams of a processing method based on a low signal-to-noise ratio PET image according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a generator according to an embodiment of the present invention;
FIG. 3 is a general architecture diagram including a generator and a discriminator provided in one embodiment of the invention;
FIG. 4 is a schematic diagram of a generator according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a mapping network according to an embodiment of the present invention;
FIG. 6 is a comparative schematic of the results of the method of the present invention and a prior-art method.
Detailed Description
The invention will be better explained by the following detailed description of the embodiments with reference to the drawings.
In order that the above-described aspects may be better understood, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The quality of a PET image acquired and reconstructed by current PET acquisition devices is mainly affected by two factors: the amount of tracer and the acquisition time. To obtain a high-quality (i.e., high signal-to-noise ratio) PET image, a PET acquisition device acquires, on the basis of the high-dose/full-dose tracer, images for a preset duration and reconstructs a high signal-to-noise ratio PET image.
The low signal-to-noise ratio PET images mentioned in the following embodiments may include at least the following three types:
a PET image reconstructed by the PET acquisition device from data acquired, based on the high-dose tracer, for Z times the preset duration, wherein Z is a value greater than 0 and smaller than 1;
a PET image reconstructed by the PET acquisition device from data acquired, based on the low-dose tracer, for the preset duration; and
a PET image reconstructed by the PET acquisition device from data acquired, based on the low-dose tracer, for Z times the preset duration.
The low-dose tracer in the embodiments of the invention may be the radiopharmaceutical FDG injected into the patient at a dose of less than 0.08 mCi/kg, for example less than 0.05 mCi/kg, used to acquire the PET image of the current patient and to verify the PET image processing method of the embodiments of the invention.
The high-dose tracer in this embodiment may be an injection given to the patient at a dose of 0.08 mCi/kg or more; typically the dose injected when acquiring a patient's PET image in a PET system is 0.1 mCi/kg, and in the prior art the dose injected into the patient is increased, e.g. to 0.12-0.15 mCi/kg, in order to improve PET image quality.
The method of this embodiment can be applied to both 2-dimensional and 3-dimensional PET images, and mainly translates a PET image with a low signal-to-noise ratio into a PET image with a high signal-to-noise ratio.
Example 1
As shown in fig. 1A, an embodiment of the present invention provides a processing method based on a low signal-to-noise ratio PET image. The execution body of the method may be any electronic device or computer, and the method is typically integrated in a PET-CT device/PET device to process, e.g. translate, the low signal-to-noise ratio PET image. The method of the present embodiment may include the following steps:
101. acquiring a first image with a low signal-to-noise ratio to be processed;
102. inputting the first image into a trained generator G, and outputting a second image matching the high signal-to-noise ratio.
The generator G includes: a synthesis network and a mapping network;
the mapping network encodes the input first image to obtain a style expression of the first image;
the synthesis network, by means of the affine transformation of the mapping network, turns the style expression into the style corresponding to each layer structure, so that each layer adaptively adjusts the convolution output of the input first image, obtaining the second image.
The method of this embodiment processes a low signal-to-noise ratio PET image, i.e. a low-dose tracer image, with the trained G to obtain a second image consistent with a high signal-to-noise ratio, i.e. full-dose tracer, PET image; the second image is a PET image with high contrast and a complete structure.
In an alternative implementation, as shown in fig. 1B, the above method may further include the following steps 103 and 104:
103. determining, based on a user's region of interest, boundary information of the region of interest in the second image;
104. segmenting according to the boundary information to obtain segmentation information.
The method of this embodiment realizes the translation and segmentation of the low signal-to-noise ratio PET image; it can effectively improve the segmentation accuracy of the low signal-to-noise ratio PET image and repair the structural and functional information missing from it.
In this embodiment, the translated high signal-to-noise ratio PET image can meet clinical requirements, has richer texture details, recovers the SUV values of the PET image, and can provide accurate metabolic information; meanwhile, the segmentation network segments organs such as the liver and bladder more accurately, providing convenience for further case analysis.
Example two
Referring to fig. 2 to 5, the generator G described above and the discriminator D used to train the generator G will now be described in detail.
As shown in fig. 2 and 3, the generator G in this embodiment includes a synthesis network and a mapping network.
Wherein the synthesis network comprises: a first convolution layer, four layers of base modules, and a last convolution layer;
the first convolution layer is used for receiving the input first image and performing convolution processing to obtain convolution features;
each base module includes: batch normalization, activation, convolution, a noise module and adaptive instance normalization AdaIN;
the first-layer base module processes the input convolution features sequentially through its batch normalization, activation, convolution and noise modules, and feeds the result, combined with the affine-transformed first-layer style, into AdaIN to obtain the output of the first layer;
the input of the second-layer base module is the convolution features, and its output is the output of the second layer;
the input of the third-layer base module is the convolution features and the output of the first layer, and its output is the output of the third layer;
the input of the fourth-layer base module is the convolution features, the output of the first layer and the output of the second layer, and its output is the output of the fourth layer;
the last convolution layer convolves the convolution features, the output of the first layer, the output of the second layer, the output of the third layer and the output of the fourth layer, and outputs the second image.
In this embodiment, the adaptive instance normalization AdaIN is:
AdaIN(x_i) = γ_i · (x_i − μ(x_i)) / σ(x_i) + β_i
wherein x_i represents the i-th feature map, (γ_i, β_i) is a set of style adjustment parameters, μ(x_i) is the mean of the i-th feature map over the channel dimension, and σ(x_i) is its standard deviation over the channel dimension; i denotes the channel index. The channels in this embodiment are the channels of the corresponding convolution.
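For illustration, a minimal PyTorch sketch of the AdaIN operation above; the (N, C, H, W) tensor layout and the ε guard against division by zero are assumptions of the sketch, not details fixed by the patent:

```python
import torch

def adain(x: torch.Tensor, gamma: torch.Tensor, beta: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """AdaIN(x_i) = gamma_i * (x_i - mu(x_i)) / sigma(x_i) + beta_i.

    x:            feature maps, shape (N, C, H, W)
    gamma, beta:  per-channel style adjustment parameters, shape (N, C)
    """
    mu = x.mean(dim=(2, 3), keepdim=True)          # mean of each channel's map
    sigma = x.std(dim=(2, 3), keepdim=True) + eps  # std, guarded against zero
    return gamma[..., None, None] * (x - mu) / sigma + beta[..., None, None]
```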
In the training stage, the random noise of each base module's noise module may be random noise conforming to a Gaussian normal distribution; in the use stage, the random noise of the noise module may be 0.
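A minimal sketch of that train/use behavior; the learnable per-channel noise strength is an assumption borrowed from style-based generator designs, since the patent only fixes Gaussian noise in training and zero noise in use:

```python
import torch
import torch.nn as nn

class NoiseModule(nn.Module):
    """Adds Gaussian noise during training; contributes nothing in eval mode."""
    def __init__(self, channels: int):
        super().__init__()
        # learnable per-channel noise strength -- an assumption of this sketch
        self.weight = nn.Parameter(torch.zeros(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # use stage: the random noise is 0
        noise = torch.randn(x.size(0), 1, *x.shape[2:], device=x.device)
        return x + self.weight.view(1, -1, 1, 1) * noise
```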
Expressed another way: the main body of the generator is a densely connected network, to which this embodiment adds a style adjustment module. The convolution kernels of the densely connected network are 3×3, a batch normalization layer is added to accelerate the convergence of the generator, and a LeakyReLU activation function increases the generator's nonlinearity. The input to each convolution operation is a splice of the features output by the previous convolutions; such dense connections greatly reduce the parameters of the generator model while adding more paths for the information flow.
The numerous feature maps generated as the densely connected network in the generator progressively encodes its input differ in importance, so this embodiment adds the style expression adjustment; see the adaptive instance normalization (AdaIN) shown in fig. 2 and fig. 5.
That is, in a specific application, in order to accurately adjust the style at each level, a style expression is first learned by the mapping network and then turned, through affine transformation, into the style that adjusts the convolution output of each layer.
In addition, in order to improve model robustness, a noise module is added during training to apply a small disturbance to network training; the noise is set to 0 in the test stage, namely the use stage.
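Combining the two sketches above with the connection pattern stated earlier (layer 3 also receives the output of layer 1, layer 4 the outputs of layers 1 and 2, and the last convolution all intermediate outputs), one base module and the synthesis network could look roughly as follows; the channel counts and the single-channel PET input are illustrative assumptions:

```python
import torch
import torch.nn as nn

class BaseModule(nn.Module):
    """One base module: batch norm -> LeakyReLU -> 3x3 conv -> noise -> AdaIN."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_ch)
        self.act = nn.LeakyReLU(0.2)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.noise = NoiseModule(out_ch)   # sketched above

    def forward(self, x, gamma, beta):
        h = self.noise(self.conv(self.act(self.bn(x))))
        return adain(h, gamma, beta)       # sketched above

class SynthesisNetwork(nn.Module):
    """First conv, four base modules with the stated dense inputs, last conv."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.first = nn.Conv2d(1, ch, kernel_size=3, padding=1)  # 1-channel PET
        self.b1 = BaseModule(ch, ch)        # input: conv features
        self.b2 = BaseModule(ch, ch)        # input: conv features
        self.b3 = BaseModule(2 * ch, ch)    # input: conv features + out 1
        self.b4 = BaseModule(3 * ch, ch)    # input: conv features + outs 1, 2
        self.last = nn.Conv2d(5 * ch, 1, kernel_size=3, padding=1)

    def forward(self, x, styles):
        # styles: four (gamma, beta) pairs supplied by the mapping network
        f = self.first(x)
        o1 = self.b1(f, *styles[0])
        o2 = self.b2(f, *styles[1])
        o3 = self.b3(torch.cat([f, o1], dim=1), *styles[2])
        o4 = self.b4(torch.cat([f, o1, o2], dim=1), *styles[3])
        return self.last(torch.cat([f, o1, o2, o3, o4], dim=1))
```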
Referring to fig. 5, the mapping network sequentially performs style expression encoding and affine transformation processing on the input first image (as in block a in the figure);
the mapping network receives the input first image, generates a style expression code carrying the low-frequency information of the input first image, and maps the style expression code into the adjustment parameters corresponding to AdaIN in each layer of base module, wherein the adjustment parameters carry the feature information of the input image.
The mapping network includes a normalization module for performing pixel normalization on the input first image, and a sampling module for four-stage downsampling. The sampling module scales the number of channels by means of convolution layers, halving the size of the feature map with each convolution; through the activation function it obtains a feature map with 16 channels and size 4 × 4, expands this feature map into a one-dimensional array, and obtains, after a linear network and the activation function, a style expression code with 1 channel and size 512.
In addition, the mapping network maps the style expression code into 2 × 4 vectors, forming the adjustment parameters γ_i, β_i corresponding to AdaIN in each layer of base module, 4 being the number of base-module layers in G; the feature map corresponding to each layer of base module corresponds to one group of adjustment parameters.
In practical applications, before the methods of fig. 1A and 1B are carried out, the method further includes a step 100, not shown in the figures:
100. acquiring a third image used for training G, namely a PET image with a high signal-to-noise ratio, and a fourth image used for training G, namely a PET image with a low signal-to-noise ratio;
inputting the fourth image into the established G, and outputting a fifth image;
inputting the third image and the fifth image into a discriminator D, judging and adjusting the training parameters of G according to a loss function (i.e. an objective function), and training G and D alternately until the PET image finally output by G matches the third image, so as to obtain the trained G; a comparison of the results is shown in FIG. 6.
Wherein the objective function/loss function L combines the following terms (β is a weighting hyper-parameter):
L_GAN(G, D) is the generative adversarial loss:
L_GAN(G, D) = −E_{x,y}[D(x, y)] + E_x[D(x, G(x))];
L_L1(G) is a loss function for suppressing image noise and preserving low-frequency information:
L_L1(G) = E_{x,y}[‖y − G(x)‖_1];
−E_{x,y}[D(x, y)] denotes the expectation over paired samples of low signal-to-noise ratio and high signal-to-noise ratio images, D(x, y) denotes the discriminator's judgment of the truly measured high signal-to-noise ratio image, and D(x, G(x)) denotes the discriminator's judgment of the image output by G;
L_seg is the loss function used for segmentation, a smoothed Dice loss:
L_seg = 1 − (2·Σ_i t_i·y_i + ε) / (Σ_i t_i + Σ_i y_i + ε)
t_i is the segmentation target value, taking 0 or 1; y_i is the network prediction, taking a value in (0, 1); ε is a smoothing coefficient that smooths both the loss and its gradient;
the third image is y; the fourth image is x;
the third image is a high signal-to-noise ratio PET image acquired and reconstructed by the PET acquisition device; the fourth image is a low signal-to-noise ratio PET image reconstructed from part of the raw data sequence of the third image, and serves as the input during training.
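A hedged sketch of the three loss terms as written above; batch means stand in for the expectations, and placing β on the L1 term is an assumption about how the terms are summed:

```python
import torch

def adversarial_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """L_GAN(G, D) = -E_{x,y}[D(x, y)] + E_x[D(x, G(x))]."""
    return -d_real.mean() + d_fake.mean()

def l1_loss(y: torch.Tensor, g_x: torch.Tensor) -> torch.Tensor:
    """L_L1(G) = E_{x,y}[||y - G(x)||_1]."""
    return (y - g_x).abs().mean()

def dice_seg_loss(t: torch.Tensor, y: torch.Tensor,
                  eps: float = 1.0) -> torch.Tensor:
    """Smoothed Dice loss: t is the 0/1 target, y the prediction in (0, 1),
    eps the smoothing coefficient that smooths loss and gradient."""
    inter = (t * y).sum()
    return 1.0 - (2.0 * inter + eps) / (t.sum() + y.sum() + eps)

def total_loss(d_real, d_fake, y, g_x, t, seg_pred, beta: float = 100.0):
    # beta weights the L1 term here -- an assumption about the combination
    return (adversarial_loss(d_real, d_fake)
            + beta * l1_loss(y, g_x)
            + dice_seg_loss(t, seg_pred))
```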
The trained generator in this embodiment can translate a low signal-to-noise ratio image into a high signal-to-noise ratio image, while the segmentation network segments the important parts of the image output by the generator.
During training, the generator uses the low signal-to-noise ratio image to generate a high signal-to-noise ratio PET image as close as possible to the real high signal-to-noise ratio image, so as to deceive the discriminator into scoring the generated image highly; at the same time, the generated image also makes the segmentation of the segmentation network more accurate. The overall structure consists of a style-based generator, a discriminator and a segmentation network, and each loss function is used to optimize the parameters of these three networks during training.
In this embodiment, the discriminator adopts the PatchGAN network structure to distinguish the generator's output image from the truly measured high signal-to-noise ratio image; together they form a generative adversarial model, pushing the generator to produce more realistic images.
The segmentation network in this embodiment adopts a 3D U-Net structure to segment the generated image.
The method of this embodiment comprises: acquiring a low signal-to-noise ratio PET image to be processed; inputting the low signal-to-noise ratio PET image into the trained G, and outputting a PET image carrying the feature information of a high signal-to-noise ratio PET image; then inputting the generated PET image into the segmentation network to produce a segmentation result image. The sharpness of the PET image output by the generator is consistent with that of the real high signal-to-noise ratio image; the segmented image labels the region of interest, outlining the segmentation boundary along the organ boundary. The method of this embodiment can repair image information and further segment the region of interest of the generated image, facilitating clinical diagnosis.
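Putting the stages together, inference on a low signal-to-noise ratio image would look roughly like this, reusing the sketches above; seg_net stands in for the 3D U-Net segmentation network and is assumed rather than defined here:

```python
import torch

@torch.no_grad()
def process_low_snr_pet(low_snr: torch.Tensor,
                        mapping: "MappingNetwork",
                        synthesis: "SynthesisNetwork",
                        seg_net: torch.nn.Module):
    """low_snr: (N, 1, H, W) low signal-to-noise ratio PET image."""
    # eval mode: every noise module contributes 0, as stated above
    mapping.eval(); synthesis.eval(); seg_net.eval()
    styles = mapping(low_snr)              # style expression of the input
    high_snr = synthesis(low_snr, styles)  # translated high-SNR image
    mask = seg_net(high_snr)               # region-of-interest segmentation
    return high_snr, mask
```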
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, comprising: a memory, a processor and a bus, wherein the processor and the memory are connected through the bus;
the memory is used for storing a program, and the processor is used for running the program, wherein, when running, the program executes the processing method based on a low signal-to-noise ratio PET image according to any of the above embodiments.
In specific practice, the above electronic device may be a PET device or a PET-CT device, each of which has the program of this embodiment integrated therein.
According to yet another aspect, this embodiment further provides a PET system, comprising: a PET image reconstruction device and the electronic device of any of the above embodiments, the PET image reconstructed by the PET image reconstruction device being processed via the electronic device.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the terms first, second, third, etc. are for convenience of description only and do not denote any order. These terms may be understood as part of the component name.
Furthermore, it should be noted that in the description of the present specification, the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., refer to a specific feature, structure, material, or characteristic described in connection with the embodiment or example being included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art upon learning the basic inventive concepts. Therefore, the appended claims should be construed to include preferred embodiments and all such variations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, the present invention should also include such modifications and variations provided that they come within the scope of the following claims and their equivalents.

Claims (8)

1. A processing method based on a low signal-to-noise ratio PET image, comprising:
acquiring a first image with a low signal-to-noise ratio to be processed;
inputting the first image into a trained generator G, and outputting a second image matching a high signal-to-noise ratio;
the generator G includes: a synthesis network and a mapping network;
the mapping network encodes the input first image to obtain a style expression of the first image;
the synthesis network, by means of the affine transformation of the mapping network, turns the style expression into the style corresponding to each layer structure, so that each layer adaptively adjusts the convolution output of the input first image, obtaining the second image;
the synthesis network comprises:
a first convolution layer, four layers of base modules, and a last convolution layer;
the first convolution layer is used for receiving the input first image and performing convolution processing to obtain convolution features;
each base module includes: batch normalization, activation, convolution, a noise module and adaptive instance normalization AdaIN;
the first-layer base module processes the input convolution features sequentially through its batch normalization, activation, convolution and noise modules, and feeds the result, combined with the affine-transformed first-layer style, into AdaIN to obtain the output of the first layer;
the input of the second-layer base module is the convolution features, and its output is the output of the second layer;
the input of the third-layer base module is the convolution features and the output of the first layer, and its output is the output of the third layer;
the input of the fourth-layer base module is the convolution features, the output of the first layer and the output of the second layer, and its output is the output of the fourth layer;
the last convolution layer is used for convolving the convolution features, the output of the first layer, the output of the second layer, the output of the third layer and the output of the fourth layer, and outputting the second image.
2. The method according to claim 1, further comprising:
determining, based on a user's region of interest, boundary information of the region of interest in the second image;
and segmenting according to the boundary information to obtain segmentation information.
3. The method of claim 1, wherein the adaptive instance normalization AdaIN is:
AdaIN(x_i) = γ_i · (x_i − μ(x_i)) / σ(x_i) + β_i
wherein x_i represents the i-th feature map, (γ_i, β_i) is a set of style adjustment parameters, μ(x_i) is the mean of the i-th feature map over the channel dimension, σ(x_i) is the standard deviation of the i-th feature map over the channel dimension, and i denotes the channel index.
4. A method according to claim 1 or 3, wherein the mapping network is configured to sequentially perform style expression encoding and affine transformation processing on the input first image;
the mapping network is configured to receive the input first image, generate a style expression code carrying the low-frequency information of the input first image, and map the style expression code into the adjustment parameters corresponding to AdaIN in each layer of base module, wherein the adjustment parameters carry the feature information of the input image.
5. The method of claim 1, wherein,
in the training stage, the random noise of the noise module is random noise conforming to a Gaussian normal distribution;
in the use stage, the random noise of the noise module is 0.
6. A method according to any one of claims 1 to 3, wherein,
before acquiring the first image with a low signal-to-noise ratio to be processed, the method further comprises:
acquiring a third image used for training G, namely a PET image with a high signal-to-noise ratio, and a fourth image used for training G, namely a PET image with a low signal-to-noise ratio;
inputting the fourth image into the established G, and outputting a fifth image;
inputting the third image and the fifth image into a discriminator D, judging and adjusting the training parameters of G according to a loss function, and training G and D alternately until the PET image output by G matches the third image, so as to obtain the trained G;
wherein the loss function L combines the following terms (β is a weighting hyper-parameter):
L_GAN(G, D) is the generative adversarial loss:
L_GAN(G, D) = −E_{x,y}[D(x, y)] + E_x[D(x, G(x))];
L_L1(G) is a loss function for suppressing image noise and preserving low-frequency information:
L_L1(G) = E_{x,y}[‖y − G(x)‖_1];
−E_{x,y}[D(x, y)] denotes the expectation over paired samples of low signal-to-noise ratio and high signal-to-noise ratio images, D(x, y) denotes the discriminator's judgment of the truly measured high signal-to-noise ratio image, and D(x, G(x)) denotes the discriminator's judgment of the image output by G;
L_seg is the loss function used for segmentation, a smoothed Dice loss:
L_seg = 1 − (2·Σ_i t_i·y_i + ε) / (Σ_i t_i + Σ_i y_i + ε)
t_i is the segmentation target value, taking 0 or 1; y_i is the network prediction, taking a value in (0, 1); ε is a smoothing coefficient;
the third image is y; the fourth image is x;
the third image is a high signal-to-noise ratio PET image acquired and reconstructed by the PET acquisition device; the fourth image is a low signal-to-noise ratio PET image reconstructed from part of the raw data sequence of the third image, and serves as the input during training.
7. An electronic device, comprising: a memory, a processor and a bus, wherein the processor and the memory are connected through the bus;
the memory is used for storing a program, and the processor is used for running the program, wherein, when running, the program executes the processing method based on a low signal-to-noise ratio PET image according to any one of claims 1 to 6.
8. A PET system, comprising: a PET image reconstruction device and an electronic device as claimed in claim 7, the PET image reconstructed by the PET image reconstruction device being processed via the electronic device.
CN202110484347.8A 2021-04-30 2021-04-30 Processing method based on low signal-to-noise ratio PET image Active CN113052840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110484347.8A CN113052840B (en) 2021-04-30 2021-04-30 Processing method based on low signal-to-noise ratio PET image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110484347.8A CN113052840B (en) 2021-04-30 2021-04-30 Processing method based on low signal-to-noise ratio PET image

Publications (2)

Publication Number Publication Date
CN113052840A CN113052840A (en) 2021-06-29
CN113052840B true CN113052840B (en) 2024-02-02

Family

ID=76517931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110484347.8A Active CN113052840B (en) 2021-04-30 2021-04-30 Processing method based on low signal-to-noise ratio PET image

Country Status (1)

Country Link
CN (1) CN113052840B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393551A (en) * 2021-06-30 2021-09-14 赛诺联合医疗科技(北京)有限公司 Image system based on cloud server

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493951A (en) * 2017-11-08 2019-03-19 上海联影医疗科技有限公司 For reducing the system and method for dose of radiation
CN110503654A (en) * 2019-08-01 2019-11-26 中国科学院深圳先进技术研究院 A kind of medical image cutting method, system and electronic equipment based on generation confrontation network
CN111489412A (en) * 2019-01-25 2020-08-04 辉达公司 Semantic image synthesis for generating substantially realistic images using neural networks
CN112085677A (en) * 2020-09-01 2020-12-15 深圳先进技术研究院 Image processing method, system and computer storage medium
KR20210025972A (en) * 2019-08-28 2021-03-10 가천대학교 산학협력단 System for reconstructing quantitative PET dynamic image using neural network and Complementary Frame Reconstruction and method therefor
CN112508175A (en) * 2020-12-10 2021-03-16 深圳先进技术研究院 Multi-task learning type generation type confrontation network generation method and system for low-dose PET reconstruction
CN112819914A (en) * 2021-02-05 2021-05-18 北京航空航天大学 PET image processing method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20230129195A (en) * 2017-04-25 2023-09-06 더 보드 어브 트러스티스 어브 더 리랜드 스탠포드 주니어 유니버시티 Dose reduction for medical imaging using deep convolutional neural networks
US11576628B2 (en) * 2018-01-03 2023-02-14 Koninklijke Philips N.V. Full dose PET image estimation from low-dose PET imaging using deep learning
US11132792B2 (en) * 2018-02-22 2021-09-28 Siemens Healthcare Gmbh Cross domain medical image segmentation

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493951A (en) * 2017-11-08 2019-03-19 上海联影医疗科技有限公司 For reducing the system and method for dose of radiation
CN111489412A (en) * 2019-01-25 2020-08-04 辉达公司 Semantic image synthesis for generating substantially realistic images using neural networks
CN110503654A (en) * 2019-08-01 2019-11-26 中国科学院深圳先进技术研究院 A kind of medical image cutting method, system and electronic equipment based on generation confrontation network
KR20210025972A (en) * 2019-08-28 2021-03-10 가천대학교 산학협력단 System for reconstructing quantitative PET dynamic image using neural network and Complementary Frame Reconstruction and method therefor
CN112085677A (en) * 2020-09-01 2020-12-15 深圳先进技术研究院 Image processing method, system and computer storage medium
CN112508175A (en) * 2020-12-10 2021-03-16 深圳先进技术研究院 Multi-task learning type generation type confrontation network generation method and system for low-dose PET reconstruction
CN112819914A (en) * 2021-02-05 2021-05-18 北京航空航天大学 PET image processing method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Style-Based Generator Architecture for Generative Adversarial Networks; Tero Karras et al.; 《CVF Conference on Computer Vision and Pattern Recognition》; 4401-4410 *
Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss; Jiahong Ouyang et al.; 《American Association of Physicists in Medicine》; 3555-3564 *
A survey of applications of deep learning in medical imaging; Shi Jun; Wang Linlin; Wang Shanshan; Chen Yanxia; Wang Qian; Wei Dongming; Liang Shujun; Peng Jialin; Yi Jiajin; Liu Shengfeng; Ni Dong; Wang Mingliang; Zhang Daoqiang; Shen Dinggang; Journal of Image and Graphics (10); 7-35 *

Also Published As

Publication number Publication date
CN113052840A (en) 2021-06-29

Similar Documents

Publication Publication Date Title
Emami et al. Generating synthetic CTs from magnetic resonance images using generative adversarial networks
Wang et al. 3D auto-context-based locality adaptive multi-modality GANs for PET synthesis
Chen et al. U‐net‐generated synthetic CT images for magnetic resonance imaging‐only prostate intensity‐modulated radiation therapy treatment planning
Chen et al. Fully automated multiorgan segmentation in abdominal magnetic resonance imaging with deep neural networks
CN107133549B (en) ECT motion gating signal acquisition method and ECT image reconstruction method
US20120302880A1 (en) System and method for specificity-based multimodality three- dimensional optical tomography imaging
US11250543B2 (en) Medical imaging using neural networks
CN111870245B (en) Cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method
Xue et al. LCPR-Net: low-count PET image reconstruction using the domain transform and cycle-consistent generative adversarial networks
CN111340903A (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
Singh et al. Medical image generation using generative adversarial networks
CN109844815A (en) The image procossing based on feature is carried out using from the characteristic image of different iterative extractions
CN110415310A (en) Medical scanning imaging method, device, storage medium and computer equipment
Emami et al. Attention-guided generative adversarial network to address atypical anatomy in synthetic CT generation
Kläser et al. Deep boosted regression for MR to CT synthesis
WO2020113148A1 (en) Single or a few views computed tomography imaging with deep neural network
Sun et al. Double U-Net CycleGAN for 3D MR to CT image synthesis
CN110270015B (en) sCT generation method based on multi-sequence MRI
CN113052840B (en) Processing method based on low signal-to-noise ratio PET image
Amirkolaee et al. Development of a GAN architecture based on integrating global and local information for paired and unpaired medical image translation
Xie et al. Generation of contrast-enhanced CT with residual cycle-consistent generative adversarial network (Res-CycleGAN)
Lei et al. Generative adversarial network for image synthesis
Lee et al. Study on Optimal Generative Network for Synthesizing Brain Tumor‐Segmented MR Images
US11481934B2 (en) System, method, and computer-accessible medium for generating magnetic resonance imaging-based anatomically guided positron emission tomography reconstruction images with a convolutional neural network
Lei et al. Generative adversarial networks for medical image synthesis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant