CN111489404B - Image reconstruction method, image processing device and device with storage function - Google Patents


Info

Publication number
CN111489404B
CN111489404B (application CN202010203143.8A)
Authority
CN
China
Prior art keywords
image
data
image data
convolution
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010203143.8A
Other languages
Chinese (zh)
Other versions
CN111489404A (en)
Inventor
胡战利
杨永峰
薛恒志
郑海荣
梁栋
刘新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202010203143.8A priority Critical patent/CN111489404B/en
Publication of CN111489404A publication Critical patent/CN111489404A/en
Application granted granted Critical
Publication of CN111489404B publication Critical patent/CN111489404B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10104 Positron emission tomography [PET]
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image reconstruction method, an image processing apparatus and a device with a storage function. The image reconstruction method comprises: acquiring an original image; obtaining input image data according to the original image; performing a first mapping operation on the input image data according to a first mapping function in a generator network to obtain output image data; and forming a reconstructed image according to the output image data. By performing the first mapping operation on the input image data of the original image according to the first mapping function in the generator network to obtain the output image data, image reconstruction is realized directly, a clear image can be obtained from a low-dose radioactive tracer, and the clinical diagnosis of doctors is facilitated. Because the mapping approach requires less acquired data, image reconstruction is faster, which improves work efficiency; the required scanning time is also shorter, so that artifacts can be avoided and image quality improved.

Description

Image reconstruction method, image processing device and device with storage function
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image reconstruction method, an image processing apparatus, and an apparatus having a storage function.
Background
Clinical imaging techniques in the field of nuclear medicine, such as PET (positron emission tomography), are applied in many areas of medical examination: by injecting a radioactive tracer into the patient to achieve imaging, they help doctors diagnose the patient's condition more accurately.
The inventors of the present application have found in long-term research and development that the dose of the radioactive tracer is receiving more and more attention, because its radiation is potentially hazardous to the human body. A large-dose radioactive tracer not only raises safety concerns; the amount of data that must be acquired for imaging is large, image reconstruction is slow, and the required scanning time is long, so that unavoidable physiological motion of the patient causes artifacts. Reducing the dose of the tracer reduces the radiation to the patient, but degrades the imaging quality, which is unfavorable for clinical diagnosis.
Disclosure of Invention
The application provides an image reconstruction method, an image processing apparatus and a device with a storage function, so as to solve the technical problem in the prior art that a low-dose tracer degrades the imaging effect.
In order to solve the technical problems, the application adopts a technical scheme that an image reconstruction method is provided, which comprises the following steps:
acquiring an original image;
obtaining input image data according to the original image, wherein the input image data is sinogram data;
performing a first mapping operation on the input image data according to a first mapping function in a generator network to obtain output image data;
and forming a reconstructed image according to the output image data.
In a specific embodiment, the first mapping operation includes encoding, converting, and decoding, and the encoding method includes:
sequentially performing a first convolution operation on the input image data to obtain first characteristic image data;
the number of convolution layers in the first convolution operation is 14, wherein 10 of the convolution layers have a stride of 1 and 4 have a stride of 2.
In a specific embodiment, the method of converting includes:
performing a second convolution operation on the first characteristic image data to obtain second characteristic image data;
the number of convolution layers in the second convolution operation is 11, wherein 6 of the convolution layers have a feature number of 512 and 5 have a feature number of 1024.
In a specific embodiment, the method of decoding comprises:
sequentially performing up-sampling operation and third convolution operation on the second characteristic image data to obtain output image data;
wherein the number of convolution kernels of the last convolution layer in the third convolution operation is 1.
In one embodiment, the batch normalization operation and the activation operation are performed after each of the first convolution operation, the second convolution operation, and the third convolution operation.
In a specific embodiment, after the output image data is obtained, the method further includes:
processing the output image data through a discriminator network to obtain generated adversarial loss data;
and discriminating whether the reconstructed image meets the standard according to the generated adversarial loss data.
In a specific embodiment, the method for processing the output image data through the discriminator network includes:
performing a fourth convolution operation on the output image data to obtain a third characteristic image;
performing fully connected layer processing on the third characteristic image to obtain the generated adversarial loss data;
the number of convolution layers in the fourth convolution operation is 8, wherein the even-numbered convolution layers have a stride of 2, the odd-numbered convolution layers have a stride of 1, and the number of fully connected layers is 2.
In one embodiment, in the fourth convolution operation and the fully connected layer processing, an activation operation is performed after each convolution layer operation and after the first fully connected layer processing.
In a specific embodiment, after the output image data is obtained, the method further includes:
acquiring a first data set according to the output image data, and acquiring a second data set according to target image data;
obtaining a mean square error loss function and a perceptual loss function according to the first data set and the second data set;
and judging whether the reconstructed image meets the standard according to the mean square error loss function and the perceptual loss function.
In a specific embodiment, after the mean square error loss function and the perceptual loss function are obtained, the method further includes:
performing optimization calculation on the mean square error loss function and the perceptual loss function;
and judging whether the reconstructed image meets the standard according to the optimized mean square error loss function and perceptual loss function.
In a specific embodiment, the obtaining of the output image data includes:
acquiring a third data set according to the input image data;
extracting corresponding data from the third data set, the first data set and the second data set as input data, output data and a network tag respectively;
and training the first mapping function according to the input data, the output data, the network tag, the generated adversarial loss data, the mean square error loss function and the perceptual loss function to obtain a second mapping function.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide an image processing apparatus, including:
a receiver for acquiring an original image;
the processor is connected with the receiver and is used for obtaining input image data according to the original image, and for performing a first mapping operation on the input image data according to a first mapping function in a generator network to obtain output image data, wherein the input image data is sinogram data;
and the display is connected with the processor and is used for receiving the output image data and forming a reconstructed image according to the output image data.
In order to solve the above-mentioned technical problem, another technical solution adopted by the present application is to provide an apparatus with a storage function, in which program data is stored, the program data being executable to implement the method as described above.
According to the application, the first mapping operation is performed on the input image data of the original image according to the first mapping function in the generator network to obtain the output image data, so that image reconstruction is realized directly, a clear image can be obtained from a low-dose radioactive tracer, and the clinical diagnosis of doctors is facilitated. Because the mapping approach requires less acquired data, image reconstruction is faster, which improves work efficiency; the required scanning time is also shorter, so that artifacts can be avoided and image quality improved.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the description below are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art, wherein:
FIG. 1 is a flow chart of an embodiment of an image reconstruction method according to the present application;
FIG. 2 is a flow chart of another embodiment of the image reconstruction method of the present application;
FIG. 3 is a flow chart of another embodiment of the image reconstruction method of the present application;
FIG. 4 is a flowchart of a first mapping operation according to another embodiment of the image reconstruction method of the present application;
FIG. 5 is a flow chart of a discriminator network process in another embodiment of the image reconstruction method of the application;
FIG. 6 is a schematic view showing the structure of an embodiment of an image processing apparatus of the present application;
fig. 7 is a schematic structural diagram of an embodiment of the device with memory function of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to fall within the scope of the present application.
The terms "first" and "second" in the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. And the term "and/or" is merely an association relation describing the association object, and indicates that three relations may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
Referring to fig. 1, an embodiment of an image reconstruction method of the present application includes:
s110, acquiring an original image.
In this embodiment, the original image is an image directly acquired by imaging the patient after injection of a low dose of tracer.
S120, obtaining input image data according to the original image.
In this embodiment, the input image data is sinogram data, so as to realize the mapping.
S130, performing first mapping operation on the input image data according to the first mapping function in the generator network to obtain output image data.
In this embodiment, the first mapping function is a preset mapping function.
S140, forming a reconstructed image according to the output image data.
According to the embodiment of the application, the first mapping operation is performed on the input image data of the original image according to the first mapping function in the generator network to obtain the output image data, so that image reconstruction is realized directly, a clear image can be obtained from a low-dose radioactive tracer, and the clinical diagnosis of doctors is facilitated. Because the mapping approach requires less acquired data, image reconstruction is faster, which improves work efficiency; the required scanning time is also shorter, so that artifacts can be avoided and image quality improved.
The image reconstruction method in this embodiment may be applied to image reconstruction in PET (positron emission tomography), or, after adjustment according to actual conditions, to image reconstruction in technical fields such as CT (computed tomography) or SPECT (single-photon emission computed tomography), which is not limited herein.
Referring to fig. 2, another embodiment of the image reconstruction method of the present application includes:
s210, acquiring an original image.
S220, obtaining input image data according to the original image.
In the present embodiment, the input image data is 288×269 sinogram data.
S230, performing a first mapping operation on the input image data according to the first mapping function in the generator network to obtain output image data.
Referring to fig. 3 and fig. 4 together, the first mapping operation includes encoding, converting, and decoding, and specifically includes:
encoding: s231, carrying out first convolution operation on the input image data in sequence to obtain a first characteristic image number.
In this embodiment, the number of convolution layers in the first convolution operation is 14, wherein 10 of the convolution layers have a stride of 1 and 4 have a stride of 2. Specifically, the first convolution operation includes 1 first convolution layer group 310 and 4 second convolution layer groups 320 arranged in sequence. The first convolution layer group 310 includes two first convolution layers 311 with a stride of 1 and a convolution kernel size of 5×5; each second convolution layer group 320 includes 1 second convolution layer 321 with a stride of 2 and a convolution kernel size of 3×3, and two third convolution layers 322 with a stride of 1 and a convolution kernel size of 3×3.
In this embodiment, the operation of a convolution layer is:
F_out1 = (F_in1 ∗ α_down) ↓p (1)
wherein F_in1 is the input data of the convolution layer, F_out1 is the output data of the convolution layer, α_down denotes the convolution operation, and p is the reduction multiple. When the stride of the convolution layer is 1, p is 1, indicating that the sizes of the input data and the output data are unchanged; when the stride of the convolution layer is 2, p is 2, indicating that the output data is half the size of the input data.
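As a sketch of the size bookkeeping implied by formula (1), assuming 'same' padding so that a stride-p layer divides each spatial dimension by p (rounding up), the encoder's fourteen layers can be traced as follows; the input size of 256 is illustrative only, not taken from the patent:

```python
import math

def conv_out_size(size, stride):
    """Spatial size after a 'same'-padded convolution with the given stride
    (the reduction multiple p in formula (1)): stride 1 keeps the size,
    stride 2 halves it (rounding up)."""
    return math.ceil(size / stride)

# Encoder of the embodiment: 14 layers, ten with stride 1 and four with stride 2,
# grouped as [conv, conv] then four blocks of [stride-2 conv, conv, conv].
strides = [1, 1] + [2, 1, 1] * 4
h = w = 256  # hypothetical input size, for illustration only
for s in strides:
    h, w = conv_out_size(h, s), conv_out_size(w, s)
print(len(strides), h, w)  # the four stride-2 layers shrink the map by 2**4 = 16
```

The four stride-2 layers are what give the encoder its 16-fold spatial reduction.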
Conversion: s232, performing a second convolution operation on the first characteristic image data to obtain second characteristic image data.
In this embodiment, the number of convolution layers in the second convolution operation is 11, wherein 6 of the convolution layers have a feature number of 512 and 5 have a feature number of 1024. Specifically, the second convolution operation includes 11 third convolution layer groups 410 arranged in sequence; each third convolution layer group 410 includes 1 fourth convolution layer 411 with a stride of 1 and a convolution kernel size of 3×3. The feature number of the first 3 fourth convolution layers 411 is 512, that of the middle 5 fourth convolution layers 411 is 1024, and that of the last 3 fourth convolution layers 411 is 512.
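The feature-count schedule of the conversion stage can be written out explicitly. The 3/5/3 split below is an inference: the stated totals (6 layers at 512, 5 layers at 1024, 11 layers overall) only fit a 3 + 5 + 3 arrangement:

```python
# Feature counts of the 11 conversion-stage convolution layers, read as
# 3 layers at 512, then 5 at 1024, then 3 at 512 (an assumption inferred
# from the stated totals of 6 layers at 512 and 5 layers at 1024).
features = [512] * 3 + [1024] * 5 + [512] * 3
print(len(features), features.count(512), features.count(1024))  # 11 6 5
```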
Decoding: s233, up-sampling operation and third convolution operation are sequentially carried out on the second characteristic image data so as to obtain output image data.
In this embodiment, the third convolution operation includes a plurality of fourth convolution layer groups 510 arranged in sequence, followed by a fifth convolution layer 520. Specifically, each fourth convolution layer group 510 includes 1 up-sampling layer 511 and 3 sixth convolution layers 512, wherein the stride of the sixth convolution layers 512 is 1 and the convolution kernel size is 3×3. The number of convolution kernels of the fifth convolution layer 520, i.e., the last convolution layer in the third convolution operation, is 1.
In this embodiment, the operation of the up-sampling layer is:
F_out2 = (F_in2 ∗ α_up) ↑q (2)
wherein F_in2 is the input data of the up-sampling layer, F_out2 is the output data of the up-sampling layer, α_up denotes the convolution operation of the up-sampling layer, and q is the magnification multiple. The magnification multiple is set to 2, indicating that the size of the input feature map is doubled.
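A minimal sketch of the size effect of formula (2), under the assumption that the decoder contains four q = 2 up-sampling layers that exactly undo the encoder's four stride-2 reductions (the bottleneck size 18 = 288 / 16 is illustrative, matched to the 288-wide sinogram):

```python
def upsample_out_size(size, q=2):
    """Formula (2): an up-sampling layer multiplies the spatial size by the
    magnification multiple q; q = 2 doubles the feature map."""
    return size * q

# Four up-sampling layers undo the encoder's four stride-2 reductions.
size = 18  # hypothetical bottleneck size, 288 / 2**4
for _ in range(4):
    size = upsample_out_size(size)
print(size)  # back to 288, the sinogram width
```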
In this embodiment, a batch normalization layer 312 and an activation layer 313 are arranged in sequence after each convolution layer in the first convolution operation, the second convolution operation and the third convolution operation, for performing the batch normalization operation and the activation operation. The activation operation may be implemented by a ReLU (Rectified Linear Unit) function.
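A minimal numpy sketch of the batch-normalization-plus-ReLU pair applied after each convolution. This is an inference-style simplification with unit scale and zero shift, which the patent does not specify:

```python
import numpy as np

def batch_norm_relu(x, eps=1e-5):
    """Batch normalization over the batch axis followed by ReLU, as arranged
    after each convolution layer in the embodiment (sketch with unit scale
    and zero shift; the learned affine parameters are omitted)."""
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    normalized = (x - mean) / np.sqrt(var + eps)
    return np.maximum(normalized, 0.0)  # ReLU zeroes the negative responses

x = np.array([[1.0, -2.0], [3.0, 4.0]])
y = batch_norm_relu(x)
```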
S240, processing the output image data through the discriminator network to obtain the generated adversarial loss data.
Referring also to fig. 5, in the present embodiment, a method of processing output image data through a discriminator network includes:
s241, performing a fourth convolution operation on the output image data to obtain a third characteristic image.
In the present embodiment, the fourth convolution operation includes 8 convolution layers, wherein 4 layers are the seventh convolution layers 610 and the other 4 layers are the eighth convolution layers 620, and the seventh convolution layers 610 and the eighth convolution layers 620 are sequentially and alternately arranged.
In the present embodiment, the stride of the seventh convolution layers 610, i.e., the odd-numbered convolution layers, is 1, while the stride of the eighth convolution layers 620, i.e., the even-numbered convolution layers, is 2, so as to reduce the size of the input feature image while doubling the number of convolution kernels.
In this embodiment, the numbers of convolution kernels of the 8 convolution layers in the fourth convolution operation are 32, 32, 64, 64, 128, 128, 256 and 256 in order.
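The stride and kernel-count pattern of the discriminator can be generated programmatically; this is a bookkeeping sketch of the stated architecture, not the patent's own code:

```python
# Pattern of the 8 discriminator convolution layers: odd-numbered layers use
# stride 1, even-numbered layers use stride 2, and the number of convolution
# kernels doubles every two layers starting from 32.
d_strides = [1 if i % 2 == 1 else 2 for i in range(1, 9)]
d_kernels = [32 * 2 ** ((i - 1) // 2) for i in range(1, 9)]
print(d_strides)  # [1, 2, 1, 2, 1, 2, 1, 2]
print(d_kernels)  # [32, 32, 64, 64, 128, 128, 256, 256]
```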
S242, performing fully connected layer processing on the third characteristic image to obtain the generated adversarial loss data.
In this embodiment, the fourth convolution operation is followed by 2 fully connected layers 710.
In this embodiment, in the fourth convolution operation and the fully connected layer processing, an activation layer 630 is arranged after each convolution layer operation and after the processing of the first fully connected layer 710, for performing the activation operation. The activation operation may be implemented by a Leaky ReLU (Leaky Rectified Linear Unit) function.
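A sketch of the Leaky ReLU activation used in the discriminator. The negative slope alpha = 0.2 is a common default and an assumption here, since the patent does not state the value:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    """Leaky ReLU: positives pass unchanged, negatives are scaled by a small
    slope (alpha is assumed; the patent does not specify it)."""
    return np.where(x > 0, x, alpha * x)

out = leaky_relu(np.array([-1.0, 0.0, 2.0]))
print(out)  # [-0.2  0.   2. ]
```

Unlike plain ReLU, small gradients still flow for negative inputs, which is why Leaky ReLU is commonly preferred in GAN discriminators.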
In this embodiment, whether the reconstructed image meets the standard can be discriminated according to the generated adversarial loss data.
S251, acquiring a first data set according to the output image data, and acquiring a second data set according to the target image data.
In this embodiment, the target image data is preset image data of a standard image corresponding to the original image, and is used for training the first mapping function.
S252, obtaining a mean square error loss function and a perceptual loss function according to the first data set and the second data set.
In this embodiment, the mean square error loss function is calculated as:
L_mse = (1/m) · Σ_{i=1..m} (x_i − y_i)²
wherein L_mse is the mean square error loss function, x_i is an element of the first data set, y_i is an element of the second data set, and m is the total number of pixels of the reconstructed image.
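A direct numpy rendering of the mean square error loss over all m pixels; `x` and `y` are small stand-in arrays for the first and second data sets:

```python
import numpy as np

def mse_loss(x, y):
    """Mean square error over all m pixels: (1/m) * sum((x_i - y_i)^2),
    with x the reconstructed image and y the target image."""
    return np.mean((x - y) ** 2)

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 0.0, 3.0])
print(mse_loss(x, y))  # (0 + 4 + 0) / 3
```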
In this embodiment, the perceptual loss function is calculated as:
L_vgg = (1/(n·d)) · Σ_{j=1..d} Σ_{i=1..n} (VGG(x)_{i,j} − VGG(y)_{i,j})²
wherein L_vgg is the perceptual loss function, VGG(x)_i is the feature map of the reconstructed image after passing through the VGG network (Visual Geometry Group network), VGG(y)_i is the feature map of the target image after passing through the VGG network, n is the total number of pixels of each feature map, and d is the number of feature maps of the reconstructed image after passing through the VGG network.
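A sketch of the perceptual loss over stand-in feature maps. In practice `feat_x` and `feat_y` would be the d feature maps of n pixels each produced by a pretrained VGG network; the network itself is omitted here and plain arrays stand in for its outputs:

```python
import numpy as np

def perceptual_loss(feat_x, feat_y):
    """Perceptual (VGG) loss: mean squared difference between feature maps of
    the reconstructed and target images, averaged over all d maps of n pixels.
    feat_x and feat_y are stand-in arrays of shape (d, n); a real
    implementation would take them from a pretrained VGG network."""
    d, n = feat_x.shape
    return np.sum((feat_x - feat_y) ** 2) / (n * d)

feat_x = np.ones((2, 4))   # hypothetical features of the reconstructed image
feat_y = np.zeros((2, 4))  # hypothetical features of the target image
print(perceptual_loss(feat_x, feat_y))  # 8 / 8 = 1.0
```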
In this embodiment, whether the reconstructed image meets the standard can be determined according to the mean square error loss function and the perceptual loss function.
S260, performing optimization calculation on the mean square error loss function and the perceptual loss function.
In this embodiment, the Adam (Adaptive Moment Estimation) algorithm may be used to perform the optimization calculation on the mean square error loss function and the perceptual loss function.
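A single-parameter sketch of one Adam update step. The hyperparameters below are the usual published defaults and are assumptions here, since the patent names the algorithm but not its settings:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m) and of
    its square (v), bias-corrected by the step count t, then a scaled step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # bias correction for the first moment
    v_hat = v / (1 - b2 ** t)  # bias correction for the second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
theta, m, v = adam_step(theta, np.array([0.5]), m, v, t=1)
print(theta)  # moved slightly opposite the gradient, by about lr
```

The bias correction makes the very first steps close to lr in magnitude regardless of gradient scale, which is the property that makes Adam convenient for training both loss terms here.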
In this embodiment, whether the reconstructed image meets the standard may be determined according to the mean square error loss function and the perceptual loss function after the optimization calculation.
S271, a third data set is acquired from the input image data.
S272, extracting corresponding data from the third data set, the first data set and the second data set as input data, output data and a network tag respectively.
S280, training the first mapping function according to the input data, the output data, the network tag, the generated adversarial loss data, the mean square error loss function and the perceptual loss function to obtain a second mapping function.
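How the three loss terms might be combined into a single generator objective for the training step above. The weights are illustrative assumptions; the patent states that all three losses are used but does not give a weighting:

```python
def generator_loss(adv_loss, mse, vgg, w_adv=1e-3, w_mse=1.0, w_vgg=1e-2):
    """Hypothetical combined objective: adversarial loss plus mean square
    error loss plus perceptual loss, with assumed illustrative weights."""
    return w_adv * adv_loss + w_mse * mse + w_vgg * vgg

total = generator_loss(adv_loss=0.7, mse=0.05, vgg=2.0)
print(total)  # 0.0007 + 0.05 + 0.02
```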
According to the present application, the first mapping function is trained by means of the generated adversarial loss data, the mean square error loss function and the perceptual loss function, which effectively alleviates the over-smoothing and loss of detail that easily occur in a reconstructed image, preserves image details, and makes the structures in the image clearer.
Referring to fig. 6, the image processing apparatus of the present application includes a receiver 810, a processor 820 and a display 830. The receiver 810 is used to acquire an original image. The processor 820 is connected to the receiver 810 and is used for obtaining input image data from the original image, and for performing a first mapping operation on the input image data according to a first mapping function in the generator network to obtain output image data, wherein the input image data is sinogram data. The display 830 is connected to the processor 820 and is used for receiving the output image data and forming a reconstructed image from it.
The method for processing the input image data by the processor 820 is referred to the above-mentioned image reconstruction method embodiment, and is not described herein.
According to the embodiment of the application, the first mapping operation is performed on the input image data of the original image according to the first mapping function in the generator network to obtain the output image data, so that image reconstruction is realized directly, a clear image can be obtained from a low-dose radioactive tracer, and the clinical diagnosis of doctors is facilitated. Because the mapping approach requires less acquired data, image reconstruction is faster, which improves work efficiency; the required scanning time is also shorter, so that artifacts can be avoided and image quality improved.
Referring to fig. 7, an embodiment of the device 90 with a storage function of the present application stores program data 910, and the program data 910 can be executed to implement the image reconstruction method described above.
The image reconstruction method refers to the above embodiment of the image reconstruction method, and is not described herein.
According to the embodiment of the application, the first mapping operation is performed on the input image data of the original image according to the first mapping function in the generator network to obtain the output image data, so that image reconstruction is realized directly, a clear image can be obtained from a low-dose radioactive tracer, and the clinical diagnosis of doctors is facilitated. Because the mapping approach requires less acquired data, image reconstruction is faster, which improves work efficiency; the required scanning time is also shorter, so that artifacts can be avoided and image quality improved.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present application or directly or indirectly applied to other related technical fields are included in the scope of the present application.

Claims (10)

1. An image reconstruction method, comprising:
acquiring an original image, wherein the original image is an image directly acquired by imaging after injection of a low-dose tracer;
obtaining input image data according to the original image, wherein the input image data is sinogram data;
performing a first mapping operation on the input image data according to a first mapping function in a generator network to obtain output image data; wherein the first mapping operation includes encoding, converting, and decoding;
forming a reconstructed image from the output image data;
wherein the output image data is processed through a discriminator network to obtain generated adversarial loss data; a first data set is acquired according to the output image data, and a second data set is acquired according to target image data, the target image data being preset image data of a standard image corresponding to the original image; and a mean square error loss function and a perceptual loss function are obtained according to the first data set and the second data set;
the mean square error loss function is:
$$L_{MSE}=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i-y_i\right)^2$$
wherein $L_{MSE}$ is the mean square error loss function, $\hat{y}_i$ is an element of the first data set, $y_i$ is an element of the second data set, and $N$ is the total number of pixels of the reconstructed image;
the perceptual loss function is:
wherein ,for the perceptual loss function +.>For reconstructing a characteristic map of the image after passing through the VGG network,>is a characteristic diagram of the target image after passing through the VGG network,>for reconstructing the total number of pixels of the feature map of the image after the image has passed through the VGG network, < >>Is heavyThe number of feature images after the image is built and passes through the VGG network;
judging whether the reconstructed image meets a standard according to the mean square error loss function and the perceptual loss function; acquiring a third data set according to the input image data; extracting corresponding data from the third data set, the first data set, and the second data set as input data, output data, and a network label, respectively; and training the first mapping function according to the input data, the output data, the network label, the generated adversarial loss data, the mean square error loss function, and the perceptual loss function to obtain a second mapping function.
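For illustration, the mean square error and perceptual loss terms of claim 1 can be sketched in plain Python. This is a hypothetical, framework-free sketch; the function names and the list-based data layout are not from the patent, and a real implementation would use a tensor library and an actual VGG feature extractor.

```python
def mse_loss(recon, target):
    """Mean squared error over all N pixels of the reconstructed image.

    recon: elements of the first data set (reconstructed pixel values).
    target: elements of the second data set (target pixel values).
    """
    n = len(recon)
    return sum((r - t) ** 2 for r, t in zip(recon, target)) / n


def perceptual_loss(recon_feats, target_feats):
    """MSE between VGG feature maps of the reconstructed and target images.

    recon_feats / target_feats: lists of M feature maps, each a flat list
    of P values (M = number of feature maps, P = pixels per feature map).
    """
    m = len(recon_feats)
    p = len(recon_feats[0])
    total = 0.0
    for rf, tf in zip(recon_feats, target_feats):
        total += sum((r - t) ** 2 for r, t in zip(rf, tf))
    return total / (m * p)
```

In training, these two terms would be combined with the generated adversarial loss data to drive the generator network toward the standard image.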
2. The image reconstruction method according to claim 1, wherein the first mapping operation includes encoding, converting, and decoding, the encoding method including:
sequentially performing a first convolution operation on the input image data to obtain first characteristic image data;
the number of convolution layers in the first convolution operation is 14, wherein the stride of 10 of the convolution layers is 1 and the stride of 4 of the convolution layers is 2.
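As a rough illustration of the encoder arithmetic in claim 2, the sketch below assumes "same" padding and one plausible interleaving of the strides (the claim fixes only the counts: 10 stride-1 layers and 4 stride-2 layers). Each stride-2 layer halves the spatial size, so the encoder reduces each side of the input by a factor of 2**4 = 16.

```python
# Hypothetical ordering: two stride-1 layers before each stride-2
# downsampling layer, then two final stride-1 layers (14 layers total).
ENCODER_STRIDES = [1, 1, 2] * 4 + [1, 1]


def encoder_output_size(size, strides):
    """Spatial size after a chain of 'same'-padded convolutions."""
    for s in strides:
        size = -(-size // s)  # ceiling division, as with 'same' padding
    return size
```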
3. The image reconstruction method according to claim 2, wherein the method of converting includes:
performing a second convolution operation on the first characteristic image data to obtain second characteristic image data;
the number of convolution layers in the second convolution operation is 11, wherein 6 of the convolution layers have 512 feature maps and 5 of the convolution layers have 1024 feature maps.
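A hypothetical feature plan for the conversion stage of claim 3: 11 convolution layers, 6 with 512 feature maps and 5 with 1024. The claim fixes only the counts; the alternating order below is an assumption for illustration.

```python
# Alternate 512/1024 layers, ending on a 512-feature layer (6 x 512, 5 x 1024).
CONVERTER_FEATURES = [512, 1024] * 5 + [512]


def count_features(plan, n):
    """Number of layers in the plan that use n feature maps."""
    return plan.count(n)
```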
4. The image reconstruction method according to claim 3, wherein the decoding method comprises:
sequentially performing up-sampling operation and third convolution operation on the second characteristic image data to obtain output image data;
wherein the number of convolution kernels of the last convolution layer in the third convolution operation is 1.
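The decoder of claim 4 can be sketched as repeated upsampling followed by convolution, with the final convolution layer using a single kernel so that one output image channel is produced. The four 2x upsampling steps are an assumption (mirroring four stride-2 encoder layers) and would restore the encoder's 16x spatial reduction.

```python
FINAL_KERNELS = 1  # claim 4: the last convolution layer has 1 kernel


def decoder_output_size(size, factor=2, steps=4):
    """Spatial size after repeated upsampling (assumed 2x, four times)."""
    for _ in range(steps):
        size *= factor
    return size
```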
5. The image reconstruction method according to claim 4, wherein a batch normalization operation and an activation operation are performed after each convolution layer in the first convolution operation, the second convolution operation, and the third convolution operation.
6. The image reconstruction method according to claim 1, wherein after obtaining the output image data, the method further comprises:
identifying whether the reconstructed image meets the standard according to the generated adversarial loss data.
7. The image reconstruction method according to claim 1, wherein the method of processing the output image data through a discriminator network comprises:
performing a fourth convolution operation on the output image data to obtain a third characteristic image;
performing fully connected layer processing on the third characteristic image to obtain the generated adversarial loss data;
the number of convolution layers in the fourth convolution operation is 8, the stride of the even-numbered convolution layers is 2, the stride of the odd-numbered convolution layers is 1, and the number of fully connected layers is 2.
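The discriminator layout of claim 7 can be sketched as follows: 8 convolution layers where even-numbered layers use stride 2 and odd-numbered layers stride 1 (1-indexed here, an assumption), followed by 2 fully connected layers. With "same" padding, the four stride-2 layers reduce each spatial side by a factor of 16 before the fully connected layers.

```python
# Strides for layers 1..8: odd layers stride 1, even layers stride 2.
DISC_STRIDES = [2 if i % 2 == 0 else 1 for i in range(1, 9)]
NUM_FC_LAYERS = 2


def disc_feature_size(size, strides=None):
    """Spatial size of the feature map entering the fully connected
    layers, assuming 'same'-padded convolutions."""
    for s in (strides or DISC_STRIDES):
        size = -(-size // s)  # ceiling division
    return size
```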
8. The image reconstruction method according to claim 7, wherein in the fourth convolution operation and the fully connected layer processing, an activation operation is performed after each convolution layer and after the first fully connected layer.
9. The image reconstruction method according to claim 7, wherein after obtaining the mean square error loss function and the perceptual loss function, the method further comprises:
performing an optimization calculation on the mean square error loss function and the perceptual loss function;
and judging whether the reconstructed image meets the standard according to the optimized mean square error loss function and perceptual loss function.
10. An image processing apparatus for implementing the image reconstruction method according to any one of claims 1 to 9, comprising:
a receiver for acquiring an original image;
the processor is connected with the receiver and is used for obtaining input image data according to the original image, and for performing a first mapping operation on the input image data according to a first mapping function in a generator network to obtain output image data; wherein the input image data is sinogram data;
and the display is connected with the processor and is used for receiving the output image data and forming a reconstructed image according to the output image data.
CN202010203143.8A 2020-03-20 2020-03-20 Image reconstruction method, image processing device and device with storage function Active CN111489404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010203143.8A CN111489404B (en) 2020-03-20 2020-03-20 Image reconstruction method, image processing device and device with storage function


Publications (2)

Publication Number Publication Date
CN111489404A CN111489404A (en) 2020-08-04
CN111489404B (en) 2023-09-05

Family

ID=71810726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010203143.8A Active CN111489404B (en) 2020-03-20 2020-03-20 Image reconstruction method, image processing device and device with storage function

Country Status (1)

Country Link
CN (1) CN111489404B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903356A (en) * 2019-05-13 2019-06-18 南京邮电大学 Missing CT projection data estimation method based on a deep multi-analytic network
CN110074813A (en) * 2019-04-26 2019-08-02 深圳大学 Ultrasound image reconstruction method and system
CN110211050A (en) * 2018-02-28 2019-09-06 通用电气公司 System and method for sparse image reconstruction
CN110288671A (en) * 2019-06-25 2019-09-27 南京邮电大学 Low-dose CBCT image reconstruction method based on a three-dimensional generative adversarial network
CN110298804A (en) * 2019-07-01 2019-10-01 东北大学 Medical image denoising method based on a generative adversarial network and 3D residual encoding-decoding
CN110322403A (en) * 2019-06-19 2019-10-11 怀光智能科技(武汉)有限公司 Multi-supervised image super-resolution reconstruction method based on a generative adversarial network
CN110559009A (en) * 2019-09-04 2019-12-13 中山大学 Method, system and medium for converting multi-modal low-dose CT into high-dose CT based on GAN
CN110648376A (en) * 2019-08-20 2020-01-03 南京邮电大学 Limited-angle CT reconstruction artifact removal method based on a generative adversarial network
CN110728727A (en) * 2019-09-03 2020-01-24 天津大学 Low-dose spectral CT projection data recovery method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931179B * 2016-04-08 2018-10-26 武汉大学 Image super-resolution method and system combining sparse representation and deep learning

Also Published As

Publication number Publication date
CN111489404A (en) 2020-08-04

Similar Documents

Publication Publication Date Title
CN111325686B (en) Low-dose PET three-dimensional reconstruction method based on deep learning
CN109785243B (en) Denoising method and computer based on unregistered low-dose CT of countermeasure generation network
Whiteley et al. FastPET: near real-time reconstruction of PET histo-image data using a neural network
US10867375B2 (en) Forecasting images for image processing
CN101917906A Dose reduction and image enhancement in tomography through the utilization of the object's surroundings as dynamic constraints
US20230059132A1 (en) System and method for deep learning for inverse problems without training data
CN102024251A (en) System and method for multi-image based virtual non-contrast image enhancement for dual source CT
Xue et al. LCPR-Net: low-count PET image reconstruction using the domain transform and cycle-consistent generative adversarial networks
CN109741254B (en) Dictionary training and image super-resolution reconstruction method, system, equipment and storage medium
CN112435164A (en) Method for simultaneously super-resolution and denoising of low-dose CT lung image based on multi-scale generation countermeasure network
US20220114699A1 (en) Spatiotemporal resolution enhancement of biomedical images
CN113516586A (en) Low-dose CT image super-resolution denoising method and device
CN112419173A (en) Deep learning framework and method for generating CT image from PET image
CN114494479A (en) System and method for simultaneous attenuation correction, scatter correction, and denoising of low dose PET images using neural networks
US10013778B2 (en) Tomography apparatus and method of reconstructing tomography image by using the tomography apparatus
Tan et al. A selective kernel-based cycle-consistent generative adversarial network for unpaired low-dose CT denoising
Xie et al. Increasing angular sampling through deep learning for stationary cardiac SPECT image reconstruction
Vey et al. The role of generative adversarial networks in radiation reduction and artifact correction in medical imaging
CN114358285A (en) PET system attenuation correction method based on flow model
Whiteley et al. FastPET: Near real-time PET reconstruction from histo-images using a neural network
US20230386036A1 (en) Methods and systems for medical imaging
CN111489404B (en) Image reconstruction method, image processing device and device with storage function
CN111402358A (en) System and method for image reconstruction
CN112927318B (en) Noise reduction reconstruction method of low-dose PET image and computer readable storage medium
WO2021184389A1 (en) Image reconstruction method, image processing device, and device with storage function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant