CN116416328A - Image reconstruction method, image reconstruction device, computer equipment and storage medium - Google Patents

Image reconstruction method, image reconstruction device, computer equipment and storage medium

Info

Publication number: CN116416328A
Authority: CN (China)
Prior art keywords: image, module, feature extraction, reconstruction, input image
Legal status: Pending (assumed by Google; not a legal conclusion)
Application number: CN202111628840.9A
Other languages: Chinese (zh)
Inventors: 张阳, 廖术
Current Assignee: Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee: Shanghai United Imaging Intelligent Healthcare Co Ltd
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202111628840.9A
Publication of CN116416328A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10104: Positron emission tomography [PET]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image reconstruction method, an image reconstruction device, computer equipment and a storage medium. The method comprises the following steps: acquiring an input image, wherein the input image is a low-resolution image; and reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image, wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than that of the input image. By adopting the method, the image reconstruction effect can be improved.

Description

Image reconstruction method, image reconstruction device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image reconstruction method, an image reconstruction device, a computer device, and a storage medium.
Background
PET (Positron Emission Tomography) is an imaging technique used in clinical medical examinations. Because PET imaging exhibits high contrast in focal areas, it plays a vital role in early tumor detection, diagnosis, staging, treatment, and follow-up.
At present, owing to the detector width and other physical parameters, PET images have low resolution. Although many image reconstruction methods have been studied for reconstructing high-resolution images, the reconstruction effect remains unsatisfactory.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image reconstruction method, apparatus, computer device, and storage medium capable of improving the reconstruction effect.
In a first aspect, the present application provides an image reconstruction method. The method comprises the following steps:
acquiring an input image; wherein the input image is a low resolution image;
and reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image, wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than that of the input image.
In one embodiment, reconstructing the input image by using the preset image reconstruction model to obtain the target reconstructed image includes:
inputting the input image into the flat module for reconstruction, and obtaining a first feature map, a first reconstructed image and a first attention probability map output by the flat module;
inputting the first feature map to the detail module for reconstruction, and obtaining a second feature map, a second reconstructed image and a second attention probability map output by the detail module;
and obtaining a target reconstructed image according to the first reconstructed image, the first attention probability map, the second reconstructed image and the second attention probability map.
In one embodiment, the flat module and the detail module have the same structure, each comprising a first feature extraction sub-module, a feature combination sub-module and a second feature extraction sub-module which are sequentially connected;
the first feature extraction sub-module and the second feature extraction sub-module have the same structure, each comprising n sequentially connected residual blocks, with the first residual block skip-connected to the nth residual block;
the feature combination sub-module comprises m sequentially connected multi-scale blocks, with the first multi-scale block skip-connected to the mth multi-scale block.
In one embodiment, the residual block comprises at least two feature extraction modules, each consisting of a convolution layer, a batch normalization layer and an activation layer;
the residual block is used for performing feature extraction on the residual block input image at least twice, and fusing the extracted feature map with the residual block input image to obtain a residual block output image.
In one embodiment, the multi-scale block includes a plurality of feature extraction modules, each consisting of a convolution layer, a batch normalization layer and an activation layer;
the multi-scale block is used for performing multi-scale feature extraction on the multi-scale block input image, and fusing the feature maps extracted at multiple scales with the multi-scale block input image to obtain a multi-scale block output image.
In one embodiment, the method further comprises:
acquiring a plurality of sample images; wherein the sample image is a low resolution image;
acquiring gold standards corresponding to each sample image, the gold standards including a full-image gold standard, a first gold standard and a second gold standard;
training based on a plurality of sample images and gold standards corresponding to the sample images to obtain an image reconstruction model.
In one embodiment, the loss values during image reconstruction model training include a full-image loss value, a flat region loss value, and a detail region loss value.
In a second aspect, the present application also provides an image reconstruction apparatus. The device comprises:
the image acquisition module is used for acquiring an input image; wherein the input image is a low resolution image;
the image reconstruction module is used for reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image, wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than that of the input image.
In one embodiment, the image reconstruction module is specifically configured to input the input image to the flat module for reconstruction, so as to obtain a first feature map, a first reconstructed image and a first attention probability map output by the flat module; input the first feature map to the detail module for reconstruction, so as to obtain a second feature map, a second reconstructed image and a second attention probability map output by the detail module; and obtain a target reconstructed image according to the first reconstructed image, the first attention probability map, the second reconstructed image and the second attention probability map.
In one embodiment, the flat module and the detail module have the same structure, each comprising a first feature extraction sub-module, a feature combination sub-module and a second feature extraction sub-module which are sequentially connected;
the first feature extraction sub-module and the second feature extraction sub-module have the same structure, each comprising n sequentially connected residual blocks, with the first residual block skip-connected to the nth residual block;
the feature combination sub-module comprises m sequentially connected multi-scale blocks, with the first multi-scale block skip-connected to the mth multi-scale block.
In one embodiment, the residual block comprises at least two feature extraction modules, each consisting of a convolution layer, a batch normalization layer and an activation layer;
the residual block is used for performing feature extraction on the residual block input image at least twice, and fusing the extracted feature map with the residual block input image to obtain a residual block output image.
In one embodiment, the multi-scale block includes a plurality of feature extraction modules, each consisting of a convolution layer, a batch normalization layer and an activation layer;
the multi-scale block is used for performing multi-scale feature extraction on the multi-scale block input image, and fusing the feature maps extracted at multiple scales with the multi-scale block input image to obtain a multi-scale block output image.
In one embodiment, the apparatus further comprises:
the sample acquisition module is used for acquiring a plurality of sample images; wherein the sample image is a low resolution image;
the gold standard acquisition module is used for acquiring the gold standards corresponding to the sample images; the gold standards include a full-image gold standard, a first gold standard and a second gold standard;
and the training module is used for training based on a plurality of sample images and gold standards corresponding to the sample images to obtain an image reconstruction model.
In one embodiment, the loss values during image reconstruction model training include a full-image loss value, a flat region loss value, and a detail region loss value.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring an input image; wherein the input image is a low resolution image;
and reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image, wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than that of the input image.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring an input image; wherein the input image is a low resolution image;
and reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image, wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than that of the input image.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring an input image; wherein the input image is a low resolution image;
and reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image, wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than that of the input image.
The image reconstruction method, the image reconstruction device, the computer equipment and the storage medium acquire an input image and reconstruct the input image by using a preset image reconstruction model to obtain a target reconstructed image. In the embodiment of the disclosure, the image reconstruction model comprises a flat module and a detail module: the flat module reconstructs the flat regions of the input image, the detail module reconstructs the detail regions, and the two modules complement each other, so that the flat parts of the image are not lost and the detail parts are largely preserved, improving the reconstruction effect.
Drawings
FIG. 1 is a diagram of an application environment for an image reconstruction method in one embodiment;
FIG. 2 is a flow chart of an image reconstruction method in one embodiment;
FIG. 3 is a flowchart illustrating a reconstruction procedure of an input image using a predetermined image reconstruction model according to an embodiment;
FIG. 4 is a schematic diagram of the structure of an image reconstruction model in one embodiment;
FIG. 5a is a schematic diagram of the structure of a flat module and a detail module in one embodiment;
FIG. 5b is a schematic diagram of a first feature extraction sub-module in one embodiment;
FIG. 5c is a schematic diagram of a feature combination sub-module in one embodiment;
FIG. 5d is a schematic diagram of a residual block in one embodiment;
FIG. 5e is a schematic diagram of the structure of a multi-scale block in one embodiment;
FIG. 6 is a flow chart of an image reconstruction model training step in one embodiment;
FIG. 7 is a flow chart of an image reconstruction method according to another embodiment;
FIG. 8 is a block diagram of an image reconstruction apparatus in one embodiment;
FIG. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The image reconstruction method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. The application environment may include a terminal 101 and a medical scanning apparatus 102, where the terminal 101 may communicate with the medical scanning apparatus 102 via a network. The terminal 101 may be, but is not limited to, various personal computers, notebook computers and tablet computers, and the medical scanning apparatus 102 may be, but is not limited to, a CT (Computed Tomography) apparatus, a PET (Positron Emission Tomography)-CT apparatus and an MR (Magnetic Resonance) apparatus.
The application environment may further include a PACS (Picture Archiving and Communication Systems, image archiving and communication system) server 103, and the terminal 101 and the medical scanning apparatus 102 may each communicate with the PACS server 103 via a network. The PACS server 103 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, an image reconstruction method is provided. The method is described here, by way of illustration, as applied to the terminal in fig. 1, and includes the following steps:
in step 201, an input image is acquired.
Wherein the input image is a low resolution image.
The terminal may acquire the input image from the medical scanning apparatus, for example a PET image and a CT image from a PET-CT apparatus. The terminal may also obtain the input image from the PACS server, or acquire a medical scan image from the medical scanning apparatus or the PACS server and process it to obtain the input image. The embodiment of the disclosure does not limit how the input image is acquired.
Step 202, reconstructing an input image by using a preset image reconstruction model to obtain a target reconstructed image.
The image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than that of the input image.
Existing image reconstruction methods do not distinguish between different regions of the image during reconstruction: the large flat regions are fully learned, while the detail regions of the image, such as edge regions and corner regions, are learned poorly, so the reconstructed image loses some of its details. The embodiment of the disclosure provides an image reconstruction model that alleviates this detail loss. The image reconstruction model comprises a flat module and a detail module and is used to reconstruct the input image; the flat module allows the flat regions to be fully learned, and the detail module allows the detail regions to be fully learned, so that the target reconstructed image has a higher resolution than the input image without losing the detail regions, improving the reconstruction effect.
In the image reconstruction method above, an input image is acquired and reconstructed with a preset image reconstruction model to obtain a target reconstructed image. In this embodiment of the present disclosure, the image reconstruction model includes a flat module and a detail module: the flat module reconstructs the flat regions of the input image and the detail module reconstructs the detail regions, so that the flat parts of the image are not lost and the detail parts are largely preserved, improving the reconstruction effect.
In one embodiment, as shown in fig. 3, the process of reconstructing the input image by using the preset image reconstruction model to obtain the target reconstructed image may include the following steps:
Step 301, inputting the input image to the flat module for reconstruction, and obtaining the first feature map, the first reconstructed image and the first attention probability map output by the flat module.
The flat module and the detail module in the image reconstruction model are sequentially connected. During reconstruction, the input image is input into the flat module, and the flat module outputs the first feature map, the first reconstructed image and the first attention probability map.
The first feature map is the feature map obtained by the flat module extracting features from the input image; the first reconstructed image is the image reconstructed by the flat module from the input image; and the first attention probability map represents the weight occupied by the flat region after reconstruction by the flat module.
Step 302, inputting the first feature map to the detail module for reconstruction, and obtaining the second feature map, the second reconstructed image and the second attention probability map output by the detail module.
The second feature map is the feature map obtained by the detail module extracting features from the first feature map; the second reconstructed image is the image reconstructed by the detail module from the first feature map; and the second attention probability map represents the weight occupied by the detail region after reconstruction by the detail module.
After the flat module outputs the first feature map, the first feature map is input to the detail module for reconstruction, and the detail module outputs the second feature map, the second reconstructed image and the second attention probability map.
In practical applications, the detail module may include an edge module and a corner module: the second feature map then comprises the feature maps output by the edge module and by the corner module, the second reconstructed image comprises the reconstructed images output by the two modules, and the second attention probability map comprises the attention probability maps output by the two modules.
As shown in fig. 4, after the flat module outputs the first feature map, the first feature map is input to the edge module for reconstruction, obtaining the feature map, reconstructed image and attention probability map output by the edge module; the feature map output by the edge module is then input into the corner module for reconstruction, obtaining the feature map, reconstructed image and attention probability map output by the corner module.
The embodiment of the disclosure does not limit the detail module; it can be configured according to the actual situation.
Step 303, obtaining the target reconstructed image according to the first reconstructed image, the first attention probability map, the second reconstructed image and the second attention probability map.
After obtaining the first reconstructed image and the first attention probability map output by the flat module, and the second reconstructed image and the second attention probability map output by the detail module, the terminal may perform a weighted summation over them to obtain the target reconstructed image. Specifically, the product of the first reconstructed image and the first attention probability map and the product of the second reconstructed image and the second attention probability map are calculated respectively, and the sum of the two products gives the target reconstructed image.
In practical applications, when the detail module comprises an edge module and a corner module, the weighted summation calculates the product of the first reconstructed image and the first attention probability map, the product of the reconstructed image and attention probability map output by the edge module, and the product of the reconstructed image and attention probability map output by the corner module, and the sum of the three products gives the target reconstructed image.
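As a concrete illustration of this weighted summation, the following is a minimal PyTorch sketch; the function name, the tensor layout, the flat/edge/corner branch naming, and the softmax used to make the example attention maps sum to one across branches are all assumptions made for illustration, not the patent's reference implementation.

    import torch

    def fuse_reconstructions(recons, attn_maps):
        # Weighted summation: elementwise product of each branch's
        # reconstructed image with its attention probability map,
        # summed over the branches (e.g. flat, edge, corner).
        assert len(recons) == len(attn_maps)
        return sum(r * p for r, p in zip(recons, attn_maps))

    # Example with three branches on a small 3D volume:
    recons = [torch.rand(1, 1, 16, 16, 16) for _ in range(3)]
    attn = torch.softmax(torch.rand(3, 1, 1, 16, 16, 16), dim=0)
    target = fuse_reconstructions(recons, list(attn))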
In the above embodiment, the input image is input to the flat module for reconstruction, obtaining the first feature map, the first reconstructed image and the first attention probability map output by the flat module; the first feature map is input to the detail module for reconstruction, obtaining the second feature map, the second reconstructed image and the second attention probability map output by the detail module; and the target reconstructed image is obtained from the first reconstructed image, the first attention probability map, the second reconstructed image and the second attention probability map. Through the attention probability maps, the embodiment of the disclosure can adaptively adjust the relative weights of the flat region and the detail region in the target reconstructed image, so that the image reconstruction model focuses more on the detail region and better recovers details, complex textures and other information, further improving the image reconstruction effect.
In one embodiment, as shown in fig. 5a, the flat module and the detail module have the same structure, each comprising a first feature extraction sub-module, a feature combination sub-module and a second feature extraction sub-module which are sequentially connected; as shown in fig. 5b, the first feature extraction sub-module and the second feature extraction sub-module have the same structure, each comprising n sequentially connected residual blocks, with the first residual block skip-connected to the nth residual block; as shown in fig. 5c, the feature combination sub-module comprises m sequentially connected multi-scale blocks, with the first multi-scale block skip-connected to the mth multi-scale block.
Taking n as 3 for illustration: the first feature extraction sub-module includes 3 sequentially connected residual blocks, with the first residual block skip-connected to the 3rd residual block. The second feature extraction sub-module likewise includes 3 sequentially connected residual blocks, with the first skip-connected to the 3rd.
Likewise taking m as 3, the feature combination sub-module includes 3 sequentially connected multi-scale blocks, with the first multi-scale block skip-connected to the 3rd multi-scale block.
In one embodiment, the flat module and the detail module further include a convolution layer, an activation layer and a prediction layer, used to convolve the feature map output by the second feature extraction sub-module and predict on it to obtain a reconstructed image, or to convolve and activate it to obtain an attention probability map. The embodiments of the present disclosure do not limit the structures of the flat module and the detail module. The flat module outputs the first reconstructed image and the first attention probability map, and the detail module outputs the second reconstructed image and the second attention probability map, so that a weighted summation can be performed over them to obtain the target reconstructed image. The attention probability maps adaptively adjust the relative weights of the flat region and the detail region in the target reconstructed image, so that the image reconstruction model pays more attention to the detail region and better recovers details, complex textures and other information.
As can be appreciated, the first feature extraction sub-module and the second feature extraction sub-module extract feature information well and thereby restore image details; meanwhile, the skip connections between the residual blocks in the feature extraction sub-modules alleviate the vanishing-gradient problem. The feature combination sub-module enlarges the receptive field, retains as much of the high-frequency detail in the image as possible, and generates a high-quality image.
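To make the structure of figs. 5a-5c concrete, the following PyTorch sketch assembles one flat/detail module. It is a sketch under stated assumptions: the channel count, the sigmoid attention head, the interpretation of the skip connection (input of block 1 added to the output of block n), and the omission of an initial convolution lifting the input image to the feature channel count are all choices made here for illustration.

    import torch.nn as nn

    class SubModule(nn.Module):
        # n sequentially connected blocks, with a skip connection from
        # the input of the first block to the output of the nth block.
        def __init__(self, block_cls, n=3, channels=64):
            super().__init__()
            self.blocks = nn.Sequential(
                *[block_cls(channels) for _ in range(n)])

        def forward(self, x):
            return self.blocks(x) + x

    class FlatOrDetailModule(nn.Module):
        # Flat and detail modules share this structure (fig. 5a): two
        # feature extraction sub-modules around a feature combination
        # sub-module, plus assumed heads producing the reconstructed
        # image and the attention probability map.
        def __init__(self, extraction_block, multiscale_block, channels=64):
            super().__init__()
            self.extract1 = SubModule(extraction_block, n=3, channels=channels)
            self.combine = SubModule(multiscale_block, n=3, channels=channels)
            self.extract2 = SubModule(extraction_block, n=3, channels=channels)
            self.recon_head = nn.Conv3d(channels, 1, kernel_size=3, padding=1)
            self.attn_head = nn.Sequential(
                nn.Conv3d(channels, 1, kernel_size=3, padding=1),
                nn.Sigmoid())

        def forward(self, x):
            feat = self.extract2(self.combine(self.extract1(x)))
            return feat, self.recon_head(feat), self.attn_head(feat)

Here extraction_block and multiscale_block would be the residual block and multi-scale block sketched after figs. 5d and 5e below.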
In one embodiment, as shown in fig. 5d, the residual block comprises at least two feature extraction modules, each consisting of a convolution layer, a batch normalization layer and an activation layer; the residual block performs feature extraction on the residual block input image at least twice, and fuses the extracted feature map with the residual block input image to obtain the residual block output image.
Specifically, the residual block comprises a first feature extraction module and a second feature extraction module. The residual block input image is input into the first feature extraction module for the first feature extraction, yielding the feature map output by the first feature extraction module; this feature map is input into the second feature extraction module for the second feature extraction, yielding the feature map output by the second feature extraction module; the feature map output by the second feature extraction module is then fused with the residual block input image to obtain the residual block output image.
As shown in fig. 5d, the residual block input image is input to feature extraction module 1, which performs the first feature extraction through its convolution layer, batch normalization layer and activation layer. The extracted feature map is then input to feature extraction module 2, which performs the second feature extraction through its convolution layer, batch normalization layer and activation layer. The residual block then fuses the feature map from the second extraction with the residual block input image to obtain the residual block output image.
The above fusion processing may be a summation or another operation, which the embodiment of the present disclosure does not limit.
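A minimal PyTorch sketch of this residual block, assuming 3D convolutions, ReLU activation and summation as the fusion operation (none of which the text fixes):

    import torch.nn as nn

    def extraction_module(channels=64):
        # One feature extraction module: convolution, batch
        # normalization, activation.
        return nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True))

    class ResidualBlock(nn.Module):
        # Two successive feature extractions, fused with the input.
        def __init__(self, channels=64):
            super().__init__()
            self.extract1 = extraction_module(channels)
            self.extract2 = extraction_module(channels)

        def forward(self, x):
            return self.extract2(self.extract1(x)) + x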
In one embodiment, as shown in fig. 5e, the multi-scale block includes a plurality of feature extraction modules, each consisting of a convolution layer, a batch normalization layer and an activation layer; the multi-scale block performs multi-scale feature extraction on the multi-scale block input image, and fuses the feature maps extracted at the different scales with the multi-scale block input image to obtain the multi-scale block output image.
Taking 12 feature extraction modules as an example, the modules perform feature extraction at three scales on the multi-scale block input image: the first scale comprises 6 sequentially connected feature extraction modules, the second scale comprises 3, the third scale comprises 2, and a final module performs the concluding extraction after fusion, as described below.
For the first scale, the multi-scale block input image is input to feature extraction module 11, and the 6 feature extraction modules then each perform one feature extraction in turn, through their convolution, batch normalization and activation layers, and output the corresponding feature maps.
For the second scale, the multi-scale block input image is downsampled by a factor of 2, and the downsampled image is input into feature extraction module 21 of the second scale for feature extraction, yielding the feature map output by module 21. The feature map output by module 12 of the first scale is then downsampled by a factor of 2 and fused with the feature map output by module 21; the fused feature map is input into module 22 of the second scale, yielding the feature map output by module 22. The feature map output by module 14 of the first scale is then downsampled by a factor of 2 and fused with the feature map output by module 22; the fused feature map is input into module 23 of the second scale, yielding the feature map output by module 23.
For the third scale, the multi-scale block input image is downsampled by a factor of 4 and input into feature extraction module 31 of the third scale for feature extraction, yielding the feature map output by module 31. The feature map output by module 14 of the first scale is downsampled by a factor of 4, the feature map output by module 22 of the second scale is downsampled by a factor of 2, and the two downsampled images are fused with the feature map output by module 31; the fused feature map is input into module 32 of the third scale, yielding the feature map output by module 32.
Finally, the feature map output by module 23 of the second scale is upsampled by a factor of 2 and the feature map output by module 32 of the third scale is upsampled by a factor of 4; the feature map output by module 16 of the first scale is fused with the two upsampled images, one more feature extraction is performed on the fused feature map, and the multi-scale block output image is output.
In the above embodiment, the first feature extraction sub-module and the second feature extraction sub-module are composed of several residual blocks; the residual blocks extract the information in the image well, and their skip connections also alleviate the vanishing-gradient problem. The feature combination sub-module extracts features at multiple scales and fuses them, which enlarges the receptive field and enhances the sub-module's learning capacity, so that the high-frequency detail is retained as much as possible and a high-quality image is generated. The use of residual blocks and multi-scale blocks thus improves the reconstruction effect of the image reconstruction model.
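In the same spirit, the following is a simplified PyTorch sketch of the multi-scale block of fig. 5e. It keeps the three scales (6, 3 and 2 feature extraction modules) and the final fusion module, but abbreviates the intermediate tap points (modules 12, 14, 16, 22, 23, 31, 32 above) to keep the sketch short; average pooling for downsampling and trilinear interpolation for upsampling are assumptions, so this illustrates the pattern rather than rendering the figure line for line.

    import torch.nn as nn
    import torch.nn.functional as F

    def extraction_module(channels=64):
        # Convolution + batch normalization + activation (assumed ReLU).
        return nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True))

    class MultiScaleBlock(nn.Module):
        # Extract features at full, 1/2 and 1/4 resolution and fuse them.
        def __init__(self, channels=64):
            super().__init__()
            self.scale1 = nn.Sequential(
                *[extraction_module(channels) for _ in range(6)])
            self.scale2 = nn.Sequential(
                *[extraction_module(channels) for _ in range(3)])
            self.scale3 = nn.Sequential(
                *[extraction_module(channels) for _ in range(2)])
            self.fuse = extraction_module(channels)

        def forward(self, x):
            f1 = self.scale1(x)                   # full resolution
            f2 = self.scale2(F.avg_pool3d(x, 2))  # 1/2 resolution
            f3 = self.scale3(F.avg_pool3d(x, 4))  # 1/4 resolution
            f2 = F.interpolate(f2, size=f1.shape[2:], mode='trilinear',
                               align_corners=False)
            f3 = F.interpolate(f3, size=f1.shape[2:], mode='trilinear',
                               align_corners=False)
            # Fuse the scales, then fuse with the block input image.
            return self.fuse(f1 + f2 + f3) + x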
In one embodiment, as shown in fig. 6, the disclosed embodiment further includes a training process of the image reconstruction model, such as the following steps:
in step 401, a plurality of sample images are acquired.
Wherein the sample image is a low resolution image.
Before model training, a plurality of sample images are acquired. A sample image may be a PET image, a stitched image of a PET image and a CT image, or a stitched image of a PET image and an MR image; the embodiment of the disclosure does not limit the sample image.
A stitched image may be obtained as follows: acquire an original PET image and a reference image (a CT image or an MR image), sequentially resample and normalize the original PET image and the reference image, randomly crop the normalized PET image and reference image, and stitch the cropped PET image and reference image to obtain the stitched image.
The resampling resamples the images to the same resolution. The normalization maps the pixel value of each pixel in the image into a preset range.
Taking the normalization of the resampled PET image as an example: calculate the pixel mean μ and the pixel standard deviation σ of the resampled PET image; for each pixel in the resampled PET image, calculate the difference between its pixel value I and the mean μ, then the ratio of that difference to the standard deviation σ, and take the ratio as the processed pixel value I'; the normalized PET image is then determined from the processed values of all pixels.
The normalization can be written as: I' = (I - μ)/σ
where I' is the processed pixel value, I is the pixel value before processing, μ is the pixel mean of the image, and σ is the pixel standard deviation of the image.
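A small NumPy sketch of this normalization step; returning μ and σ alongside the image is a convenience added here so the later restoration step can invert the mapping:

    import numpy as np

    def normalize(image: np.ndarray):
        # Z-score normalization: I' = (I - mu) / sigma.
        mu = float(image.mean())
        sigma = float(image.std())
        return (image - mu) / sigma, mu, sigma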
The random cropping crops the image to a fixed size meeting the model's input requirements, for example [64, 64, 16]; when the sample image is a stitched image, its size may be [64, 64, 32], while the gold standard corresponding to the sample image remains [64, 64, 16].
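A sketch of such a random crop on a 3D volume; note that in training the corresponding gold standard would be cropped at the matching location, a detail this minimal version leaves to the caller:

    import numpy as np

    def random_crop(volume: np.ndarray, size=(64, 64, 16)) -> np.ndarray:
        # Randomly crop a 3D volume to a fixed size.
        starts = [np.random.randint(0, s - c + 1)
                  for s, c in zip(volume.shape, size)]
        slices = tuple(slice(st, st + c) for st, c in zip(starts, size))
        return volume[slices]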
Step 402, acquiring gold standards corresponding to each sample image.
Wherein the gold standards include a full-image gold standard, a first gold standard and a second gold standard.
The gold standards may be obtained as follows: for each sample image, acquire the corresponding initial gold standard; resample the initial gold standard to obtain the full-image gold standard; calculate the gradient map corresponding to the full-image gold standard and take it as the first gold standard; and calculate the difference map between the full-image gold standard and the gradient map, and/or the corner map corresponding to the full-image gold standard, and take the difference map and/or the corner map as the second gold standard.
It will be appreciated that, for the edge region, the difference map may be used as the corresponding second gold standard because the edge gradient is larger; for the corner region, the corner map may be used as the corresponding second gold standard.
The gradient map corresponding to the full-image gold standard may be calculated with a 3D Sobel operator.
The corner map corresponding to the full-image gold standard may be calculated with a corner detection algorithm.
The embodiment of the disclosure does not limit how the gradient map and the corner map are calculated; in practical applications, other methods may be used.
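A hedged sketch of the gradient-map and difference-map gold standards; the use of scipy.ndimage and the gradient-magnitude formulation are assumptions, since the text only names "a 3D Sobel operator":

    import numpy as np
    from scipy import ndimage

    def gradient_gold_standard(full_image_gs: np.ndarray) -> np.ndarray:
        # Gradient map of a 3D volume via per-axis Sobel filtering.
        grads = [ndimage.sobel(full_image_gs, axis=ax) for ax in range(3)]
        return np.sqrt(sum(g ** 2 for g in grads))

    def difference_gold_standard(full_image_gs: np.ndarray) -> np.ndarray:
        # Difference map: full-image gold standard minus its gradient map.
        return full_image_gs - gradient_gold_standard(full_image_gs)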
In one embodiment, if the sample images are normalized, the gold standards corresponding to the sample images also need to be normalized; the specific process can refer to the normalization of the resampled PET image described above and is not repeated here.
Step 403, training based on a plurality of sample images and the gold standards corresponding to the sample images to obtain the image reconstruction model.
The model training process may be as follows: sequentially input a plurality of sample images into a deep learning model for multiple rounds of training, obtaining the training results output by the deep learning model in turn; across these rounds, the learning rate is increased from an initial learning rate by a preset step. Then determine the loss value of each round from the training results and the gold standards of the corresponding sample images, and determine a target learning rate for the deep learning model from these loss values. Finally, sequentially input a number of unused sample images into the deep learning model with its learning rate set to the target learning rate for training, obtaining the image reconstruction model.
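A compact sketch of this learning-rate search; the additive step, the smallest-loss selection rule and the Adam optimizer are assumptions, as the text fixes none of them:

    import torch

    def find_target_lr(model, samples, loss_fn,
                       lr0=1e-5, step=1e-5, rounds=20):
        # One training round per sample with a learning rate that grows
        # by a preset step; return the rate with the smallest loss.
        opt = torch.optim.Adam(model.parameters(), lr=lr0)
        lr, history = lr0, []
        for i in range(rounds):
            x, gold = samples[i % len(samples)]
            for group in opt.param_groups:
                group["lr"] = lr
            opt.zero_grad()
            loss = loss_fn(model(x), gold)
            loss.backward()
            opt.step()
            history.append((lr, loss.item()))
            lr += step
        return min(history, key=lambda t: t[1])[0]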
In one embodiment, the loss values during image reconstruction model training include a full-image loss value, a flat region loss value, and a detail region loss value.
For example, a sample image is input into the deep learning model for training, and the training result output by the deep learning model is obtained. The loss value between the training result and the full-image gold standard gives the full-image loss value; the loss value between the training result and the first gold standard gives the flat region loss value; and the loss value between the training result and the second gold standard gives the detail region loss value. Finally, the full-image loss value, the flat region loss value and the detail region loss value are summed to obtain the total loss value.
When the detail module comprises an edge module and a corner module, the loss value between the training result and the second gold standard corresponding to the edge region and the loss value between the training result and the second gold standard corresponding to the corner region are calculated, giving the edge region loss value and the corner region loss value. The full-image loss value, the flat region loss value, the edge region loss value and the corner region loss value are then summed to obtain the total loss value.
For example, the total loss value may be written as:
Loss = Loss(whole) + Loss(flat) + Loss(edge) + Loss(corner)
where Loss(whole) is the loss function of the whole image, Loss(flat) is the loss function of the flat region, Loss(edge) is the loss function of the edge region, and Loss(corner) is the loss function of the corner region. Each Loss() may be the L1 loss, as follows:
loss(x_i, y_i) = |x_i - y_i|
where x_i is the i-th pixel of the training result, y_i is the i-th pixel of the gold standard, and i is the pixel index. The L1 loss is the pixel-by-pixel absolute difference between the training result and the gold standard.
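A minimal PyTorch sketch of this composite loss; passing the four gold standards explicitly is an assumption of the sketch:

    import torch.nn.functional as F

    def total_loss(pred, gs_whole, gs_flat, gs_edge, gs_corner):
        # Sum of per-region L1 losses: whole + flat + edge + corner.
        return (F.l1_loss(pred, gs_whole) +
                F.l1_loss(pred, gs_flat) +
                F.l1_loss(pred, gs_edge) +
                F.l1_loss(pred, gs_corner))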
In the above embodiment, a plurality of sample images are acquired, the gold standards corresponding to the sample images are acquired, and the image reconstruction model is obtained by training on them. The embodiment of the disclosure can thus train an image reconstruction model that achieves super-resolution reconstruction; and because the flat region loss value and the detail region loss value are used during training, the image reconstruction model reconstructs the detail regions better, the reconstructed image retains its detail, and the quality of the reconstructed image is improved.
In one embodiment, as shown in fig. 7, an image reconstruction method is provided, which is illustrated by applying to the terminal shown in fig. 1, and may include the following steps:
Step 501, an original image is acquired.
For example, the terminal acquires an original PET image from the PET-CT apparatus, or acquires an original PET image and CT image from the PET-CT apparatus.
Step 502, resampling the original image, normalizing the resampled image, and determining the input image from the normalization.
For example, the terminal resamples the original PET image, normalizes the resampled PET image, and determines the normalized PET image as the input image. Alternatively, the terminal resamples the original PET image and CT image, normalizes the resampled PET image and CT image, and stitches the normalized PET image and CT image to obtain the input image.
For the normalization of the original PET image, refer to the description in the above embodiment, which is not repeated here.
Step 503, reconstructing the input image by using the preset image reconstruction model to obtain the target reconstructed image.
Specifically, the input image is input into the flat module for reconstruction, obtaining the first feature map, the first reconstructed image and the first attention probability map output by the flat module; the first feature map is input to the detail module for reconstruction, obtaining the second feature map, the second reconstructed image and the second attention probability map output by the detail module; and the target reconstructed image is obtained according to the first reconstructed image, the first attention probability map, the second reconstructed image and the second attention probability map.
Step 504, restoring the target reconstructed image according to the normalization to obtain the restored reconstructed image.
The restoration may proceed as follows: calculate the product of the target reconstructed image and the pixel standard deviation of the full-image gold standard, then calculate the sum of that product and the pixel mean of the full-image gold standard, obtaining a preliminarily restored reconstructed image; then resample the preliminarily restored image back to the resolution space of the original image to obtain the restored reconstructed image. In practical applications, other normalization and restoration methods may be used, which the embodiments of the present disclosure do not limit.
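A sketch of this restoration, inverting the earlier normalize() helper; the resampling back to the original resolution space is only indicated, since the text names no library for it:

    import numpy as np

    def restore(recon: np.ndarray, mu: float, sigma: float) -> np.ndarray:
        # Undo z-score normalization: I = I' * sigma + mu.
        return recon * sigma + mu

    # The preliminarily restored image would then be resampled back to
    # the resolution space of the original image (resampler not shown).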
In the above embodiment, the original image is acquired; the original image is resampled, the resampled image is normalized, and the input image is determined from the normalization; the input image is reconstructed with the preset image reconstruction model to obtain the target reconstructed image; and the target reconstructed image is restored according to the normalization to obtain the restored reconstructed image. Resampling and normalizing the acquired original image before reconstruction adapts it to the image reconstruction model and improves reconstruction efficiency; restoring the target reconstructed image afterwards, by inverting the normalization, yields the reconstructed image corresponding to the original image and meets the user's needs.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps may comprise several sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times, and whose order of execution is not necessarily sequential; they may be executed in turn or alternately with at least some of the other steps, sub-steps or stages.
Based on the same inventive concept, the embodiment of the application also provides an image reconstruction device for realizing the image reconstruction method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation in the embodiments of the image reconstruction apparatus provided in the following may be referred to the limitation of the image reconstruction method hereinabove, and will not be repeated here.
In one embodiment, as shown in fig. 8, there is provided an image reconstruction apparatus including:
an image acquisition module 601, configured to acquire an input image; wherein the input image is a low resolution image;
the image reconstruction module 602 is configured to reconstruct the input image using a preset image reconstruction model to obtain a target reconstructed image, where the image reconstruction model includes a flat module and a detail module, and the resolution of the target reconstructed image is higher than that of the input image.
In one embodiment, the image reconstruction module 602 is specifically configured to input the input image to the flat module for reconstruction, so as to obtain a first feature map, a first reconstructed image and a first attention probability map output by the flat module; input the first feature map to the detail module for reconstruction, so as to obtain a second feature map, a second reconstructed image and a second attention probability map output by the detail module; and obtain a target reconstructed image according to the first reconstructed image, the first attention probability map, the second reconstructed image and the second attention probability map.
In one embodiment, the flat module and the detail module have the same structure, each comprising a first feature extraction sub-module, a feature combination sub-module and a second feature extraction sub-module which are sequentially connected;
the first feature extraction sub-module and the second feature extraction sub-module have the same structure, each comprising n sequentially connected residual blocks, with the first residual block skip-connected to the nth residual block;
the feature combination sub-module comprises m sequentially connected multi-scale blocks, with the first multi-scale block skip-connected to the mth multi-scale block.
In one embodiment, the residual block comprises at least two feature extraction modules, each consisting of a convolution layer, a batch normalization layer and an activation layer;
the residual block is used for performing feature extraction on the residual block input image at least twice, and fusing the extracted feature map with the residual block input image to obtain a residual block output image.
In one embodiment, the multi-scale block includes a plurality of feature extraction modules, each consisting of a convolution layer, a batch normalization layer and an activation layer;
the multi-scale block is used for performing multi-scale feature extraction on the multi-scale block input image, and fusing the feature maps extracted at multiple scales with the multi-scale block input image to obtain a multi-scale block output image.
In one embodiment, the apparatus further comprises:
the sample acquisition module is used for acquiring a plurality of sample images; wherein the sample image is a low resolution image;
the gold standard acquisition module is used for acquiring the gold standards corresponding to the sample images; the gold standards include a full-image gold standard, a first gold standard and a second gold standard;
and the training module is used for training based on a plurality of sample images and gold standards corresponding to the sample images to obtain an image reconstruction model.
In one embodiment, the loss values during image reconstruction model training include a full-image loss value, a flat region loss value, and a detail region loss value.
The respective modules in the above-described image reconstruction apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 9. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of image reconstruction. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring an input image; wherein the input image is a low resolution image;
and reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image, wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than that of the input image.
In one embodiment, the processor when executing the computer program further performs the steps of:
inputting the input image into the flat module for reconstruction, and obtaining a first feature map, a first reconstructed image and a first attention probability map output by the flat module;
inputting the first feature map to the detail module for reconstruction, and obtaining a second feature map, a second reconstructed image and a second attention probability map output by the detail module;
and obtaining a target reconstructed image according to the first reconstructed image, the first attention probability map, the second reconstructed image and the second attention probability map.
In one embodiment, the flat module and the detail module have the same structure and each comprise a first feature extraction sub-module, a feature combination sub-module and a second feature extraction sub-module which are sequentially connected;
the first feature extraction submodule and the second feature extraction submodule have the same structure and comprise residual blocks which are sequentially connected, and the first residual block is in jump connection with the nth residual block;
the feature combination submodule comprises multi-scale blocks which are connected in sequence, and the first multi-scale block is connected with the mth multi-scale block in a jumping mode.
In one embodiment, the residual block includes at least two feature extraction modules, each consisting of a convolution layer, a batch normalization layer, and an activation layer;
the residual block is configured to perform feature extraction on the residual block input image at least twice and fuse the extracted feature map with the residual block input image to obtain a residual block output image.
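A sketch of such a residual block, assuming exactly two feature extraction modules, ReLU activations, and additive fusion of the extracted feature map with the block input:

import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two feature extraction modules (convolution + batch normalization
    # + activation), fused with the block input by addition.
    def __init__(self, channels=64):
        super().__init__()
        self.extract = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Fuse the extracted feature map with the residual block input.
        return x + self.extract(x)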
In one embodiment, the multi-scale block includes a plurality of feature extraction modules, each consisting of a convolution layer, a batch normalization layer, and an activation layer;
the multi-scale block is configured to perform feature extraction at multiple scales on the multi-scale block input image and fuse the feature maps extracted at the multiple scales with the multi-scale block input image to obtain a multi-scale block output image.
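A corresponding sketch of the multi-scale block; the branch kernel sizes (3x3, 5x5, 7x7), the 1x1 projection, and the additive fusion are assumptions, since the embodiment fixes only the convolution/batch-normalization/activation composition of each feature extraction module:

import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    # Parallel feature extraction modules at several scales, fused with
    # the block input.
    def __init__(self, channels=64, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, k, padding=k // 2),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        )
        # Project the concatenated multi-scale features back to `channels`.
        self.project = nn.Conv2d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        # Fuse the multi-scale feature maps with the block input.
        return x + self.project(feats)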
In one embodiment, the processor, when executing the computer program, further implements the following steps:
acquiring a plurality of sample images; wherein the sample images are low resolution images;
acquiring a gold standard corresponding to each sample image; the gold standards include a full-image gold standard, a first gold standard, and a second gold standard;
training based on the plurality of sample images and the gold standards corresponding to the sample images to obtain the image reconstruction model.
In one embodiment, the loss values during training of the image reconstruction model include a full-image loss value, a flat region loss value, and a detail region loss value.
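A hedged sketch of such a composite loss: the L1 distance, the binary region masks (assumed to be derived from the first and second gold standards, marking flat and detail regions respectively), and the weights lam_flat and lam_detail are illustrative choices; the embodiment fixes only that the loss combines a full-image term, a flat region term, and a detail region term.

import torch.nn.functional as F

def reconstruction_loss(pred, full_gt, flat_mask, detail_mask,
                        lam_flat=1.0, lam_detail=1.0):
    # Full-image loss over the whole prediction.
    full_loss = F.l1_loss(pred, full_gt)
    # Flat region loss: restrict the comparison to the (assumed) flat mask.
    flat_loss = F.l1_loss(pred * flat_mask, full_gt * flat_mask)
    # Detail region loss: restrict the comparison to the (assumed) detail mask.
    detail_loss = F.l1_loss(pred * detail_mask, full_gt * detail_mask)
    return full_loss + lam_flat * flat_loss + lam_detail * detail_loss

During training, each sample image would be forwarded through the model and this loss evaluated against the corresponding gold standards before back-propagation; the embodiment does not prescribe the optimizer or schedule.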
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which, when executed by a processor, implements the following steps:
acquiring an input image; wherein the input image is a low resolution image;
and reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image, wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than the resolution of the input image.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
inputting the input image into the flat module for reconstruction, and obtaining a first feature map, a first reconstructed image, and a first attention probability map output by the flat module;
inputting the first feature map into the detail module for reconstruction, and obtaining a second feature map, a second reconstructed image, and a second attention probability map output by the detail module;
and obtaining the target reconstructed image according to the first reconstructed image, the first attention probability map, the second reconstructed image, and the second attention probability map.
In one embodiment, the flat module and the detail module have the same structure, each comprising a first feature extraction submodule, a feature combination submodule, and a second feature extraction submodule connected in sequence;
the first feature extraction submodule and the second feature extraction submodule have the same structure and each include n sequentially connected residual blocks, with a skip connection from the first residual block to the nth residual block;
the feature combination submodule includes m sequentially connected multi-scale blocks, with a skip connection from the first multi-scale block to the mth multi-scale block.
In one embodiment, the residual block includes at least two feature extraction modules, each consisting of a convolution layer, a batch normalization layer, and an activation layer;
the residual block is configured to perform feature extraction on the residual block input image at least twice and fuse the extracted feature map with the residual block input image to obtain a residual block output image.
In one embodiment, the multi-scale block includes a plurality of feature extraction modules, each consisting of a convolution layer, a batch normalization layer, and an activation layer;
the multi-scale block is configured to perform feature extraction at multiple scales on the multi-scale block input image and fuse the feature maps extracted at the multiple scales with the multi-scale block input image to obtain a multi-scale block output image.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
acquiring a plurality of sample images; wherein the sample images are low resolution images;
acquiring a gold standard corresponding to each sample image; the gold standards include a full-image gold standard, a first gold standard, and a second gold standard;
training based on the plurality of sample images and the gold standards corresponding to the sample images to obtain the image reconstruction model.
In one embodiment, the loss values during training of the image reconstruction model include a full-image loss value, a flat region loss value, and a detail region loss value.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
acquiring an input image; wherein the input image is a low resolution image;
and reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image, wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than the resolution of the input image.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
inputting the input image into the flat module for reconstruction, and obtaining a first feature map, a first reconstructed image, and a first attention probability map output by the flat module;
inputting the first feature map into the detail module for reconstruction, and obtaining a second feature map, a second reconstructed image, and a second attention probability map output by the detail module;
and obtaining the target reconstructed image according to the first reconstructed image, the first attention probability map, the second reconstructed image, and the second attention probability map.
In one embodiment, the flat module and the detail module have the same structure, each comprising a first feature extraction submodule, a feature combination submodule, and a second feature extraction submodule connected in sequence;
the first feature extraction submodule and the second feature extraction submodule have the same structure and each include n sequentially connected residual blocks, with a skip connection from the first residual block to the nth residual block;
the feature combination submodule includes m sequentially connected multi-scale blocks, with a skip connection from the first multi-scale block to the mth multi-scale block.
In one embodiment, the residual block includes at least two feature extraction modules, each consisting of a convolution layer, a batch normalization layer, and an activation layer;
the residual block is configured to perform feature extraction on the residual block input image at least twice and fuse the extracted feature map with the residual block input image to obtain a residual block output image.
In one embodiment, the multi-scale block includes a plurality of feature extraction modules, each consisting of a convolution layer, a batch normalization layer, and an activation layer;
the multi-scale block is configured to perform feature extraction at multiple scales on the multi-scale block input image and fuse the feature maps extracted at the multiple scales with the multi-scale block input image to obtain a multi-scale block output image.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
acquiring a plurality of sample images; wherein the sample images are low resolution images;
acquiring a gold standard corresponding to each sample image; the gold standards include a full-image gold standard, a first gold standard, and a second gold standard;
training based on the plurality of sample images and the gold standards corresponding to the sample images to obtain the image reconstruction model.
In one embodiment, the loss values during training of the image reconstruction model include a full-image loss value, a flat region loss value, and a detail region loss value.
Those skilled in the art will appreciate that all or part of the processes of the methods described above may be implemented by a computer program instructing related hardware; the computer program may be stored on a non-transitory computer readable storage medium and, when executed, may include the processes of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, or the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered to be within the scope of this specification.
The above examples represent only a few embodiments of the present application; their descriptions, while specific and detailed, are not to be construed as limiting the scope of the present application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of image reconstruction, the method comprising:
acquiring an input image; wherein the input image is a low resolution image;
and reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image, wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than the resolution of the input image.
2. The method according to claim 1, wherein reconstructing the input image by using the preset image reconstruction model to obtain the target reconstructed image comprises:
inputting the input image into the flat module for reconstruction, and obtaining a first feature map, a first reconstructed image, and a first attention probability map output by the flat module;
inputting the first feature map into the detail module for reconstruction, and obtaining a second feature map, a second reconstructed image, and a second attention probability map output by the detail module;
and obtaining the target reconstructed image according to the first reconstructed image, the first attention probability map, the second reconstructed image, and the second attention probability map.
3. The method according to claim 2, wherein the flat module and the detail module have the same structure, each comprising a first feature extraction submodule, a feature combination submodule, and a second feature extraction submodule connected in sequence;
the first feature extraction submodule and the second feature extraction submodule have the same structure and each include n sequentially connected residual blocks, with a skip connection from the first residual block to the nth residual block;
the feature combination submodule includes m sequentially connected multi-scale blocks, with a skip connection from the first multi-scale block to the mth multi-scale block.
4. The method according to claim 3, wherein the residual block comprises at least two feature extraction modules, each consisting of a convolution layer, a batch normalization layer, and an activation layer;
the residual block is configured to perform feature extraction on the residual block input image at least twice and fuse the extracted feature map with the residual block input image to obtain a residual block output image.
5. The method according to claim 3, wherein the multi-scale block comprises a plurality of feature extraction modules, each consisting of a convolution layer, a batch normalization layer, and an activation layer;
the multi-scale block is configured to perform feature extraction at multiple scales on the multi-scale block input image and fuse the feature maps extracted at the multiple scales with the multi-scale block input image to obtain a multi-scale block output image.
6. The method according to claim 1, wherein the method further comprises:
acquiring a plurality of sample images; wherein the sample images are low resolution images;
acquiring a gold standard corresponding to each of the sample images; the gold standards include a full-image gold standard, a first gold standard, and a second gold standard;
training based on the plurality of sample images and gold standards corresponding to the sample images to obtain the image reconstruction model.
7. The method according to claim 6, wherein the loss values during training of the image reconstruction model include a full-image loss value, a flat region loss value, and a detail region loss value.
8. An image reconstruction apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an input image; the input image is a low resolution image;
the image reconstruction module is used for reconstructing the input image by using a preset image reconstruction model to obtain a target reconstructed image; wherein the image reconstruction model comprises a flat module and a detail module, and the resolution of the target reconstructed image is higher than the resolution of the input image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202111628840.9A 2021-12-28 2021-12-28 Image reconstruction method, image reconstruction device, computer equipment and storage medium Pending CN116416328A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111628840.9A CN116416328A (en) 2021-12-28 2021-12-28 Image reconstruction method, image reconstruction device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111628840.9A CN116416328A (en) 2021-12-28 2021-12-28 Image reconstruction method, image reconstruction device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116416328A true CN116416328A (en) 2023-07-11

Family

ID=87053092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111628840.9A Pending CN116416328A (en) 2021-12-28 2021-12-28 Image reconstruction method, image reconstruction device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116416328A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination