CN113034642B - Image reconstruction method and device and training method and device of image reconstruction model


Info

Publication number
CN113034642B
CN113034642B
Authority
CN
China
Prior art keywords
images, image, sequence, determining, layer
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202110341995.8A
Other languages
Chinese (zh)
Other versions
CN113034642A (en)
Inventor
于朋鑫
孙晶华
王少康
陈宽
Current Assignee
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Infervision Medical Technology Co Ltd filed Critical Infervision Medical Technology Co Ltd
Priority to CN202110341995.8A
Publication of CN113034642A
Application granted
Publication of CN113034642B

Classifications

    • G06T 11/00 - 2D [two-dimensional] image generation; G06T 11/003 - reconstruction from projections, e.g. tomography
    • G06F 18/00 - pattern recognition; G06F 18/22 - matching criteria, e.g. proximity measures
    • G06N 3/02 - neural networks; G06N 3/08 - learning methods
    • G06T 3/4053 - scaling of whole images or parts thereof based on super-resolution (output image resolution higher than sensor resolution)
    • G06T 5/70 - image enhancement or restoration: denoising; smoothing
    • G06T 2207/10081 - image acquisition modality: computed x-ray tomography [CT]
    • G06T 2207/20081 - algorithmic details: training; learning
    • G06T 2211/416 - computed tomography: exact reconstruction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image reconstruction method and apparatus and a training method and apparatus for an image reconstruction model. The image reconstruction method includes: determining a first image sequence based on a thick-layer image sequence, where the first image sequence has a higher signal-to-noise ratio than the thick-layer image sequence, the thick-layer image sequence includes a plurality of original images, the first image sequence includes a plurality of first images, and the first images correspond one-to-one in spatial position to the original images; determining two second images based on each of the plurality of first images to obtain a second image sequence, where the second image sequence includes the plurality of second images and each first image is located between its two corresponding second images; and determining a thin-layer image sequence based on the first image sequence and the second image sequence. The technical scheme of the application can reconstruct a thin-layer image sequence with high resolution and high accuracy.

Description

Image reconstruction method and device and training method and device of image reconstruction model
Technical Field
The application relates to the technical field of image processing, in particular to an image reconstruction method and device and an image reconstruction model training method and device.
Background
Computed tomography (CT) is non-invasive and offers high contrast, high resolution, and multi-planar imaging (e.g., coronal and sagittal planes); it is therefore widely used in medicine, for example in diagnosis, image-guided surgery, and radiation therapy.
Because thin-layer CT images have a smaller layer spacing than thick-layer CT images, they have higher spatial resolution; that is, a thin-layer CT image contains more information, and more accurate diagnoses can be made from it. However, thin-layer CT images are very large, which poses a great challenge for both data transmission and storage.
Therefore, it is desirable to provide a method for conveniently acquiring thin-layer CT images.
Disclosure of Invention
In view of this, embodiments of the present application provide an image reconstruction method and apparatus, and an image reconstruction model training method and apparatus, which can reconstruct a thin-layer image sequence with high resolution and high accuracy.
In a first aspect, an embodiment of the present application provides an image reconstruction method, including: determining a first image sequence based on a thick-layer image sequence, where the first image sequence has a higher signal-to-noise ratio than the thick-layer image sequence, the thick-layer image sequence includes a plurality of original images, the first image sequence includes a plurality of first images, and the first images correspond one-to-one in spatial position to the original images; determining two second images based on each of the plurality of first images to obtain a second image sequence, where the second image sequence includes the plurality of second images and each first image is located between its two corresponding second images; and determining a thin-layer image sequence based on the first image sequence and the second image sequence.
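At the level of tensor shapes, these three steps can be sketched as follows. The helper functions are hypothetical stand-ins (identity and layer copies) introduced only for illustration; they are not the learned models described in the application:

```python
import numpy as np

def determine_first_sequence(thick):
    # stand-in for the learned thick -> first mapping (identity here;
    # a real model would raise the signal-to-noise ratio)
    return thick.copy()

def determine_second_images(first):
    # stand-in producing two second images per first image (copies here;
    # a real model would predict the layers just above and below)
    return np.repeat(first, 2, axis=0)

def determine_thin_sequence(first, second):
    # place each first image between its two second images:
    # per layer i the order is [second[2i], first[i], second[2i+1]]
    n, h, w = first.shape
    triples = np.stack([second[0::2], first, second[1::2]], axis=1)
    return triples.reshape(3 * n, h, w)

thick = np.zeros((4, 8, 8))                    # N = 4 thick layers
first = determine_first_sequence(thick)        # N first images
second = determine_second_images(first)        # 2N second images
thin = determine_thin_sequence(first, second)  # 3N thin layers
assert thin.shape == (12, 8, 8)
```

The sketch only fixes the bookkeeping: N thick layers yield N first images, 2N second images, and a 3N-layer thin sequence.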
In some embodiments of the present application, determining the thin-layer image sequence based on the first image sequence and the second image sequence includes: removing the second images located outside the first image sequence to obtain an adjusted second image sequence; and determining the thin-layer image sequence based on the first image sequence and the adjusted second image sequence.
In some embodiments of the present application, determining the first image sequence based on the thick-layer image sequence includes: performing feature extraction on the plurality of original images to obtain first feature maps of the plurality of original images; and performing image reconstruction based on the first feature maps of the plurality of original images to obtain the plurality of first images. Determining two second images based on each of the plurality of first images to obtain the second image sequence includes: performing feature extraction on the plurality of first images to obtain second feature maps of the plurality of first images; combining the first feature maps of the plurality of original images and the second feature maps of the plurality of first images to obtain a first concatenated feature map; and performing image reconstruction on the first concatenated feature map to obtain the plurality of second images.
In some embodiments of the present application, the image reconstruction method further includes: determining a third image sequence based on the second image sequence, where the third image sequence includes a plurality of third images, each first image corresponds to two of the third images, and the two third images lie on the two sides of the first image, each on the side of a corresponding second image facing away from the first image. Determining the thin-layer image sequence based on the first image sequence and the second image sequence then includes: determining the thin-layer image sequence based on the first image sequence, the second image sequence, and the third image sequence.
In some embodiments of the present application, determining the first image sequence based on the thick-layer image sequence includes: performing feature extraction on the plurality of original images to obtain first feature maps of the plurality of original images; and performing image reconstruction based on the first feature maps of the plurality of original images to obtain the plurality of first images. Determining two second images based on each of the plurality of first images to obtain the second image sequence includes: performing feature extraction on the plurality of first images to obtain second feature maps of the plurality of first images; combining the first feature maps of the plurality of original images and the second feature maps of the plurality of first images to obtain a first concatenated feature map; and performing image reconstruction on the first concatenated feature map to obtain the plurality of second images. Determining the third image sequence based on the second image sequence includes: performing feature extraction on the plurality of second images to obtain third feature maps of the plurality of second images; combining the first feature maps of the plurality of original images, the second feature maps of the plurality of first images, and the third feature maps of the plurality of second images to obtain a second concatenated feature map; and performing image reconstruction on the second concatenated feature map to obtain the plurality of third images.
In some embodiments of the present application, determining the thin-layer image sequence based on the first image sequence, the second image sequence, and the third image sequence includes: removing the second images and third images located outside the first image sequence to obtain an adjusted second image sequence and an adjusted third image sequence; and determining the thin-layer image sequence based on the first image sequence, the adjusted second image sequence, and the adjusted third image sequence.
In a second aspect, an embodiment of the present application provides a method for training an image reconstruction model, including: determining a first image sequence based on a sample thick-layer image sequence, where the first image sequence has a higher signal-to-noise ratio than the sample thick-layer image sequence, the sample thick-layer image sequence includes a plurality of original images, the first image sequence includes a plurality of first images, and the plurality of first images correspond one-to-one in spatial position to the plurality of original images; determining two second images based on each of the plurality of first images to obtain a second image sequence, where the second image sequence includes the plurality of second images and each first image is located between its two corresponding second images; determining a predicted thin-layer image sequence based on the first image sequence and the second image sequence; determining a first content feature map based on the predicted thin-layer image sequence; determining a loss function based on the first content feature map; and adjusting parameters of a deep learning model based on the loss function to obtain the image reconstruction model.
In some embodiments of the present application, determining the loss function based on the first content feature map includes: determining a first similarity between a first position and a second position on the first content feature map; and determining the loss function based on the first similarity.
In some embodiments of the present application, the method for training an image reconstruction model further includes: determining a second similarity between the first position and a third position on the first content feature map, where determining the loss function based on the first similarity includes: determining the loss function based on the difference between the first similarity and the second similarity.
In some embodiments of the present application, the method for training an image reconstruction model further includes: determining a second content feature map based on a sample thin-layer image sequence, and determining the first position, the second position, and the third position on the second content feature map, where the similarity between the first position and the second position on the second content feature map is greater than the similarity between the first position and the third position. Determining the loss function based on the difference between the first similarity and the second similarity then includes: when the first similarity is less than or equal to the second similarity, determining the loss function based on the difference between the first similarity and the second similarity.
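As a hedged sketch of this criterion (the application does not name a similarity measure; cosine similarity and the hinge form below are assumptions made for illustration): the loss penalizes the predicted content feature map only when it breaks the similarity ordering that holds on the ground-truth (second) content feature map.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_loss(feat_p1, feat_p2, feat_p3):
    # feat_p1/p2/p3: feature vectors at the first, second, and third
    # positions of the first (predicted) content feature map
    s12 = cosine(feat_p1, feat_p2)  # first similarity
    s13 = cosine(feat_p1, feat_p3)  # second similarity
    # contribute a loss only when the first similarity fails to exceed
    # the second, i.e. when the predicted map violates the ordering
    # observed on the ground-truth content feature map
    return max(0.0, s13 - s12)

p1 = np.array([1.0, 0.0])
p2 = np.array([1.0, 0.1])  # near p1: should stay more similar to p1
p3 = np.array([0.0, 1.0])  # far from p1
assert similarity_loss(p1, p2, p3) == 0.0  # ordering preserved: no penalty
```

Swapping the roles of the second and third positions makes the ordering fail, and the loss becomes positive.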
In a third aspect, an embodiment of the present application provides a training method for an image reconstruction model, including: inputting the sample thick-layer image sequence into a deep learning model to obtain a predicted thin-layer image sequence; determining a first content feature map based on the predicted thin-layer image sequence, and determining a first similarity between a first position and a second position on the first content feature map; determining a loss function based on the first similarity; and adjusting parameters of the deep learning model based on the loss function to obtain an image reconstruction model.
In a fourth aspect, an embodiment of the present application provides an image reconstruction apparatus, including: a first determining module, configured to determine a first image sequence based on a thick-layer image sequence, where the first image sequence has a higher signal-to-noise ratio than the thick-layer image sequence, the thick-layer image sequence includes a plurality of original images, the first image sequence includes a plurality of first images, and the first images correspond one-to-one in spatial position to the original images; a second determining module, configured to determine two second images based on each of the plurality of first images to obtain a second image sequence, where the second image sequence includes the plurality of second images and each first image is located between its two corresponding second images; and a third determining module, configured to determine a thin-layer image sequence based on the first image sequence and the second image sequence.
In a fifth aspect, an embodiment of the present application provides a training apparatus for an image reconstruction model, including: a first determining module, configured to determine a first image sequence based on a sample thick-layer image sequence, where the first image sequence has a higher signal-to-noise ratio than the sample thick-layer image sequence, the sample thick-layer image sequence includes a plurality of original images, the first image sequence includes a plurality of first images, and the first images correspond one-to-one in spatial position to the original images; a second determining module, configured to determine two second images based on each of the plurality of first images to obtain a second image sequence, where the second image sequence includes the plurality of second images and each first image is located between its two corresponding second images; a third determining module, configured to determine a predicted thin-layer image sequence based on the first image sequence and the second image sequence; a fourth determining module, configured to determine a first content feature map based on the predicted thin-layer image sequence; a fifth determining module, configured to determine a loss function based on the first content feature map; and an adjusting module, configured to adjust parameters of a deep learning model based on the loss function to obtain the image reconstruction model.
In a sixth aspect, an embodiment of the present application provides a training apparatus for an image reconstruction model, including: an input module, configured to input a sample thick-layer image sequence into a deep learning model to obtain a predicted thin-layer image sequence; a first determining module, configured to determine a first content feature map based on the predicted thin-layer image sequence and to determine a first similarity between a first position and a second position on the first content feature map; a second determining module, configured to determine a loss function based on the first similarity; and an adjusting module, configured to adjust parameters of the deep learning model based on the loss function to obtain the image reconstruction model.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for executing the image reconstruction method or the training method of an image reconstruction model described above.
In an eighth aspect, an embodiment of the present application provides an electronic device, including: a processor; and a memory for storing processor-executable instructions, where the processor is configured to perform the image reconstruction method or the training method of an image reconstruction model described above.
The embodiments of the application provide an image reconstruction method and apparatus and a training method and apparatus for an image reconstruction model: a plurality of first images with a high signal-to-noise ratio are obtained from the original images of a thick-layer image sequence to form a first image sequence, a plurality of second images are obtained based on the first images to form a second image sequence, and a high-resolution thin-layer image sequence is then obtained from the first and second image sequences. In addition, because the first images correspond one-to-one in spatial position to the original images, and the second images on both sides of each first image are obtained from that first image in a near-to-far manner, the spatial consistency and continuity within the thick-layer image sequence are better exploited, so a thin-layer image sequence with high resolution and high accuracy can be reconstructed.
Drawings
Fig. 1 is a schematic diagram illustrating an implementation environment provided by an embodiment of the present application.
Fig. 2 is a schematic flowchart illustrating an image reconstruction method according to an exemplary embodiment of the present application.
Fig. 3 is a flowchart illustrating an image reconstruction method according to another exemplary embodiment of the present application.
Fig. 4a is a schematic network structure diagram of an image reconstruction model according to an exemplary embodiment of the present application.
Fig. 4b is a schematic diagram illustrating a structure between two adjacent Match layers in a reconstructed thin-layer image sequence according to an exemplary embodiment of the present application.
Fig. 5 is a flowchart illustrating a training method of an image reconstruction model according to an exemplary embodiment of the present application.
Fig. 6 is a flowchart illustrating a training method for an image reconstruction model according to another exemplary embodiment of the present application.
Fig. 7 is a flowchart illustrating a training method of an image reconstruction model according to another exemplary embodiment of the present application.
Fig. 8 is a schematic structural diagram of an image reconstruction apparatus according to an exemplary embodiment of the present application.
Fig. 9 is a schematic structural diagram of a training apparatus for an image reconstruction model according to an exemplary embodiment of the present application.
Fig. 10 is a schematic structural diagram of a training apparatus for an image reconstruction model according to another exemplary embodiment of the present application.
Fig. 11 is a block diagram illustrating an electronic device for performing an image reconstruction method or a training method of an image reconstruction model according to an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Summary of the application
Thin-layer CT images, owing to their high resolution, are well suited to observing small lesions; the thinner the CT image, the smaller the lesion that can be observed. However, because thin-layer CT images occupy a large amount of storage space, in practice thin-layer CT images are typically used for clinical diagnosis while thick-layer CT images are used for long-term storage of patient data.
Thick-layer CT images, however, have an evident drawback: the between-layer resolution differs greatly from the in-plane resolution. The in-plane resolution of a thick-layer CT scan is typically around 1 mm, while the layer thickness can reach 5-10 mm. This means each voxel in a thick-layer CT image is strongly anisotropic, which is detrimental to analysis tasks based on a region of interest and can also greatly degrade the three-dimensional reconstruction of the medical image.
The embodiments of the application provide an image reconstruction method and apparatus that can generate thin-layer CT images from thick-layer CT images, thereby combining the advantages of both.
Exemplary System
Fig. 1 is a schematic diagram illustrating an implementation environment provided by an embodiment of the present application. The implementation environment includes a computer device 110 and a CT scanner 120.
The CT scanner 120 is used for scanning the human tissue to obtain a CT image of the human tissue. The computer device 110 may acquire thick layer CT images from the CT scanner 120. The computer device 110 may perform feature extraction on the thick layer CT image and generate a thin layer CT image based on the extracted features.
The computer device 110 may be a general-purpose computer or a device built from application-specific integrated circuits, which is not limited in the embodiments of the present application. For example, the computer device 110 may be a mobile terminal device such as a tablet computer, or a personal computer (PC) such as a laptop or desktop computer. Those skilled in the art will appreciate that there may be one or more computer devices 110, of the same or different types; the embodiments of the present application limit neither the number nor the type of the computer devices 110.
In some embodiments, the computer device 110 may be a server, i.e., the CT scanner 120 is directly communicatively connected to the server.
In other embodiments, the computer device 110 may be communicatively connected to the CT scanner 120 and the server, respectively, and transmit the thick layer CT image acquired from the CT scanner 120 to the server, so that the server performs an image reconstruction method based on the thick layer CT image.
Exemplary method
Fig. 2 is a schematic flowchart illustrating an image reconstruction method according to an exemplary embodiment of the present application. The method of fig. 2 may be performed by an electronic device, for example, by the computer device or server of fig. 1. As shown in fig. 2, the image reconstruction method includes the following.
210: the method comprises the steps of determining a first image sequence based on a thick-layer image sequence, wherein the signal-to-noise ratio of the first image sequence is higher than that of the thick-layer image sequence, the thick-layer image sequence comprises a plurality of original images, the first image sequence comprises a plurality of first images, and the first images are in one-to-one spatial correspondence with the original images.
The thick-layer image sequence may be a three-dimensional image composed of multiple layers of two-dimensional images; it may be an original image sequence acquired by CT scanning or by another imaging technique. That is, the application limits neither the acquisition method nor the type of the thick-layer image sequence, as long as it is a three-dimensional image composed of multiple two-dimensional layers.
For convenience of description, the image reconstruction method provided in the embodiment of the present application is described in detail below by taking a thick-layer image sequence as a thick-layer CT image as an example.
Specifically, the thick-layer image sequence includes a plurality of original images, each of which is a two-dimensional image. The plurality of original images are arranged to form a thick-layer image sequence. The layer spacing of the plurality of original images may be a fixed value d.
The first image sequence includes a plurality of first images, each of which is also a two-dimensional image. The plurality of first images are arranged to form the first image sequence. The first images correspond one-to-one in spatial position to the original images; that is, the layer spacing of the first images is also d.
Suppose the thick-layer image sequence contains N original images; that is, during CT scanning all the acquired data are compressed into an N-layer sequence. As a result, the thick-layer image sequence is noisy and low in resolution, and the displayed image is blurred.
Obtaining a first image sequence with a higher signal-to-noise ratio from the thick-layer image sequence reduces the noise of the image sequence and facilitates the subsequent acquisition of a high-resolution thin-layer image sequence. In other words, less information is compressed into each first image than into each original image, so the first images have lower noise and higher resolution.
Here, the first image sequence may be obtained by a machine learning model or a deep learning model. For example, the thick-layer image sequence is input into the model, which extracts features from it and generates the first image sequence based on the extracted features. The deep learning (or machine learning) model can be obtained by training; the application does not limit its type.
220: based on each of the plurality of first images, two second images are determined to obtain a second picture sequence, the second picture sequence comprising the plurality of second images, the first image being located between the two second images.
Specifically, the number of first images is N, and two second images are determined on the basis of each first image, that is, the number of second images is 2N. The 2N second images may constitute a second picture sequence.
Two second images corresponding to the first image are respectively positioned at two sides of the first image, so that two second images are included between two adjacent first images. The present embodiment generates two second images to the outside of the first image based on the first image, that is, the second images are acquired in a near-to-far (progressive) manner.
Two second images between two first images may be acquired based on features and/or trends of change of the features of the two adjacent first images.
230: a sequence of thin-layer images is determined based on the first image sequence and the second image sequence.
Interleaving the first image sequence and the second image sequence so that each first image lies between its two corresponding second images reduces the spacing between adjacent layers, thereby yielding a thin-layer image sequence. The thin-layer image sequence has 3N layers; that is, all the data obtained by the CT scan are now compressed into a 3N-layer image sequence, so the resolution of the thin-layer image sequence is higher than that of the thick-layer image sequence.
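A small numpy sketch of this interleaving (the per-layer order [second image above, first image, second image below] is an assumption made for illustration; each layer is tagged with a constant value so the final ordering is visible):

```python
import numpy as np

N, H, W = 4, 8, 8
# tag first images with 0..N-1 and second images with 100..100+2N-1
first = np.arange(N, dtype=float).reshape(N, 1, 1) * np.ones((1, H, W))
second = (100 + np.arange(2 * N, dtype=float)).reshape(2 * N, 1, 1) * np.ones((1, H, W))

# per first image i: [second[2i] (above), first[i], second[2i+1] (below)]
thin = np.empty((3 * N, H, W))
thin[0::3] = second[0::2]
thin[1::3] = first
thin[2::3] = second[1::2]

assert thin.shape == (3 * N, H, W)
# resulting layer order: 100, 0, 101, 102, 1, 103, ...
assert list(thin[:6, 0, 0]) == [100.0, 0.0, 101.0, 102.0, 1.0, 103.0]
```

The N original layer spacings are each split in three, which is how the layer spacing of the reconstructed sequence becomes smaller than d.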
The image reconstruction method provided by the embodiment of the application can be executed through a machine learning model or a deep learning model. Since the deep learning model has higher robustness, the deep learning model can be preferably used for execution.
The embodiment of the application thus provides an image reconstruction method: a plurality of first images with a high signal-to-noise ratio are obtained from the original images of the thick-layer image sequence to form a first image sequence, a plurality of second images are obtained based on the first images to form a second image sequence, and a high-resolution thin-layer image sequence is then obtained from the first and second image sequences. In addition, because the first images correspond one-to-one in spatial position to the original images, and the second images on both sides of each first image are obtained from that first image in a near-to-far manner, the spatial consistency and continuity within the thick-layer image sequence are better exploited, so a thin-layer image sequence with high resolution and high accuracy can be reconstructed.
According to an embodiment of the present application, determining a thin-layer image sequence based on a first image sequence and a second image sequence includes: removing the second image positioned outside the first image sequence to obtain an adjusted second image sequence; determining a thin-layer image sequence based on the first image sequence and the adjusted second image sequence.
Specifically, the second images located outside the first image sequence are those lying beyond the outermost first images of the first image sequence, that is, the second images at the two end edges of the first image sequence. After these are removed, the thin-layer image sequence has 3N-2 layers.
In this embodiment, these edge second images are less realistic because their generation lacks sufficient contextual information. Removing the second images outside the first image sequence therefore yields a thin-layer image sequence with high authenticity, high accuracy and high resolution.
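As a toy illustration of the trimming step (the layer labels are hypothetical), removing the two edge Near layers from a 3N-layer ordered sequence leaves 3N-2 layers:

```python
# 3N-layer ordered sequence for N = 4 (A = Near above, M = Match, B = Near below)
thin = ["A0", "M0", "B0", "A1", "M1", "B1",
        "A2", "M2", "B2", "A3", "M3", "B3"]

# The first and last entries are the Near layers that fall outside the
# outermost Match layers; dropping them leaves 3N - 2 layers.
adjusted = thin[1:-1]
assert len(adjusted) == 3 * 4 - 2          # 10 layers
assert adjusted[0] == "M0" and adjusted[-1] == "M3"
```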
According to an embodiment of the present application, determining the first image sequence based on the thick-layer image sequence includes: performing feature extraction on the plurality of original images to obtain first feature maps of the plurality of original images; and performing image reconstruction based on the first feature maps to obtain the plurality of first images. Determining two second images based on each of the plurality of first images to obtain the second image sequence includes: performing feature extraction on the plurality of first images to obtain second feature maps of the plurality of first images; combining the first feature maps and the second feature maps to obtain a first stitched feature map; and performing image reconstruction on the first stitched feature map to obtain the plurality of second images.
Specifically, feature extraction may be performed on the thick-layer image sequence by a deep learning model to obtain the first feature map. The first feature map may be composed of a plurality of feature vectors; for example, it may be a matrix of feature vectors, each point in the matrix representing one feature vector. The deep learning model may generate a plurality of first images based on the first feature map to obtain the first image sequence.
Similarly, the deep learning model may perform feature extraction on the generated first image sequence to obtain a second feature map, and generate a plurality of second images based on the second feature map to obtain a second image sequence.
Optionally, the deep learning model may combine the first feature map and the second feature map to obtain a first stitched feature map, and then generate a plurality of second images based on the first stitched feature map to obtain a second image sequence.
In this embodiment, by combining the first feature map corresponding to the thick-layer image sequence and the second feature map corresponding to the first image sequence, the original image data (the thick-layer image sequence) and the image data (the first image sequence) matched with the original image position can be fully utilized to obtain the image data (the second image sequence) of the adjacent layer, so that the accuracy and the authenticity of the second image sequence can be improved, the continuity between the first image sequence and the second image sequence is improved, and the authenticity and the resolution of the whole thin-layer image sequence are further improved.
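The feature-map "combination" here can be read as channel-wise concatenation. The sketch below shows the shape bookkeeping with NumPy; the channel counts (64 and 32) are assumptions based on the network sizes given later in this description:

```python
import numpy as np

# Assumed sizes in channels x depth x height x width order
first_feature_map = np.zeros((64, 8, 128, 128), dtype=np.float32)   # thick-layer sequence
second_feature_map = np.zeros((32, 8, 128, 128), dtype=np.float32)  # first image sequence

# Combining the maps as concatenation along the channel axis; the
# depth/height/width dimensions must match.
first_stitched = np.concatenate([first_feature_map, second_feature_map], axis=0)
assert first_stitched.shape == (96, 8, 128, 128)
```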
In one embodiment, the specific process of feature extraction can be seen in fig. 4 a. Fig. 4a is a schematic network structure diagram of an image reconstruction model according to an exemplary embodiment of the present application. The image reconstruction model can be obtained by training a deep learning model, and the network structure of the image reconstruction model comprises two modules: the device comprises a feature extraction module and an image reconstruction module.
The feature extraction module is used for extracting features of the thick-layer image sequence. The size of the thick-layer image sequence may be 1 × D × H × W, where 1 is the number of channels, D is the number of layers, H is the height, and W is the width. In this embodiment, the size of the thick-layer image sequence is 1 × 8 × 128 × 128. The feature extraction module may adopt an encoder-decoder architecture. Specifically, it comprises two downsampling operations, two upsampling operations, and skip connections (concatenations) between the encoder and the decoder, which are used to merge low-level features with high-level features.
As shown in Fig. 4a, the thick-layer image sequence of size 1 × 8 × 128 × 128 is input to the feature extraction module of the image reconstruction model, and a feature map of size 32 × 8 × 128 × 128 is obtained by a planar convolution with 32 output channels. This feature map is passed through a planar convolution with 32 output channels and stride (1, 2, 2), followed by a planar convolution with 32 output channels, to obtain a feature map of size 32 × 8 × 64 × 64; this step may be regarded as the first downsampling. The 32 × 8 × 64 × 64 feature map is then passed through a planar convolution with 32 output channels and stride (1, 2, 2), followed by a planar convolution with 32 output channels, to obtain a feature map of size 32 × 8 × 32 × 32; this step may be regarded as the second downsampling.
The 32 × 8 × 32 × 32 feature map is passed through a convolution with 32 output channels and a deconvolution with 32 output channels and stride (1, 2, 2) to obtain a feature map of size 32 × 8 × 64 × 64; this step may be regarded as the first upsampling. Concatenating the result of the first upsampling with the result of the first downsampling yields a feature map of size 64 × 8 × 64 × 64.
The 64 × 8 × 64 × 64 feature map is passed through a convolution with 32 output channels and a deconvolution with 32 output channels and stride (1, 2, 2) to obtain a feature map of size 32 × 8 × 128 × 128; this step may be regarded as the second upsampling. Concatenating the result of the second upsampling with the 32 × 8 × 128 × 128 feature map obtained before the first downsampling yields a feature map of size 64 × 8 × 128 × 128, which is the first feature map described above.
The features obtained by downsampling are low-level features (or shallow features), and the features obtained by upsampling are high-level features (or deep features). The feature map obtained by combining the low-level features and the high-level features can have rich semantic information and higher resolution at the same time, so that an image obtained based on the feature map has higher reality and resolution.
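The shape arithmetic of the encoder-decoder above can be checked with a small sketch (pure shape bookkeeping, not an implementation; the helper names are hypothetical):

```python
def down(shape):   # planar conv with stride (1, 2, 2): halves H and W, keeps D
    c, d, h, w = shape
    return (c, d, h // 2, w // 2)

def up(shape):     # deconv with stride (1, 2, 2): doubles H and W
    c, d, h, w = shape
    return (c, d, h * 2, w * 2)

def concat(a, b):  # skip connection: channel-wise concatenation
    assert a[1:] == b[1:]
    return (a[0] + b[0],) + a[1:]

x0 = (32, 8, 128, 128)               # after the first planar convolution
x1 = down(x0)                        # first downsampling
x2 = down(x1)                        # second downsampling
u1 = concat(up(x2), x1)              # first upsampling + skip connection
u2 = concat((32,) + up(u1)[1:], x0)  # second upsampling (back to 32 ch) + skip
assert x1 == (32, 8, 64, 64)
assert x2 == (32, 8, 32, 32)
assert u1 == (64, 8, 64, 64)
assert u2 == (64, 8, 128, 128)       # the first feature map
```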
The image reconstruction module is used for reconstructing each layer image in the thin layer image sequence based on the first characteristic diagram of the thick layer image sequence. The reconstruction process performed by the image reconstruction module includes a first stage reconstruction and a second stage reconstruction.
In the first-stage reconstruction, the first feature map of size 64 × 8 × 128 × 128 is passed through a 1 × 1 convolution (1 output channel) to obtain a feature map of size 1 × 8 × 128 × 128, from which the first images are obtained. That is, the plurality of first images are obtained through the first-stage reconstruction (the 1 × 8 × 128 × 128 feature map may be used directly as the plurality of first images). Since the first images correspond one-to-one in space with the plurality of original images, the first images may be referred to as Match layers, and the process of reconstructing them as Match reconstruction.
In the second-stage reconstruction, the feature map of size 1 × 8 × 128 × 128 is passed through a planar convolution with 32 output channels to obtain the second feature map, which is concatenated with the first feature map of size 64 × 8 × 128 × 128 to obtain the first stitched feature map. The first stitched feature map is passed through a planar convolution with 32 output channels and a 1 × 1 convolution (2 output channels) to obtain a feature map of size 2 × 8 × 128 × 128.
The second images may be obtained from this 2 × 8 × 128 × 128 feature map. That is, the plurality of second images are obtained through the second-stage reconstruction (the 2 × 8 × 128 × 128 feature map may be used directly as the plurality of second images). Since each first image corresponds to, and lies between, two second images, the second images may be referred to as Near layers, and their reconstruction as Near reconstruction.
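The channel bookkeeping of the two reconstruction stages can be sketched as follows (a hypothetical illustration; a 1 × 1 convolution changes only the channel count):

```python
def conv1x1(shape, out_channels):
    # a 1 x 1 convolution only changes the number of channels
    return (out_channels,) + shape[1:]

first_feature_map = (64, 8, 128, 128)

# First stage (Match): 1x1 convolution to one channel -> 8 Match layers
match = conv1x1(first_feature_map, 1)
assert match == (1, 8, 128, 128)

# Second stage (Near): a 32-channel map derived from the Match result is
# stitched with the first feature map, then reduced to two channels,
# giving the two Near layers per slice
first_stitched = (32 + 64, 8, 128, 128)
near = conv1x1(first_stitched, 2)
assert near == (2, 8, 128, 128)
```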
And reordering the plurality of first images and the plurality of second images according to the spatial position sequence, and removing the second image positioned at the outermost side to obtain the thin-layer image sequence.
According to an embodiment of the present application, the image reconstruction method further includes: determining a third image sequence based on the second image sequence, wherein the third image sequence comprises a plurality of third images, each first image corresponds to two of the third images, and the two third images are located on the two sides of the first image, each on the side of the respective second image away from the first image; and wherein determining the thin-layer image sequence based on the first image sequence and the second image sequence comprises: determining the thin-layer image sequence based on the first, second and third image sequences.
Specifically, each first image corresponds to two second images and two third images: the two second images lie on either side of the first image, the two third images likewise lie on either side of the first image, and each third image lies on the side of a second image away from the first image. Between two adjacent first images there are two second images and two third images. If the number of first images is N, the number of second images is 2N and the number of third images is 2N; the 2N third images constitute the third image sequence.
In this embodiment, two second images are generated to the outer side of the first image based on the first image, and then a third image is continuously generated to the outer side of the first image based on the second images, that is, the second image and the third image are sequentially acquired from near to far. For example, two third images between two second images may be acquired based on the features and/or the trend of change of the features of the two adjacent second images; alternatively, the first image and the second image corresponding to the first image may be regarded as one set of images, and two third images between the two second images may be acquired based on the features and/or the variation trends of the features of the two adjacent sets of images.
In this embodiment, the inter-layer distance or the number of layers of the thin-layer image sequence may be predetermined, and the step of determining the (k+1)-th image sequence based on the k-th image sequence may be repeated accordingly, so that the required thin-layer image sequence is determined based on the first through (k+1)-th image sequences. Here, k is an integer greater than or equal to 1; the larger k is, the smaller the layer spacing and the greater the number of layers of the resulting thin-layer image sequence, that is, the higher its resolution.
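Under this near-to-far scheme, each additional generation stage contributes 2N layers. The layer counts can be sketched as below; the closed-form expressions are inferred from the 3N-2 and 5N-4 figures given in this description:

```python
def thin_layer_count(n_thick, k, trimmed=True):
    """Layer count after generating the 2nd through (k+1)-th image sequences.

    Each of the k generation stages adds 2 * n_thick layers; trimming drops
    the k generated layers falling outside each end of the Match sequence.
    """
    total = (2 * k + 1) * n_thick
    return total - 2 * k if trimmed else total

assert thin_layer_count(8, 1, trimmed=False) == 24   # 3N
assert thin_layer_count(8, 1) == 22                  # 3N - 2
assert thin_layer_count(8, 2) == 36                  # 5N - 4
```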
According to an embodiment of the present application, determining the first image sequence based on the thick-layer image sequence includes: performing feature extraction on the plurality of original images to obtain first feature maps of the plurality of original images; and performing image reconstruction based on the first feature maps to obtain the plurality of first images. Determining two second images based on each of the plurality of first images to obtain the second image sequence includes: performing feature extraction on the plurality of first images to obtain second feature maps of the plurality of first images; combining the first feature maps and the second feature maps to obtain a first stitched feature map; and performing image reconstruction on the first stitched feature map to obtain the plurality of second images. Determining the third image sequence based on the second image sequence includes: performing feature extraction on the plurality of second images to obtain third feature maps of the plurality of second images; combining the first feature maps, the second feature maps and the third feature maps to obtain a second stitched feature map; and performing image reconstruction on the second stitched feature map to obtain the plurality of third images.
Specifically, feature extraction may be performed on the thick-layer image sequence by a deep learning model to obtain the first feature map. The first feature map may be composed of a plurality of feature vectors; for example, it may be a matrix of feature vectors, each point in the matrix representing one feature vector. The deep learning model may generate a plurality of first images based on the first feature map to obtain the first image sequence.
The deep learning model may perform feature extraction on the generated first image sequence to obtain the second feature map, combine the first feature map and the second feature map to obtain the first stitched feature map, and then generate a plurality of second images based on the first stitched feature map to obtain the second image sequence.
Further, the deep learning model may perform feature extraction on the generated second image sequence to obtain a third feature map, obtain a second stitched feature map by combining the first feature map, the second feature map, and the third feature map, and generate a plurality of third images based on the second stitched feature map to obtain a third image sequence.
In this embodiment, by combining the first feature map corresponding to the thick-layer image sequence and the second feature map corresponding to the first image sequence, the original image data (the thick-layer image sequence) and the image data (the first image sequence) matched with the original image position can be fully utilized to obtain the image data (the second image sequence) of the adjacent layer, so that the accuracy and the authenticity of the second image sequence can be improved, and the continuity between the first image sequence and the second image sequence can be improved. Furthermore, by combining the first feature map corresponding to the thick-layer image sequence, the second feature map corresponding to the first image sequence, and the third feature map corresponding to the second image sequence, the original image data (the thick-layer image sequence), the image data (the first image sequence) matched with the original image position, and the image data (the second image sequence) of the adjacent layer can be fully utilized to obtain the image data (the third image sequence) of the next adjacent layer, so that the accuracy and the authenticity of the third image sequence can be improved, the continuity among the first image sequence, the second image sequence, and the third image sequence is improved, and the authenticity and the resolution of the whole thin-layer image sequence are further improved.
In one embodiment, the specific process of feature extraction can be seen in fig. 4 a. The extraction process of the first feature map, the second feature map and the first stitched feature map, and the reconstruction process of the first image (Match layer) and the second image (Near layer) may be referred to the description in the above embodiments, and in order to avoid repetition, only the differences are described here.
In this embodiment, the reconstruction process performed by the image reconstruction module further includes a third-stage reconstruction.
In the third-stage reconstruction, the feature map of size 2 × 8 × 128 × 128 is passed through a planar convolution with 32 output channels to obtain the third feature map, which is concatenated with the first stitched feature map to obtain the second stitched feature map (equivalent to concatenating the third feature map with the second and first feature maps). The second stitched feature map is passed through a planar convolution with 32 output channels and a 1 × 1 convolution (2 output channels) to obtain a feature map of size 2 × 8 × 128 × 128.
The third images may be obtained from this 2 × 8 × 128 × 128 feature map. That is, the plurality of third images are obtained through the third-stage reconstruction (the 2 × 8 × 128 × 128 feature map may be used directly as the plurality of third images). Since each first image corresponds to two third images, each located outside the respective second image, the third images may be referred to as Far layers, and their reconstruction as Far reconstruction.
The plurality of first, second and third images are reordered according to spatial position, and the second and third images located outside the first image sequence are removed to obtain the thin-layer image sequence. The structure between two adjacent Match layers in the thin-layer image sequence is shown in Fig. 4b.
The number of times and parameters of each convolution and deconvolution provided in this embodiment are only exemplary, and may be set according to actual needs as long as the network structure formed by the parameters can achieve acquisition of a thin-layer image sequence with high resolution.
The image reconstruction method provided by this embodiment sequentially acquires the Match, Near and Far layers of the thin-layer image sequence from the thick-layer image sequence, reconstructing the thin layers in a near-to-far manner. This makes full use of the information in the original images and preserves both the spatial continuity of the thin-layer image sequence and its spatial consistency with the thick-layer image sequence. Furthermore, CT images of 5 mm and 1 mm layer thickness are representative thick-layer and thin-layer CT data, respectively, in clinical practice. By processing a 5 mm CT image with the image reconstruction method provided by this embodiment, the Match, Near and Far layers can be reconstructed and a 1 mm CT image obtained. The method therefore has high practical value.
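As rough arithmetic for the clinical example (assuming, for illustration, that the reconstructed thin layers span approximately the same volume as the thick-layer scan; the slice count N = 8 is hypothetical):

```python
# Hypothetical scan: N = 8 slices at 5 mm layer thickness
n_thick, thick_mm = 8, 5.0

# Match + Near + Far reconstruction gives 5N - 4 thin layers
n_thin = 5 * n_thick - 4
assert n_thin == 36

# Resulting spacing is close to the 1 mm clinical thin-layer target
spacing = thick_mm * n_thick / n_thin
assert 1.0 < spacing < 1.2
```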
Of course, according to the method for reconstructing a thin-layer image sequence from near to far provided in the embodiment of the present application, the reconstruction process of the thin-layer image sequence may include four or more reconstruction stages.
According to an embodiment of the present application, determining a thin-layer image sequence based on a first image sequence, a second image sequence, and a third image sequence includes: removing the second image and the third image which are positioned outside the first image sequence to obtain an adjusted second image sequence and an adjusted third image sequence; determining a thin-layer image sequence based on the first image sequence, the adjusted second image sequence and the adjusted third image sequence.
Specifically, the second and third images located outside the first image sequence are those lying beyond the outermost first images of the first image sequence, that is, the second and third images at the two end edges of the first image sequence. After these are removed, the thin-layer image sequence has 5N-4 layers.
In this embodiment, the second and third images at the two end edges of the first image sequence are less realistic because their generation lacks sufficient contextual information. Removing the second and third images outside the first image sequence therefore yields a thin-layer image sequence with high authenticity, high accuracy and high resolution.
Fig. 3 is a flowchart illustrating an image reconstruction method according to another exemplary embodiment of the present application. Fig. 3 is an example of the embodiment of Fig. 2; the common parts are not repeated, and mainly the differences are described here. As shown in Fig. 3, the image reconstruction method includes the following steps.
310: the method comprises the steps of extracting features of a plurality of original images of a thick-layer image sequence to obtain first feature maps of the original images, and reconstructing the images based on the first feature maps of the original images to obtain a first image sequence.
The first image sequence comprises a plurality of first images, and the plurality of first images are in one-to-one spatial correspondence with the plurality of original images. The first image sequence has a higher signal-to-noise ratio than the thick layer image sequence.
320: Feature extraction is performed on the first image sequence to obtain a second feature map.
330: The first feature map and the second feature map are combined to obtain a first stitched feature map, and image reconstruction is performed on the first stitched feature map to obtain a second image sequence.
The second image sequence comprises a plurality of second images. Each first image corresponds to two second images, and the first image is located between the two second images.
340: and performing feature extraction on the second image sequence to obtain a third feature map.
350: The first feature map, the second feature map and the third feature map are combined to obtain a second stitched feature map, and image reconstruction is performed on the second stitched feature map to obtain a third image sequence.
The third image sequence comprises a plurality of third images, the first image corresponds to two third images in the plurality of third images, and the two third images are positioned on two sides of the first image and respectively positioned on one sides of the two second images far away from the first image.
360: and removing the second image and the third image which are positioned outside the first image sequence to obtain an adjusted second image sequence and an adjusted third image sequence.
370: determining a thin-layer image sequence based on the first image sequence, the adjusted second image sequence and the adjusted third image sequence.
The reconstruction processes of the first, second and third image sequences may refer to the description in the embodiment of Fig. 2 and are not repeated here.
Fig. 5 is a flowchart illustrating a training method of an image reconstruction model according to an exemplary embodiment of the present application. As shown in fig. 5, the training method of the image reconstruction model includes the following steps.
510: the method comprises the steps of determining a first image sequence based on a sample thick-layer image sequence, wherein the signal-to-noise ratio of the first image sequence is higher than that of the sample thick-layer image sequence, the sample thick-layer image sequence comprises a plurality of original images, the first image sequence comprises a plurality of first images, and the first images are in one-to-one spatial correspondence with the original images.
520: Based on each of the plurality of first images, two second images are determined to obtain a second image sequence; the second image sequence comprises the plurality of second images, and each first image is located between its two second images.
530: a predicted thin-layer image sequence is determined based on the first image sequence and the second image sequence.
Specifically, the deep learning model may be trained using the sample data to obtain an image reconstruction model. The sample data may be a sample thick layer image sequence and a corresponding sample thin layer image sequence.
The structure of the deep learning model can be seen in fig. 4 a. The sample thick-layer image sequence is input into a deep learning model, and the deep learning model can determine a first image sequence based on the sample thick-layer image sequence, determine a second image sequence based on the first image sequence, and further determine a predicted thin-layer image sequence based on the first image sequence and the second image sequence.
In this embodiment, for a specific process of determining the first image sequence, the second image sequence and the predicted thin-layer image sequence based on the sample thick-layer image sequence, reference may be made to the description of determining the thin-layer image sequence based on the thick-layer image sequence in the image reconstruction method, and details are not repeated here to avoid repetition.
540: a first content feature map is determined based on the sequence of predicted thin-layer images.
550: a loss function is determined based on the first content feature map.
560: and adjusting parameters of the deep learning model based on the loss function to obtain an image reconstruction model.
Specifically, a corresponding content feature map may be determined based on the sample thin-layer image sequence, and the content feature map corresponding to the predicted thin-layer image sequence may be compared with it; for example, the similarity between the two content feature maps may be determined, and the loss function then determined based on that similarity. The parameters of the deep learning model are adjusted iteratively through the loss function to obtain the image reconstruction model.
Optionally, the loss function may be a preset conventional loss function whose dependent variable is the similarity between the two content feature maps; the value of the loss function is determined by the similarity, and the parameters of the deep learning model are then adjusted according to that value.
The embodiment of the application provides a training method for an image reconstruction model: a plurality of first images with a high signal-to-noise ratio are obtained from the plurality of original images of the thick-layer image sequence to form a first image sequence; a plurality of second images are obtained based on the first images to form a second image sequence; and a high-resolution thin-layer image sequence can then be obtained based on the first and second image sequences. Moreover, because the first images correspond one-to-one to the spatial positions of the original images, and the second images on either side of each first image are generated from that first image in a near-to-far manner, the spatial consistency and continuity within the thick-layer image sequence can be better exploited, enabling reconstruction of a thin-layer image sequence with high resolution and high accuracy.
According to an embodiment of the application, determining a loss function based on a first content feature map comprises: determining a first similarity between a first position and a second position on the first content feature map; a loss function is determined based on the first similarity.
Specifically, a first similarity S1 between the first position P1 and the second position P2 on the first content feature map may be determined (for example, the similarity between the feature vectors corresponding to P1 and P2). The loss function is then determined based on S1; for example, it may be checked whether S1 satisfies a preset condition, such as being greater than a preset threshold, and if so, the parameters of the deep learning model are adjusted adaptively. Alternatively, a second content feature map may be determined based on the sample thin-layer image sequence, and a similarity S2 between P1 and P2 on the second content feature map determined. The loss function may then be determined based on the difference between S1 and S2, for example by taking the difference directly as the loss function, or by taking the product of the difference and a preset coefficient.
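A minimal sketch of the S1/S2 variant, assuming cosine similarity between the feature vectors at the two positions (the similarity measure, the random maps, and the use of the absolute difference are illustrative assumptions, not fixed by the embodiment):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
pred = rng.normal(size=(16, 5, 5))    # predicted content feature map (C, H, W)
target = rng.normal(size=(16, 5, 5))  # content map of the sample thin-layer sequence

p1, p2 = (0, 0), (2, 3)               # two positions on the maps
s1 = cosine(pred[:, p1[0], p1[1]], pred[:, p2[0], p2[1]])      # S1
s2 = cosine(target[:, p1[0], p1[1]], target[:, p2[0], p2[1]])  # S2
loss = abs(s1 - s2)                   # one reading of "the difference of S1 and S2"
assert 0.0 <= loss <= 2.0             # cosine similarities lie in [-1, 1]
```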
In this embodiment, the optimization direction can be specified by determining the loss function based on the first similarity, so that the problems that a plurality of optimization directions exist in the optimization process of the conventional loss function, oscillation occurs in the optimization process, and the optimization speed is low are solved.
According to an embodiment of the present application, the training method of the image reconstruction model further includes: determining a second similarity between the first location and a third location on the first content feature map, wherein determining the loss function based on the first similarity comprises: a loss function is determined based on a difference between the first similarity and the second similarity.
Specifically, a second similarity S3 between the first position P1 and the third position P3 on the first content feature map may be determined, and the loss function determined based on S1 and S3 to adjust the parameters of the deep learning model. The loss function may be the difference between S1 and S3; whether this difference satisfies a preset condition, for example whether it is greater than a preset threshold, may be checked, and if so, the parameters of the deep learning model are adjusted adaptively.
According to an embodiment of the present application, the training method of the image reconstruction model further includes: determining a second content feature map based on the sample thin-layer image sequence, and determining a first position, a second position and a third position on the second content feature map, wherein the similarity between the first position and the second position on the second content feature map is greater than the similarity between the first position and the third position, wherein determining a loss function based on the difference between the first similarity and the second similarity comprises: when the first similarity is less than or equal to the second similarity, a loss function is determined based on a difference between the first similarity and the second similarity.
In particular, a first similarity S2 between the first position P1 and the second position P2 on the second content feature map, and a second similarity S4 between the first position P1 and the third position P3 on the second content feature map, may be determined, where S2 is greater than S4. If the first similarity S1 between P1 and P2 on the first content feature map is smaller than the second similarity S3 between P1 and P3 on the first content feature map, this indicates that the reconstruction result of the deep learning model is not ideal: it is insufficiently accurate and differs considerably from the real sample thin-layer image sequence. Therefore, the difference between S1 and S3 can be used as the loss function, and the parameters of the deep learning model can be adjusted based on this difference, improving the accuracy of the model.
In an embodiment, the similarity between any two positions on the second content feature map may be determined, and for any position P, the two positions Pi and Pj with the highest similarity to P may be determined, where the similarity between P and Pi is higher than that between P and Pj. If the deep learning model is accurate, then for the corresponding positions P, Pi, and Pj on the first content feature map, the similarity between P and Pi should likewise be higher than that between P and Pj. Therefore, if the similarity between P and Pi is less than or equal to the similarity between P and Pj, the difference between the two similarities is used as the loss function to adjust the parameters of the deep learning model.
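The ranking comparison described above can be written as a short sketch. This is an illustration, not the patent's implementation: the cosine similarity, the flattened `(num_positions, channels)` layout, and the selection of the top two most similar positions are all assumptions made for demonstration.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def ranking_loss(pred_feat, ref_feat):
    """Similarity-ranking loss over a content feature map.

    pred_feat, ref_feat: arrays of shape (num_positions, channels),
    the feature vectors at each spatial position of the predicted and
    reference (sample thin-layer) content feature maps.
    """
    n = ref_feat.shape[0]
    total = 0.0
    for p in range(n):
        # Similarities between position p and every other position
        # on the reference map.
        sims = [(cosine_sim(ref_feat[p], ref_feat[q]), q)
                for q in range(n) if q != p]
        sims.sort(reverse=True)
        (_, pi), (_, pj) = sims[0], sims[1]  # two most similar positions
        s_pi = cosine_sim(pred_feat[p], pred_feat[pi])
        s_pj = cosine_sim(pred_feat[p], pred_feat[pj])
        # Penalize only when the predicted map violates the reference order.
        if s_pi <= s_pj:
            total += s_pj - s_pi
    return total
```

When the predicted feature map reproduces the reference ordering everywhere, the loss is zero; only order violations contribute, which is what gives the loss its single optimization direction.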
In this embodiment, if a certain position on the second content feature map is not similar to other positions, the loss of the position is not calculated.
In one embodiment, the content feature map may be extracted by a pre-trained model. Specifically, all layers before a certain layer in the pre-trained model may be taken as the content feature extractor, which can then extract content features for any input. For example, the first 10 layers of a VGG-19 pre-trained on natural images may be used as the content feature extractor.
The content loss commonly used in image reconstruction tasks is typically based on a model pre-trained on natural images, and the gap between natural images and medical images may degrade the performance of the reconstruction algorithm. The embodiment of the application instead determines the loss function through a ranking of similarities (for example, the difference between S1 and S3), converting the absolute difference of content similarity into a relative difference in the ranking of non-local content similarities inside the image, thereby avoiding the influence that the introduced natural images would otherwise have on the thin-layer CT image reconstruction algorithm.
Of course, using the first 10 layers of a VGG-19 pre-trained on natural images as the content feature extractor for the content-similarity-ranking loss in this embodiment is merely an example; the network structure of the content feature extractor may be of any other suitable type.
The training method for the image reconstruction model provided by the embodiment of the application can adjust the parameters of the model using both the loss function determined based on similarity ranking and a conventional loss function (such as the common loss functions mentioned above). Because the loss function determined based on similarity ranking specifies a definite optimization direction, it avoids the problems that conventional loss functions admit multiple optimization directions, oscillate during optimization, and converge slowly.
Fig. 6 is a flowchart illustrating a training method for an image reconstruction model according to another exemplary embodiment of the present application. As shown in fig. 6, the training method of the image reconstruction model includes the following steps.
610: and inputting the sample thick-layer image sequence into a deep learning model to obtain a predicted thin-layer image sequence.
620: a first content feature map is determined based on the predicted thin-layer image sequence, and a first similarity between a first location and a second location on the first content feature map is determined.
630: a loss function is determined based on the first similarity.
640: and adjusting parameters of the deep learning model based on the loss function to obtain an image reconstruction model.
The deep learning model may be trained using the sample data to obtain an image reconstruction model. The sample data may be a sample thick layer image sequence and a corresponding sample thin layer image sequence.
In particular, a first similarity S1 between the first position P1 and the second position P2 on the first content feature map (for example, a similarity between the feature vectors corresponding to P1 and P2) may be determined, and a loss function is determined based on S1. For example, it may be determined whether S1 satisfies a preset condition, such as whether it is greater than a preset threshold, and if so, the parameters of the deep learning model are adjusted accordingly. Alternatively, a second content feature map may be determined based on the sample thin-layer image sequence, and a first similarity S2 between the first position P1 and the second position P2 on the second content feature map may be determined. The loss function may then be determined based on the difference between S1 and S2, for example by taking the difference directly as the loss function, or by taking the product of the difference and a preset coefficient as the loss function.
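The relation between S1 (measured on the predicted map) and S2 (measured on the reference map) can be sketched as below. The cosine similarity, the `(H, W, C)` feature-map layout, and the preset coefficient are assumptions made for illustration, not details fixed by the text.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def content_similarity_loss(pred_map, ref_map, p1, p2, coef=1.0):
    """Loss from the first similarity S1 (predicted map) and S2 (reference map).

    pred_map, ref_map: (H, W, C) content feature maps; p1, p2: (row, col)
    positions. Returns coef * (S1 - S2) as a scalar loss term.
    """
    s1 = cosine_sim(pred_map[p1], pred_map[p2])  # similarity on the first map
    s2 = cosine_sim(ref_map[p1], ref_map[p2])    # similarity on the second map
    return coef * (s1 - s2)
```

For identical maps the loss is zero; when the predicted map fails to reproduce the reference similarity, the term is non-zero and can drive the parameter update.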
The embodiment of the application provides a training method for an image reconstruction model. By determining the loss function based on the first similarity, an optimization direction can be specified, avoiding the problems that conventional loss functions admit multiple optimization directions, oscillate during optimization, and converge slowly, thereby increasing the training speed of the image reconstruction model and improving its stability.
According to an embodiment of the present application, the training method of the image reconstruction model further includes: determining a second similarity between the first location and a third location on the first content feature map, wherein determining the loss function based on the first similarity comprises: a loss function is determined based on a difference between the first similarity and the second similarity.
Specifically, a second similarity S3 between the first position P1 and the third position P3 on the first content feature map may be determined, and a loss function may be determined based on S1 and S3 to adjust the parameters of the deep learning model. The loss function may be the difference between S1 and S3; it may be determined whether this difference satisfies a preset condition, for example whether it is greater than a preset threshold, and if so, the parameters of the deep learning model are adjusted accordingly.
According to an embodiment of the present application, the training method of the image reconstruction model further includes: determining a second content feature map based on the sample thin-layer image sequence, and determining a first position, a second position and a third position on the second content feature map, wherein the similarity between the first position and the second position on the second content feature map is greater than the similarity between the first position and the third position, wherein determining a loss function based on the difference between the first similarity and the second similarity comprises: when the first similarity is less than or equal to the second similarity, a loss function is determined based on a difference between the first similarity and the second similarity.
In particular, a first similarity S2 between the first position P1 and the second position P2 on the second content feature map, and a second similarity S4 between the first position P1 and the third position P3 on the second content feature map, may be determined, where S2 is greater than S4. If the first similarity S1 between P1 and P2 on the first content feature map is smaller than the second similarity S3 between P1 and P3 on the first content feature map, this indicates that the reconstruction result of the deep learning model is not ideal: it is insufficiently accurate and differs considerably from the real sample thin-layer image sequence. Therefore, the difference between S1 and S3 can be used as the loss function, and the parameters of the deep learning model can be adjusted based on this difference, improving the accuracy of the model.
In an embodiment, the similarity between any two positions on the second content feature map may be determined, and for any position P, the two positions Pi and Pj with the highest similarity to P may be determined, where the similarity between P and Pi is higher than that between P and Pj. If the deep learning model is accurate, then for the corresponding positions P, Pi, and Pj on the first content feature map, the similarity between P and Pi should likewise be higher than that between P and Pj. Therefore, if the similarity between P and Pi is less than or equal to the similarity between P and Pj, the difference between the two similarities is used as the loss function to adjust the parameters of the deep learning model.
In this embodiment, if a certain position on the second content feature map is not similar to other positions, the loss of the position is not calculated.
In one embodiment, the content feature map may be extracted by a pre-trained model. Specifically, all layers before a certain layer in the pre-trained model may be taken as the content feature extractor, which can then extract content features for any input. For example, the first 10 layers of a VGG-19 pre-trained on natural images may be used as the content feature extractor.
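The idea of taking all layers before a certain cut-off as the content feature extractor can be sketched in a framework-agnostic way. The stand-in "layers" below are hypothetical placeholders for the pre-trained convolutional blocks (such as the first 10 layers of the VGG-19 named in the text); only the truncation mechanism itself is being illustrated.

```python
import numpy as np

def make_content_extractor(layers, cut):
    """Use all layers before `cut` of a pre-trained model as the
    content feature extractor. `layers` is an ordered list of callables."""
    head = layers[:cut]
    def extract(x):
        for layer in head:
            x = layer(x)
        return x
    return extract

# Hypothetical stand-in layers: simple fixed operations in place of
# pre-trained convolutional blocks.
layers = [
    lambda x: x * 2.0,           # stand-in conv
    lambda x: np.maximum(x, 0),  # stand-in ReLU
    lambda x: x + 1.0,           # stand-in conv
    lambda x: x ** 2,            # deeper layer, excluded by the cut
]
extractor = make_content_extractor(layers, cut=3)
```

In a deep learning framework the same truncation is typically done by slicing the pre-trained model's ordered layer list at the chosen depth.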
The content loss commonly used in image reconstruction tasks is typically based on a model pre-trained on natural images, and the gap between natural images and medical images may degrade the performance of the reconstruction algorithm. The embodiment of the application instead determines the loss function through a ranking of similarities (for example, the difference between S1 and S3), converting the absolute difference of content similarity into a relative difference in the ranking of non-local content similarities inside the image, thereby avoiding the influence that the introduced natural images would otherwise have on the thin-layer CT image reconstruction algorithm.
Of course, using the first 10 layers of a VGG-19 pre-trained on natural images as the content feature extractor for the content-similarity-ranking loss in this embodiment is merely an example; the network structure of the content feature extractor may be of any other suitable type.
The training method for the image reconstruction model provided by the embodiment of the application can adjust the parameters of the model using both the loss function determined based on similarity ranking and a conventional loss function (such as the common loss functions mentioned above). Because the loss function determined based on similarity ranking specifies a definite optimization direction, it avoids the problems that conventional loss functions admit multiple optimization directions, oscillate during optimization, and converge slowly.
Fig. 7 is a flowchart illustrating a training method for an image reconstruction model according to another exemplary embodiment of the present application. Fig. 7 is an example of the embodiment of Fig. 6; the common parts are not repeated here, and the differences are mainly described. As shown in fig. 7, the training method of the image reconstruction model includes the following steps.
710: and inputting the sample thick-layer image sequence into a deep learning model to obtain a predicted thin-layer image sequence.
720: and determining a second content feature map based on the sample thin-layer image sequence, and determining a first position, a second position and a third position on the second content feature map, wherein the similarity between the first position and the second position on the second content feature map is greater than the similarity between the first position and the third position.
730: a first content feature map is determined based on the predicted thin-layer image sequence, and a first similarity between a first location and a second location on the first content feature map is determined.
740: a second similarity of the first location to the third location on the first content feature map is determined.
750: when the first similarity is less than or equal to the second similarity, a loss function is determined based on a difference between the first similarity and the second similarity.
760: and adjusting parameters of the deep learning model based on the loss function to obtain an image reconstruction model.
Exemplary devices
Fig. 8 is a schematic structural diagram of an image reconstruction apparatus 800 according to an exemplary embodiment of the present application. As shown in fig. 8, the image reconstruction apparatus 800 includes: a first determination module 810, a second determination module 820, and a third determination module 830.
The first determining module 810 is configured to determine a first image sequence based on a thick-layer image sequence, where a signal-to-noise ratio of the first image sequence is higher than that of the thick-layer image sequence, the thick-layer image sequence includes a plurality of original images, the first image sequence includes a plurality of first images, and the plurality of first images spatially correspond to the plurality of original images one to one. The second determining module 820 is configured to determine two second images based on each of the plurality of first images to obtain a second image sequence, where the second image sequence includes the plurality of second images, and the first image is located between the two second images. The third determining module 830 is configured to determine a thin-layer image sequence based on the first image sequence and the second image sequence.
The embodiment of the application provides an image reconstruction apparatus: a plurality of first images with a high signal-to-noise ratio are obtained from the plurality of original images of the thick-layer image sequence to form a first image sequence, a plurality of second images are obtained based on the plurality of first images to form a second image sequence, and a high-resolution thin-layer image sequence can then be obtained based on the first image sequence and the second image sequence. In addition, the first images correspond one-to-one to the spatial positions of the original images, and the second images on both sides of each first image are obtained from near to far, so that the spatial consistency and continuity inside the thick-layer image sequence can be better exploited to reconstruct a thin-layer image sequence with high resolution and high precision.
According to an embodiment of the present application, the third determining module 830 is configured to: removing the second image positioned outside the first image sequence to obtain an adjusted second image sequence; determining a thin-layer image sequence based on the first image sequence and the adjusted second image sequence.
According to an embodiment of the present application, the first determining module 810 is configured to: perform feature extraction on the plurality of original images to obtain first feature maps of the plurality of original images; and perform image reconstruction based on the first feature maps of the plurality of original images to obtain a plurality of first images. The second determining module 820 is configured to: perform feature extraction on the plurality of first images to obtain second feature maps of the plurality of first images; combine the first feature maps of the plurality of original images and the second feature maps of the plurality of first images to obtain a first stitched feature map; and perform image reconstruction on the first stitched feature map to obtain a plurality of second images.
According to an embodiment of the present application, the image reconstruction apparatus 800 further includes a fourth determining module 840 for: and determining a third image sequence based on the second image sequence, wherein the third image sequence comprises a plurality of third images, the first image corresponds to two third images in the plurality of third images, and the two third images are positioned at two sides of the first image and are respectively positioned at one side of the two second images far away from the first image. The third determining module 830 is configured to determine a thin-layer image sequence based on the first image sequence, the second image sequence and the third image sequence.
According to an embodiment of the present application, the first determining module 810 is configured to: perform feature extraction on the plurality of original images to obtain first feature maps of the plurality of original images; and perform image reconstruction based on the first feature maps of the plurality of original images to obtain a plurality of first images. The second determining module 820 is configured to: perform feature extraction on the plurality of first images to obtain second feature maps of the plurality of first images; combine the first feature maps of the plurality of original images and the second feature maps of the plurality of first images to obtain a first stitched feature map; and perform image reconstruction on the first stitched feature map to obtain a plurality of second images. The fourth determining module 840 is configured to: perform feature extraction on the plurality of second images to obtain third feature maps of the plurality of second images; combine the first feature maps of the plurality of original images, the second feature maps of the plurality of first images and the third feature maps of the plurality of second images to obtain a second stitched feature map; and perform image reconstruction on the second stitched feature map to obtain a plurality of third images.
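The feature-map bookkeeping described for these modules can be sketched with stand-in extractors. The channel counts, shapes, and the trivial averaging "extractor" below are hypothetical, since the text does not fix these details; only the stitching (channel-wise combination) of the successive feature maps is being illustrated.

```python
import numpy as np

def features(seq, channels):
    """Stand-in feature extractor: returns a (channels, H, W) map for an
    image sequence of shape (N, H, W). A real extractor would be a
    convolutional sub-network."""
    return np.stack([seq.mean(axis=0)] * channels)

thick = np.random.rand(4, 8, 8)    # original images (sample thick-layer sequence)
first = np.random.rand(4, 8, 8)    # first images, one per original image
second = np.random.rand(8, 8, 8)   # second images, two per first image

f1 = features(thick, 2)            # first feature maps
f2 = features(first, 2)            # second feature maps
f3 = features(second, 2)           # third feature maps

stitch1 = np.concatenate([f1, f2])      # first stitched feature map
stitch2 = np.concatenate([f1, f2, f3])  # second stitched feature map
```

The second images would be reconstructed from `stitch1` and the third images from `stitch2`, so each later stage sees the accumulated features of all earlier stages.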
According to an embodiment of the present application, the third determining module 830 is configured to: removing the second image and the third image which are positioned outside the first image sequence to obtain an adjusted second image sequence and an adjusted third image sequence; determining a thin-layer image sequence based on the first image sequence, the adjusted second image sequence and the adjusted third image sequence.
It should be understood that, for the operations and functions of the first determining module 810, the second determining module 820, the third determining module 830 and the fourth determining module 840 in the above embodiments, reference may be made to the description of the image reconstruction method provided in the above embodiment of fig. 2 or fig. 3, and in order to avoid repetition, the description is not repeated here.
Fig. 9 is a schematic structural diagram illustrating an apparatus 900 for training an image reconstruction model according to an exemplary embodiment of the present application. As shown in fig. 9, the training apparatus 900 includes: a first determination module 910, a second determination module 920, a third determination module 930, a fourth determination module 940, a fifth determination module 950, and an adjustment module 960.
The first determining module 910 is configured to determine a first image sequence based on a sample thick-layer image sequence, where a signal-to-noise ratio of the first image sequence is higher than that of the sample thick-layer image sequence, the sample thick-layer image sequence includes a plurality of original images, the first image sequence includes a plurality of first images, and the plurality of first images are in one-to-one spatial correspondence with the plurality of original images. The second determining module 920 is configured to determine two second images based on each of the plurality of first images to obtain a second image sequence, where the second image sequence includes the plurality of second images, and the first image is located between the two second images. The third determining module 930 is configured to determine a predicted thin-layer image sequence based on the first image sequence and the second image sequence. The fourth determining module 940 is configured to determine a first content feature map based on the predicted thin-layer image sequence. The fifth determining module 950 is configured to determine a loss function based on the first content feature map. The adjusting module 960 is configured to adjust parameters of the deep learning model based on the loss function to obtain an image reconstruction model.
The embodiment of the application provides a training apparatus for an image reconstruction model: a plurality of first images with a high signal-to-noise ratio are obtained from the plurality of original images of the thick-layer image sequence to form a first image sequence, a plurality of second images are obtained based on the plurality of first images to form a second image sequence, and a high-resolution thin-layer image sequence can then be obtained based on the first image sequence and the second image sequence. In addition, the first images correspond one-to-one to the spatial positions of the original images, and the second images on both sides of each first image are obtained from near to far, so that the spatial consistency and continuity inside the thick-layer image sequence can be better exploited to reconstruct a thin-layer image sequence with high resolution and high precision.
According to an embodiment of the present application, the fifth determining module 950 is configured to: determining a first similarity between a first position and a second position on the first content feature map; a loss function is determined based on the first similarity.
According to an embodiment of the present application, the fifth determining module 950 is further configured to: a second similarity of the first location to the third location on the first content feature map is determined. The fifth determining module 950 is for determining a loss function based on a difference between the first similarity and the second similarity.
According to an embodiment of the present application, the fourth determining module 940 is further configured to: and determining a second content feature map based on the sample thin-layer image sequence, and determining a first position, a second position and a third position on the second content feature map, wherein the similarity between the first position and the second position on the second content feature map is greater than the similarity between the first position and the third position. The fifth determining module 950 is configured to: when the first similarity is less than or equal to the second similarity, a loss function is determined based on a difference between the first similarity and the second similarity.
It should be understood that, in the above embodiment, operations and functions of the first determining module 910, the second determining module 920, the third determining module 930, the fourth determining module 940, the fifth determining module 950, and the adjusting module 960 may refer to the description in the training method of the image reconstruction model provided in the above embodiment of fig. 5, and are not repeated herein for avoiding repetition.
Fig. 10 is a schematic structural diagram of a training apparatus 1000 for an image reconstruction model according to another exemplary embodiment of the present application. As shown in fig. 10, the training apparatus 1000 includes: an input module 1010, a first determination module 1020, a second determination module 1030, and an adjustment module 1040.
The input module 1010 is configured to input the sample thick-layer image sequence into the deep learning model to obtain a predicted thin-layer image sequence. The first determining module 1020 is configured to determine a first content feature map based on the predicted thin-layer image sequence and determine a first similarity between a first location and a second location on the first content feature map. The second determining module 1030 is configured to determine a loss function based on the first similarity. The adjusting module 1040 is configured to adjust parameters of the deep learning model based on the loss function to obtain an image reconstruction model.
The embodiment of the application provides a training apparatus for an image reconstruction model. By determining the loss function based on the first similarity, an optimization direction can be specified, avoiding the problems that conventional loss functions admit multiple optimization directions, oscillate during optimization, and converge slowly, thereby increasing the training speed of the image reconstruction model and improving its stability.
According to an embodiment of the present application, the first determining module 1020 is further configured to determine a second similarity between the first position and the third position on the first content feature map. The second determining module 1030 is configured to determine a loss function based on a difference between the first similarity and the second similarity.
According to an embodiment of the present application, the first determining module 1020 is further configured to: and determining a second content feature map based on the sample thin-layer image sequence, and determining a first position, a second position and a third position on the second content feature map, wherein the similarity between the first position and the second position on the second content feature map is greater than the similarity between the first position and the third position. The second determining module 1030 is configured to: when the first similarity is less than or equal to the second similarity, a loss function is determined based on a difference between the first similarity and the second similarity.
It should be understood that the operations and functions of the input module 1010, the first determining module 1020, the second determining module 1030, and the adjusting module 1040 in the above embodiments may refer to the description of the training method of the image reconstruction model provided in the above embodiments of fig. 6 or fig. 7, and are not repeated herein to avoid repetition.
Fig. 11 is a block diagram illustrating an electronic device 1100 for performing an image reconstruction method or a training method of an image reconstruction model according to an exemplary embodiment of the present disclosure.
Referring to fig. 11, the electronic device 1100 includes a processing component 1110 that further includes one or more processors, and memory resources, represented by memory 1120, for storing instructions, such as application programs, that are executable by the processing component 1110. The application programs stored in memory 1120 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1110 is configured to execute the instructions to perform the above-described image reconstruction method or training method of an image reconstruction model.
The electronic device 1100 may also include a power component configured to perform power management of the electronic device 1100, a wired or wireless network interface configured to connect the electronic device 1100 to a network, and an input-output (I/O) interface. The electronic device 1100 may operate based on an operating system stored in the memory 1120, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer readable storage medium, wherein instructions of the storage medium, when executed by a processor of the electronic device 1100, enable the electronic device 1100 to perform an image reconstruction method or a training method of an image reconstruction model. The image reconstruction method comprises the following steps: determining a first image sequence based on the thick-layer image sequence, wherein the signal-to-noise ratio of the first image sequence is higher than that of the thick-layer image sequence, the thick-layer image sequence comprises a plurality of original images, the first image sequence comprises a plurality of first images, and the plurality of first images are in one-to-one spatial correspondence with the plurality of original images; determining two second images based on each first image in the plurality of first images to obtain a second image sequence, wherein the second image sequence comprises a plurality of second images, and the first images are located between the two second images; a sequence of thin-layer images is determined based on the first image sequence and the second image sequence. 
The training method of the image reconstruction model comprises the following steps: determining a first image sequence based on the sample thick-layer image sequence, wherein the signal-to-noise ratio of the first image sequence is higher than that of the sample thick-layer image sequence, the sample thick-layer image sequence comprises a plurality of original images, the first image sequence comprises a plurality of first images, and the plurality of first images are in one-to-one spatial correspondence with the plurality of original images; determining two second images based on each first image in the plurality of first images to obtain a second image sequence, wherein the second image sequence comprises a plurality of second images, and the first images are located between the two second images; determining a predicted thin-layer image sequence based on the first image sequence and the second image sequence; determining a first content feature map based on the predicted thin-layer image sequence; determining a loss function based on the first content feature map; and adjusting parameters of the deep learning model based on the loss function to obtain an image reconstruction model. Or, the training method of the image reconstruction model comprises the following steps: inputting the sample thick-layer image sequence into a deep learning model to obtain a predicted thin-layer image sequence; determining a first content feature map based on the predicted thin-layer image sequence, and determining a first similarity between a first position and a second position on the first content feature map; determining a loss function based on the first similarity; and adjusting parameters of the deep learning model based on the loss function to obtain an image reconstruction model.
All the above optional technical solutions can be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the part of the technical solution of the present application that contributes in essence to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in the description of the present application, the terms "first", "second", "third", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
The above description is only a preferred embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents and the like that are within the spirit and scope of the present application should be included.

Claims (16)

1. An image reconstruction method, comprising:
determining a first image sequence based on a thick-layer image sequence, wherein a signal-to-noise ratio of the first image sequence is higher than that of the thick-layer image sequence, the thick-layer image sequence comprises a plurality of original images, the first image sequence comprises a plurality of first images, the plurality of first images are in one-to-one spatial correspondence with the plurality of original images, and each first image in the plurality of first images has the same size as each original image in the plurality of original images;
determining two second images based on each first image in the plurality of first images to obtain a second image sequence, wherein the second image sequence comprises a plurality of second images, and the first image is located between the two second images;
determining a sequence of thin-layer images based on the first image sequence and the second image sequence.
2. The image reconstruction method according to claim 1,
the determining a sequence of thin-layer images based on the first image sequence and the second image sequence comprises:
removing the second images located outside the first image sequence to obtain an adjusted second image sequence;
determining the thin-layer image sequence based on the first image sequence and the adjusted second image sequence.
3. The image reconstruction method of claim 1, wherein determining the first image sequence based on the thick-layer image sequence comprises:
performing feature extraction on the plurality of original images to obtain first feature maps of the plurality of original images;
performing image reconstruction based on the first feature maps of the plurality of original images to obtain a plurality of first images,
determining two second images based on each of the plurality of first images to obtain a second image sequence, including:
performing feature extraction on the plurality of first images to obtain second feature maps of the plurality of first images;
combining the first feature maps of the plurality of original images and the second feature maps of the plurality of first images to obtain a first concatenated feature map;
and performing image reconstruction on the first concatenated feature map to obtain the plurality of second images.
4. The image reconstruction method according to claim 1, further comprising:
determining a third image sequence based on the second image sequence, wherein the third image sequence comprises a plurality of third images, the first image corresponds to two third images of the plurality of third images, and the two third images are located on two sides of the first image and respectively on the sides of the two second images away from the first image, wherein,
the determining a sequence of thin-layer images based on the first sequence of images and the second sequence of images comprises:
determining the sequence of thin-layer images based on the first, second, and third image sequences.
5. The image reconstruction method of claim 4, wherein determining the first image sequence based on the thick-layer image sequence comprises:
performing feature extraction on the plurality of original images to obtain first feature maps of the plurality of original images;
performing image reconstruction based on the first feature maps of the plurality of original images to obtain the plurality of first images, wherein determining two second images based on each of the plurality of first images to obtain a second image sequence includes:
performing feature extraction on the plurality of first images to obtain second feature maps of the plurality of first images;
combining the first feature maps of the plurality of original images and the second feature maps of the plurality of first images to obtain a first concatenated feature map;
performing image reconstruction on the first concatenated feature map to obtain the plurality of second images, wherein,
said determining a third picture sequence based on said second picture sequence comprises:
performing feature extraction on the plurality of second images to obtain third feature maps of the plurality of second images;
combining the first feature maps of the plurality of original images, the second feature maps of the plurality of first images and the third feature maps of the plurality of second images to obtain a second concatenated feature map;
and performing image reconstruction on the second concatenated feature map to obtain the plurality of third images.
6. The image reconstruction method of claim 4, wherein determining the thin-layer image sequence based on the first, second, and third image sequences comprises:
removing the second images and the third images located outside the first image sequence to obtain an adjusted second image sequence and an adjusted third image sequence;
determining the thin-layer image sequence based on the first image sequence, the adjusted second image sequence, and the adjusted third image sequence.
7. A training method of an image reconstruction model is characterized by comprising the following steps:
determining a first image sequence based on a sample thick-layer image sequence, wherein a signal-to-noise ratio of the first image sequence is higher than that of the sample thick-layer image sequence, the sample thick-layer image sequence comprises a plurality of original images, the first image sequence comprises a plurality of first images, the plurality of first images are in one-to-one spatial correspondence with the plurality of original images, and each first image in the plurality of first images has the same size as each original image in the plurality of original images;
determining two second images based on each first image in the plurality of first images to obtain a second image sequence, wherein the second image sequence comprises a plurality of second images, and the first image is located between the two second images;
determining a predicted thin-layer image sequence based on the first image sequence and the second image sequence;
determining a first content feature map based on the sequence of predicted thin-layer images;
determining a loss function based on the first content feature map;
and adjusting parameters of the deep learning model based on the loss function to obtain an image reconstruction model.
8. The method for training an image reconstruction model according to claim 7, wherein the determining a loss function based on the first content feature map comprises:
determining a first similarity between a first position and a second position on the first content feature map;
determining the loss function based on the first similarity.
9. The method for training an image reconstruction model according to claim 8, further comprising:
determining a second similarity of the first location to a third location on the first content feature map,
wherein the determining the loss function based on the first similarity comprises:
determining the loss function based on a difference of the first similarity and the second similarity.
10. The method for training an image reconstruction model according to claim 9, further comprising:
determining a second content feature map based on a sample thin-layer image sequence, and determining a first position, a second position and a third position on the second content feature map, wherein the similarity between the first position and the second position on the second content feature map is greater than the similarity between the first position and the third position,
wherein the determining the loss function based on the difference of the first similarity and the second similarity comprises:
determining the loss function based on a difference between the first similarity and the second similarity when the first similarity is less than or equal to the second similarity.
11. A training method of an image reconstruction model is characterized by comprising the following steps:
inputting the sample thick-layer image sequence into a deep learning model to obtain a predicted thin-layer image sequence;
determining a first content feature map based on the predicted thin-layer image sequence, and determining a first similarity between a first location and a second location on the first content feature map;
determining a loss function based on the first similarity;
and adjusting parameters of the deep learning model based on the loss function to obtain an image reconstruction model.
12. An image reconstruction apparatus, characterized by comprising:
a first determining module, configured to determine a first image sequence based on a thick-layer image sequence, where a signal-to-noise ratio of the first image sequence is higher than that of the thick-layer image sequence, the thick-layer image sequence includes a plurality of original images, the first image sequence includes a plurality of first images, the plurality of first images spatially correspond to the plurality of original images one-to-one, and each of the plurality of first images has the same size as each of the plurality of original images;
a second determining module, configured to determine two second images based on each of the plurality of first images to obtain a second image sequence, where the second image sequence includes a plurality of second images, and the first image is located between the two second images;
a third determination module to determine a thin-layer image sequence based on the first image sequence and the second image sequence.
13. An apparatus for training an image reconstruction model, comprising:
a first determining module, configured to determine a first image sequence based on a sample thick-layer image sequence, where a signal-to-noise ratio of the first image sequence is higher than that of the sample thick-layer image sequence, the sample thick-layer image sequence includes a plurality of original images, the first image sequence includes a plurality of first images, the plurality of first images spatially correspond to the plurality of original images one-to-one, and each first image in the plurality of first images has the same size as each original image in the plurality of original images;
a second determining module, configured to determine two second images based on each of the plurality of first images to obtain a second image sequence, where the second image sequence includes a plurality of second images, and the first image is located between the two second images;
a third determination module configured to determine a predicted thin-layer image sequence based on the first image sequence and the second image sequence;
a fourth determination module for determining a first content profile based on the predicted thin-layer image sequence;
a fifth determining module for determining a loss function based on the first content feature map;
and the adjusting module is used for adjusting the parameters of the deep learning model based on the loss function so as to obtain an image reconstruction model.
14. An apparatus for training an image reconstruction model, comprising:
an input module, configured to input the sample thick-layer image sequence into the deep learning model to obtain a predicted thin-layer image sequence;
a first determining module, configured to determine a first content feature map based on the predicted thin-layer image sequence, and determine a first similarity between a first location and a second location on the first content feature map;
a second determination module to determine a loss function based on the first similarity;
and the adjusting module is used for adjusting the parameters of the deep learning model based on the loss function so as to obtain an image reconstruction model.
15. A computer-readable storage medium, in which a computer program is stored, the computer program being adapted to perform the image reconstruction method of any one of claims 1 to 6 or the training method of the image reconstruction model of any one of claims 7 to 11.
16. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions,
wherein the processor is configured to execute the image reconstruction method of any one of claims 1 to 6 or the training method of the image reconstruction model of any one of claims 7 to 11.
CN202110341995.8A 2021-03-30 2021-03-30 Image reconstruction method and device and training method and device of image reconstruction model Active CN113034642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110341995.8A CN113034642B (en) 2021-03-30 2021-03-30 Image reconstruction method and device and training method and device of image reconstruction model

Publications (2)

Publication Number Publication Date
CN113034642A (en) 2021-06-25
CN113034642B (en) 2022-05-27

Family

ID=76452923

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629816A (en) * 2018-05-09 2018-10-09 复旦大学 The method for carrying out thin layer MR image reconstruction based on deep learning
CN110807821A (en) * 2019-10-12 2020-02-18 上海联影医疗科技有限公司 Image reconstruction method and system
CN110880196A (en) * 2019-11-11 2020-03-13 哈尔滨工业大学(威海) Tumor photoacoustic image rapid reconstruction method and device based on deep learning
WO2020135630A1 (en) * 2018-12-26 2020-07-02 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for image reconstruction
CN111461973A (en) * 2020-01-17 2020-07-28 华中科技大学 Super-resolution reconstruction method and system for image
CN111489406A (en) * 2020-03-26 2020-08-04 深圳先进技术研究院 Training and generating method, device and storage medium for generating high-energy CT image model
CN111783774A (en) * 2020-06-22 2020-10-16 联想(北京)有限公司 Image processing method, apparatus and storage medium
CN111833251A (en) * 2020-07-13 2020-10-27 北京安德医智科技有限公司 Three-dimensional medical image super-resolution reconstruction method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109300167B (en) * 2017-07-25 2023-10-10 清华大学 Method and apparatus for reconstructing CT image and storage medium
US11126914B2 (en) * 2017-10-11 2021-09-21 General Electric Company Image generation using machine learning
WO2019173452A1 (en) * 2018-03-07 2019-09-12 Rensselaer Polytechnic Institute Deep neural network for ct metal artifact reduction
CN110047138A (en) * 2019-04-24 2019-07-23 复旦大学 A kind of magnetic resonance thin layer image rebuilding method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant