CN112967379A - Three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network - Google Patents

Three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network

Info

Publication number
CN112967379A
Authority
CN
China
Prior art keywords
image
dimensional
model
dimensional image
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110235474.4A
Other languages
Chinese (zh)
Other versions
CN112967379B (en)
Inventor
夏勇
潘永生
黄静玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University, Shenzhen Institute of Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202110235474.4A
Publication of CN112967379A
Application granted
Publication of CN112967379B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network (SGAN), in which a three-dimensional image is reconstructed from three orthogonal two-dimensional views and a perceptual-consistency constraint is fused into the adversarial network so that it learns latent perceptual information from coarse to fine. First, each view in a set of three two-dimensional views is expanded along its projection direction, and the results are concatenated into a three-channel three-dimensional image with the same shape as the actual three-dimensional image. Then the SGAN model is trained, with the perceptual-consistency constraint supplying the latent coarse-to-fine perceptual information needed while training the generative model. Finally, training is completed to obtain the final SGAN model, which converts the two-dimensional views into a realistic three-dimensional image. The invention can reconstruct a three-dimensional image from a set of three orthogonal two-dimensional views and thereby solves the Ultimate Reconstruction (UR) task to a certain extent.

Description

Three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a three-dimensional medical image reconstruction method.
Background
Three-dimensional image reconstruction techniques form the basis of common imaging modalities such as CT, MRI, and PET, which are very useful in medical image analysis. These techniques typically require a series of two-dimensional slices acquired under relative motion to provide sufficient three-dimensional information. However, how to acquire only the most useful three-dimensional information, so as to reduce radiation dose or imaging time, has not been well studied. For example, locating abnormal objects such as medical implants or lesions in a person's body requires the three-dimensional space provided by a three-dimensional image. One solution feasible with the prior art is to scan a three-dimensional image as a sequence of two-dimensional slices. However, these two-dimensional slices carry a large amount of redundant information that is useless for localization and is, to some extent, a waste of resources. Since a set of orthogonal multi-view two-dimensional images can already provide three-dimensional spatial information, the present invention proposes the Ultimate Reconstruction (UR) task: reconstructing a three-dimensional image from only a set of three orthogonal two-dimensional views. The current state of the art cannot accomplish this task.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network (SGAN): a method that reconstructs a three-dimensional image from three two-dimensional views, fusing a perceptual-consistency constraint into the generative adversarial network so that it learns latent perceptual information from coarse to fine. First, each view in a set of three two-dimensional views is expanded along its projection direction, and the results are concatenated into a three-channel three-dimensional image with the same shape as the actual three-dimensional image. Then the SGAN model is trained, with the perceptual-consistency constraint supplying the latent coarse-to-fine perceptual information needed while training the generative model. Finally, training is completed to obtain the final SGAN model, which converts the two-dimensional views into a realistic three-dimensional image. The invention can reconstruct a three-dimensional image from a set of three orthogonal two-dimensional views and solves the UR task to a certain extent.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
Step 1: constructing the SGAN model adopting the perceptual-consistency constraint;
The SGAN model comprises a generation model adopting a UNet neural network and a discrimination model with two branches, the two branches being two identical five-layer convolutional neural networks;
Step 2: preprocessing the two-dimensional images;
The source image is a set of three orthogonal two-dimensional views. The front view X_f is extended into D copies along the front-view direction, the left view X_l into H copies along the left-view direction, and the top view X_t into W copies along the top-view direction, each generating a three-dimensional image of size H × W × D. The generated three-dimensional images are stacked along the channel dimension to form a 3-channel 3D image, denoted X ∈ ℝ^(H×W×D×3), of size H × W × D × 3. The three-dimensional image is scaled to have the same spatial resolution in each direction, then cut into image blocks of size N × N × N in three directions in a sliding-window manner;
after all source images are processed, the processed source images and real three-dimensional images corresponding to the source images form an image data set;
Step 3: training the SGAN model;
Taking the image data set formed in step 2 as samples, a number of image blocks of size N × N × N are cut from the processed source images and input into the generation model, whose output is the reconstructed three-dimensional target image;
The three-dimensional target image generated from a source image and the real three-dimensional image corresponding to that source image are input into the two branches of the discrimination model respectively; difference scores between the output feature maps of the same layer of the two branches are computed, and the resulting similarity of those feature maps serves as the perceptual-consistency constraint for adjusting the network parameters. The final output of the discrimination model is a judgment of whether the three-dimensional target image is real;
The generation model and the discrimination model learn by opposing each other and are trained in an alternating, iterative manner; when training finishes, the final SGAN model is obtained;
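In symbols, the generator objective implied by this step can be sketched as follows (a sketch for illustration: the L1 distances and the weights λ_vox, λ_pc are assumptions, not values fixed by the invention):

\mathcal{L}_G = \mathcal{L}_{adv}(G) + \lambda_{vox}\,\lVert G(x) - y \rVert_1 + \lambda_{pc} \sum_{j=1}^{5} \lVert F_j(G(x)) - F_j(y) \rVert_1

where x is the preprocessed 3-channel source image, y the corresponding real three-dimensional image, and F_j(·) the output feature map of the j-th layer of a discrimination-model branch.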
Step 4: the three orthogonal two-dimensional views to be processed are preprocessed as in step 2 and input into the generation network of the final SGAN model obtained in step 3, whose output is the reconstructed realistic three-dimensional image.
Preferably, the generation model is composed of three parts: encoding, migration, and decoding. The encoding part extracts information from the source image, the migration part transfers the information from the source image to the target image, and the decoding part reconstructs the target image.
preferably, the coding part of the generative model is constructed by three convolutional layers of 8 channels, 16 channels and 32 channels respectively, the migration part comprises 6 residual network blocks, the decoding part comprises 2 deconvolution layers of 16 channels and 32 channels respectively and a single-channel convolutional layer, and the sizes of two deconvolution layer convolutional cores are both 3 × 3 × 3;
the input of the deconvolution layer of the decoding part 32 channel is formed by the connection of the feature mapping of the migration part and the feature mapping of the convolution layer of the encoding part 32 channel; the input of the deconvolution layer of the decoding part 16 channel is formed by the connection of the feature mapping of the deconvolution layer of the decoding part 32 channel and the feature mapping of the encoding part 16 channel convolution layer; the input of the convolutional layer of the single channel of the decoding part is composed of the feature map of the deconvolution layer of the 16 channels of the decoding part and the feature map of the convolutional layer of the 8 channels of the encoding part.
Preferably, the two identical five-layer convolutional neural networks of the discrimination model each consist of five convolutional layers with channel sizes of 16, 32, 64, 128, and 1 in sequence.
Preferably, N is 128.
The invention has the following beneficial effects:
1. The method is of great significance for three-dimensional image reconstruction: it reconstructs a three-dimensional image from a set of three orthogonal two-dimensional views, which largely avoids the resource waste caused by a large amount of redundant information, effectively acquires the most useful three-dimensional information, and can reduce radiation dose or imaging time, with a corresponding benefit to human health.
2. Three-dimensional image reconstruction techniques play an important role in medical image analysis. Multi-plane three-dimensional reconstruction helps doctors observe the overall shape of a lesion and its relation to surrounding structures from multiple directions and angles, a patient's condition is easier to diagnose from the reconstructed image, and low-cost, fast three-dimensional reconstruction is expected to improve the accuracy of early screening based on a number of original two-dimensional images.
Drawings
Fig. 1 is a schematic diagram of the SGAN model network structure used in the method of the present invention.
FIG. 2 is a schematic visualization of the perceptual-consistency constraint of the method of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in fig. 1, a three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network includes the following steps:
Step 1: constructing the SGAN model adopting the perceptual-consistency constraint;
The SGAN model comprises a generation model adopting a UNet neural network and a discrimination model with two branches, the two branches being two identical five-layer convolutional neural networks;
the generating model formed by the UNet neural network consists of three parts of encoding, migration and decoding. The coding part mainly realizes the function of extracting information from a source image, the migration model is responsible for migrating the information from the source image to a target image, and the decoding part realizes the reconstruction of the target image. The input to each deconvolution layer of the decoding section is a concatenation of the feature map of the preceding feature map and the feature map of the convolution layer of the corresponding encoding section. Such a jump-connection gives the generative model the ability to learn texture information from coarse to fine.
The discrimination model serves mainly to learn perceptual description information automatically from the data. Its inputs are a real three-dimensional image and the three-dimensional target image generated from the extended source image, and its output is a binary result stating whether the predicted image is the real image. The generation model is then trained taking into account the differences at different locations under different constraints; each constraint depends on the two branches of the discrimination model, which take a pair of real and target images as input and output a difference score at each layer to indicate similarity. In the discrimination model the stride of each layer is gradually increased, a coarse-to-fine constraint that enhances the perceptual consistency between the synthesized image and the corresponding real image. In this way the perceptual representation learned in the isomorphic model is enhanced, which benefits image synthesis, so that perceptual information is transferred better.
Step 2: preprocessing a two-dimensional image;
the source image is a group of orthogonal three-view two-dimensional images, and before the images are input into the SGAN model, because the input and output are required to have the same dimension by the discrimination model, a certain difference still exists between the current image and the input image, namely the dimension between a two-dimensional slice and a three-dimensional image is inconsistent. To solve this problem, a strategy to extend the 2D slice dimensions is employed. Front view XfExtending D copies in the elevation direction, left view XlExtending H copies in left-view direction, top view XtExtending the W copies along the overlooking direction to generate a three-dimensional image with the size of H multiplied by W multiplied by D; stacking the generated three-dimensional images according to the channel dimension to form a 3-channel 3D image represented as
Figure BDA0002959840280000041
The size is H multiplied by W multiplied by D multiplied by 3; scaling the three-dimensional image to have the same spatial resolution in each direction; cutting the three-dimensional image into a plurality of image blocks with the size of NxNxN in three directions in a sliding window mode;
after all source images are processed, the processed source images and real three-dimensional images corresponding to the source images form an image data set;
Step 3: training the SGAN model;
Taking the image data set formed in step 2 as samples, a number of image blocks of size N × N × N are cut from the processed source images and input into the generation model, whose output is the reconstructed three-dimensional target image;
The three-dimensional target image generated from a source image and the real three-dimensional image corresponding to that source image are input into the two branches of the discrimination model respectively; difference scores between the output feature maps of the same layer of the two branches are computed, and the resulting similarity of those feature maps serves as the perceptual-consistency constraint for adjusting the network parameters. The final output of the discrimination model is a judgment of whether the three-dimensional target image is real;
The generation model and the discrimination model learn by opposing each other and are trained in an alternating, iterative manner; when training finishes, the final SGAN model is obtained;
Step 4: the three orthogonal two-dimensional views to be processed are preprocessed as in step 2 and input into the generation network of the final SGAN model obtained in step 3, whose output is the reconstructed realistic three-dimensional image.
The specific embodiment is as follows:
1. Constructing the SGAN network employing the perceptual-consistency constraint
As shown in fig. 1, the SGAN model includes a UNet-type generation model and a five-layer discrimination model. The two models are trained in an alternating, iterative manner; they stand in an adversarial relationship, so both learn simultaneously and improve together.
The generation model comprises three parts: encoding, migration, and decoding. The encoding part is constructed from three convolutional layers of 16, 32, and 64 channels respectively and mainly extracts information from the source image. The migration part comprises 6 residual network blocks (RNBs) and transfers information from the source image to the target image. The decoding part comprises 2 deconvolution layers of 16 and 32 channels respectively and a single-channel convolutional layer, the deconvolution kernels being of size 3 × 3 × 3; it reconstructs the target image. The input to each deconvolution layer of the decoding part is the concatenation of the preceding feature map with the feature map of the corresponding convolutional layer of the encoding part. Such skip connections give the generation model the ability to learn texture information from coarse to fine.
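As a concrete illustration, the following PyTorch sketch wires up such an encode/migrate/decode generator. It is a sketch under stated assumptions, not the patented implementation: the channel widths and skip wiring follow claim 3 below (8, 16, and 32 encoder channels; the embodiment above lists 16, 32, and 64 instead), and the strides, padding, and activations are assumptions.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # Assumed form of a residual network block: two 3x3x3 convolutions
        # with an identity shortcut.
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(ch, ch, 3, padding=1))

        def forward(self, x):
            return torch.relu(x + self.body(x))

    class Generator(nn.Module):
        # Encode / migrate / decode generator with skip connections.
        def __init__(self, in_ch=3):
            super().__init__()
            # encoding: three convolutional layers of 8, 16 and 32 channels
            self.enc8 = nn.Sequential(nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(inplace=True))
            self.enc16 = nn.Sequential(nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True))
            self.enc32 = nn.Sequential(nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True))
            # migration: six residual network blocks
            self.migrate = nn.Sequential(*[ResidualBlock(32) for _ in range(6)])
            # decoding: 32- and 16-channel deconvolutions (3x3x3 kernels) and a 1-channel conv
            self.dec32 = nn.ConvTranspose3d(32 + 32, 32, 3, stride=2, padding=1, output_padding=1)
            self.dec16 = nn.ConvTranspose3d(32 + 16, 16, 3, stride=2, padding=1, output_padding=1)
            self.out = nn.Conv3d(16 + 8, 1, 3, padding=1)

        def forward(self, x):
            e8 = self.enc8(x)
            e16 = self.enc16(e8)
            e32 = self.enc32(e16)
            m = self.migrate(e32)
            d32 = torch.relu(self.dec32(torch.cat([m, e32], dim=1)))    # skip: 32-ch encoder features
            d16 = torch.relu(self.dec16(torch.cat([d32, e16], dim=1)))  # skip: 16-ch encoder features
            return torch.tanh(self.out(torch.cat([d16, e8], dim=1)))    # skip: 8-ch encoder features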
As shown in fig. 2, the discrimination model comprises five convolutional layers with channel sizes of 16, 32, 64, 128, and 1 in sequence. Let F_j(X) (j = 1, …, 5) denote the feature map of the j-th layer of the discrimination model; the generation model is then trained to take into account the differences at different positions under different constraints. Each constraint depends on the two branches of the discrimination model, which take a pair of real and target images as input and output a difference score at each layer to indicate similarity. In addition to the constraints used in related work, the stride of each layer is gradually increased, introducing a coarse-to-fine constraint that enhances the perceptual consistency of the synthesized image and the corresponding real image. In this way the perceptual representation learned in the isomorphic model is enhanced, which benefits image synthesis, so that perceptual information is transferred better. In turn, the discrimination model also effectively extracts more specific perceptual information from the synthesized image.
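A minimal sketch of one discrimination-model branch and the per-layer difference scores follows. The five channel widths come from the text; the stride schedule, the LeakyReLU activations, and the L1 form of the per-layer distance are assumptions:

    import torch
    import torch.nn as nn

    class DiscriminatorBranch(nn.Module):
        # Five 3D convolutional layers of 16, 32, 64, 128 and 1 channels.
        # The gradually increasing strides realize the coarse-to-fine
        # constraint; this particular schedule is an assumption.
        def __init__(self, in_ch=1):
            super().__init__()
            chans, strides = [16, 32, 64, 128, 1], [1, 1, 2, 2, 2]
            layers, prev = [], in_ch
            for i, (ch, s) in enumerate(zip(chans, strides)):
                act = nn.Identity() if i == len(chans) - 1 else nn.LeakyReLU(0.2, inplace=True)
                layers.append(nn.Sequential(nn.Conv3d(prev, ch, 3, stride=s, padding=1), act))
                prev = ch
            self.layers = nn.ModuleList(layers)

        def forward(self, x):
            feats = []                     # F_j(x), j = 1..5
            for layer in self.layers:
                x = layer(x)
                feats.append(x)
            return feats

    def perceptual_consistency(branch, fake, real):
        # Sum of per-layer difference scores between the generated and the
        # real volume, computed with shared branch weights (an assumption).
        f_fake, f_real = branch(fake), branch(real)
        return sum(torch.mean(torch.abs(a - b)) for a, b in zip(f_fake, f_real))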
2. Two-dimensional image preprocessing
Before an image is input into the SGAN model, because the discrimination model requires its two inputs to have the same dimensionality, there is still a gap between the current images and the required input: the dimensionality of a two-dimensional slice and that of a three-dimensional image are inconsistent. To solve this problem, a strategy of extending these 2D slice dimensions is proposed.
The front view X_f is extended into D copies along the front-view direction, the left view X_l into H copies along the left-view direction, and the top view X_t into W copies along the top-view direction, so that the extended images all have the same size, H × W × D. The extended images are then stacked along the channel dimension to form a 3-channel 3D image, denoted X ∈ ℝ^(H×W×D×3), of size H × W × D × 3. The scan images are then scaled to have the same spatial resolution in each direction (e.g., 2 × 2 × 2 mm³). Finally, these images are cropped in the axial direction to 128 × 128 in a sliding-window manner to focus on the vertebrae.
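A numpy sketch of the expansion, stacking, and cropping steps under stated assumptions: which two axes each view spans is an assumption chosen so the copy counts match the wording above, and the sliding-window stride is not fixed by the patent:

    import numpy as np

    def views_to_3channel_volume(x_f, x_l, x_t):
        # x_f: front view (H, W); x_l: left view (W, D); x_t: top view (H, D).
        # Axis assignments are assumptions; the copy counts follow the text.
        H, W = x_f.shape
        D = x_l.shape[1]
        vf = np.repeat(x_f[:, :, None], D, axis=2)   # front view: D copies
        vl = np.repeat(x_l[None, :, :], H, axis=0)   # left view: H copies
        vt = np.repeat(x_t[:, None, :], W, axis=1)   # top view: W copies
        return np.stack([vf, vl, vt], axis=-1)       # shape (H, W, D, 3)

    def sliding_window_patches(vol, n=128, stride=64):
        # Crop N x N x N blocks in a sliding-window manner (stride assumed).
        H, W, D = vol.shape[:3]
        return [vol[i:i + n, j:j + n, k:k + n]
                for i in range(0, H - n + 1, stride)
                for j in range(0, W - n + 1, stride)
                for k in range(0, D - n + 1, stride)]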
3. Three-dimensional image reconstruction
The source image processed in step 2 is input into the generation model, and the image reconstruction process from source image to target image is completed through the three stages of encoding, migration, and decoding.
A pair consisting of the real image and the target image is input into the two branches of the discrimination model, and during training a difference score is output at each layer to indicate similarity, each constraint depending on the two branches of the discrimination model. The SGAN model simultaneously employs the perceptual-consistency constraint and a matched voxel-consistency constraint (i.e., the coarse-to-fine constraints).
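Putting the pieces together, a minimal alternating-training sketch is shown below; Generator and DiscriminatorBranch refer to the sketches above, and the Adam settings, loss weights, and binary-cross-entropy adversarial form are assumptions:

    import torch
    import torch.nn.functional as F

    g = Generator()
    d = DiscriminatorBranch(in_ch=1)
    opt_g = torch.optim.Adam(g.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)

    def train_step(src_patch, real_patch, lambda_vox=10.0, lambda_pc=1.0):
        fake = g(src_patch)

        # Discrimination-model step: tell real volumes from generated ones.
        opt_d.zero_grad()
        score_real = d(real_patch)[-1]        # last 1-channel layer: real/fake map
        score_fake = d(fake.detach())[-1]
        loss_d = (F.binary_cross_entropy_with_logits(score_real, torch.ones_like(score_real))
                  + F.binary_cross_entropy_with_logits(score_fake, torch.zeros_like(score_fake)))
        loss_d.backward()
        opt_d.step()

        # Generation-model step: adversarial + voxel- and perceptual-consistency terms.
        opt_g.zero_grad()
        f_fake = d(fake)
        f_real = [f.detach() for f in d(real_patch)]
        adv = F.binary_cross_entropy_with_logits(f_fake[-1], torch.ones_like(f_fake[-1]))
        vox = F.l1_loss(fake, real_patch)     # matched voxel-consistency constraint
        pc = sum(F.l1_loss(a, b) for a, b in zip(f_fake[:-1], f_real[:-1]))
        loss_g = adv + lambda_vox * vox + lambda_pc * pc
        loss_g.backward()
        opt_g.step()
        return loss_d.item(), loss_g.item()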
4. Testing phase
The method performed a set of experiments on the public data set of the KiTS19 Challenge to verify the effect of SGAN on the skeleton localization task, training and testing the model on the training and evaluation sets respectively. The invention can reconstruct a three-dimensional image from a set of three orthogonal two-dimensional views and achieves the UR goal to a certain extent.
The invention proposes the Ultimate Reconstruction (UR) task: reconstructing a three-dimensional image from a set of three orthogonal two-dimensional views, a task the prior art cannot accomplish. The method is of great significance for three-dimensional image reconstruction: it reconstructs a three-dimensional image from a set of three orthogonal two-dimensional views, which largely avoids the resource waste caused by a large amount of redundant information, where redundant information means information in the original three-dimensional image that is useless for localization. At the same time, the most useful three-dimensional information is acquired effectively, radiation dose or imaging time can be reduced, and human health benefits accordingly.
Three-dimensional image reconstruction techniques play an important role in medical image analysis. Multi-plane three-dimensional reconstruction helps doctors observe the overall shape of a lesion and its relation to surrounding structures from multiple directions and angles. For a junior doctor or a clinical intern, a patient's condition is easier to diagnose from the reconstructed image, and the patient can also easily see his or her own specific condition. From the perspective of early disease screening, low-cost fast three-dimensional reconstruction is expected to improve the accuracy of early screening based on a number of original two-dimensional images. From the perspective of precision medicine, providing several verification means to ensure surgical precision is desirable and necessary. In addition, three-dimensional visualization is used not only for doctor-patient communication before an operation but also for navigation during it. Three-dimensional images can also be used for quantitative analysis in certain departments: in bone-fixation surgery in orthopedics, for example, the position of a fractured bone can be located accurately before the operation from the three-dimensional image, and the healing of the bone can be evaluated afterwards.

Claims (5)

1. A three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network, characterized by comprising the following steps:
Step 1: constructing the SGAN model adopting the perceptual-consistency constraint;
The SGAN model comprises a generation model adopting a UNet neural network and a discrimination model with two branches, the two branches being two identical five-layer convolutional neural networks;
Step 2: preprocessing the two-dimensional images;
The source image is a set of three orthogonal two-dimensional views. The front view X_f is extended into D copies along the front-view direction, the left view X_l into H copies along the left-view direction, and the top view X_t into W copies along the top-view direction, each generating a three-dimensional image of size H × W × D. The generated three-dimensional images are stacked along the channel dimension to form a 3-channel 3D image, denoted X ∈ ℝ^(H×W×D×3), of size H × W × D × 3. The three-dimensional image is scaled to have the same spatial resolution in each direction, then cut into image blocks of size N × N × N in three directions in a sliding-window manner;
after all source images are processed, the processed source images and real three-dimensional images corresponding to the source images form an image data set;
Step 3: training the SGAN model;
Taking the image data set formed in step 2 as samples, a number of image blocks of size N × N × N are cut from the processed source images and input into the generation model, whose output is the reconstructed three-dimensional target image;
The three-dimensional target image generated from a source image and the real three-dimensional image corresponding to that source image are input into the two branches of the discrimination model respectively; difference scores between the output feature maps of the same layer of the two branches are computed, and the resulting similarity of those feature maps serves as the perceptual-consistency constraint for adjusting the network parameters. The final output of the discrimination model is a judgment of whether the three-dimensional target image is real;
The generation model and the discrimination model learn by opposing each other and are trained in an alternating, iterative manner; when training finishes, the final SGAN model is obtained;
Step 4: the three orthogonal two-dimensional views to be processed are preprocessed as in step 2 and input into the generation network of the final SGAN model obtained in step 3, whose output is the reconstructed realistic three-dimensional image.
2. The three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network of claim 1, wherein the generation model is composed of three parts: encoding, migration, and decoding; the encoding part extracts information from the source image, the migration part transfers the information from the source image to the target image, and the decoding part reconstructs the target image.
3. The method of claim 2, wherein the encoding part of the generation model is constructed from three convolutional layers of 8, 16, and 32 channels respectively, the migration part comprises 6 residual network blocks, the decoding part comprises 2 deconvolution layers of 16 and 32 channels respectively and a single-channel convolutional layer, and the kernels of both deconvolution layers are of size 3 × 3 × 3;
the input of the deconvolution layer of the decoding part 32 channel is formed by the connection of the feature mapping of the migration part and the feature mapping of the convolution layer of the encoding part 32 channel; the input of the deconvolution layer of the decoding part 16 channel is formed by the connection of the feature mapping of the deconvolution layer of the decoding part 32 channel and the feature mapping of the encoding part 16 channel convolution layer; the input of the convolutional layer of the single channel of the decoding part is composed of the feature map of the deconvolution layer of the 16 channels of the decoding part and the feature map of the convolutional layer of the 8 channels of the encoding part.
4. The three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network of claim 1, wherein the two identical five-layer convolutional neural networks of the discrimination model each consist of five convolutional layers with channel sizes of 16, 32, 64, 128, and 1 in sequence.
5. The three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network of claim 1, wherein N is 128.
CN202110235474.4A 2021-03-03 2021-03-03 Three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network Active CN112967379B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110235474.4A CN112967379B (en) 2021-03-03 2021-03-03 Three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110235474.4A CN112967379B (en) 2021-03-03 2021-03-03 Three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network

Publications (2)

Publication Number Publication Date
CN112967379A 2021-06-15
CN112967379B 2022-04-22

Family

ID=76276325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110235474.4A Active CN112967379B (en) 2021-03-03 2021-03-03 Three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network

Country Status (1)

Country Link
CN (1) CN112967379B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035001A (en) * 2022-08-11 2022-09-09 北京唯迈医疗设备有限公司 Intraoperative navigation system based on DSA imaging device, computing device and program product

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413351A (en) * 2013-07-26 2013-11-27 南京航空航天大学 Rapid three-dimensional face reconstruction method based on compressed sensing theory
CN108765512A (en) * 2018-05-30 2018-11-06 清华大学深圳研究生院 Adversarial image generation method based on multi-layer features
CN109711442A (en) * 2018-12-15 2019-05-03 中国人民解放军陆军工程大学 Unsupervised layer-by-layer generative adversarial feature representation learning method
CN111383325A (en) * 2018-12-29 2020-07-07 顺丰科技有限公司 Carriage three-dimensional image generation method and device
CN110335337A (en) * 2019-04-28 2019-10-15 厦门大学 End-to-end semi-supervised visual odometry method based on a generative adversarial network
CN110119780A (en) * 2019-05-10 2019-08-13 西北工业大学 Hyperspectral image super-resolution reconstruction method based on a generative adversarial network
CN110390638A (en) * 2019-07-22 2019-10-29 北京工商大学 High-resolution three-dimensional voxel model reconstruction method
CN110517353A (en) * 2019-08-30 2019-11-29 西南交通大学 Fast three-dimensional reconstruction method for constructed objects based on two-dimensional vector drawings and a small number of elevation points
CN111899328A (en) * 2020-07-10 2020-11-06 西北工业大学 Point cloud three-dimensional reconstruction method based on RGB data and a generative adversarial network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Hao et al., "Application of deep learning in three-dimensional model reconstruction from a single image", Journal of Computer Applications *
Huang Tianyi et al., "Deep-learning-based three-dimensional image segmentation and reconstruction of axons", Chinese Journal of Neuroanatomy *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035001A (en) * 2022-08-11 2022-09-09 北京唯迈医疗设备有限公司 Intraoperative navigation system based on DSA imaging device, computing device and program product
CN115035001B (en) * 2022-08-11 2022-12-09 北京唯迈医疗设备有限公司 Intraoperative navigation system, computing device and program product based on DSA imaging device

Also Published As

Publication number Publication date
CN112967379B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
Ying et al. X2CT-GAN: reconstructing CT from biplanar X-rays with generative adversarial networks
EP3726467B1 (en) Systems and methods for reconstruction of 3d anatomical images from 2d anatomical images
Oulbacha et al. MRI to CT synthesis of the lumbar spine from a pseudo-3D cycle GAN
US11080895B2 (en) Generating simulated body parts for images
CN112967379B (en) Three-dimensional medical image reconstruction method based on a perceptual-consistency generative adversarial network
WO2020198854A1 (en) Method and system for producing medical images
Whitmarsh et al. 3D bone mineral density distribution and shape reconstruction of the proximal femur from a single simulated DXA image: an in vitro study
Kyung et al. Perspective projection-based 3d ct reconstruction from biplanar x-rays
US11704796B2 (en) Estimating bone mineral density from plain radiograph by assessing bone texture with deep learning
Fotsin et al. Shape, pose and density statistical model for 3D reconstruction of articulated structures from X-ray images
Gao et al. 3DSRNet: 3D Spine Reconstruction Network Using 2D Orthogonal X-ray Images Based on Deep Learning
WO2022229816A1 (en) 3d reconstruction of anatomical images
Wang et al. TPG-rayGAN: CT reconstruction based on transformer and generative adversarial networks
Pan et al. Ultimate reconstruction: understand your bones from orthogonal views
Chen et al. Development of Automatic Assessment Framework for Spine Deformity using Freehand 3D Ultrasound Imaging System
CN116848549A (en) Detection of image structures via dimension-reduction projection
US20240185509A1 (en) 3d reconstruction of anatomical images
Cheng et al. Sdct-gan: reconstructing CT from biplanar x-rays with self-driven generative adversarial networks
US20190095579A1 (en) Biomechanical model generation for human or animal torsi
Wang et al. Shape Reconstruction for Abdominal Organs based on a Graph Convolutional Network
Monteiro, Deep learning approach for the segmentation of spinal structures in ultrasound images
Kalyan Deep Learning & CNN in Imaging of Medical Surgeries
Kumar et al. 3D Volumetric Computed Tomography from 2D X-Rays: A Deep Learning Perspective
US20230368880A1 (en) Learning apparatus, learning method, trained model, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant