CN116385665A - Multi-view X-ray image three-dimensional reconstruction method for dual-mode G-arm X-ray machine

Multi-view X-ray image three-dimensional reconstruction method for dual-mode G-arm X-ray machine

Info

Publication number
CN116385665A
Authority
CN
China
Prior art keywords
ray
cross
orthogonal
fusion module
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310646077.5A
Other languages
Chinese (zh)
Inventor
Zhang Xiaoping (章小平)
Fu Yanping (付燕平)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Jimai Intelligent Equipment Co., Ltd.
Original Assignee
Hefei Jimai Intelligent Equipment Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Jimai Intelligent Equipment Co., Ltd.
Priority to CN202310646077.5A
Publication of CN116385665A
Priority to CN202311327298.2A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a multi-view X-ray image three-dimensional reconstruction method for a dual-mode G-arm X-ray machine, relating to the field of medical image processing, and comprising the following steps: constructing a deep learning network, and performing cross guidance and cross fusion between the two orthogonal X-ray images (a frontal view and a lateral view) to recover the three-dimensional volume data represented by the X-ray images. The invention fully exploits the G-arm X-ray machine's ability to capture two orthogonal X-ray images simultaneously and recovers high-fidelity three-dimensional data from the two orthogonal images; it makes full use of the equipment's advantages, effectively reduces the number of X-ray scans, spares the patient additional X-ray radiation, and reduces patient risk.

Description

Multi-view X-ray image three-dimensional reconstruction method for dual-mode G-arm X-ray machine
Technical Field
The invention relates to the field of medical image processing, in particular to a multi-view X-ray image three-dimensional reconstruction method for a dual-mode G-arm X-ray machine.
Background
X-ray images let us view the internal anatomy of the human body non-invasively and diagnose conditions through changes in that anatomy. However, because X-ray imaging is a transmission process, all tissue is projected into a two-dimensional plane, so organs and tissues overlap in the X-ray image. Although bone is clearly visible, soft tissue is often difficult to resolve, which complicates diagnosis. To clearly distinguish the structures of the various organs inside the human body, many methods attempt to recover the three-dimensional structure from two-dimensional X-ray images using three-dimensional reconstruction techniques, so as to help medical staff diagnose patients and formulate treatment plans. Computed tomography (CT) was developed to solve this problem. Although CT reconstruction can recover high-quality three-dimensional data from X-ray projections, it requires a dense 360-degree scan of the whole body; the scan takes a long time and delivers a large radiation dose, which inevitably increases the patient's cancer risk.
Relatively accurate three-dimensional volume data can also be reconstructed with filtered back projection (FBP) and iterative reconstruction algorithms. However, these also require hundreds of X-ray exposures around the patient to reconstruct the desired volume data, and are therefore difficult to implement on conventional X-ray machines. Taking that many X-ray exposures of a patient causes additional radiation injury, and the computational cost is also large, which limits clinical use.
With the advent of deep learning, some methods have attempted to infer the corresponding three-dimensional volume data from multi-view X-ray images using deep learning techniques, with relatively satisfactory results. However, these methods still require a dense 360-degree X-ray scan to achieve a satisfactory reconstruction. Inspired by the success of neural radiance fields (NeRF) in three-dimensional reconstruction, some approaches attempt three-dimensional reconstruction of X-ray images with neural attenuation fields (NeAT). However, these algorithms all require a perfectly accurate pose for the X-ray image of every view in order to reconstruct high-quality three-dimensional data; otherwise their reconstruction quality degrades significantly. During actual intraoperative scanning, a perfectly accurate pose simply cannot be obtained, owing to motion of the imaging equipment, mechanical error, and other factors, so it is difficult to apply such algorithms directly to practical three-dimensional reconstruction from X-ray images. To reduce the harm of X-ray scanning to the patient, some deep learning methods attempt to infer three-dimensional volume data directly from biplanar X-ray images. Although these methods can reconstruct three-dimensional volume data in a data-driven manner, an X-ray image is a transmission projection in which all tissues, organs, and bones overlap on a single two-dimensional image, so the structures are difficult to resolve. In addition, a conventional C-arm X-ray machine can hardly obtain two perfectly orthogonal biplanar images; most methods can only be evaluated on synthetic datasets, and real X-ray scans rarely yield two images that satisfy the orthogonality requirement because of mechanical error, noise, involuntary patient motion, and the like, which greatly limits the practicality and extensibility of such algorithms. The G-arm X-ray machine adopted here can acquire two perfectly orthogonal X-ray images simultaneously, providing a reliable data source for three-dimensional reconstruction from two orthogonal biplanar X-ray images.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a multi-view X-ray image three-dimensional reconstruction method for a dual-mode G-arm X-ray machine.
In order to achieve the above purpose, the invention is realized by adopting the following technical scheme:
a multi-view X-ray image three-dimensional reconstruction method for a dual-mode G-arm X-ray machine comprises the following steps: and constructing a deep learning network, and performing cross guidance and cross fusion between two images by utilizing the two orthogonal X-ray image positive bitmaps and the two orthogonal X-ray image negative bitmaps to recover three-dimensional volume data represented by the X-ray images.
The invention further provides the following preferred technical features:
preferably, the deep learning network is mainly composed of a framework of double encoder-3D decoders;
inputting two orthogonal X-ray images into two parallel encoders, reducing the resolution of the input images through a plurality of convolution layers, extracting the features of a plurality of scales, and constructing a feature pyramid with gradually increased resolution;
and simultaneously, cross fusion is carried out on the characteristics of each layer by utilizing the idea of cross guidance, then the cross fusion module results of all layers of the encoder are added into a 3D decoder in a cascading mode, and then the characteristics output by each cross fusion module are connected together and input into the 3D decoder, and three-dimensional data corresponding to the X-ray image is obtained through the 3D decoder of 4 layers.
Preferably, the cross-fusion module is divided into two parts: a feature fusion module and a global information fusion module;
the feature fusion module mainly uses channel attention and spatial attention to extract weights for the X-ray image information of the two orthogonal directions, performs feature fusion using the weight maps of the different directions, and then inputs the two interaction-fused features into the global information fusion module to acquire context information.
Preferably, the feature fusion module consists of two parts: the first is a cross-view feature fusion module and the second is a single-view X-ray image global feature fusion module; in the encoding stage, the information of the two orthogonal X-ray images is cross-fused in the cross-fusion module, further cross-guided learning is performed in the cross-guidance module, and feature recombination and reinforcement are performed in the 3D encoding module;
first, the two orthogonal X-ray images are respectively input into two identical convolutional networks to extract features; then, to exploit the complementarity of the information between the X-ray images of the two directions for cross guidance, the features extracted from the X-ray images of the different directions at each feature layer are input into the feature fusion module for fusion, yielding features of the orthogonal X-ray images at different scales.
The main advantages of the invention are as follows:
the algorithm designed by the invention fully exploits the G-arm X-ray machine's ability to capture two orthogonal X-ray images simultaneously, and recovers high-fidelity three-dimensional data from the two orthogonal images. The method makes full use of the equipment's advantages, effectively reduces the number of X-ray scans, spares the patient additional X-ray radiation, and reduces patient risk.
Drawings
FIG. 1 is a flow chart of a multi-view X-ray image three-dimensional reconstruction method for a dual-mode G-arm X-ray machine;
FIG. 2 is a diagram of a network architecture of the present invention;
FIG. 3 is a block diagram of a cross-guidance based feature fusion module of the present invention;
FIG. 4 is a block diagram of a global feature fusion module of the present invention;
FIG. 5 is a network structure diagram of the 3D decoder of the present invention.
Detailed Description
In order to make the technical means, creative features, objectives, and effects of the present invention easy to understand, the invention is further explained below with reference to the drawings.
Example 1
An experienced physician can easily determine a patient's condition from two orthogonal X-ray images (frontal and lateral) and prescribe an effective treatment, because the human eye can, with experience, reconstruct a three-dimensional structure from two orthogonal X-ray images. Given, in addition, the strong representational power of deep learning networks, we reconstruct the corresponding three-dimensional data from two orthogonal X-ray images, effectively reducing radiation damage to the patient's body and improving the physician's diagnosis. We construct a deep learning network that performs cross guidance and cross fusion between two orthogonal X-ray images (frontal and lateral) to recover the three-dimensional volume data represented by the X-ray images.
the whole network is mainly composed of a framework of dual encoder-3D decoders, as shown in fig. 1-2. Firstly, inputting two orthogonal X-ray images into two parallel encoders, reducing the resolution of the input images through a plurality of convolution layers, extracting features of a plurality of scales, and constructing a feature pyramid with gradually increased resolution;
and simultaneously, cross fusion is carried out on the characteristics of each layer by utilizing the cross guiding thought, and then the cross fusion module results of all layers of the encoder are added into the 3D decoder in a cascading mode. And then, connecting the characteristics output by each cross fusion module together, inputting the characteristics into a 3D decoder, and obtaining three-dimensional volume data corresponding to the X-ray image through a 4-layer 3D decoder.
The cross-fusion module is divided into two parts: a feature fusion module and a global information fusion module;
the feature fusion module mainly uses channel attention and spatial attention to extract weights for the X-ray image information of the two orthogonal directions, performs feature fusion using the weight maps of the different directions, and then inputs the two interaction-fused features into the global information fusion module to acquire context information.
The global information fusion module processes the image features after guided learning; it can effectively suppress noise that would otherwise interfere with the prediction, and it supplements the X-ray image with global context information, so that the structural information of the reconstructed volume data is more complete.
The feature fusion module consists of two parts: the first is a cross-view feature fusion module and the second is a single-view X-ray image global feature fusion module; in the encoding stage, the information of the two orthogonal X-ray images is cross-fused in the cross-fusion module, further cross-guided learning is performed in the cross-guidance module, and feature recombination and reinforcement are performed in the 3D encoding module;
first, the two orthogonal X-ray images are respectively input into two identical convolutional networks to extract features; then, to exploit the complementarity of the information between the X-ray images of the two directions for cross guidance, the features extracted from the X-ray images of the different directions at each feature layer are input into the feature fusion module for fusion, yielding features of the orthogonal X-ray images at different scales.
Considering the characteristics of the two orthogonal X-ray images, the orthogonal information of the two directions can be extracted so that the directions complement each other; however, directly adding the two causes a great deal of data redundancy, which increases the computational cost, and the inaccurate redundant information degrades the final reconstruction. To overcome this problem, the whole information fusion module is divided into two parts: a cross-guidance part and a feature aggregation part. The cross-guidance part mainly uses channel attention and spatial attention to extract weights for the X-ray image information of the two orthogonal directions, achieving guided learning. The feature aggregation part inputs the two interaction-processed features into the global information module to acquire context information, which helps provide rich three-dimensional structural information for the decoding stage.
To enhance the feature response of the three-dimensional structure through mutual guidance between the X-ray images of the two orthogonal viewports, a feature fusion module based on cross guidance is constructed, as shown in FIG. 3. First, the spatial attention of each extracted input feature, denoted SA, is computed and used as the spatial weight of the corresponding directional feature.
At the same time, the channel attention of the input features of both directions, denoted CA, is also extracted to determine which channels should be focused on. The cross-channel guidance module should take full account of the complementarity of cross-channel information. For the frontal-view features, rather than fusing all of the frontal-view feature information with the lateral-view feature information, the spatial weight derived from the frontal-view features is multiplied by the lateral-view feature information, realizing complementarity in both channel and space. Specifically, the feature Fx extracted from the frontal X-ray image and the feature Fy extracted from the lateral X-ray image are input into the fusion module. The upper half of the fusion module first applies the channel attention mechanism to Fx and the spatial attention mechanism to the lateral-view features Fy, and then fuses their features; the lower half treats the lateral-view features symmetrically, multiplying the channel weight of the lateral-view features by those features. Next, the fused features are input into the global feature fusion module, as shown in FIG. 4.
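The following is a sketch of this cross-guided fusion step under one plausible reading of the description: each view re-weights itself with its own channel attention (CA), while the spatial attention (SA) map computed from one view is applied cross-wise to the other view's features. The CBAM-style forms of CA and SA are assumptions; the patent gives no exact formulas.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):          # CA: which channels to focus on
    def __init__(self, ch, r=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // r, 1),
            nn.ReLU(inplace=True), nn.Conv2d(ch // r, ch, 1), nn.Sigmoid())
    def forward(self, x):
        return self.fc(x)                   # (B, C, 1, 1) channel weights

class SpatialAttention(nn.Module):          # SA: where to focus
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, x):
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return torch.sigmoid(self.conv(s))  # (B, 1, H, W) spatial weights

class CrossGuidedFusion(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.ca_x, self.ca_y = ChannelAttention(ch), ChannelAttention(ch)
        self.sa_x, self.sa_y = SpatialAttention(), SpatialAttention()
    def forward(self, fx, fy):
        # Frontal branch: Fx re-weighted by its own channel attention, plus the
        # lateral features Fy weighted by the spatial map derived from Fx.
        fx_out = fx * self.ca_x(fx) + fy * self.sa_x(fx)
        # Lateral branch: symmetric cross guidance.
        fy_out = fy * self.ca_y(fy) + fx * self.sa_y(fy)
        return fx_out, fy_out

fuse = CrossGuidedFusion(64)
fx2, fy2 = fuse(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))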
The global information fusion module aims to acquire strong features from the information of the two orthogonal X-ray images. Inspired by the pyramid pooling module, a Global Information Module is designed, and Fx and Fy are each passed through it for feature aggregation. Global context information is important for locating the main regions of each category, and region-based context information helps preserve feature integrity and effectively suppresses noise in the X-ray image.
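Since the text states only that the design is inspired by the pyramid pooling module, the sketch below uses a PSPNet-style layout; the bin sizes, the channel split, and the final 3x3 projection are assumptions (the channel count is assumed divisible by the number of bins).

import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalInformationModule(nn.Module):
    def __init__(self, ch, bins=(1, 2, 3, 6)):
        super().__init__()
        # One pooling branch per bin size, each producing ch // len(bins) channels.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(ch, ch // len(bins), 1, bias=False),
                          nn.ReLU(inplace=True))
            for b in bins)
        # Project pooled context + original features back to ch channels.
        self.project = nn.Conv2d(ch * 2, ch, kernel_size=3, padding=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        ctx = [F.interpolate(b(x), size=(h, w), mode='bilinear',
                             align_corners=False) for b in self.branches]
        return self.project(torch.cat([x, *ctx], dim=1))

gim = GlobalInformationModule(64)
print(gim(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])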
Next, the multi-scale fused features are concatenated to form a multi-scale feature vector, and we design a 3D decoder to decode this multi-scale, multi-modal feature vector. To avoid losing detail during decoding, the multi-modal features of the different layers are connected into the corresponding layer of the 3D decoder via cross-layer connections; after the features pass through the decoder, the volume data corresponding to the X-ray images is estimated by a 3D convolution module.
In the decoder stage, the output features of the encoder and the outputs out1-out4 of the fusion modules are concatenated and input into the 3D decoder. The output of each encoding-stage fusion module is added to the corresponding decoder in the decoder stream; this not only recovers spatial details from the fusion module but also extracts the details of the low-resolution input features from the previous decoder layer. Each decoder module then upsamples its input features to twice the input size and halves the number of channels; the network structure of the whole 3D decoder is shown in FIG. 5. Finally, we obtain 512 x 512 volume data.
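A sketch of such a 4-layer 3D decoder follows: each stage doubles the spatial size while halving the channel count, and a skip from the corresponding fusion output is concatenated in at each stage. The channel-folding lift from 2D feature maps to 3D tensors, the seed size, and all channel counts are assumptions; the patent does not state how the 2D features are lifted into the 3D stream.

import torch
import torch.nn as nn

def lift_2d_to_3d(feat2d, depth):
    # Assumed 2D->3D bridge: fold part of the channel dim into a depth dim.
    b, c, h, w = feat2d.shape
    return feat2d.view(b, c // depth, depth, h, w)

class Decoder3D(nn.Module):
    def __init__(self, ch=256, layers=4):
        super().__init__()
        self.ups, self.mix = nn.ModuleList(), nn.ModuleList()
        for _ in range(layers):
            # Upsample to twice the input size while halving the channels.
            self.ups.append(nn.ConvTranspose3d(ch, ch // 2, kernel_size=2, stride=2))
            # After concatenating the matching skip, mix back down to ch // 2.
            self.mix.append(nn.Sequential(
                nn.Conv3d(ch, ch // 2, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            ch //= 2
        self.head = nn.Conv3d(ch, 1, kernel_size=1)  # 3D conv estimating the volume

    def forward(self, x, skips):
        # skips: fusion outputs lifted to 3D, deepest first; each skip is assumed
        # to have half the channels and twice the size of its stage input.
        for up, mix, skip in zip(self.ups, self.mix, skips):
            x = up(x)
            x = mix(torch.cat([x, skip], dim=1))     # cross-layer connection
        return self.head(x)

dec = Decoder3D()
seed = lift_2d_to_3d(torch.randn(1, 2048, 8, 8), depth=8)   # (1, 256, 8, 8, 8)
skips = [torch.randn(1, 256 >> (i + 1), 16 << i, 16 << i, 16 << i) for i in range(4)]
print(dec(seed, skips).shape)                        # torch.Size([1, 1, 128, 128, 128])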
The foregoing has shown and described the basic principles, main features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the above embodiments and descriptions merely illustrate the principles of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (4)

1. A multi-view X-ray image three-dimensional reconstruction method for a dual-mode G-arm X-ray machine, characterized by comprising the following steps: constructing a deep learning network, and performing cross guidance and cross fusion between the two orthogonal X-ray images (a frontal view and a lateral view) to recover the three-dimensional volume data represented by the X-ray images.
2. The multi-view X-ray image three-dimensional reconstruction method for a dual-mode G-arm X-ray machine according to claim 1, wherein the deep learning network mainly consists of a dual-encoder/3D-decoder framework;
the two orthogonal X-ray images are input into two parallel encoders, which reduce the resolution of the input images through several convolution layers, extract features at multiple scales, and construct a feature pyramid of gradually increasing resolution;
meanwhile, the features of each layer are cross-fused using the idea of cross guidance, the cross-fusion results of all encoder layers are added to the 3D decoder in a cascading manner, the features output by each cross-fusion module are then concatenated and input into the 3D decoder, and the three-dimensional volume data corresponding to the X-ray images is obtained through a 4-layer 3D decoder.
3. The multi-view X-ray image three-dimensional reconstruction method for a dual-mode G-arm X-ray machine according to claim 2, wherein the cross-fusion module is divided into two parts: a feature fusion module and a global information fusion module;
the feature fusion module mainly uses channel attention and spatial attention to extract weights for the X-ray image information of the two orthogonal directions, performs feature fusion using the weight maps of the different directions, and then inputs the two interaction-fused features into the global information fusion module to acquire context information.
4. The multi-view X-ray image three-dimensional reconstruction method for a dual-mode G-arm X-ray machine according to claim 3, wherein the feature fusion module consists of two parts: the first is a cross-view feature fusion module and the second is a single-view X-ray image global feature fusion module; in the encoding stage, the information of the two orthogonal X-ray images is cross-fused in the cross-fusion module, further cross-guided learning is performed in the cross-guidance module, and feature recombination and reinforcement are performed in the 3D encoding module;
first, the two orthogonal X-ray images are respectively input into two identical convolutional networks to extract features; then, to exploit the complementarity of the information between the X-ray images of the two directions for cross guidance, the features extracted from the X-ray images of the different directions at each feature layer are input into the feature fusion module for fusion, yielding features of the orthogonal X-ray images at different scales.
CN202310646077.5A 2023-06-02 2023-06-02 Multi-view X-ray image three-dimensional reconstruction method for dual-mode G-arm X-ray machine Pending CN116385665A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310646077.5A CN116385665A (en) 2023-06-02 2023-06-02 Multi-view X-ray image three-dimensional reconstruction method for dual-mode G-arm X-ray machine
CN202311327298.2A CN117351150A (en) 2023-06-02 2023-10-13 Three-dimensional reconstruction method for orthogonal X-ray images of G-arm X-ray machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310646077.5A CN116385665A (en) 2023-06-02 2023-06-02 Multi-view X-ray image three-dimensional reconstruction method for dual-mode G-arm X-ray machine

Publications (1)

Publication Number Publication Date
CN116385665A true CN116385665A (en) 2023-07-04

Family

ID=86971416

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310646077.5A Pending CN116385665A (en) 2023-06-02 2023-06-02 Multi-view X-ray image three-dimensional reconstruction method for dual-mode G-arm X-ray machine
CN202311327298.2A Pending CN117351150A (en) 2023-06-02 2023-10-13 Three-dimensional reconstruction method for orthogonal X-ray images of G-arm X-ray machine

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311327298.2A Pending CN117351150A (en) 2023-06-02 2023-10-13 Three-dimensional reconstruction method for orthogonal X-ray images of G-arm X-ray machine

Country Status (1)

Country Link
CN (2) CN116385665A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105475A (en) * 2019-12-24 2020-05-05 电子科技大学 Bone three-dimensional reconstruction method based on orthogonal angle X-ray
WO2020156195A1 (en) * 2019-01-30 2020-08-06 腾讯科技(深圳)有限公司 Ct image generation method and apparatus, computer device and computer-readable storage medium
CN114332510A (en) * 2022-01-04 2022-04-12 安徽大学 Hierarchical image matching method
CN115035171A (en) * 2022-05-31 2022-09-09 西北工业大学 Self-supervision monocular depth estimation method based on self-attention-guidance feature fusion
KR20220129405A (en) * 2021-03-16 2022-09-23 조선대학교산학협력단 A method and apparatus for image segmentation using global attention-based convolutional network
CN115760944A (en) * 2022-11-29 2023-03-07 长春理工大学 Unsupervised monocular depth estimation method fusing multi-scale features
CN116092185A (en) * 2022-12-22 2023-05-09 山东大学 Depth video behavior recognition method and system based on multi-view feature interaction fusion

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020156195A1 (en) * 2019-01-30 2020-08-06 腾讯科技(深圳)有限公司 Ct image generation method and apparatus, computer device and computer-readable storage medium
CN111105475A (en) * 2019-12-24 2020-05-05 电子科技大学 Bone three-dimensional reconstruction method based on orthogonal angle X-ray
KR20220129405A (en) * 2021-03-16 2022-09-23 조선대학교산학협력단 A method and apparatus for image segmentation using global attention-based convolutional network
CN114332510A (en) * 2022-01-04 2022-04-12 安徽大学 Hierarchical image matching method
CN115035171A (en) * 2022-05-31 2022-09-09 西北工业大学 Self-supervision monocular depth estimation method based on self-attention-guidance feature fusion
CN115760944A (en) * 2022-11-29 2023-03-07 长春理工大学 Unsupervised monocular depth estimation method fusing multi-scale features
CN116092185A (en) * 2022-12-22 2023-05-09 山东大学 Depth video behavior recognition method and system based on multi-view feature interaction fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANPING FU et al.: "CGFNet: cross-guided fusion network for RGB-thermal semantic segmentation", The Visual Computer, pages 3243-3252 *
ZHANG JIE; WANG FEI; LI CHANGHONG: "Application of the G-arm X-ray machine in kyphoplasty for osteoporotic vertebral compression fractures" (in Chinese), Clinical Misdiagnosis & Mistherapy, no. 04, pages 61-65 *

Also Published As

Publication number Publication date
CN117351150A (en) 2024-01-05

Similar Documents

Publication Publication Date Title
Lyu et al. Estimating dual-energy CT imaging from single-energy CT data with material decomposition convolutional neural network
CN110009669B (en) 3D/2D medical image registration method based on deep reinforcement learning
US10143433B2 (en) Computed tomography apparatus and method of reconstructing a computed tomography image by the computed tomography apparatus
CN111260748B (en) Digital synthesis X-ray tomography method based on neural network
CN108961237A (en) A kind of low-dose CT picture breakdown method based on convolutional neural networks
CN111105475B (en) Bone three-dimensional reconstruction method based on orthogonal angle X-ray
CN113052936A (en) Single-view CT reconstruction method integrating FDK and deep learning
CN107945850A (en) Method and apparatus for handling medical image
CN112819914A (en) PET image processing method
Nguyen et al. 3D Unet generative adversarial network for attenuation correction of SPECT images
CN112967379B (en) Three-dimensional medical image reconstruction method for generating confrontation network based on perception consistency
Kyung et al. Perspective projection-based 3d CT reconstruction from biplanar X-rays
Gao et al. 3DSRNet: 3D Spine Reconstruction Network Using 2D Orthogonal X-ray Images Based on Deep Learning
KR102382602B1 (en) 3D convolutional neural network based cone-beam artifact correction system and method
CN115300809B (en) Image processing method and device, computer equipment and storage medium
CN115908610A (en) Method for obtaining attenuation correction coefficient image based on single-mode PET image
CN113226184A (en) Method for metal artifact reduction in X-ray dental volume tomography
CN116385665A (en) Multi-view X-ray image three-dimensional reconstruction method for dual-mode G-arm X-ray machine
CN116363248A (en) Method, system, equipment and medium for synthesizing CT image by single plane X-Ray image
Liugang et al. Metal artifact reduction method based on noncoplanar scanning in CBCT imaging
CN113256754B (en) Stacking projection reconstruction method for segmented small-area tumor mass
CN111583354B (en) Training method of medical image processing unit and medical image motion estimation method
CN114305469A (en) Low-dose digital breast tomography method and device and breast imaging equipment
Lyu et al. Dual-energy CT imaging from single-energy CT data with material decomposition convolutional neural network
Júnior et al. Ensemble of convolutional neural networks for sparse-view cone-beam computed tomography

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20230704