CN113658116A - Artificial intelligence method and system for generating medical images with different body positions - Google Patents
- Publication number
- CN113658116A (application CN202110877849.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- medical images
- images
- body positions
- artificial intelligence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Abstract
The invention provides an artificial intelligence method for generating medical images of different body positions, comprising the following steps: acquiring medical images of different body positions of a subject as training samples; preprocessing the medical images of the training samples; training an artificial intelligence network model with the processed training samples; and acquiring an image of one body position of a new subject and generating images of the other body positions with the trained model. With this image generation method, after the patient completes the scan of one body position, images of the other body positions can be generated by the model without any further scanning equipment; the generated images are highly accurate, and generation is very fast.
Description
Technical Field
The invention relates to the field of medical imaging, and in particular to an artificial intelligence method and system for generating medical images of different body positions.
Background
There are situations in the biomedical field where images of different body positions are needed, including but not limited to rotating the patient while lying down, rotating the patient while sitting or standing, and imaging the patient lying down but treating the patient sitting or standing. With current scanning equipment, however, obtaining images of different body positions requires scanning the subject in each position, which lengthens the scanning time, increases the load on the imaging system and, for systems with ionizing radiation such as CT, increases the dose received by the subject. If a set of images is required every few degrees of rotation (as in rotational radiotherapy), this is difficult to achieve with existing imaging techniques.
The following discusses, as a background example, MRI-guided rotational intensity-modulated treatment in tumor radiotherapy.
Tumor incidence in China presents a severe situation, and radiotherapy, an indispensable component of tumor treatment, has developed into an image-guided three-dimensional mode. In three-dimensional fixed-field or rotational intensity-modulated radiotherapy, relative rotation between the patient and the treatment beam is required to focus dose on the tumor while protecting normal tissue to the greatest extent. In current radiotherapy, the most common approach is to rotate the treatment head. This requires the accelerator to be designed with a rotatable treatment head and places high demands on the precision of its rotation. With the development of radiotherapy technology, MRI-guided radiotherapy, proton and heavy-ion radiotherapy and the like place even higher demands on treatment head design. For example, in MRI-guided radiotherapy equipment a high-field-strength superconducting magnet and the accelerator beamline must be integrated with the treatment head, and the treatment head of a heavy-ion radiotherapy apparatus weighs hundreds of tons. To overcome the complex design and limited rotation precision of the treatment head, researchers have proposed a new mode of patient-rotation radiotherapy: during treatment the gantry is fixed, and three-dimensional treatment and/or image guidance is achieved by rotating the patient. However, because human tissue is affected by gravity, the patient's body surface and internal organs shift and deform in different body positions. For accurate delineation of targets and organs at risk and for dose calculation, an image of each irradiated body position is needed. Current simulation positioning equipment, however, cannot perform simulation positioning scanning and image reconstruction while the patient rotates.
Disclosure of Invention
Aiming at the problem that conventional medical imaging equipment cannot readily perform multi-body-position, let alone rotational, scanning and reconstruction, the invention provides a method and system for generating medical images of different body positions based on an artificial intelligence algorithm, so as to obtain medical images of different body positions.
An artificial intelligence method for generating medical images of different body positions comprises the following steps:
step one), acquiring medical images of different body positions of a subject to obtain training samples of different body positions;
step two), preprocessing the medical images of different body positions;
step three), carrying out artificial intelligence training with the processed training samples, and establishing a model that generates images of the other body positions from an image of one body position;
and step four), acquiring an image of one body position of a new subject, and generating images of the other body positions with the trained model.
Further, the medical image is an MRI image or a CT image.
Further, the specific method for preprocessing the medical images in step two) is as follows: register the training data of the different body positions and transform the images according to the registration parameters; correct the image non-uniformity caused by the imaging equipment with a bias-field correction algorithm; and further normalize the input medical images with histogram equalization or histogram matching to ensure a consistent distribution of the image gray-value range.
Further, in the third step), a deep learning network method is adopted to train the model, and other body position images are generated from the image of one body position.
Further, in step three) a generative adversarial network is used for model training, specifically:
the image of one body position is input to the generative adversarial network, and images of the other body positions are output;
the basic structure of the generative adversarial network model is a generator and a discriminator, each formed by a convolutional neural network; for the training data, the network randomly selects a batch of images each time and trains the discriminator and the generator alternately; two dual-structure cycle generative adversarial networks are used to better account for the deformation caused by gravity on the organs and by joint movement in the different body positions.
Further, in step three) a deep learning method based on a convolutional neural network is used for model training, specifically:
the image of one body position is input to the convolutional neural network, and images of the other body positions are output;
the convolutional neural network comprises an input layer, convolutional layers and downsampling layers; it adopts a dilated (atrous) convolution filter structure to receive the input image information directly; it adopts the 'short-circuit' design of a local residual network, in which the output of earlier layers skips several layers and is fed directly into the input of later layers; and the decoder learns the upsampling parameters with a deconvolution (transposed convolution) upsampling network.
Further, the medical images of different body positions include the supine, prone, lateral (side-lying), standing and sitting positions.
Further, the medical images in the supine position, the standing position or the sitting position are used in the step three) and the step four) to generate the medical images of other body positions.
Further, the medical images generated in step four) are converted to DICOM format for use in scenarios requiring medical images of different body positions.
The application also provides an artificial intelligence system for generating medical images of different body positions, comprising an imaging system for acquiring medical images, characterized in that the imaging system collects a medical image of one body position of a subject and transmits it to a body position image server;
the image server comprises an image preprocessing module; medical images acquired by the imaging system are processed by the image preprocessing module and then input to an other-body-position image generation module, in which a trained artificial intelligence image generation model is arranged; the artificial intelligence image generation model can automatically generate images of the other body positions from the medical image of one body position;
the images output by the artificial intelligence image generation model are post-processed, converted to DICOM format and stored in a storage array for clinical application.
Preferably, the artificial intelligence image generation model is trained with a deep learning method, either a generative adversarial network or a directly built convolutional neural network.
Preferably, the imaging system collects a lying-position image of the human body, and images of the other body positions are generated with the artificial intelligence image generation model.
Preferably, the imaging system collects a standing-position image of the human body, and images of the other body positions are generated with the artificial intelligence image generation model.
With this image generation method, after the patient completes the scan of one body position, images of the other body positions can be generated by the model without any further scanning equipment, each body position reaching high accuracy. Each image is generated very quickly, on the order of milliseconds.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required in the description of the embodiments are briefly introduced below. It should be apparent that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained without creative efforts, and the technical solutions directly obtained from the drawings also belong to the protection scope of the present invention.
FIG. 1 is a schematic diagram of a cycle generative adversarial network;
FIG. 2 is a diagram of a medical image generation depth convolution neural network architecture;
FIG. 3 is a schematic diagram of a residual network "short-circuit" design;
FIG. 4 is a diagram of the evaluation of the accuracy of the generated image;
FIG. 5 is a flow chart of a system for generating medical images of different body positions by an artificial intelligence algorithm.
Detailed Description
The image generation method of the present invention can be used to generate various medical images such as CT and MRI. This embodiment is explained by generating prone-position MRI from supine-position MRI scans:
first, MRI images of subjects in the supine position and MRI images of other body positions are acquired to build historical training samples of the different body positions. The training samples can cover all parts of the body and positions at different angles. In this example, 25 healthy male volunteers were selected, all of whom received MRI scans in the supine and prone positions; the scan sequence was a T1-weighted fast spin echo sequence.
Then, the DICOM image files of the training samples are preprocessed. In this embodiment, the N3 algorithm (or an algorithm such as N4) is adopted for bias-field correction to reduce the influence of MR image non-uniformity on the trained model. The supine- and prone-position MR images are rigidly registered, and gray-value normalization and histogram matching are applied to the MR simulation positioning images.
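The histogram-matching step above can be sketched in a few lines of NumPy: each gray value of the source image is mapped to the reference gray value at the same cumulative-histogram quantile. This is an illustrative sketch only — the function name is ours, not the patent's, and the bias-field correction and rigid registration steps (normally done with dedicated tools such as N3/N4 implementations) are not reproduced here:

```python
import numpy as np

def match_histogram(source, reference):
    """Map the gray values of `source` so its histogram matches `reference`.

    Both inputs are NumPy arrays; shapes need not agree.
    """
    src = source.ravel()
    ref = reference.ravel()
    # Quantile (empirical CDF value) of each source gray level.
    src_values, src_idx, src_counts = np.unique(
        src, return_inverse=True, return_counts=True)
    src_cdf = np.cumsum(src_counts).astype(np.float64) / src.size
    # Empirical CDF of the reference image over its own gray levels.
    ref_values, ref_counts = np.unique(ref, return_counts=True)
    ref_cdf = np.cumsum(ref_counts).astype(np.float64) / ref.size
    # For each source quantile, take the reference gray level at that quantile.
    matched_values = np.interp(src_cdf, ref_cdf, ref_values)
    return matched_values[src_idx].reshape(source.shape)
```

When the source is a gray-shifted copy of the reference, the mapping reproduces the reference values exactly, which is the consistency of gray-value distributions the preprocessing step is after.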
After preprocessing, the training samples are trained separately for the different body positions, and MRI generation models for the different body positions are established. The invention adopts artificial intelligence algorithms, including but not limited to convolutional neural networks for image generation, generative adversarial networks, and other recent deep learning methods.
The artificial intelligence algorithm adopted in this example is a generative adversarial network used for model training. Its basic structure is a generator and a discriminator, each formed by a convolutional neural network. For the training data, the network randomly selects a batch of images each time and trains the discriminator and the generator alternately.
To better account for the organ deformation caused by gravity and joint movement in different body positions, two dual-structure cycle generative adversarial networks are further used (FIG. 1).
Two generator networks are employed to learn the forward and reverse mappings between the standard supine-position image domain X (x ∈ X) and the other-body-position image domain Y (y ∈ Y): G: X → Y and F: Y → X. To train the forward mapping, the supine-position images are fed to the generator network as input, the other-body-position images are output, and the parameters of the generator are trained. The generator converts the input supine-position image into an MR image of the other body position by learning the mapping function; the discriminator judges whether an input image is a real MR image of the other body position or a generated one, and feeds the result back to the generator. Training the inverse mapping F ensures a meaningful correspondence between the input standard supine-position image and the generated other-body-position image, i.e., that input and output share features. Two discriminators $D_X$ and $D_Y$ are used, where $D_X$ distinguishes between the images {x} and {F(y)}, and $D_Y$ distinguishes between the images {y} and {G(x)}.
The network contains two types of losses: the classical adversarial loss and the cycle-consistency loss. The adversarial loss matches the distribution of the generated images to that of the target images. For example, for the mapping G and its corresponding discriminator $D_Y$, the standard GAN adversarial loss is:

$$\mathcal{L}_{GAN}(G, D_Y, X, Y) = \mathbb{E}_{y \sim p_{data}(y)}[\log D_Y(y)] + \mathbb{E}_{x \sim p_{data}(x)}[\log(1 - D_Y(G(x)))]$$

The cycle-consistency loss of the dual network prevents the learned mappings G and F from contradicting each other, improving the prediction accuracy for the different body-position images in rotational treatment. It is divided into a forward cycle loss and a backward cycle loss. For each standard supine-position image x, the cycle adversarial generation network converts x to the other body-position image G(x) and back to the starting image domain X, giving the forward cycle loss, i.e., x → G(x) → F(G(x)) ≈ x. Similarly, for each image y the backward cycle-consistency loss is computed, i.e., y → F(y) → G(F(y)) ≈ y. The cycle-consistency loss is therefore defined as:

$$\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x \sim p_{data}(x)}[\|F(G(x)) - x\|_1] + \mathbb{E}_{y \sim p_{data}(y)}[\|G(F(y)) - y\|_1]$$

The comprehensive overall loss is:

$$\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{GAN}(G, D_Y, X, Y) + \mathcal{L}_{GAN}(F, D_X, Y, X) + \lambda\,\mathcal{L}_{cyc}(G, F)$$

where λ controls the relative magnitude of the two types of losses. The generator and the discriminators thus form a minimax adversarial process, and the finally desired optimal generators are:

$$G^*, F^* = \arg\min_{G,F}\,\max_{D_X, D_Y}\,\mathcal{L}(G, F, D_X, D_Y)$$
confrontation discriminator DxAnd DyTo determine whether the output of the two mappings F and G is true or false, i.e. given a standard supine position image x, the other generated body position image y is output for confusing the discriminator DyAnd D isxAn attempt is made to distinguish between the image reconstructed by mapping F and the real supine-position image x. The generator and the discriminator are trained together for improving the performance of each other in a cooperative way to complete the image transformation between the two domains, and the change of the geometric shape and the spatial relation of the organ caused by the body position influence is accurately predicted.
The generator network consists of an encoder and a decoder. The encoder consists of convolutional layers followed by 9 ResNet blocks, each containing two 3×3 convolutional layers (stride 1) with instance normalization, specifically: 3×3 conv, InstanceNorm, ReLU, 3×3 conv, InstanceNorm. The decoder consists of two transposed convolution layers followed by one convolutional layer that produces the final image.
The discriminator network employs 70×70 PatchGANs, which classify corresponding 70×70 image patches as real or fake. Such a patch-level discriminator has far fewer parameters than a whole-image discriminator and, being fully convolutional, can be applied to images of arbitrary size.
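The 70×70 figure is the receptive field of a single discriminator output unit. Under the commonly used PatchGAN layer configuration — three stride-2 and two stride-1 4×4 convolutions, an assumption on our part since the patent does not list the discriminator's layers — the standard receptive-field recurrence reproduces it:

```python
def receptive_field(layers):
    """Receptive field of one output unit for a stack of conv layers.

    `layers` is a list of (kernel_size, stride) pairs, ordered input to output.
    """
    rf, jump = 1, 1  # jump = spacing of adjacent output units, in input pixels
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Common 70x70 PatchGAN stack (assumed, not specified in the patent).
patchgan_layers = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
```

Running `receptive_field(patchgan_layers)` returns 70, i.e., each "real/fake" decision is based on a 70×70 neighborhood of the input image.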
The trained model can then generate MRI medical images. The different body-position images of each patient can be converted to DICOM format and transmitted back to the planning system to define the subject's regions of interest in the different body positions, including the target area and/or normal tissues and organs.
Alternatively, a convolutional neural network can be built directly and its model parameters trained to generate the other body-position images, as shown in FIGS. 2 and 3. The convolutional neural network comprises an encoder and a decoder. The encoder consists of an input layer, convolutional layers and downsampling layers, with a deep network configuration to extract the most essential information of the image. To cope with individual differences among medical images, advanced convolution structures are introduced on top of the conventional convolutional layers to extract more image detail and obtain a more accurate model. A dilated (atrous) convolution filter structure receives the input image information directly and is used in the middle and deep layers, allowing the network to capture more features within a larger receptive field, perform multi-scale, multi-level feature learning, and extract high-dimensional features from the image.
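A dilated (atrous, sometimes rendered "hollow") filter widens the receptive field by sampling the input with gaps between taps, without adding parameters. A minimal 1-D NumPy illustration of the sampling pattern (the patent's network is 2-D; this sketch is ours and only shows the mechanism):

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation=1):
    """'Valid' 1-D correlation with a dilated kernel (no kernel flipping)."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # effective extent of the kernel
    out_len = len(signal) - span + 1
    out = np.zeros(out_len)
    for i in range(out_len):
        taps = signal[i : i + span : dilation]  # every `dilation`-th sample
        out[i] = np.dot(taps, kernel)
    return out
```

With dilation 2, a 2-tap kernel spans 3 input samples instead of 2, so stacking dilated layers grows the receptive field much faster than stacking ordinary convolutions.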
The 'short-circuit' design of a local residual network is introduced: the data output of the first layers skips several layers and is fed directly into the input of later data layers (FIG. 3). These skip connections improve the coarse pixel localization of the upsampling, which in turn handles the differences between body positions better.
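The 'short-circuit' is an identity path added around a block of layers, so the block only has to learn a correction to its input. A NumPy sketch of the idea (function names are ours):

```python
import numpy as np

def residual_block(x, transform):
    """y = x + F(x): the identity path carries x (and its gradient) past F unchanged."""
    return x + transform(x)

def residual_stack(x, transforms):
    """Chain several residual blocks; each one skips its own transform."""
    for f in transforms:
        x = residual_block(x, f)
    return x
```

If every transform outputs zeros, the stack reduces to the identity mapping, which is why deep residual networks remain easy to train and why the skip connections preserve coarse spatial information through the decoder.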
The decoder uses a deconvolution (transposed convolution) upsampling network to learn the upsampling parameters, which reduces the number of parameters and the time consumed on the multi-scale problem and reconstructs high-quality detail in the different body-position images.
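Deconvolution (transposed convolution) upsampling learns its interpolation kernel instead of using a fixed one. In 1-D the operation scatters a scaled copy of the kernel into the output for every input sample; this is an illustrative sketch of the mechanism, not the patent's 2-D decoder:

```python
import numpy as np

def transposed_conv1d(signal, kernel, stride=2):
    """Learnable upsampling: each input sample adds v * kernel at offset i * stride."""
    k = len(kernel)
    out = np.zeros((len(signal) - 1) * stride + k)
    for i, v in enumerate(signal):
        out[i * stride : i * stride + k] += v * kernel
    return out
```

With stride 2 the output is roughly twice as long as the input; where the scattered kernels overlap (stride smaller than the kernel), their contributions sum, which is what lets the network learn smooth upsampling.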
A series of data augmentation techniques is applied throughout training to obtain a more stable prediction model, such as left-right flipping, random cropping, random scaling and random rotation.
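The augmentations listed above can be composed with plain NumPy operations. A sketch for 2-D images (the crop size and flip probability are illustrative choices of ours, not from the patent; random scaling is omitted to keep the sketch dependency-free, and rotation is restricted to multiples of 90 degrees):

```python
import numpy as np

def augment(image, rng):
    """Random left-right flip, 90-degree rotation and crop of a 2-D image."""
    if rng.random() < 0.5:
        image = image[:, ::-1]                            # left-right flip
    image = np.rot90(image, k=int(rng.integers(4)))       # rotate 0/90/180/270 deg
    h, w = image.shape
    ch, cw = h - 2, w - 2                                 # crop 2 pixels per axis
    top = int(rng.integers(h - ch + 1))
    left = int(rng.integers(w - cw + 1))
    return image[top:top + ch, left:left + cw]
```

In practice each training batch would be passed through such a pipeline so the model never sees exactly the same sample twice.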
FIG. 4 shows scanned and generated images of a subject in different body positions: the first column is the scanned supine-position image, the second column is the generated prone-position image, and the third column is the scanned prone-position image.
With this method, after the scan of one body position of the subject is completed, MRI of the other body positions can be generated by the model without any further scanning equipment, each body position reaching high accuracy. Each image is generated very quickly, on the order of milliseconds.
The image generation system is shown in FIG. 5. The image server in the system comprises an image preprocessing module; medical images acquired by the imaging system are processed by the image preprocessing module and then input to an other-body-position image generation module, in which a trained artificial intelligence image generation model is arranged; the model automatically generates images of the other body positions from the medical image of one body position acquired by the imaging system. The images output by the model are post-processed, converted to DICOM format and stored in a storage array for clinical application.
Claims (10)
1. An artificial intelligence method for generating medical images of different body positions, characterized by comprising the following steps:
step one), acquiring medical images of different body positions of a subject to obtain training samples of different body positions;
step two), preprocessing the medical images of different body positions;
step three), carrying out artificial intelligence training with the processed training samples, and establishing a model that generates images of the other body positions from an image of one body position;
and step four), acquiring an image of one body position of a new subject, and generating images of the other body positions with the trained model.
2. An artificial intelligence method for generating medical images of different body positions as claimed in claim 1, wherein said medical images are MRI images or CT images.
3. The artificial intelligence method for generating medical images of different body positions as claimed in claim 1, wherein the specific method for preprocessing the medical images in step two) is as follows: register the training data of the different body positions and transform the images according to the registration parameters; correct the image non-uniformity caused by the imaging equipment with a bias-field correction algorithm; and further normalize the input medical images with histogram equalization or histogram matching to ensure a consistent distribution of the image gray-value range.
4. An artificial intelligence method for generating medical images of different body positions as claimed in claim 1, wherein in the third step), a deep learning network method is adopted to train the model, and images of other body positions are generated from the image of one body position.
5. The artificial intelligence method for generating medical images of different body positions as claimed in claim 4, wherein in step three) a generative adversarial network is used for model training, specifically:
the image of one body position is input to the generative adversarial network, and images of the other body positions are output;
the basic structure of the generative adversarial network model is a generator and a discriminator, each formed by a convolutional neural network; for the training data, the network randomly selects a batch of images each time and trains the discriminator and the generator alternately; two dual-structure cycle generative adversarial networks are used to better account for the deformation caused by gravity on the organs and by joint movement in the different body positions.
6. The artificial intelligence method for generating medical images of different body positions as claimed in claim 4, wherein in step three) a deep learning method based on a convolutional neural network is used for model training, specifically:
the image of one body position is input to the convolutional neural network, and images of the other body positions are output;
the convolutional neural network comprises an input layer, convolutional layers and downsampling layers; it adopts a dilated (atrous) convolution filter structure to receive the input image information directly; it adopts the 'short-circuit' design of a local residual network, in which the output of earlier layers skips several layers and is fed directly into the input of later layers; and the decoder learns the upsampling parameters with a deconvolution (transposed convolution) upsampling network.
7. An artificial intelligence method for generating medical images of different body positions as claimed in claim 1, wherein the medical images of different body positions include the supine, prone, lateral (side-lying), standing and sitting positions.
8. An artificial intelligence method for generating medical images of different body positions as claimed in claim 7, wherein the medical images of the supine, standing or sitting positions are used in steps three) and four) to generate medical images of other body positions.
9. The artificial intelligence method for generating medical images of different body positions as claimed in claim 1, wherein the medical images generated in step four) are converted to DICOM format for use in scenarios requiring medical images of different body positions.
10. An artificial intelligence system for generating medical images of different body positions, comprising an imaging system for acquiring medical images, characterized in that the imaging system collects a medical image of one body position of a subject and transmits it to a body position image server;
the image server comprises an image preprocessing module; medical images acquired by the imaging system are processed by the image preprocessing module and then input to an other-body-position image generation module, in which a trained artificial intelligence image generation model is arranged; the artificial intelligence image generation model can automatically generate images of the other body positions from the medical image of one body position;
the images output by the artificial intelligence image generation model are post-processed, converted to DICOM format and stored in a storage array for clinical application;
preferably, the artificial intelligence image generation model is trained with a deep learning method, either a generative adversarial network or a directly built convolutional neural network;
preferably, the imaging system collects a lying-position image of the human body, and images of the other body positions are generated with the artificial intelligence image generation model;
preferably, the imaging system collects a standing-position image of the human body, and images of the other body positions are generated with the artificial intelligence image generation model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110877849.7A CN113658116B (en) | 2021-07-30 | 2021-07-30 | Artificial intelligence method and system for generating medical images with different body positions |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110877849.7A CN113658116B (en) | 2021-07-30 | 2021-07-30 | Artificial intelligence method and system for generating medical images with different body positions |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113658116A true CN113658116A (en) | 2021-11-16 |
CN113658116B CN113658116B (en) | 2023-09-15 |
Family
ID=78490175
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110877849.7A Active CN113658116B (en) | 2021-07-30 | 2021-07-30 | Artificial intelligence method and system for generating medical images with different body positions |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113658116B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110070935A (en) * | 2019-03-20 | 2019-07-30 | 中国科学院自动化研究所 | Medical image synthetic method, classification method and device based on confrontation neural network |
JP2019198376A (en) * | 2018-05-14 | 2019-11-21 | キヤノンメディカルシステムズ株式会社 | Medical image processor, medical image processing method, and medical image processing system |
CN110781976A (en) * | 2019-10-31 | 2020-02-11 | 重庆紫光华山智安科技有限公司 | Extension method of training image, training method and related device |
CN111583354A (en) * | 2020-05-08 | 2020-08-25 | 上海联影医疗科技有限公司 | Training method for medical image processing unit and medical image motion estimation method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115148341A (en) * | 2022-08-02 | 2022-10-04 | 重庆大学附属三峡医院 | AI structure delineation method and system based on body position recognition |
CN115148341B (en) * | 2022-08-02 | 2023-06-02 | 重庆大学附属三峡医院 | AI structure sketching method and system based on body position recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Klages et al. | Patch‐based generative adversarial neural network models for head and neck MR‐only planning | |
US20120302880A1 (en) | System and method for specificity-based multimodality three-dimensional optical tomography imaging | |
CN108375746A (en) | A kind of phase warp folding method and apparatus | |
Jabbarpour et al. | Unsupervised pseudo CT generation using heterogenous multicentric CT/MR images and CycleGAN: Dosimetric assessment for 3D conformal radiotherapy | |
US9355454B2 (en) | Automatic estimation of anatomical extents | |
Emami et al. | Attention-guided generative adversarial network to address atypical anatomy in synthetic CT generation | |
EP2483866A1 (en) | Medical image analysis system using n-way belief propagation for anatomical images subject to deformation and related methods | |
Dai et al. | Self‐supervised learning for accelerated 3D high‐resolution ultrasound imaging | |
Baydoun et al. | Dixon-based thorax synthetic CT generation using Generative Adversarial Network | |
CN113658116B (en) | Artificial intelligence method and system for generating medical images with different body positions | |
CN114881848A (en) | Method for converting multi-sequence MR into CT | |
WO2011041473A1 (en) | Medical image analysis system for anatomical images subject to deformation and related methods | |
CN113205567A (en) | Method for synthesizing CT image by MRI image based on deep learning | |
Fei et al. | Registration and fusion of SPECT, high-resolution MRI, and interventional MRI for thermal ablation of prostate cancer | |
Xie et al. | Generation of contrast-enhanced CT with residual cycle-consistent generative adversarial network (Res-CycleGAN) | |
WO2011041474A1 (en) | Medical image analysis system for displaying anatomical images subject to deformation and related methods | |
CN115861464A (en) | Pseudo CT (computed tomography) synthesis method based on multimode MRI (magnetic resonance imaging) synchronous generation | |
CN115908610A (en) | Method for obtaining attenuation correction coefficient image based on single-mode PET image | |
Oulbacha et al. | MRI to C‐arm spine registration through Pseudo‐3D CycleGANs with differentiable histograms | |
CN113052840B (en) | Processing method based on low signal-to-noise ratio PET image | |
Jiangtao et al. | MRI to CT synthesis using contrastive learning | |
CA3104607A1 (en) | Contrast-agent-free medical diagnostic imaging | |
Zhao et al. | A transfer fuzzy clustering and neural network based tissue segmentation method during PET/MR attenuation correction | |
Emami et al. | Attention-guided generative adversarial network to address atypical anatomy in modality transfer | |
Wu et al. | Registration of organ surface with intra-operative 3D ultrasound image using genetic algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||