WO2020182033A1 - Image region positioning method, apparatus, and medical image processing device - Google Patents
Image region positioning method, apparatus, and medical image processing device
- Publication number: WO2020182033A1 (application PCT/CN2020/077746)
- Authority: WIPO (PCT)
Classifications
- G06T7/0012 — Biomedical image inspection
- G06T7/11 — Region-based segmentation
- A61B5/055 — Diagnosis involving electronic or nuclear magnetic resonance, e.g. magnetic resonance imaging
- G01R33/4822 — MR data acquisition along a Cartesian k-space trajectory in three dimensions
- G01R33/5608 — Data processing and visualization specially adapted for MR (e.g. segmentation, edge detection, noise filtering)
- G06F18/253 — Fusion techniques of extracted features
- G06F18/295 — Markov models or related models; Markov random fields
- G06T7/0014 — Biomedical image inspection using an image reference approach
- G06T7/174 — Segmentation; edge detection involving the use of two or more images
- G06T7/33 — Image registration using feature-based methods
- G06T7/73 — Determining position or orientation of objects using feature-based methods
- G06V10/454 — Filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/806 — Fusion of extracted features
- G06V10/82 — Recognition using machine learning with neural networks
- G06V10/85 — Markov-related models; Markov random fields
- G01R33/4838 — NMR imaging using spatially selective suppression or saturation of MR signals
- G01R33/56341 — Diffusion imaging
- G01R33/5635 — Angiography, e.g. contrast-enhanced angiography [CE-MRA] or time-of-flight angiography [TOF-MRA]
- G06T2207/10088 — Magnetic resonance imaging [MRI]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30004 — Biomedical image processing
- G06T2207/30068 — Mammography; Breast
- G06T2207/30096 — Tumor; Lesion
- G06V2201/03 — Recognition of patterns in medical or anatomical images
Definitions
- This application relates to the field of image processing technology, and in particular to image region positioning.
- Semantic segmentation is a typical application of machine learning to image positioning: raw data (for example, medical three-dimensional images) is taken as input and converted into masks that highlight regions of interest.
- The region of interest may be referred to as a target region; the target region may be, for example, a breast tumor region.
- Current target-region recognition methods based on three-dimensional images suffer from inaccurate recognition.
- The embodiments of the present application provide an image region positioning method, an image region positioning device, and a medical image processing device, which can improve the accuracy of target recognition.
- An embodiment of the present application provides an image region positioning method, including: acquiring multiple three-dimensional images of a target part, where the multiple three-dimensional images include three-dimensional images of different modalities; extracting image features of the multiple three-dimensional images; fusing the image features of the multiple three-dimensional images to obtain fusion features; determining, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images; selecting, from the three-dimensional images, target voxels whose voxel type is a preset voxel type, to obtain position information of the target voxels; and locating the target area based on the position information of the target voxels.
- In some embodiments, determining the voxel type corresponding to a voxel in the three-dimensional image according to the fusion feature includes: determining the fusion feature corresponding to the voxel; calculating the probability that the fusion feature belongs to each voxel type, so as to obtain the probability that the voxel belongs to each voxel type; and determining the voxel type of the voxel based on these probabilities.
- In some embodiments, fusing the image features of the multiple three-dimensional images includes: obtaining preset feature weights corresponding to the image features, and weighting the image features of the multiple three-dimensional images based on the preset feature weights.
- In some embodiments, weighting the image features of the multiple three-dimensional images based on the preset feature weights includes: determining the multiple image features corresponding to voxels at the same position in the multiple three-dimensional images; weighting these image features with the preset feature weights; and accumulating the weighted image features to obtain the fusion feature of the voxel.
- In some embodiments, locating the target area based on the position information of the target voxels includes: selecting a target three-dimensional image from the multiple three-dimensional images, and locating the target area on the target three-dimensional image.
- In some embodiments, locating the target area on the target three-dimensional image includes: setting the voxel values of the corresponding voxels to a preset value to identify the target area.
- In other embodiments, locating the target area on the target three-dimensional image includes: locating the center point of the target area on the target three-dimensional image.
- In some embodiments, extracting image features of the multiple three-dimensional images includes performing preprocessing operations on the multiple three-dimensional images before feature extraction.
- In some embodiments, the preprocessing operations include transforming the original coordinate systems of the multiple three-dimensional images into a reference coordinate system.
- In some embodiments, the preprocessing operations include transforming the original voxel spacing of the multiple three-dimensional images into a reference voxel spacing.
- In some embodiments, the preprocessing operations include transforming the original sizes of the multiple three-dimensional images into a reference size.
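The spacing and size preprocessing described above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the helper names `resample_to_spacing` and `pad_or_crop` are hypothetical, and nearest-neighbour interpolation is assumed for simplicity.

```python
import numpy as np

def resample_to_spacing(volume, orig_spacing, ref_spacing):
    # Map each output voxel back to the nearest input voxel so that the
    # output grid has the reference spacing (in mm per voxel, per axis).
    new_shape = tuple(max(1, int(round(s * o / r)))
                      for s, o, r in zip(volume.shape, orig_spacing, ref_spacing))
    axes = [np.minimum((np.arange(n) * r / o).astype(int), s - 1)
            for n, s, o, r in zip(new_shape, volume.shape, orig_spacing, ref_spacing)]
    return volume[np.ix_(*axes)]

def pad_or_crop(volume, ref_size):
    # Zero-pad (or crop) each axis so every modality ends up with the
    # same reference size before feature extraction.
    out = np.zeros(ref_size, dtype=volume.dtype)
    sl = tuple(slice(0, min(s, t)) for s, t in zip(volume.shape, ref_size))
    out[sl] = volume[sl]
    return out

vol = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)  # 4x4x4 volume, 2 mm spacing
resampled = resample_to_spacing(vol, (2.0, 2.0, 2.0), (1.0, 1.0, 1.0))  # -> 8x8x8
normalized = pad_or_crop(resampled, (10, 10, 10))
```

In practice a coordinate-system transform (the first preprocessing step in the claims) would also be applied, typically via the affine matrix stored with the scan; it is omitted here to keep the sketch short.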
- An embodiment of the present application also provides an image region positioning device, including:
- an acquiring module, configured to acquire multiple three-dimensional images of a target part, where the multiple three-dimensional images include three-dimensional images of different modalities;
- an extraction module, configured to extract image features of the multiple three-dimensional images;
- a fusion module, configured to fuse the image features of the multiple three-dimensional images to obtain fusion features;
- a classification module, configured to determine, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images;
- a screening module, configured to select, from the three-dimensional images, target voxels whose voxel type is a preset voxel type, to obtain position information of the target voxels; and
- a positioning module, configured to locate the target area based on the position information of the target voxels.
- An embodiment of the present application also provides a medical image processing device, which includes a medical image acquisition unit, a memory, and a processor:
- the medical image acquisition unit is used to acquire multiple three-dimensional images of a target part of a living body;
- the memory is used to store image data and multiple instructions;
- the processor is configured to read the instructions stored in the memory and execute the following steps:
- acquire multiple three-dimensional images of the target part, where the multiple three-dimensional images include three-dimensional images of different modalities; extract the image features of the multiple three-dimensional images; fuse the image features to obtain fusion features; determine, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images; select, from the three-dimensional images, target voxels whose voxel type is a preset voxel type, to obtain position information of the target voxels; and locate the target area based on the position information of the target voxels.
- When determining the voxel type according to the fusion feature, the processor specifically: determines the fusion feature corresponding to a voxel in the three-dimensional image; calculates the probability that the fusion feature belongs to each voxel type, thereby obtaining the probability that the voxel belongs to each voxel type; and determines the voxel type based on these probabilities.
- When fusing the image features of the multiple three-dimensional images, the processor specifically: obtains preset feature weights corresponding to the image features, and weights the image features based on the preset feature weights.
- When weighting the image features based on the preset feature weights, the processor specifically: determines the multiple image features corresponding to voxels at the same position in the multiple three-dimensional images; weights these image features with the preset feature weights; and accumulates the weighted image features to obtain the fusion feature of the voxel.
- When locating the target area based on the position information of the target voxels, the processor specifically: selects a target three-dimensional image from the multiple three-dimensional images, and locates the target area on the target three-dimensional image based on the position information of the target voxels.
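The weighted fusion and per-voxel classification steps performed by the processor can be illustrated with a short sketch. This is an assumption-laden illustration: the linear projection `proj` stands in for whatever trained classifier the patent uses, and the softmax is one common way to obtain per-type probabilities.

```python
import numpy as np

def fuse_features(feature_maps, weights):
    # Weight each modality's per-voxel feature map by its preset
    # feature weight, then accumulate the weighted maps (weighted sum).
    fused = np.zeros_like(feature_maps[0], dtype=float)
    for fmap, w in zip(feature_maps, weights):
        fused += w * fmap
    return fused

def classify_voxels(fused, proj):
    # Map each voxel's fused feature to per-type probabilities with a
    # softmax over a linear projection, then pick the most likely type.
    logits = fused @ proj                         # (D, H, W, num_types)
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs.argmax(axis=-1), probs

rng = np.random.default_rng(0)
maps = [rng.normal(size=(4, 4, 4, 3)) for _ in range(2)]  # two modalities
fused = fuse_features(maps, weights=[0.7, 0.3])
labels, probs = classify_voxels(fused, proj=rng.normal(size=(3, 2)))
```

Each voxel position in `labels` then carries a type (e.g. tumor vs. background), from which the target voxels can be screened by type.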
- Another embodiment of the present application provides an image region positioning method, including: acquiring multiple three-dimensional images of a target part, where the multiple three-dimensional images include three-dimensional images of different modalities; extracting image features of the multiple three-dimensional images; fusing the image features to obtain fusion features; determining, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images; selecting, from the three-dimensional images, target voxels whose voxel type is a breast tumor type, to obtain position information of the target voxels; and locating the breast tumor area based on the position information of the target voxels.
- In some embodiments, determining the voxel type corresponding to a voxel in the three-dimensional image according to the fusion feature includes: determining the fusion feature corresponding to the voxel; calculating the probability that the fusion feature belongs to each voxel type, so as to obtain the probability that the voxel belongs to each voxel type; and determining the voxel type of the voxel based on these probabilities.
- In some embodiments, fusing the image features of the multiple three-dimensional images includes: obtaining preset feature weights corresponding to the image features, and weighting the image features of the multiple three-dimensional images based on the preset feature weights.
- In some embodiments, weighting the image features based on the preset feature weights includes: determining the multiple image features corresponding to voxels at the same position in the multiple three-dimensional images; weighting these image features with the preset feature weights; and accumulating the weighted image features to obtain the fusion feature of the voxel.
- In some embodiments, locating the breast tumor area based on the position information of the target voxels includes: selecting a target three-dimensional image from the multiple three-dimensional images, and locating the breast tumor region on the target three-dimensional image.
- In some embodiments, locating the breast tumor region on the target three-dimensional image includes: setting the voxel values of the corresponding voxels to a preset value to identify the breast tumor area.
- In other embodiments, locating the breast tumor region on the target three-dimensional image includes: locating the center point of the breast tumor region on the target three-dimensional image.
- In some embodiments, extracting image features of the multiple three-dimensional images includes performing preprocessing operations on the multiple three-dimensional images before feature extraction.
- In some embodiments, the preprocessing operations include transforming the original coordinate systems of the multiple three-dimensional images into a reference coordinate system.
- In some embodiments, the preprocessing operations include transforming the original voxel spacing of the multiple three-dimensional images into a reference voxel spacing.
- In some embodiments, the preprocessing operations include transforming the original sizes of the multiple three-dimensional images into a reference size.
- An embodiment of the present application also provides an image region positioning device, including:
- an acquiring module, configured to acquire multiple three-dimensional images of a target part, where the multiple three-dimensional images include three-dimensional images of different modalities;
- an extraction module, configured to extract image features of the multiple three-dimensional images;
- a fusion module, configured to fuse the image features of the multiple three-dimensional images to obtain fusion features;
- a classification module, configured to determine, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images;
- a screening module, configured to select, from the three-dimensional images, target voxels whose voxel type is a breast tumor type, to obtain position information of the target voxels; and
- a positioning module, configured to locate the breast tumor area based on the position information of the target voxels.
- the embodiment of the application also provides a medical image processing device, which
- the medical image acquisition unit is used to acquire multiple three-dimensional images of a target part of a living body
- the memory is used to store image data and multiple instructions
- the processor is configured to read multiple instructions stored in the memory to execute the following steps:
- the multiple three-dimensional images include multiple three-dimensional images of different modalities; extract the image features of the multiple three-dimensional images; fuse the image features of the multiple three-dimensional images Process to obtain a fusion feature; determine the voxel type corresponding to the voxel in the three-dimensional image according to the fusion feature; select a target voxel whose voxel type is a breast tumor type from the three-dimensional image to obtain the target Position information of the voxel; locate the breast tumor area based on the position information of the target voxel.
- the processor specifically executes: determining the fusion feature corresponding to each voxel in the three-dimensional images; and calculating the probability that the fusion feature corresponding to the voxel belongs to each voxel type, so as to obtain the probability that the voxel belongs to each voxel type;
- when performing the step of performing fusion processing on the image features of the multiple three-dimensional images, the processor specifically executes: acquiring preset feature weights corresponding to the image features; and weighting the image features based on the preset feature weights;
- the processor specifically executes: determining multiple image features corresponding to voxels at the same position in the multiple three-dimensional images; weighting the multiple image features with the preset feature weights to obtain multiple weighted image features; and performing an accumulation operation on the multiple weighted image features to obtain the fusion feature of the voxel;
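- The weighting and accumulation steps above can be sketched as follows; this is a minimal illustration in Python/NumPy, and the array shapes and preset weight values are hypothetical, not taken from the embodiment:

```python
import numpy as np

def fuse_features(feature_maps, weights):
    """Weighted fusion of per-voxel image features from multiple modalities.

    feature_maps: list of arrays of identical shape, one per modality.
    weights: preset feature weights, one per modality (hypothetical values).
    Each modality's features are weighted, then accumulated voxel-wise.
    """
    fused = np.zeros_like(feature_maps[0], dtype=float)
    for fmap, w in zip(feature_maps, weights):
        fused += w * fmap  # weighting followed by accumulation
    return fused

# Two toy "modalities" of shape (2, 2, 2), with assumed weights 0.7 and 0.3.
a = np.ones((2, 2, 2))
b = np.full((2, 2, 2), 3.0)
fused = fuse_features([a, b], [0.7, 0.3])
print(fused[0, 0, 0])  # 0.7*1 + 0.3*3 = 1.6
```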
- when performing the step of locating the breast tumor area based on the position information of the target voxel, the processor specifically executes: selecting a target three-dimensional image from the multiple three-dimensional images; and positioning the breast tumor region on the target three-dimensional image based on the position information of the target voxel.
- the embodiment of the present application can acquire multiple three-dimensional images of a target part, wherein the multiple three-dimensional images include multiple three-dimensional images of different modalities; extract image features of the multiple three-dimensional images; perform fusion processing on the image features to obtain fusion features; determine the voxel type corresponding to each voxel in the three-dimensional images according to the fusion features; select a target voxel whose voxel type is a preset voxel type from the three-dimensional images to obtain the position information of the target voxel; and locate the target area based on the position information of the target voxel.
- fusion features can be obtained from the different image features provided by three-dimensional images of different modalities, and the target area can be located directly based on the fusion features. Because multiple three-dimensional images of different modalities are used, the subsequent area positioning step can analyze and process information from multiple angles, which reduces the probability of misjudgment and improves the accuracy of target area positioning.
- FIG. 1a is a schematic diagram of a scene of an image region positioning method provided by an embodiment of the present application;
- FIG. 1b is a schematic flowchart of an image region positioning method provided by an embodiment of the present application;
- FIG. 1c is a schematic diagram of the process of performing a convolution operation on a 3D image sequence with a 3D convolution kernel provided in an embodiment of the present application;
- FIG. 1d is a schematic diagram of feature fusion provided by an embodiment of the present application;
- FIG. 1e is another schematic diagram of feature fusion provided by an embodiment of the present application;
- FIG. 1f is a schematic diagram of the relationship between NB, HMM, CRF, and LR provided by an embodiment of the present application;
- FIG. 2a is another schematic flowchart of an image region positioning method provided by an embodiment of the present application;
- FIG. 2b is a schematic diagram of feature fusion based on preset weights provided by an embodiment of the present application;
- FIG. 2c is another schematic diagram of feature fusion based on preset weights provided by an embodiment of the present application;
- FIG. 2d is a schematic diagram of a scene in which corresponding voxels are determined on a target image provided by an embodiment of the present application;
- FIG. 3a is a schematic diagram of DCE and DWI images provided by an embodiment of the present application;
- FIG. 3b is a schematic diagram of the process of a specific embodiment provided by an embodiment of the present application;
- FIG. 3c is a schematic structural diagram of a 3D U-Net model provided by an embodiment of the present application;
- FIG. 3d is a schematic diagram of the output result provided by an embodiment of the present application;
- FIG. 4 is a schematic structural diagram of an image area positioning device provided by an embodiment of the present application;
- FIG. 5a is a schematic diagram of the internal structure of a medical image processing device provided by an embodiment of the present application;
- FIG. 5b is a schematic diagram of the principle of the acquisition unit provided in an embodiment of the present application.
- the embodiments of the present application provide an image region positioning method, device, and medical image processing equipment.
- the image area locating device may be specifically integrated in electronic equipment, which may include magnetic resonance imaging equipment, medical image data processing equipment, medical image data storage equipment, and so on.
- Figure 1a is a schematic diagram of a scene of an image region positioning method provided by an embodiment of the present application.
- an electronic device can acquire multiple three-dimensional (3D) images of a target part, where the multiple three-dimensional images include multiple three-dimensional images of different modalities, such as three-dimensional image A, three-dimensional image B, and three-dimensional image C in Figure 1a; it then extracts the image features of the multiple three-dimensional images to obtain image feature a, image feature b, and image feature c respectively;
- the image features are fused to obtain the fusion feature;
- the voxel type corresponding to the voxel in the 3D image is determined according to the fusion feature;
- the target voxel whose voxel type is the preset voxel type is selected from the 3D image to obtain the position of the target voxel Information; locate the target area based on the location information of the target voxel.
- the image area positioning device may be specifically integrated in an electronic device.
- the electronic device may include magnetic resonance image acquisition equipment, magnetic resonance imaging equipment, medical image data processing equipment, medical image data storage equipment, and so on.
- an image area positioning method is provided. As shown in FIG. 1b, the specific process of the image area positioning method may be as follows:
- A target part can refer to a certain part of a living body such as a human or another animal or plant (for example, a cat or a dog); a certain part of a non-living body such as a human tissue section, an animal specimen, or a metabolite of a living body; or part of a three-dimensional model in computer vision — for example, the chest of a patient, or the brain of a dog specimen.
- a three-dimensional image can refer to a three-dimensional image with three dimensions of length, width, and height, or a continuous two-dimensional image with three dimensions of length, width, and time.
- the three-dimensional image can be, for example, a laser hologram, a computer three-dimensional model, a magnetic resonance three-dimensional image, and so on.
- multiple three-dimensional images to be processed can be obtained from locally stored images or externally stored images; for example, three-dimensional images can be obtained from a local image database, or acquired by communicating with other storage devices via a network.
- the electronic device may also collect three-dimensional images by itself, and select multiple three-dimensional images to be processed therefrom.
- the collected three-dimensional image of a certain part in a certain modality can be displayed on the electronic device, and the user can preview the displayed image and crop out the target part on the preview interface, so as to reduce the processing time and the accuracy loss caused by non-target-area information in subsequent image processing, thereby improving the efficiency and accuracy of identifying the target area.
- the subsequent region positioning step can be analyzed and processed from multiple angles, thereby improving the accuracy of recognition.
- the three-dimensional image may be a three-dimensional magnetic resonance image, that is, the method can perform regional positioning on the three-dimensional magnetic resonance image.
- the device for acquiring a three-dimensional magnetic resonance image may be a magnetic resonance image acquisition device.
- magnetic resonance imaging technology uses the principle of Nuclear Magnetic Resonance (NMR): based on how the released energy attenuates differently in different structural environments within a substance, and by applying a gradient magnetic field and detecting the emitted electromagnetic waves, the position and type of the nuclei composing the object can be determined and an image of the object's internal structure can be drawn.
- the magnetic resonance signals collected by the magnetic resonance imaging equipment can be subjected to a series of post-processing to generate T1 and T2 weighted (T1 Weighted and T2 Weighted) images, angiography (Magnetic Resonance Angiography, MRA) images, diffusion weighted imaging (Diffusion Weighted Imaging, DWI) images, and so on.
- Related abbreviations: DWI (Diffusion Weighted Imaging), ADC (Apparent Diffusion Coefficient), FS (Fat Suppression), DCE (Dynamic Contrast-Enhanced).
- MRI is widely used in medical diagnosis and has very good resolution of soft tissues such as bladder, rectum, uterus, vagina, joints and muscles.
- various MRI parameters can be used for imaging, and MRI images with multiple modalities can provide rich diagnostic information.
- the required profile can be freely selected by adjusting the magnetic field, and three-dimensional image sequences can be generated from various directions.
- because MRI causes no ionizing radiation damage to the human body, it is often used in the detection and diagnosis of diseases of the reproductive system, breast, pelvis, and so on.
- excluding non-pathological information (non-target-area information) reduces the processing time and the accuracy loss in subsequent image processing, thereby improving the efficiency and accuracy of identifying pathological tissue (the target area).
- the 3D magnetic resonance image sequence is a cube formed by stacking multiple 2D magnetic resonance images.
- the magnetic resonance image acquisition equipment can apply a radio frequency pulse of a specific frequency to the target to be detected in a static magnetic field, so that the hydrogen protons inside the target are excited and a magnetic resonance phenomenon occurs. After the pulse stops, the protons relax and generate an NMR signal; through processing steps such as reception of the NMR signal, spatial encoding, and image reconstruction, the magnetic resonance three-dimensional image sequence is acquired.
- three-dimensional magnetic resonance images may be obtained from a medical image data storage system; or, through communication with other storage devices via a network, three-dimensional magnetic resonance images may be obtained.
- Image features can include color features, texture features, shape features, spatial relationship features, and so on.
- the length and width dimensions of the two-dimensional neural network can be changed into the length, width and height dimensions of the three-dimensional neural network by adding the height dimension to the neural network.
- the three-dimensional images of each modality are input into multiple channels of the three-dimensional model to extract image features, and after multiple extraction operations, the image features of all three-dimensional images obtained in step S101 can be obtained.
- 3D image processing networks may include 3D Convolutional Neural Networks (Three-Dimensional Convolutional Neural Networks, 3D CNN), 3D Fully Convolutional Networks (Three-Dimensional Fully Convolutional Networks, 3D FCN), and so on.
- a preset 3D FCN model can be used to extract the image features of a 3D image, which will be described in detail below:
- FCN can classify images at the voxel level, thereby solving the problem of semantic segmentation (semantic segmentation).
- an FCN can accept input images of any size: a deconvolutional layer upsamples the feature map of the last convolutional layer to restore it to the size of the input image, so that a prediction can be generated for each voxel while the spatial information of the original input image is retained, and voxel classification is finally performed on the up-sampled feature map.
- an FCN replaces the fully connected layers of a CNN with convolutional layers, so that the network output is no longer a category but a heatmap; to recover the resolution lost through the convolution and pooling operations, up-sampling is used for restoration.
- compared with a 2D FCN, which performs convolution operations on 2D images, a 3D FCN considers not only the length and width information of an image but also its height information.
- Figure 1c is the process of convolving a 3D image sequence using a 3D convolution kernel in 3D FCN.
- in the figure, the height dimension of the convolution operation is N: N consecutive frame images are stacked into a cube, and a 3D convolution kernel then performs convolution operations on the cube. Each feature map in the convolutional layer is connected to multiple adjacent consecutive frames in the previous layer, so height information can be captured; that is, through convolution across these N frame images, the 3D FCN extracts correlations along the height dimension.
- the input 3D image can be passed through multiple convolutional layers and down-sampling layers to obtain a heat map, that is, a high-dimensional feature map; the high-dimensional feature map is then passed through multiple up-sampling layers to obtain the image features.
- for example, a 3D convolution kernel of size 1×1×1 can slide along one row of the y-axis, from low to high along the x-axis, on a 5×5×5 3D image sequence with a stride of 1.
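- The sliding of a 3D convolution kernel over a 3D image sequence can be illustrated with a naive (unoptimized) valid convolution; the kernel contents and input values below are illustrative assumptions, not part of the embodiment:

```python
import numpy as np

def conv3d(volume, kernel, stride=1):
    """Naive valid 3D convolution: slide the kernel over the cube with the
    given stride and sum the element-wise products at each position."""
    kd, kh, kw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros(((d - kd) // stride + 1,
                    (h - kh) // stride + 1,
                    (w - kw) // stride + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                patch = volume[z*stride:z*stride+kd,
                               y*stride:y*stride+kh,
                               x*stride:x*stride+kw]
                out[z, y, x] = np.sum(patch * kernel)
    return out

vol = np.arange(125, dtype=float).reshape(5, 5, 5)  # a 5x5x5 image sequence
k1 = np.ones((1, 1, 1))   # a 1x1x1 kernel with stride 1 preserves the size
print(conv3d(vol, k1).shape)  # (5, 5, 5)
k2 = np.ones((3, 3, 3)) / 27  # a 3x3x3 averaging kernel shrinks the output
print(conv3d(vol, k2).shape)  # (3, 3, 3)
```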
- the preset 3D FCN model can be obtained from the locally stored model collection or the externally stored model collection; for example, the preset 3D FCN model can be obtained through communication with other storage devices through the network.
- since a 3D convolution kernel provides only one set of weights, only one type of feature can be extracted from the cube. Because the feature extraction of a single convolution kernel is insufficient, multiple features can be identified by adding multiple convolution kernels: with multiple convolution kernels, each channel corresponding to one kernel, the 3D FCN can extract a variety of features from the 3D images.
- S103 Perform fusion processing on image features of multiple three-dimensional images to obtain fusion features.
- the image features that reflect the type information of the target part from multiple angles can be fused, and the fused features obtained are both accurate and diverse.
- common fusion methods can include serial fusion, parallel fusion, selection fusion, transformation fusion, and so on.
- serial fusion is to combine all image features according to the serial method to form a new image feature, that is, to complete the serial fusion of features.
- Parallel fusion is to combine all image features according to a parallel method to form a new image feature, which completes the parallel fusion of features.
- Selective fusion is to select one or more optimal data from all image features and corresponding data of each dimension, and finally combine all the selected data into new features, which completes the feature selection fusion.
- Transformation fusion is to put all the image features together, and use a certain mathematical method to transform them into a brand-new feature expression method, which completes the transformation and fusion of features.
- FIG. 1d is a schematic diagram of feature fusion. As shown in the figure, after the image features of 3D image A and 3D image B are extracted, the feature points at the same layer and the same position of 3D image A and 3D image B are serially fused to obtain new feature points. After this is repeated many times, the fusion feature of 3D image A and 3D image B is obtained.
- FIG. 1e is another schematic diagram of feature fusion. As shown in the figure, when image feature extraction is performed on the multiple three-dimensional images in step S103, the voxels at the same layer and the same position of 3D image A and 3D image B are serially fused, and the fusion feature of the voxel is obtained from the fusion result.
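- Serial fusion of per-voxel features amounts to concatenating the feature vectors of voxels at the same position end-to-end; a minimal sketch, in which the volume shape and feature dimensions are hypothetical:

```python
import numpy as np

# Serial (concatenation) fusion: features of voxels at the same layer and
# position in 3D image A and 3D image B are joined end-to-end.
feat_a = np.random.rand(4, 4, 4, 8)   # per-voxel 8-dim features of image A
feat_b = np.random.rand(4, 4, 4, 8)   # per-voxel 8-dim features of image B
fused = np.concatenate([feat_a, feat_b], axis=-1)  # 16-dim fused features
print(fused.shape)  # (4, 4, 4, 16)
```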
- S104 Determine the voxel type corresponding to the voxel in the three-dimensional image according to the fusion feature.
- the probability that the fusion feature corresponding to a voxel belongs to each voxel type can be calculated from the fusion feature, so as to determine the voxel type corresponding to each voxel in the three-dimensional image.
- the normalization operation normalizes the value of the fusion feature to the interval [0, 1].
- Commonly used normalization methods may include function normalization, dimension normalization, sorting normalization, and so on.
- the function normalization can map the feature value to the [0, 1] interval through the mapping function, such as using the maximum and minimum normalization method, which is a linear mapping.
- dimension-wise normalization can also use the maximum-minimum normalization method, but the maximum and minimum values are those of the category, that is, local maximum and minimum values are used.
- the sorting normalization can directly sort the features by size regardless of the original feature value range, and assign a new value to the feature according to the ranking corresponding to the feature.
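- The function (min-max) normalization and sorting normalization described above can be sketched as follows; the feature values are illustrative only:

```python
import numpy as np

def min_max_normalize(x):
    """Function (min-max) normalization: linearly map values to [0, 1]."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)

features = np.array([2.0, 4.0, 6.0, 10.0])
print(min_max_normalize(features))  # [0.   0.25 0.5  1.  ]

# Sorting normalization: assign values by the feature's rank, regardless of
# the original value range.
ranks = np.argsort(np.argsort(features))
print(ranks / (len(features) - 1))
```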
- the voxel type corresponding to the voxel can be determined according to the probability that the voxel belongs to each voxel type.
- a dictionary can be used to query the voxel type corresponding to the probability to determine the voxel type corresponding to the voxel in the three-dimensional image.
- the dictionary can be obtained from local memory or external memory.
- a dictionary can be obtained from a local database; or, it can be obtained by communicating with other storage devices through the network.
- the voxel type may refer to the type represented by the voxel, and the voxel type may include, for example, a pathological type, that is, the classification of the disease represented by the voxel from a pathological perspective, focusing on describing the current symptoms.
- Table 1 is a schematic diagram of the dictionary format.
- For example, the probability intervals determined from the fusion features may be 0, (0, x], (x, y), [y, 1), and 1: the voxel type corresponding to probability 0 is A; the voxel type for probabilities greater than 0 and not exceeding x is B; the voxel type for probabilities greater than x and less than y is C; and the voxel type for probabilities not less than y and less than 1 is D.
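- A dictionary lookup in the style of Table 1 can be sketched as below; the threshold values X and Y are hypothetical placeholders, and the type corresponding to probability 1 is not specified in this excerpt, so it is left undefined:

```python
# Dictionary-style lookup from probability to voxel type, following Table 1.
# Thresholds X and Y are hypothetical placeholders, not values from the patent.
X, Y = 0.3, 0.7

def voxel_type(p):
    if p == 0:
        return "A"
    if p <= X:          # (0, x]
        return "B"
    if p < Y:           # (x, y)
        return "C"
    if p < 1:           # [y, 1)
        return "D"
    return None         # the type for probability 1 is not given here

print([voxel_type(p) for p in (0, 0.2, 0.5, 0.8)])  # ['A', 'B', 'C', 'D']
```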
- these fusion features can be optimized through a Probabilistic Graphical Model (GPM) to obtain more refined fusion features, so that the accuracy of identifying the target region is improved.
- GPM can interpret the correlation (dependency) relationship between each voxel in a 3D image from a mathematical perspective, that is, you can use GPM to determine the voxel type corresponding to the voxel in the three-dimensional image.
- the probabilistic graphical model is a general term for a class of models based on probabilistic correlations expressed by graphical patterns.
- Probabilistic graphical model combines the knowledge of probability theory and graph theory, and uses graphs to represent the joint probability distribution of variables related to the model.
- probability graph models include Maximum Entropy Markov Model (MEMM), Hidden Markov Model (HMM), Conditional Random Field Algorithm (CRF) and so on.
- a probability graph consists of nodes (also called vertices) and links between them (also called edges or arcs).
- each node represents one or a group of random variables, and the link represents the probability relationship between these variables.
- Probabilistic graphical models are mainly divided into two types. One is directed graphical models, that is, Bayesian networks, whose characteristic is that the links are directional. The other is undirected graphical models, or Markov random fields, whose links have no directional nature.
- the GPM may also determine the probability of the voxel type corresponding to the voxel in the three-dimensional image according to the fusion feature.
- an HMM is a generative model used to label sequence data X with labels Y, using a Markov chain to model the joint probability P(X, Y).
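- The joint probability model referenced above is not reproduced in this excerpt; a standard first-order HMM factorization, which such a model typically takes (a reconstruction, not necessarily the exact form of the original), is:

```latex
P(X, Y) = \prod_{t=1}^{T} P(y_t \mid y_{t-1}) \, P(x_t \mid y_t)
```

where the labels y form the Markov chain and each observation x_t depends on its label y_t.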
- the Logistic Regression (LR) model is a discriminative model for classification problems, which directly uses the LR function to model the conditional probability P(y|x).
- the logistic regression function is a special form of the normalized exponential function (softmax), and LR is equivalent to the maximum entropy model (Maximum Entropy Model), which can be written in the form of maximum entropy:
- here Zw(x) is a normalization factor, w is a parameter preset by the model, and fi(x, y) is a feature function that describes the relationship between x and y.
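- The maximum entropy form referenced above, consistent with the terms Zw(x), w, and fi(x, y), is standardly written as follows (a reconstruction of the elided formula):

```latex
P_w(y \mid x) = \frac{1}{Z_w(x)} \exp\!\Big( \sum_i w_i f_i(x, y) \Big),
\qquad
Z_w(x) = \sum_{y} \exp\!\Big( \sum_i w_i f_i(x, y) \Big)
```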
- CRF is a discriminant model to solve the labeling problem.
- each voxel i has a category label xi and a corresponding observation value yi
- each voxel is used as a node, and the relationship between voxels is taken as an edge, which constitutes a conditional random field.
- the category label xi corresponding to the voxel i can be inferred by observing the variable yi.
- the conditional random field conforms to the Gibbs distribution, where y is the observed value and E(x|I) is the energy function. The unary potential function ψu(xi) is taken from the output of the front-end three-dimensional convolutional neural network. The binary potential function describes the relationship between voxels, encouraging similar voxels to be assigned the same label, while voxels with large differences are assigned different labels.
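- The Gibbs distribution and energy function referenced above are standardly written as follows; this is a reconstruction in the common fully connected CRF form, which the fragments above appear to follow, not necessarily the exact notation of the original:

```latex
P(\mathbf{x} \mid I) = \frac{1}{Z(I)} \exp\!\big( -E(\mathbf{x} \mid I) \big),
\qquad
E(\mathbf{x} \mid I) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j)
```

where ψu(xi) is the unary potential supplied by the network and ψp(xi, xj) is the binary (pairwise) potential between voxels i and j.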
- Figure 1f is a diagram of the relationship between NB, HMM, CRF and LR.
- LR is the conditional (discriminative) counterpart of NB, and CRF is the conditional counterpart of HMM; since the structures of NB and HMM differ, the structures of LR and CRF likewise differ.
- S105 Select a target voxel whose voxel type is a preset voxel type from the three-dimensional image, and obtain position information of the target voxel.
- the types of all voxels in the 3D image can be determined, and a voxel whose voxel type is a preset voxel type is selected from the 3D image as the target voxel.
- the preset voxel type may be a voxel type of interest, such as a preset pathology type in the medical field, and the preset pathology type may be, for example, a breast tumor type. Therefore, in this embodiment, the preset voxel type may be a breast tumor type, and correspondingly, the target area to be identified may be a breast tumor area.
- Table 2 is an example of the position information of the target voxel, where the position information of the target voxel may include information such as coordinate positions and target voxel numbers corresponding to all target voxels of preset voxel types in the 3D image.
- the position information corresponding to a target voxel in the 3D image may include position information corresponding to any one or more of the 3D images, and the position information may be a coordinate position relative to the world coordinate system, a coordinate position relative to the internal coordinate system of the 3D image, and so on.
- S106 Locate the target area based on the position information of the target voxel.
- the target area can be obtained through the position information of the target voxel, and then the target area can be displayed from a multi-directional angle, so as to facilitate the user's actual observation and use.
- the imaging device may draw a table according to the position information of the target voxel, and display the position information of the target voxel in the form of a table.
- the imaging device may calculate the volume of the lesion area according to the coordinate position of each target voxel in the position information of the target voxel.
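- The use of the target voxels' position information can be sketched as follows: the coordinates of all voxels of the target type are collected, and the lesion volume is estimated from the voxel count and the voxel spacing. All values below are hypothetical, not taken from the embodiment:

```python
import numpy as np

# A toy per-voxel type map in which label 1 marks the target (tumor) type.
type_map = np.zeros((4, 4, 4), dtype=int)
type_map[1:3, 1:3, 1:3] = 1

coords = np.argwhere(type_map == 1)   # position info of the target voxels
spacing = (1.0, 0.5, 0.5)             # voxel spacing in mm (assumed values)
volume_mm3 = len(coords) * spacing[0] * spacing[1] * spacing[2]
print(len(coords), volume_mm3)  # 8 target voxels -> 2.0 mm^3
```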
- the image region locating method provided by the embodiment of the present invention can be applied to the scene of automatically identifying tumors in the magnetic resonance three-dimensional image.
- the image region locating method provided by the embodiment of the present invention extracts the image features of multiple magnetic resonance three-dimensional images of a certain part of the patient's body, calculates the fusion feature from these image features, and analyzes the tumor in the three-dimensional magnetic resonance image from multiple angles based on the fusion feature, so as to realize automatic recognition of tumors in three-dimensional magnetic resonance images.
- S106 may be to select a target three-dimensional image from multiple three-dimensional images, and then, based on the position information of the target voxel, locate the target area on the target three-dimensional image.
- the embodiment of the present application can acquire multiple three-dimensional images of the target part, where the multiple three-dimensional images include multiple three-dimensional images of different modalities; extract the image features of the multiple three-dimensional images; perform fusion processing on the image features to obtain the fusion feature; determine the voxel type corresponding to each voxel in the 3D image according to the fusion feature; select the target voxel whose voxel type is the preset voxel type from the 3D image to obtain the position information of the target voxel; and locate the target area based on the position information of the target voxel.
- This solution can extract different image features (such as blood features, water molecule features, and fat features) from three-dimensional images of different modalities, obtain fusion features containing multi-modal information based on these image features, and directly locate the target area based on the fusion features. As a result, this solution can reduce the probability of misjudgment, thereby improving the accuracy of image region positioning.
- the image area positioning device will be specifically integrated in the electronic device for description.
- FIG. 2a is another schematic flow chart of the image region location method provided by the embodiment of the present application.
- the electronic device can locate the pathological tissue area in the image according to the three-dimensional magnetic resonance images, and the regional positioning process performed by the electronic device is as follows:
- S202 Perform a preprocessing operation on a plurality of three-dimensional magnetic resonance images to obtain a plurality of preprocessed three-dimensional magnetic resonance images.
- since the acquired multiple three-dimensional images, such as the magnetic resonance three-dimensional images in this implementation, may have different sizes and resolutions, in order to further improve the accuracy of pathological tissue recognition, the multiple three-dimensional images can be preprocessed after they are acquired and before their image features are extracted.
- the multiple 3D images may not be in the same coordinate system.
- the coordinate system registration operation can be performed on the multiple 3D images in the following way:
- in order to describe the position of a voxel, a reference coordinate system must be selected. In the reference coordinate system, to determine the position of a voxel in space, an ordered set of data, namely coordinates, is selected according to a prescribed method.
- the type of reference coordinate system can be Cartesian rectangular coordinate system, plane polar coordinate system, cylindrical coordinate system and spherical coordinate system, etc.
- the reference coordinate system can be a world coordinate system, a camera coordinate system, an image coordinate system, a voxel coordinate system, and so on.
- the reference coordinate system refers to a coordinate system preset according to specified rules, that is, the reference coordinate system used in this application.
- the reference coordinate system can be pre-stored in the local memory or input by the user.
- in one embodiment, the original coordinate system can be marked by the medical image processing device when the three-dimensional image is collected; in another embodiment, the original coordinate system sent by the image acquisition device can be received; in yet another embodiment, it can be input by the user.
- the original coordinate system can also be obtained from local storage or external storage; for example, the original coordinate system of the three-dimensional image can be obtained from a local image database such as a medical image data storage system; or, through the network and other The storage device communicates to obtain the original coordinate system of the three-dimensional image.
- the original coordinates of the voxels measured in the original coordinate system are transformed into the reference coordinate system to obtain the representation of the original coordinates in the reference coordinate system.
- registration algorithms can be divided into overall registration and partial registration according to the process, such as Iterative Closest Point (ICP) and so on.
- a pair of voxels can be registered through the pair-wise registration method.
- a preset three-dimensional rotation matrix R is applied to accurately register the coordinates of the voxel with the reference coordinate system:
- (x, y, z) are the original coordinates of the three-dimensional image
- (x', y', z') are the coordinates after registration
- R is the third-order rotation matrix
- the rotation matrix R can be decomposed into rotation matrices about the x-axis, y-axis, and z-axis. A rotation about any axis can be decomposed into a superposition of rotations about the three coordinate axes, and the final rotation matrix R is the product of the three axis rotation matrices.
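- The three axis rotation matrices and their product can be sketched as follows; this is the standard construction, with an illustrative angle rather than a value from the embodiment:

```python
import numpy as np

def rot_x(a):  # rotation about the x-axis by angle a (radians)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # rotation about the y-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # rotation about the z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# The final rotation matrix R is the product of the three axis rotations.
R = rot_z(np.pi / 2) @ rot_y(0.0) @ rot_x(0.0)
p = np.array([1.0, 0.0, 0.0])      # original voxel coordinates
print(np.round(R @ p, 6))          # a 90-degree turn about z maps x to y
```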
- the preset three-dimensional rotation matrix R can be preset by those skilled in the art and stored in the local memory.
- when the voxel spacings of the multiple 3D images are different, the registration operation can be implemented as follows:
- Voxel spacing describes the density of voxels and is also called point spacing. Specifically, it refers to the distance from the center of a voxel to the center of an adjacent voxel. Since the voxel spacing reflects the size of the space between two voxels, a smaller voxel spacing means a smaller space between voxels, that is, a higher voxel density and a higher image resolution.
- since the resolutions of the acquired multiple three-dimensional images may be inconsistent, the accuracy of pathological tissue recognition can be further improved by registering the resolutions of the multiple three-dimensional images.
- the reference voxel spacing can be set by user input.
- the original voxel spacing of one three-dimensional image may be selected from the plurality of three-dimensional images as the reference voxel spacing.
- the reference voxel spacing may also be obtained from local storage or external storage.
- the original voxel spacing of the three-dimensional image can be marked by the image capture device when the three-dimensional image is captured.
- the original voxel spacing sent by the image acquisition device may be received.
- it may be input by the user.
- the original voxel spacing can also be obtained from local storage or external storage; for example, the original voxel spacing of a three-dimensional image can be obtained from a local image database such as a medical image data storage system, or by communicating with other storage devices through the network.
- the voxel spacing registration can use common registration methods, such as interpolation, gradient-based, optimization-based, and maximum mutual information methods.
- an interpolation method such as nearest neighbor interpolation, bilinear interpolation, or trilinear interpolation can be used for registration.
- the principle of the interpolation method is to first resample the two images to the same resolution, and perform interpolation during resampling, that is, to introduce new voxel data to complete the registration.
- in the nearest neighbor interpolation method, the gray value of the original voxel closest to the new sampling point is assigned to the new sampling point, thereby performing the interpolation operation.
- in bilinear interpolation, the gray values of the 4 points nearest to the new sampling point are combined through linear equations to perform the interpolation operation.
- in trilinear interpolation, corresponding weights are generated according to the distance between the new sampling point and each neighboring point, and the gray value of the new sampling point is interpolated from the gray values of the neighboring points according to these weights.
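- As an illustrative sketch, the nearest neighbor and linear interpolation described above can be written in one dimension as follows (the row/spacing representation is an assumption for illustration; actual implementations interpolate in three dimensions):

```python
def resample_nearest(row, src_spacing, dst_spacing):
    # Nearest neighbour: each new sampling point takes the gray value
    # of the closest original voxel.
    n_out = int(round(len(row) * src_spacing / dst_spacing))
    out = []
    for i in range(n_out):
        src = i * dst_spacing / src_spacing  # position in original voxel units
        out.append(row[min(len(row) - 1, int(round(src)))])
    return out

def resample_linear(row, src_spacing, dst_spacing):
    # Linear: weight the two neighbouring voxels by their distance
    # to the new sampling point.
    n_out = int(round(len(row) * src_spacing / dst_spacing))
    out = []
    for i in range(n_out):
        src = i * dst_spacing / src_spacing
        lo = min(len(row) - 1, int(src))
        hi = min(len(row) - 1, lo + 1)
        w = src - lo  # weight grows with distance from the lower neighbour
        out.append(row[lo] * (1 - w) + row[hi] * w)
    return out
```

For example, resampling a row with spacing 2.0 to spacing 1.0 doubles its number of sampling points; linear interpolation places new values between the original gray values.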
- the sizes of the multiple three-dimensional images are different.
- the registration operation for the multiple three-dimensional images can be implemented as follows:
- the accuracy of pathological tissue recognition can be further improved by registering the sizes of multiple three-dimensional images.
- the reference size can be set by user input.
- the original size of a three-dimensional image can be selected from a plurality of three-dimensional images as the reference size.
- the reference size can also be obtained from local storage or external storage.
- the original size of the three-dimensional image, for example, in some embodiments, can be marked by the image capturing device when capturing the three-dimensional image.
- the original size sent by the image capture device can be received; in another embodiment, it can also be input by the user.
- S203 Extract image features of a plurality of preprocessed magnetic resonance three-dimensional images.
- a preset semantic segmentation network can be used to extract image features of multiple three-dimensional images, thereby completing the semantic segmentation of the image, and assigning each pixel in the input image a semantic category to obtain pixelized dense classification.
- the general semantic segmentation architecture can be an encoder-decoder network architecture.
- the encoder may be a well-trained classification network, such as Visual Geometry Group (VGG) and Residual Neural Network (ResNet). The difference between these architectures lies mainly in the decoder network.
- VGG Visual Geometry Group
- ResNet Residual Neural Network
- the task of the decoder is to semantically map the discriminable features learned by the encoder to the pixel space to obtain dense classification.
- Semantic segmentation requires not only the ability to discriminate at the pixel level, but also a mechanism that can map the discriminable features learned by the encoder at different stages to the pixel space, such as using long jump connections and pyramid pooling as part of the decoding mechanism.
- S202 may specifically include:
- each feature map in the convolutional layer is connected to multiple adjacent consecutive frames in the previous layer, so height information can be captured.
- a 3D convolution kernel of size 1×1×1 slides with a step length of 1 over a 5×5×5 3D image sequence, completing one row along the x-axis and then moving along the y-axis from low to high.
- the preset 3D semantic segmentation model can be obtained from a locally stored model set or an externally stored model set. For example, it can communicate with other storage devices through the network to obtain a preset 3D semantic segmentation model.
- since a 3D convolution kernel provides only one set of weights, only one type of feature can be extracted from the cube.
- the features obtained by one convolution kernel are therefore not sufficient.
- with multiple convolution kernels, multiple features can be identified.
- since each channel corresponds to a convolution kernel, the 3D semantic segmentation model can extract a variety of features of the 3D image.
- the input 3D image can be passed through multiple convolutional layers and downsampling layers in the previous step to obtain a heat map, that is, a high-dimensional feature map. Then the high-dimensional feature map can pass through multiple up-sampling layers in the current step to obtain image features.
- sampling methods such as nearest neighbor interpolation, bilinear interpolation, mean interpolation, and median interpolation can be used in the above up-sampling and down-sampling steps.
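- A minimal sketch of the 3D convolution sliding described above, using nested Python lists for the volume (a real model would use a deep-learning framework; this is only illustrative):

```python
def conv3d(vol, kernel):
    # Valid 3D convolution with stride 1: the kernel slides over the
    # volume along x, y and z, producing one weighted sum per position.
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    d, h, w = len(vol), len(vol[0]), len(vol[0][0])
    out = []
    for z in range(d - kd + 1):
        plane = []
        for y in range(h - kh + 1):
            row = []
            for x in range(w - kw + 1):
                s = sum(vol[z + i][y + j][x + k] * kernel[i][j][k]
                        for i in range(kd) for j in range(kh) for k in range(kw))
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out
```

A 1×1×1 kernel on a 5×5×5 volume yields a 5×5×5 output; a 3×3×3 kernel yields a 3×3×3 output, each value summarizing one local cube.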
- S204 Perform fusion processing on the image features of the multiple magnetic resonance three-dimensional images to obtain fusion features.
- S103 may specifically include:
- the preset feature weight may be stored in a preset 3D semantic segmentation model, and the preset 3D semantic segmentation model may be obtained from a locally stored model set or an externally stored model set. For example, it can communicate with other storage devices through the network to obtain a preset 3D semantic segmentation model.
- preset feature weights can be used to perform fusion processing on these image features that reflect the information of the target region in the target part from multiple angles, and the obtained fusion features are accurate and diverse.
- FIG. 2b is a schematic diagram of feature fusion based on preset weights. As shown in the figure, after features are extracted from 3D image A and 3D image B respectively, the feature points at the same layer and the same position of 3D image A and 3D image B are serially fused according to the preset weights w1 and w2 to obtain new feature points. After this is repeated many times, the fusion feature of 3D image A and 3D image B is obtained.
- Fig. 2c is another schematic diagram of feature fusion. As shown in the figure, when image feature extraction is performed on the multiple three-dimensional images in step S103, the voxels at the same layer and the same position of 3D image A and 3D image B are serially fused according to the preset weights W1 and W2, and the fusion feature of the voxel is obtained according to the fusion result.
- the preset feature weight can be preset by a technician and stored in the local memory.
- a possible implementation of weighting the image features of the multiple three-dimensional images may be to determine multiple image features corresponding to voxels at the same position in the multiple three-dimensional images;
- the multiple image features are weighted with preset feature weights to obtain multiple weighted image features;
- the multiple image features after weighted processing are accumulated to obtain the fusion feature of the voxel.
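- The weighting and accumulation steps above can be sketched as follows, where the feature vectors and weights are illustrative placeholders:

```python
def fuse_features(features, weights):
    # features: one feature vector per modality for the voxel at the
    # same position in each 3D image; weights: preset feature weights.
    assert len(features) == len(weights)
    dim = len(features[0])
    fused = [0.0] * dim
    for feat, w in zip(features, weights):
        # weight each modality's features, then accumulate
        for i in range(dim):
            fused[i] += w * feat[i]
    return fused
```

For example, fusing [1, 2] and [3, 4] with equal weights 0.5 and 0.5 gives the fusion feature [2.0, 3.0].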
- S205 Determine the pathological type corresponding to the voxel in the three-dimensional magnetic resonance image according to the fusion feature.
- the voxel type in this embodiment may be a pathological type, which may refer to a pathological condition in a pathological perspective.
- the classification focuses on describing current symptoms.
- the pathological types of lung cancer may include small cell carcinoma, alveolar carcinoma, bronchial adenoma, etc.
- the pathological types of breast tumors may include benign breast tumors and malignant breast tumors
- the pathological types of benign breast tumors may include fibroadenoma, intraductal papilloma, hamartoma, cyst, etc.
- the pathological types of breast malignant tumors can include lobular carcinoma in situ, intraductal carcinoma, mucinous adenocarcinoma and so on.
- the probability of the pathological type corresponding to a voxel in the three-dimensional image is determined from the fusion feature, and a dictionary can be used to query the pathological type corresponding to the probability, so as to determine the pathological type corresponding to the voxel in the three-dimensional image.
- S206 Select a target voxel whose pathology type is a preset pathology type from the three-dimensional magnetic resonance image, and obtain position information of the target voxel.
- the pathological tissue represented by the target voxel can be displayed on a three-dimensional image, so the target three-dimensional image can be selected from multiple three-dimensional images to display the pathological tissue from multiple angles.
- since the multiple three-dimensional images can include image sequences of different modalities such as MRA, DWI, ADC, FS, and DCE, these image sequences can produce MRI three-dimensional images with various characteristics, which not only reflect the human anatomy in three-dimensional space, but also reflect information about physiological functions such as human blood flow and cell metabolism.
- One or more image sequences are selected from these image sequences of different modalities as the target three-dimensional image.
- the target three-dimensional image is selected according to a preset rule.
- the user designates one or more image sequences from a plurality of image sequences with different modalities as the target three-dimensional image.
- through communication with a network server via a network, a selection instruction sent by the network server is obtained, and one or more image sequences are selected as the target three-dimensional image according to the selection instruction.
- in order to display the pathological tissue represented by the target voxel more freely and flexibly, a preset 3D empty image can also be set as the target three-dimensional image.
- the size, resolution, and pixel values of the preset 3D empty image can be pre-set by the technician, or set instantly by the user.
- the preset 3D empty image can be stored in the local memory, obtained from an external memory through the network, or set by the user and then generated locally.
- the first way may be:
- Figure 2d is a schematic diagram of determining the corresponding voxels on the target MRI 3D image. As shown in the figure, the position corresponding to position information A is determined on the target MRI 3D image, and the voxel at this position is denoted as voxel a; similarly, the position corresponding to position information B is determined, and the voxel at this position is denoted as voxel b.
- Voxel a and voxel b are corresponding voxels on the target three-dimensional image.
- that is, the target area is the target pathological area.
- the preset value is used to identify the target pathological area.
- the type of the preset value can be a gray value, an RGB (Red Green Blue) value, an RGBW (Red Green Blue White) value, etc., which can be preset by those skilled in the art or set instantly by users.
- the size of the preset value can be stored in the local memory, obtained from an external memory through the network, or set by the user.
- in order to highlight the contour of the pathological tissue on the target three-dimensional image and improve the readability of the displayed image, the type of the preset value can be set to the gray value type, and the value can be 1.
- in order to display the contour of the pathological tissue on the target three-dimensional image in bright red, thereby improving the readability of the displayed image, the preset value type can be set, for example, to the RGB type, and the value can be #EE0000.
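- A minimal sketch of marking the corresponding voxels with a preset value on the target image; the nested-list volume and the gray value 1 are illustrative assumptions:

```python
def mark_target_voxels(image, voxel_coords, value=1):
    # image: 3D volume as nested lists indexed [z][y][x];
    # voxel_coords: position information of the target voxels;
    # value: the preset value (here a gray value of 1 to highlight
    # the contour of the pathological tissue).
    for z, y, x in voxel_coords:
        image[z][y][x] = value
    return image
```

After marking, the target voxel set stands out against the unchanged background voxels of the target three-dimensional image.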
- the second way can be:
- that is, the target area is the target pathological area.
- the center point position information of the target pathological area is obtained.
- the position information of the target voxels, such as coordinates or relative position coordinates, is averaged numerically to obtain the center position of all target voxels, that is, the center point position information of the pathological tissue.
- the calculation result may be rounded to obtain the center point position information of the pathological tissue.
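- The center-point calculation described above (numerical averaging of the target voxel coordinates followed by rounding) can be sketched as:

```python
def center_point(voxel_coords):
    # Average the coordinates of all target voxels and round the result
    # to obtain the center point of the pathological tissue.
    n = len(voxel_coords)
    sums = [0, 0, 0]
    for z, y, x in voxel_coords:
        sums[0] += z
        sums[1] += y
        sums[2] += x
    return tuple(round(s / n) for s in sums)
```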
- the target area may be a target pathological area.
- the center point coordinates can be directly displayed on the display interface.
- the center point may be circled in the 3D image according to the center point position information.
- the embodiment of the present application can acquire multiple 3D magnetic resonance images of the target part of a living body; perform preprocessing operations on the multiple 3D magnetic resonance images to obtain multiple preprocessed 3D magnetic resonance images; extract image features of the multiple preprocessed MRI 3D images; perform fusion processing on the image features of the multiple MRI 3D images to obtain fusion features; determine, according to the fusion features, the pathological type corresponding to each voxel in the MRI 3D image; select target voxels whose pathological type is the preset pathological type from the 3D magnetic resonance image to obtain the position information of the target voxels; select the target 3D magnetic resonance image from the multiple 3D magnetic resonance images; and, based on the position information of the target voxels, locate the target pathological area on the target 3D magnetic resonance image.
- This solution can preprocess three-dimensional images of different modalities and extract the different image features (such as blood features, water molecule features, fat features, etc.) provided by these preprocessed images; it can then obtain fusion features containing multi-modal information based on these image features, perform localization directly according to the fusion features, and display the location area of the pathological tissue on the target three-dimensional image. Therefore, the solution can improve the readability of the recognition result, reduce the misjudgment rate of pathological tissues, and improve the accuracy of pathological area recognition.
- this embodiment will provide a specific area positioning application scenario.
- the area positioning device is specifically integrated into a medical image storage and transmission system (Picture Archiving and Communication Systems, PACS)
- PACS Picture Archiving and Communication Systems
- PACS will automatically acquire multiple 3D magnetic resonance images of the patient’s thoracic cavity and extract the image features of these 3D magnetic resonance images; then perform fusion processing on the image features of these 3D magnetic resonance images to obtain the fusion features.
- according to the fusion feature, the pathological type corresponding to each voxel in the 3D magnetic resonance image is determined; then the target voxels whose pathological type is the breast tumor type are selected from the 3D magnetic resonance image to obtain the position information of the target voxels; based on the position information of the target voxels, pathological analysis is performed on the patient's thoracic cavity to obtain pathological information of breast tumors.
- PACS is applied in the imaging department of a hospital. Its main task is to store all kinds of daily medical images, such as images produced by magnetic resonance, CT, ultrasound, infrared instruments, and microscopes, in digital form through various interfaces (analog, DICOM, network, etc.), so that they can be quickly retrieved under certain authorization when needed; some auxiliary diagnosis management functions are also added.
- the PACS calls the DCE 3D sequence and the DWI 3D sequence of the target patient from the local memory.
- Figure 3a is a certain layer of DCE and DWI images in a multimodal MRI 3D image.
- in the DCE image, the breast and its internal breast pathological tissue, as well as the chest cavity and internal organs such as the heart, can be clearly seen.
- in the DWI sequence, only the breast and its internal breast pathological tissue can be clearly seen. Due to the different composition of the pathological tissue and the heart, the thoracic cavity and the heart inside it appear as low signal in the figure, that is, black.
- when the DCE three-dimensional sequence and the DWI three-dimensional sequence are used together as the multimodal magnetic resonance three-dimensional images to predict the pathological tissue, the image of the breast and its internal breast pathological tissue can be obtained directly, while the thoracic cavity and the heart inside it are displayed in black. The thoracic cavity and the heart therefore do not affect the judgment of the breast pathology by this scheme, and there is no need to crop the multi-modal MRI 3D images to a local breast image to reduce the impact of the heart.
- PACS will also perform registration operations on the acquired DCE and DWI images to improve the accuracy of pathological tissue recognition.
- PACS records the DCE three-dimensional sequence before the contrast agent injection of the target patient as DCE_T0, the DCE-MRI three-dimensional sequence after the contrast agent injection as DCE_Ti, and the DWI three-dimensional sequence as DWI_bi.
- the i in DCE_Ti represents the DCE sequence at the i-th time point after the patient is injected with the contrast agent.
- the b in DWI_bi represents the diffusion sensitivity factor, and i represents the DWI sequence with the i-th b value.
- Fig. 3b is a schematic diagram of the specific embodiment process provided by this embodiment, including a tumor prediction part and a model training part.
- in the model training part, PACS registers the acquired DWI_bi and DCE_Ti according to the acquired DCE_T0 to obtain registered training images, uses these images to train the 3D U-Net model, and obtains the trained 3D U-Net model.
- in the tumor prediction part, PACS registers the acquired DWI_bi and DCE_Ti according to the acquired DCE_T0, so that the resolution, image size, and coordinate system of DWI_bi, DCE_Ti, and DCE_T0 are the same, and then inputs the registered data into the trained 3D U-Net model to locate the pathological area.
- the value of i in DCE_Ti is 3, and the value of b in DWI_bi may be, for example, 600 or 800; in the following, DWI_b800 is used.
- PACS uses the current world coordinate system as the reference coordinate system, and registers the DCE_T0, DCE_T3 and DWI_b800 coordinate systems to the current world coordinate system.
- PACS uses the voxel spacing of the DCE three-dimensional sequence before the contrast agent injection of the target patient as the reference voxel spacing to register other DCE three-dimensional sequences and DWI three-dimensional sequences. That is, the voxel spacing of DCE_T0 is used to register the voxel spacing of DCE_T3 and DWI_b800 sequences.
- the origin of the coordinate system (x 0 , y 0 , z 0 ) is stored in the PACS memory, and when the PACS obtains the DWI_b800 image, it can be read from the image.
- the spatial rectangular coordinate system, that is, the right-handed coordinate system, is used as the current world coordinate system, so the rotation matrix R is taken as the corresponding rotation matrix:
- the voxel spacing of DCE_T0 in the x, y, and z directions is (0.84, 0.84, 1.6), which is recorded as the reference voxel spacing.
- DWI_b800 is transformed to (429,214,96) by linear interpolation of 3D data
- the size needs to be further adjusted, that is, DWI_b800 of size (429, 214, 96) is padded and cropped to (448, 448, 72).
- the missing places are filled with 0, and the extra places are deleted.
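- The padding/cropping described above (missing places filled with 0, extra places deleted) can be sketched as follows; the (depth, height, width) indexing convention is an assumption for illustration:

```python
def pad_or_crop(vol, target):
    # Register an image to a reference size: each axis is cropped if too
    # long and zero-padded if too short, e.g. (429, 214, 96) -> (448, 448, 72).
    tz, ty, tx = target

    def fit_row(row, n):
        # crop to n entries, then pad the remainder with 0
        return row[:n] + [0] * max(0, n - len(row))

    def fit_plane(plane, ny, nx):
        plane = [fit_row(r, nx) for r in plane[:ny]]
        while len(plane) < ny:
            plane.append([0] * nx)
        return plane

    vol = [fit_plane(p, ty, tx) for p in vol[:tz]]
    while len(vol) < tz:
        vol.append([[0] * tx for _ in range(ty)])
    return vol
```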
- the above DCE_T0, DCE_T3, and DWI_b800 are all three-dimensional images. After the above three steps, three sets of MRI three-dimensional images with the same size and the same coordinate system are obtained after registration.
- the 3D U-Net model is used to process the three-dimensional image.
- Figure 3c is a schematic diagram of the structure of the 3D U-Net model.
- the 3D U-Net model is based on the U-Net model, with all 2D operations in the network replaced by 3D operations, such as three-dimensional convolution, three-dimensional pooling, and three-dimensional upsampling.
- the 3D U-Net model is also an encoder-decoder structure. The encoder is used to analyze the global information of the three-dimensional image and perform feature extraction and analysis on it.
- the 3D U-Net model has been trained in advance; it includes and uses the following convolution operations:
- Each layer of neural network contains two convolutions with a three-dimensional size of 3*3*3.
- the decoder is used to repair target details, and the decoder includes and executes the following operations before the last layer:
- Each layer of neural network contains a 2*2*2 three-dimensional deconvolution layer for upsampling, with a step size of 2.
- the DCE_T0, DCE_T3, and DWI_b800 three-dimensional images obtained in the previous step are used as the three channels of the input data and input into the trained 3D U-Net model to extract the corresponding image features.
- a 1*1 convolution kernel is used to perform convolution operations, and each 64-dimensional feature vector is mapped to the output layer of the network.
- the image features of the DCE_T0 three-dimensional image, the DCE_T3 three-dimensional image, and the DWI_b800 three-dimensional image are extracted. These image features are mapped to the output layer of the network in the last layer of the 3D U-Net decoder, and the fusion feature is obtained.
- the output layer of the 3D U-Net model can determine the probability of the pathological type corresponding to the voxel in the magnetic resonance three-dimensional image according to the fusion feature.
- the probability can be normalized by min-max normalization: x' = (x - x min) / (x max - x min), where x max is the maximum value of the sample data, x min is the minimum value of the sample data, and x' is the normalized result.
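- The normalization above, defined by x max, x min, and the normalized result x', is min-max scaling; a minimal sketch:

```python
def min_max_normalize(xs):
    # x' = (x - x_min) / (x_max - x_min): map the sample data to [0, 1]
    x_min, x_max = min(xs), max(xs)
    if x_max == x_min:
        return [0.0 for _ in xs]  # degenerate case: all values are equal
    return [(x - x_min) / (x_max - x_min) for x in xs]
```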
- the probability of the pathological type corresponding to each voxel in the 3D magnetic resonance image is determined from the fusion features, and the preset pathology dictionary is used to query the pathological type corresponding to the probability, so as to determine the pathological type corresponding to the voxel in the 3D magnetic resonance image.
- the preset pathology dictionary is stored in the local memory of the PACS, and by calling the preset pathology dictionary, the pathology type corresponding to the voxel in the magnetic resonance three-dimensional image can be determined.
- Table 3 is an example of the format of the preset pathology dictionary. As shown in the table, the obtained probability falls into one of the intervals 0, (0, x], (x, y), [y, 1), and 1. The pathological type corresponding to probability 0 is A; the type for probabilities greater than 0 and less than or equal to x is B; the type for probabilities greater than x and less than y is C; the type for probabilities greater than or equal to y and less than 1 is D; and the type for probability 1 is E.
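- The dictionary lookup described above can be sketched as follows; the thresholds x and y and the type labels A to E are placeholders from the table format, not values disclosed in the text:

```python
def lookup_type(prob, x, y):
    # Map a predicted probability to a pathological type following the
    # interval layout of Table 3: 0 -> A, (0, x] -> B, (x, y) -> C,
    # [y, 1) -> D, 1 -> E. x and y are hypothetical thresholds.
    if prob == 0:
        return "A"
    if prob <= x:
        return "B"
    if prob < y:
        return "C"
    if prob < 1:
        return "D"
    return "E"
```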
- in this way, the pathological type of every voxel in the MRI 3D image can be determined. From the MRI 3D image, the voxels whose pathological type is the breast malignant tumor type are selected as target voxels. Table 4 shows an example of the position information of the target voxels, where the position information may include the coordinate values, in the 3D image, of all target voxels of the breast malignant tumor type.
- if the pathological type is the breast malignant tumor type, the target pathological area is the breast malignant tumor area; if the pathological type is the breast tumor type, the target pathological area is the breast tumor area; and so on.
- the DCE_T0 image and the preset 3D empty image are selected from a plurality of three-dimensional magnetic resonance images as the target three-dimensional magnetic resonance image.
- the size and resolution of the preset 3D empty image are the same as those of the DCE_T0 image, and each pixel is of the gray value type with value 0. The preset 3D empty image is stored in the local memory of the PACS.
- the coordinate values of all target voxels are applied to the DCE_T0 image, the corresponding voxel set is obtained on the DCE_T0 image, and the values of these voxels are set to the gray value type with value 1, to highlight the contour of the pathological tissue on the DCE_T0 image and improve the readability of the displayed image.
- the coordinate values of all target voxels are applied to the preset 3D empty image, the corresponding voxel set is obtained on the preset 3D empty image, and the values of these voxels are set to the gray value type with value 1.
- the voxel value of the center point is set to 0.8, to display the location of the center point on the DCE_T0 image and the preset 3D empty image, and to display the value of the center point on the preset 3D empty image.
- Figure 3d is the output result provided by this embodiment. As shown in the figure, it includes three views of the contour of the pathological tissue displayed on the DCE_T0 image and a side view of the contour on the preset 3D empty image. All contours of the pathological tissue are highlighted, the center point is displayed with a "cross" on all images, and the value of the center point is marked on the preset 3D empty image.
- PACS obtains multiple MRI 3D images of the patient’s thoracic cavity from the local memory.
- the image features of the multiple MRI 3D images are extracted; the image features are then fused to obtain the fusion feature, and the pathological type corresponding to each voxel in the 3D magnetic resonance image is determined according to the fusion feature; the target voxels whose pathological type is the breast tumor type are then selected from the 3D magnetic resonance image to obtain their position information, and the breast tumor area is located based on the position information of the target voxels.
- in this process, PACS can extract the different image characteristics (such as blood features, water molecule features, fat features, etc.) provided by the magnetic resonance three-dimensional images of different modalities. Based on these image features, PACS obtains fusion features containing multi-modal information and directly locates the breast tumor area based on the fusion features. As a result, this solution can reduce the probability of misjudging other tissues and organs as pathological tissues, thereby improving the accuracy of pathological area positioning.
- an embodiment of the present application also provides an image area positioning device, which may be specifically integrated in an electronic device.
- the electronic device may include magnetic resonance image acquisition equipment, magnetic resonance imaging equipment, medical image data processing equipment, medical image data storage equipment, and the like.
- the image area positioning device may include an acquisition module 401, an extraction module 402, a fusion module 403, a classification module 404, a screening module 405, and a positioning module 406, as follows:
- the obtaining module 401 is used to obtain multiple three-dimensional images of the target part, where the multiple three-dimensional images include three-dimensional images of different modalities;
- the extraction module 402 is used to extract image features of multiple three-dimensional images
- the fusion module 403 is used to perform fusion processing on image features of multiple three-dimensional images to obtain fusion features;
- the classification module 404 is configured to determine the voxel type corresponding to the voxel in the three-dimensional image according to the fusion feature;
- the screening module 405 is used to select a target voxel whose voxel type is a preset voxel type from the three-dimensional image to obtain position information of the target voxel;
- the positioning module 406 is used to locate the target area based on the position information of the target voxel.
- the extraction module 402 may include a preprocessing module and an extraction sub-module, as follows:
- the preprocessing module is used to perform preprocessing operations on multiple 3D images to obtain multiple preprocessed 3D images
- the extraction sub-module is used to extract image features of multiple preprocessed three-dimensional images.
- the extraction submodule can be specifically used for:
- the original voxel spacing of multiple three-dimensional images is transformed into a reference voxel spacing.
- the extraction submodule can also be specifically used for:
- the fusion module 403 may include a weight acquisition module and a weighting module, as follows:
- the weight acquisition module is used to acquire preset feature weights corresponding to image features
- the weighting module is used to perform weighting processing on the image features of multiple three-dimensional images based on the preset feature weights.
- the weighting module may be specifically used for:
- the classification module 404 may include the following:
- the determination module is used to determine the fusion features corresponding to the voxels in the three-dimensional image
- the probability module is used to calculate the probability that the fusion feature corresponding to the voxel belongs to each voxel type, and obtain the probability that the voxel belongs to each voxel type;
- the classification sub-module is used to determine the voxel type corresponding to the voxel according to the probability of each voxel type.
- the positioning module 406 may include an image selection module and a positioning sub-module, as follows:
- the image selection module is used to select a target 3D image from multiple 3D images
- the positioning sub-module is used to locate the target area on the target 3D image based on the position information of the target voxel.
- the positioning sub-module may be specifically used to: determine, based on the position information of the target voxels, the corresponding voxels on the target 3D image, and set the voxel values of the corresponding voxels to a preset value to identify the target area;
- the positioning sub-module may also be specifically used to: average the position information of all target voxels to obtain the position of the center point of the target area, and locate the center point of the target area on the target three-dimensional image.
- each of the above modules can be implemented as an independent entity, or the modules can be combined arbitrarily and implemented as one or several entities;
- for the specific implementation of each of the above modules, please refer to the previous method embodiments, which will not be repeated here.
- in the image area positioning device of this embodiment, the acquiring module 401 acquires multiple three-dimensional images of the target part; the extracting module 402 extracts the image features of the multiple three-dimensional images; the fusion module 403 performs fusion processing on those image features to obtain fusion features; the classification module 404 determines the voxel type corresponding to each voxel in the 3D images according to the fusion features; the screening module 405 selects from the 3D images the target voxels whose voxel type is the preset voxel type to obtain the position information of the target voxels; and the positioning module 406 locates the target area based on that position information.
- This solution can extract different image features provided by three-dimensional images of different modalities, and can obtain fusion features containing multi-modal information based on these image features, and directly locate regions based on the fusion features. As a result, this solution can reduce the misjudgment rate, thereby improving the accuracy of regional positioning.
- an embodiment of the present invention also provides a medical image processing device, which includes an image acquisition unit, a processor, and a memory, and the memory stores multiple instructions.
- the medical image processing equipment may have integrated functions such as image acquisition, image analysis, and lesion location.
- the medical image acquisition unit can be used to acquire multiple three-dimensional images of a target part of a living body
- the memory can be used to store image data and multiple instructions
- the processor can be used to read the multiple instructions stored in the memory to perform the following steps: obtain multiple three-dimensional images of the target part, where the multiple three-dimensional images include three-dimensional images of different modalities; extract the image features of the multiple three-dimensional images; perform fusion processing on the image features to obtain fusion features; determine the voxel type corresponding to each voxel in the three-dimensional images according to the fusion features; select from the three-dimensional images the target voxels whose voxel type is the preset voxel type to obtain the position information of the target voxels; and locate the target area based on the position information of the target voxels.
- when performing the step of determining the voxel type corresponding to a voxel in the three-dimensional image according to the fusion feature, the processor specifically executes: determining the fusion feature corresponding to the voxel in the three-dimensional image; calculating the probability that the fusion feature corresponding to the voxel belongs to each voxel type, to obtain the probability that the voxel belongs to each voxel type; and determining the voxel type corresponding to the voxel according to those probabilities;
- when performing the step of fusing the image features of the multiple three-dimensional images, the processor specifically executes: acquiring preset feature weights corresponding to the image features, and weighting the image features of the multiple three-dimensional images based on the preset feature weights;
- when performing the step of weighting the image features of the multiple three-dimensional images based on the preset feature weights, the processor specifically executes: determining the multiple image features corresponding to voxels at the same position in the multiple three-dimensional images; weighting these image features with the preset feature weights to obtain multiple weighted image features; and accumulating the weighted image features to obtain the fusion feature of the voxel;
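A minimal sketch of this weighted fusion, assuming two modalities with made-up feature maps and made-up preset weights: each modality's per-voxel features are scaled by its weight, and the weighted features are then accumulated voxel by voxel.

```python
import numpy as np

# Illustrative per-modality feature maps for the same volume,
# shape (channels, D, H, W); values and names are assumptions.
feat_mod_a = np.ones((4, 2, 2, 2)) * 0.8  # e.g. features from one modality
feat_mod_b = np.ones((4, 2, 2, 2)) * 0.2  # e.g. features from another

# Preset feature weights for each modality (illustrative values).
w_a, w_b = 0.6, 0.4

# Weight each modality's features, then accumulate them to obtain the
# fusion feature at every voxel position.
fused = w_a * feat_mod_a + w_b * feat_mod_b
```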
- when performing the step of locating the target area based on the position information of the target voxels, the processor specifically executes: selecting a target 3D image from the multiple 3D images, and locating the target area on the target 3D image based on the position information of the target voxels;
- when performing the step of locating the target area on the target 3D image based on the position information of the target voxels, the processor specifically executes: determining the corresponding voxels on the target 3D image based on the position information of the target voxels, and setting the voxel values of the corresponding voxels to a preset value to identify the target area;
- alternatively, when performing that step, the processor specifically executes: averaging the position information of all target voxels to obtain the position of the center point of the target area, and locating the center point of the target area on the target three-dimensional image based on that position.
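The center-point computation above is just the mean of the target voxel coordinates; the coordinates below are hypothetical.

```python
import numpy as np

# Hypothetical (z, y, x) coordinates of all target voxels.
target_positions = np.array([[10, 20, 30],
                             [12, 22, 34],
                             [14, 24, 38]])

# Averaging the position information of all target voxels gives the
# position of the center point of the target area.
center = target_positions.mean(axis=0)
```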
- when performing the step of extracting the image features of the multiple three-dimensional images, the processor specifically executes: preprocessing the multiple three-dimensional images to obtain multiple preprocessed three-dimensional images, and extracting the image features of the preprocessed three-dimensional images;
- when preprocessing the multiple three-dimensional images, the processor specifically executes: acquiring a reference coordinate system and the original coordinate systems of the multiple three-dimensional images, and transforming the original coordinate systems into the reference coordinate system.
- the image area locating device provided in the above embodiment can also be used to locate the breast tumor area.
- the acquiring module 401 acquires multiple three-dimensional images of the target part, where the multiple three-dimensional images include three-dimensional images of different modalities;
- the extraction module 402 is configured to extract image features of the multiple three-dimensional images
- the fusion module 403 performs fusion processing on the image features of the multiple three-dimensional images to obtain fusion features
- the classification module 404 determines the voxel type corresponding to the voxel in the three-dimensional image according to the fusion feature
- the screening module 405 selects a target voxel whose voxel type is a breast tumor type from the three-dimensional image to obtain position information of the target voxel;
- the positioning module 406 locates the breast tumor area based on the position information of the target voxel.
- the processor of the above-mentioned medical image processing device is used to read the multiple instructions stored in the memory to perform the following steps: obtain multiple three-dimensional images of the target part; extract their image features; fuse the image features to obtain fusion features; determine the voxel type of each voxel according to the fusion features; select the target voxels whose voxel type is the breast tumor type; and locate the breast tumor area based on the position information of the target voxels.
- Fig. 5a shows a schematic diagram of the internal structure of the medical image processing device involved in the embodiment of the present invention, specifically:
- the medical image processing equipment may include a processor 501 with one or more processing cores, a memory 502 with one or more computer-readable storage media, a power supply 503, an input unit 504, an image acquisition unit 505, and other components.
- the structure shown in FIG. 5a does not constitute a limitation on the medical image processing device, which may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently. Among them:
- the processor 501 is the control center of the medical image processing equipment. It uses various interfaces and lines to connect the parts of the entire device; it runs or executes the software programs and/or modules stored in the memory 502 and calls the data in the memory 502 to execute the various functions of the device and process data, thereby monitoring the medical image processing device as a whole.
- the processor 501 may include one or more processing cores; preferably, the processor 501 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor need not be integrated into the processor 501.
- the memory 502 can be used to store software programs and modules.
- the processor 501 executes various functional applications and data processing by running the software programs and modules stored in the memory 502.
- the memory 502 may mainly include a storage program area and a storage data area.
- the storage program area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the storage data area may store data created through the use of the medical image processing equipment, and the like.
- the memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
- the memory 502 may further include a memory controller to provide the processor 501 with access to the memory 502.
- the medical image processing equipment also includes a power supply 503 for supplying power to various components.
- the power supply 503 may be logically connected to the processor 501 through a power management system, so that functions such as charging, discharging, and power consumption management can be managed through the power management system.
- the power supply 503 may also include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other components.
- the medical image processing equipment may further include an input unit 504, which may be used to receive inputted digital or character information and generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
- the image acquisition unit 505 includes a magnet, a gradient sub-unit, and a radio frequency sub-unit.
- their main technical performance parameters are the magnetic induction intensity, magnetic field uniformity, magnetic field stability, spatial range of the fringe field, magnetic induction intensity and linearity of the gradient field, sensitivity of the radio frequency coils, and so on. These components are responsible for the generation, detection, and encoding of magnetic resonance signals, that is, for the acquisition of magnetic resonance three-dimensional images.
- the image acquisition unit 505 can superimpose a gradient magnetic field on the static magnetic field, and can arbitrarily change the gradient direction of the gradient magnetic field, thereby successfully performing thin layer selective excitation and resonance frequency spatial encoding.
- Fig. 5b is a schematic diagram of the principle of the image acquisition unit 505.
- the image acquisition unit 505 may include physical components such as a main magnet, a radio frequency subunit, and a gradient subunit.
- the main magnet is used to generate the main field strength, that is, the main magnetic field. It may be a permanent magnet, a normally conducting magnet, a superconducting magnet, and so on. For example, when a person's body or a part of the body is placed in the main magnetic field, the nuclear spins associated with the hydrogen nuclei in the tissue water of the body are polarized.
- the gradient subunit can generate a gradient magnetic field to produce an NMR echo signal, perform spatial positioning encoding of the NMR signal and flow velocity phase encoding of a flowing liquid, and apply a diffusion-sensitive gradient field during DWI imaging.
- the gradient subunit may include gradient coils, gradient amplifiers, digital-to-analog converters, gradient controllers, gradient coolers, and so on.
- the radio frequency sub-unit is responsible for transmitting, amplifying, and receiving radio frequency signals, that is, for exciting the hydrogen nuclei in a living or non-living body to generate magnetic resonance signals and for receiving those signals.
- the radio frequency subunit may include a radio frequency generator, a radio frequency amplifier, and a radio frequency coil.
- the radio frequency coil of the medical image processing device may be a quadrature coil.
- a surface coil may be selected.
- phased array surface coils and integrated phased array surface coils, etc. can also be used.
- the actual process of acquiring a three-dimensional magnetic resonance image of a living body or a non-living body can be divided into two steps.
- the first step is thin-layer selective excitation and spatial encoding; the useful information contained in the encoded volume is then determined.
- the simplest imaging is single thin-layer imaging, whose steps include: selectively exciting the nuclei in the thin layer to be studied, and two-dimensionally encoding the information obtained from the thin layer; the width of the thin layer is set by the gradient slope and the radio frequency pulse bandwidth.
- the spatial encoding in a single thin layer can be performed using two-dimensional high-resolution spectroscopy.
- the spatial encoding method in a thin layer is to apply a phase encoding gradient first, and then a frequency encoding or readout gradient, and the application object is a series of polarized spins in the thin layer.
- the thin layer selection gradient is disconnected, and the second orthogonal gradient Gy is applied within a fixed time period t.
- the nuclei precess at different frequencies, determined simultaneously by their positions relative to the second gradient;
- the final result of this phase encoding is distance information along the Y direction;
- this gradient is then disconnected, and a third gradient Gx, orthogonal to the first two, is applied; the signal is encoded only at the selected appropriate time t_x2;
- properly and continuously changing the frequency value finally provides a spatial code along the X axis; this process can be repeated by gradually increasing the value of the phase encoding gradient.
- the medical image processing equipment may also include a display unit and a cooling system, etc., which will not be repeated here.
- the medical image processing equipment may specifically include one or more instruments.
- the medical image processing device may specifically be constituted by an instrument, such as a nuclear magnetic resonance apparatus, a nuclear magnetic resonance medical image processing device, and so on.
- a processor 501, a memory 502, a power supply 503, an input unit 504, and an image acquisition unit 505 are embedded in the medical magnetic resonance imaging device.
- the medical image processing equipment may also be composed of multiple instruments, such as a nuclear magnetic resonance image acquisition system.
- the image acquisition unit 505 is embedded in the nuclear magnetic resonance instrument bed of the nuclear magnetic resonance image acquisition system, and the processor 501, the memory 502, the power supply 503 and the input unit 504 are embedded in the console.
- in the medical image processing device of this embodiment, the processor 501 can obtain multiple three-dimensional images of the target part; extract the image features of the multiple three-dimensional images; perform fusion processing on those image features to obtain fusion features; determine the voxel type corresponding to each voxel in the three-dimensional images according to the fusion features; select from the three-dimensional images the target voxels whose voxel type is the preset voxel type to obtain the position information of the target voxels; and locate the target area based on that position information.
- in this solution, the processor 501 can extract the different image features provided by three-dimensional images of different modalities, obtain fusion features containing multi-modal information from these image features, and locate regions directly from the fusion features. As a result, this solution can reduce the misjudgment rate, thereby improving the accuracy of target area positioning.
- an embodiment of the present application provides a memory in which multiple instructions are stored, and the instructions can be loaded by a processor to execute the steps in any image region positioning method provided in the embodiments of the present application.
- the instruction can perform the following steps:
- the target area is located based on the location information of the target voxel.
- the memory may include: read only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or optical disk, etc.
Abstract
An image area positioning method and device, and a medical image processing apparatus. The method includes: acquiring multiple three-dimensional images of a target part (S101), where the multiple three-dimensional images include three-dimensional images of different modalities; extracting image features of the multiple three-dimensional images (S102); performing fusion processing on the image features of the multiple three-dimensional images to obtain fusion features (S103); determining, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images (S104); selecting from the three-dimensional images the target voxels whose voxel type is a preset voxel type, to obtain position information of the target voxels (S105); and locating the target area based on the position information of the target voxels (S106). In this method, fusion features can be obtained from the different image features provided by three-dimensional images of different modalities, and the target area can be located directly from the fusion features. This reduces the probability of misjudging the target area and thereby improves the accuracy of target area positioning.
Description
This application claims priority to the Chinese patent application with application number 201910175745.4, entitled "Image area positioning method, device, and medical image processing equipment", filed with the China Patent Office on March 8, 2019, and to the Chinese patent application with application number 201910684018.0, entitled "Image area positioning method, device, and medical image processing equipment", filed with the China Patent Office on March 8, 2019, the entire contents of which are incorporated herein by reference.
This application relates to the field of image processing technology, and in particular to an image area positioning technique.

In recent years, machine learning technology centered on deep learning has attracted attention. Semantic segmentation is a typical problem of using machine learning for image positioning: it involves taking some raw data (for example, medical three-dimensional images) as input and converting it into a mask with a highlighted region of interest. The region of interest may be called the target area; the target area may be, for example, a breast tumor area.

Current methods of identifying target areas in three-dimensional images suffer from inaccurate identification.
Summary of the Invention
The embodiments of this application provide an image area positioning method and device and a medical image processing apparatus, which can improve the accuracy of target identification.

An embodiment of this application provides an image area positioning method, including:

acquiring multiple three-dimensional images of a target part, where the multiple three-dimensional images include three-dimensional images of different modalities;

extracting image features of the multiple three-dimensional images;

performing fusion processing on the image features of the multiple three-dimensional images to obtain fusion features;

determining, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images;

selecting from the three-dimensional images the target voxels whose voxel type is a preset voxel type, to obtain position information of the target voxels;

locating the target area based on the position information of the target voxels.
In some embodiments, determining the voxel type corresponding to a voxel in the three-dimensional image according to the fusion feature includes: determining the fusion feature corresponding to the voxel in the three-dimensional image; calculating the probability that the fusion feature corresponding to the voxel belongs to each voxel type, to obtain the probability that the voxel belongs to each voxel type; and determining the voxel type corresponding to the voxel according to those probabilities.

In some embodiments, performing fusion processing on the image features of the multiple three-dimensional images includes: acquiring preset feature weights corresponding to the image features; and weighting the image features of the multiple three-dimensional images based on the preset feature weights.

In some embodiments, weighting the image features of the multiple three-dimensional images based on the preset feature weights includes: determining the multiple image features corresponding to voxels at the same position in the multiple three-dimensional images; weighting the multiple image features with the preset feature weights to obtain multiple weighted image features; and accumulating the weighted image features to obtain the fusion feature of the voxel.

In some embodiments, locating the target area based on the position information of the target voxels includes: selecting a target three-dimensional image from the multiple three-dimensional images; and locating the target area on the target three-dimensional image based on the position information of the target voxels.

In some embodiments, locating the target area on the target three-dimensional image based on the position information of the target voxels includes: determining the corresponding voxels on the target three-dimensional image based on the position information of the target voxels; and setting the voxel values of the corresponding voxels to a preset value to identify the target area.

In some embodiments, locating the target area on the target three-dimensional image based on the position information of the target voxels includes: averaging the position information of all target voxels to obtain the position of the center point of the target area; and locating the center point of the target area on the target three-dimensional image based on the center point position.

In some embodiments, extracting the image features of the multiple three-dimensional images includes: preprocessing the multiple three-dimensional images to obtain multiple preprocessed three-dimensional images; and extracting the image features of the preprocessed three-dimensional images.

In some embodiments, preprocessing the multiple three-dimensional images includes: acquiring a reference coordinate system and the original coordinate systems of the multiple three-dimensional images; and transforming the original coordinate systems of the multiple three-dimensional images into the reference coordinate system.

In some embodiments, preprocessing the multiple three-dimensional images includes: acquiring a reference voxel spacing and the original voxel spacings of the multiple three-dimensional images; and transforming the original voxel spacings of the multiple three-dimensional images into the reference voxel spacing.

In some embodiments, preprocessing the multiple three-dimensional images includes: acquiring a reference size and the original sizes of the multiple three-dimensional images; and transforming the original sizes of the multiple three-dimensional images into the reference size.
An embodiment of this application further provides an image area positioning device, including:

an acquiring module for acquiring multiple three-dimensional images of a target part, where the multiple three-dimensional images include three-dimensional images of different modalities;

an extracting module for extracting image features of the multiple three-dimensional images;

a fusion module for performing fusion processing on the image features of the multiple three-dimensional images to obtain fusion features;

a classification module for determining, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images;

a screening module for selecting from the three-dimensional images the target voxels whose voxel type is a preset voxel type, to obtain position information of the target voxels;

a positioning module for locating the target area based on the position information of the target voxels.

An embodiment of this application further provides a medical image processing device, which includes a medical image acquisition unit, a processor, and a memory, where:

the medical image acquisition unit is used to acquire multiple three-dimensional images of a target part of a living body;

the memory is used to store image data and multiple instructions;

the processor is used to read the multiple instructions stored in the memory to perform the following steps:

acquiring multiple three-dimensional images of the target part, where the multiple three-dimensional images include three-dimensional images of different modalities; extracting image features of the multiple three-dimensional images; performing fusion processing on the image features to obtain fusion features; determining, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images; selecting from the three-dimensional images the target voxels whose voxel type is a preset voxel type, to obtain position information of the target voxels; and locating the target area based on the position information of the target voxels.

When performing the step of determining the voxel type corresponding to a voxel in the three-dimensional image according to the fusion feature, the processor specifically executes: determining the fusion feature corresponding to the voxel; calculating the probability that the fusion feature belongs to each voxel type, to obtain the probability that the voxel belongs to each voxel type; and determining the voxel type of the voxel according to those probabilities.

When performing the step of fusing the image features of the multiple three-dimensional images, the processor specifically executes: acquiring preset feature weights corresponding to the image features, and weighting the image features of the multiple three-dimensional images based on the preset feature weights.

When performing the step of weighting the image features based on the preset feature weights, the processor specifically executes: determining the multiple image features corresponding to voxels at the same position in the multiple three-dimensional images; weighting these features with the preset feature weights to obtain multiple weighted image features; and accumulating the weighted features to obtain the fusion feature of the voxel.

When performing the step of locating the target area based on the position information of the target voxels, the processor specifically executes: selecting a target three-dimensional image from the multiple three-dimensional images, and locating the target area on the target three-dimensional image based on the position information of the target voxels.
An embodiment of this application provides an image area positioning method, including:

acquiring multiple three-dimensional images of a target part, where the multiple three-dimensional images include three-dimensional images of different modalities;

extracting image features of the multiple three-dimensional images;

performing fusion processing on the image features of the multiple three-dimensional images to obtain fusion features;

determining, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images;

selecting from the three-dimensional images the target voxels whose voxel type is the breast tumor type, to obtain position information of the target voxels;

locating the breast tumor area based on the position information of the target voxels.

In some embodiments, determining the voxel type corresponding to a voxel in the three-dimensional image according to the fusion feature includes: determining the fusion feature corresponding to the voxel; calculating the probability that the fusion feature belongs to each voxel type, to obtain the probability that the voxel belongs to each voxel type; and determining the voxel type of the voxel according to those probabilities.

In some embodiments, performing fusion processing on the image features of the multiple three-dimensional images includes: acquiring preset feature weights corresponding to the image features; and weighting the image features of the multiple three-dimensional images based on the preset feature weights.

In some embodiments, weighting the image features based on the preset feature weights includes: determining the multiple image features corresponding to voxels at the same position in the multiple three-dimensional images; weighting these image features with the preset feature weights to obtain multiple weighted image features; and accumulating the weighted image features to obtain the fusion feature of the voxel.

In some embodiments, locating the breast tumor area based on the position information of the target voxels includes: selecting a target three-dimensional image from the multiple three-dimensional images; and locating the breast tumor area on the target three-dimensional image based on the position information of the target voxels.

In some embodiments, locating the breast tumor area on the target three-dimensional image based on the position information of the target voxels includes: determining the corresponding voxels on the target three-dimensional image based on the position information of the target voxels; and setting the voxel values of the corresponding voxels to a preset value to identify the breast tumor area.

In some embodiments, locating the breast tumor area on the target three-dimensional image based on the position information of the target voxels includes: averaging the position information of all target voxels to obtain the position of the center point of the breast tumor area; and locating the center point of the breast tumor area on the target three-dimensional image based on the center point position.

In some embodiments, extracting the image features of the multiple three-dimensional images includes: preprocessing the multiple three-dimensional images to obtain multiple preprocessed three-dimensional images; and extracting the image features of the preprocessed three-dimensional images.

In some embodiments, preprocessing the multiple three-dimensional images includes: acquiring a reference coordinate system and the original coordinate systems of the multiple three-dimensional images; and transforming the original coordinate systems into the reference coordinate system.

In some embodiments, preprocessing the multiple three-dimensional images includes: acquiring a reference voxel spacing and the original voxel spacings of the multiple three-dimensional images; and transforming the original voxel spacings into the reference voxel spacing.

In some embodiments, preprocessing the multiple three-dimensional images includes: acquiring a reference size and the original sizes of the multiple three-dimensional images; and transforming the original sizes into the reference size.

An embodiment of this application further provides an image area positioning device, including: an acquiring module for acquiring multiple three-dimensional images of a target part, where the multiple three-dimensional images include three-dimensional images of different modalities; an extracting module for extracting image features of the multiple three-dimensional images; a fusion module for fusing the image features to obtain fusion features; a classification module for determining, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images; a screening module for selecting from the three-dimensional images the target voxels whose voxel type is the breast tumor type, to obtain position information of the target voxels; and a positioning module for locating the breast tumor area based on the position information of the target voxels.

An embodiment of this application further provides a medical image processing device, which includes a medical image acquisition unit, a processor, and a memory, where: the medical image acquisition unit is used to acquire multiple three-dimensional images of a target part of a living body; the memory is used to store image data and multiple instructions; and the processor is used to read the multiple instructions stored in the memory to perform the following steps: acquiring multiple three-dimensional images of the target part, where the multiple three-dimensional images include three-dimensional images of different modalities; extracting image features of the multiple three-dimensional images; performing fusion processing on the image features to obtain fusion features; determining, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images; selecting from the three-dimensional images the target voxels whose voxel type is the breast tumor type, to obtain position information of the target voxels; and locating the breast tumor area based on the position information of the target voxels.

When performing the step of determining the voxel type corresponding to a voxel according to the fusion feature, the processor specifically executes: determining the fusion feature corresponding to the voxel; calculating the probability that the fusion feature belongs to each voxel type, to obtain the probability that the voxel belongs to each voxel type; and determining the voxel type of the voxel according to those probabilities.

When performing the step of fusing the image features of the multiple three-dimensional images, the processor specifically executes: acquiring preset feature weights corresponding to the image features, and weighting the image features of the multiple three-dimensional images based on the preset feature weights.

When performing the step of weighting the image features based on the preset feature weights, the processor specifically executes: determining the multiple image features corresponding to voxels at the same position in the multiple three-dimensional images; weighting these features with the preset feature weights; and accumulating the weighted features to obtain the fusion feature of the voxel.

When performing the step of locating the breast tumor area based on the position information of the target voxels, the processor specifically executes: selecting a target three-dimensional image from the multiple three-dimensional images, and locating the breast tumor area on the target three-dimensional image based on the position information of the target voxels.
The embodiments of this application can acquire multiple three-dimensional images of a target part, where the multiple three-dimensional images include three-dimensional images of different modalities; extract the image features of the multiple three-dimensional images; fuse those image features to obtain fusion features; determine, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images; select from the three-dimensional images the target voxels whose voxel type is a preset voxel type, to obtain the position information of the target voxels; and locate the target area based on that position information.

In the embodiments of this application, fusion features can be obtained from the different image features provided by three-dimensional images of different modalities, and the target area can be located directly from the fusion features. Because multiple three-dimensional images of different modalities are used, the subsequent region positioning steps can analyze and process the data from multiple angles, which reduces the probability of misjudgment and thereby improves the accuracy of target area positioning.
In order to explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; for those skilled in the art, other drawings can be obtained from these drawings without creative work.
Fig. 1a is a schematic diagram of a scene of the image area positioning method provided by an embodiment of this application;

Fig. 1b is a schematic flowchart of the image area positioning method provided by an embodiment of this application;

Fig. 1c shows the process of a 3D convolution kernel performing a convolution operation on a 3D image sequence, provided by an embodiment of this application;

Fig. 1d is a schematic diagram of feature fusion provided by an embodiment of this application;

Fig. 1e is a schematic diagram of another feature fusion provided by an embodiment of this application;

Fig. 1f is a schematic diagram of the relationship between NB, HMM, CRF, and LR provided by an embodiment of this application;

Fig. 2a is another schematic flowchart of the image area positioning method provided by an embodiment of this application;

Fig. 2b is a schematic diagram of feature fusion based on preset weights provided by an embodiment of this application;

Fig. 2c is a schematic diagram of another feature fusion based on preset weights provided by an embodiment of this application;

Fig. 2d is a schematic diagram of a scene of determining corresponding voxels on a target image provided by an embodiment of this application;

Fig. 3a is a schematic diagram of DCE and DWI images provided by an embodiment of this application;

Fig. 3b is a schematic flowchart of a specific embodiment provided by an embodiment of this application;

Fig. 3c is a schematic structural diagram of the 3D U-Net model provided by an embodiment of this application;

Fig. 3d is a schematic diagram of the output results provided by this embodiment;

Fig. 4 is a schematic structural diagram of the image area positioning device provided by an embodiment of this application;

Fig. 5a is a schematic diagram of the internal structure of the medical image processing device provided by an embodiment of this application;

Fig. 5b is a schematic diagram of the principle of the acquisition unit provided by an embodiment of this application.
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those skilled in the art without creative work fall within the scope of protection of this application.

The embodiments of this application provide an image area positioning method and device and a medical image processing apparatus.

The image area positioning device may be specifically integrated in an electronic device, which may include a magnetic resonance imaging device, a medical image data processing device, a medical image data storage device, and so on.

Fig. 1a is a schematic diagram of a scene of the image area positioning method provided by an embodiment of this application. Referring to Fig. 1a, the electronic device can acquire multiple three-dimensional (3D) images of a target part, where the multiple three-dimensional images include three-dimensional images of different modalities, for example three-dimensional images A, B, and C in Fig. 1a; extract the image features of the multiple three-dimensional images to obtain image features a, b, and c respectively; perform fusion processing on the image features of the multiple three-dimensional images to obtain fusion features; determine, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional images; select from the three-dimensional images the target voxels whose voxel type is the preset voxel type, to obtain the position information of the target voxels; and locate the target area based on the position information of the target voxels.

Detailed descriptions are given below. It should be noted that the numbering of the following embodiments does not limit the preferred order of the embodiments.

In the embodiments of this application, the description is given from the perspective of the image area positioning device, which may be specifically integrated in an electronic device; the electronic device may include a magnetic resonance image acquisition device, a magnetic resonance imaging device, a medical image data processing device, a medical image data storage device, and so on.
In this embodiment, an image area positioning method is provided. As shown in Fig. 1b, the specific flow of the image area positioning method may be as follows:

S101. Acquire multiple three-dimensional images of a target part.

The target part may refer to a component part of the body of a living organism, such as a human, a cat, a dog, or another animal or plant; it may also refer to a component part of a non-living object, such as a human tissue section, an animal specimen, or a metabolite of a living organism; it may also include a part of a model in a three-dimensional computer-vision model, for example, a patient's chest, the brain of a dog specimen, and so on.

A three-dimensional image may refer to a stereoscopic image with the three dimensions of length, width, and height, or to a sequence of continuous two-dimensional images with the three dimensions of length, width, and time. A three-dimensional image may be, for example, a laser hologram, a computer three-dimensional model, a magnetic resonance three-dimensional image, and so on.

There are many ways to acquire multiple three-dimensional images of the target part. In some embodiments, the multiple three-dimensional images to be processed can be obtained from locally stored images or externally stored images; for example, three-dimensional images can be obtained from a local image database, or obtained by communicating with other storage devices over a network.

In some embodiments, the electronic device can also collect three-dimensional images itself and select from them the multiple three-dimensional images to be processed.

In some embodiments, in order to make it easier for the user to select the target part and to improve identification accuracy, the collected three-dimensional image of a certain modality of a certain part can be displayed on the electronic device. The user can preview the displayed image and crop the target part in the preview interface, so that non-target-region information has less impact on the time and accuracy of subsequent image processing, thereby improving the efficiency and accuracy of identifying the target area.

By acquiring multiple three-dimensional images of different modalities of the target part, the subsequent region positioning steps can analyze and process the data from multiple angles, thereby improving identification accuracy.
In some embodiments, the three-dimensional images may be magnetic resonance three-dimensional images; that is, the method can perform region positioning on magnetic resonance three-dimensional images. In this case, the device that collects the magnetic resonance three-dimensional images may be a magnetic resonance image acquisition device.

Specifically, magnetic resonance imaging technology uses the principle of nuclear magnetic resonance (NMR). Based on the different attenuation of the released energy in different structural environments inside a material, and by detecting the emitted electromagnetic waves with an applied gradient magnetic field, the positions and kinds of the atomic nuclei that make up the object can be determined, from which an image of the internal structure of the object can be drawn.

The magnetic resonance signals collected by the magnetic resonance imaging device can go through a series of post-processing steps to generate image sequences of different modalities, such as T1-weighted and T2-weighted images, magnetic resonance angiography (MRA), diffusion weighted imaging (DWI), apparent diffusion coefficient (ADC) maps, fat suppression (FS) images, and dynamic contrast-enhanced (DCE) imaging. These image sequences can produce MRI three-dimensional images with distinctive characteristics, which not only reflect the anatomical form of the human body in three-dimensional space, but also reflect physiological information such as blood flow and cell metabolism.

At present, MRI is widely used in medical diagnosis and has very good resolution for soft tissues such as the bladder, rectum, uterus, vagina, joints, and muscles. Moreover, various MRI parameters can be used for imaging, and multi-modal MRI images can provide rich diagnostic information. In addition, the required section can be freely selected by adjusting the magnetic field, and three-dimensional image sequences can be generated in various directions. Furthermore, because MRI causes no ionizing radiation damage to the human body, it is often used in the detection and diagnosis of diseases of the reproductive system, breast, pelvis, and so on.

In this case, by cropping the target part in the preview interface, the impact of information without pathological content (non-target-region information) on the time and accuracy of subsequent image processing can be reduced, thereby improving the efficiency and accuracy of identifying pathological tissue (target-region information).

A magnetic resonance three-dimensional image sequence is a cube stacked from multiple magnetic resonance two-dimensional pictures.

The magnetic resonance image acquisition device can apply a radio frequency pulse of a specific frequency to the target under examination in a static magnetic field, so that the hydrogen protons inside the target are excited and magnetic resonance occurs. After the pulse stops, the protons generate nuclear magnetic resonance signals during relaxation. Through processes such as receiving the NMR signals, spatial encoding, and image reconstruction, magnetic resonance signals are produced, that is, a magnetic resonance three-dimensional image sequence is acquired.

For example, in some embodiments, magnetic resonance three-dimensional images can be obtained from a medical image data storage system, or obtained by communicating with other storage devices over a network.
S102. Extract the image features of the multiple three-dimensional images.

Image features may include color features, texture features, shape features, spatial relationship features, and so on.

In this embodiment, a height dimension can be added to a neural network so that the length and width dimensions of a two-dimensional neural network become the length, width, and height dimensions of a three-dimensional neural network. After obtaining a three-dimensional model trained from such a three-dimensional network, the three-dimensional image of each modality is input into multiple channels of the three-dimensional model to extract image features. After performing the extraction multiple times, the image features of all the three-dimensional images obtained in step S101 can be obtained.

At present, commonly used three-dimensional image processing networks include three-dimensional convolutional neural networks (3D CNN), three-dimensional fully convolutional networks (3D FCN), and so on.

In some embodiments, in order to counter the effect of convolution and pooling operations on image size and to improve computational efficiency, a preset 3D FCN model can be used to extract the image features of a 3D image, as described below.

An FCN can classify an image at the voxel level, thereby solving the problem of semantic-level image segmentation. Unlike a classic CNN, which uses several fully connected layers after the convolutional layers to map the feature maps produced by the convolutional layers into a fixed-length image feature for classification, an FCN can accept an input image of any size. It uses deconvolution layers to upsample the feature map of the last convolutional layer back to the size of the input image, so that a prediction is produced for every voxel while the spatial information in the original input image is preserved, and finally performs voxel-by-voxel classification on the upsampled feature map. Simply put, the difference between an FCN and a CNN is that the FCN replaces the fully connected layers of the CNN with convolutional layers, so that the network output is no longer a category but a heatmap; at the same time, to counter the effect of convolution and pooling on image size, upsampling is used for restoration.

Compared with a 2D FCN, which convolves 2D images, a 3D FCN considers not only the length and width information of the image but also its height information.

Fig. 1c shows the process of convolving a 3D image sequence with a 3D convolution kernel in a 3D FCN. In the figure, the height dimension of the convolution operation is N: N consecutive frames are stacked into a cube, and the 3D convolution kernel is then used to convolve the cube. In this structure, every feature map in a convolutional layer is connected to multiple adjacent consecutive frames in the previous layer, so height information can be captured. That is, through the convolution across these N frames, the 3D FCN extracts a certain correlation along the height direction.

When a three-dimensional image is input into the preset 3D FCN, the input 3D image passes through multiple convolutional layers and downsampling layers to obtain a heat map, that is, a high-dimensional feature map; the high-dimensional feature map then passes through multiple upsampling layers to obtain the image features.

Specifically, let a 3D convolution kernel of size 1×1×1 slide with stride 1 over a 5×5×5 3D image sequence: it completes a row along the y axis by moving from low to high along the x axis, then completes one picture of the z axis by moving from bottom to top along the y axis, and finally completes the whole 3D image sequence by moving from low to high along the z axis. Feeding every stop position into the 3D FCN network yields category scores for the 5×5×5 positions.

The preset 3D FCN model can be obtained from a locally stored model set or an externally stored model set; for example, the preset 3D FCN model can be obtained by communicating with other storage devices over a network.

It should be noted that because a single 3D convolution kernel provides only one set of weights, it can extract only one type of feature from the cube. Since the feature extraction obtained with one convolution kernel is insufficient, multiple convolution kernels can be added to recognize multiple kinds of features. By using multiple convolution kernels, one per channel, the 3D FCN can extract multiple features of the 3D image.
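The point about multiple kernels can be illustrated with a naive 3D convolution; the volume and kernel values below are toy assumptions (a real 3D FCN would use learned weights and a deep-learning framework), but they show how each kernel, having its own weights, extracts a different feature map:

```python
import numpy as np

# Toy 3x3x3 volume with distinct voxel values.
volume = np.arange(27, dtype=float).reshape(3, 3, 3)

# Two illustrative 2x2x2 kernels: a local-mean kernel and a
# diagonal-difference kernel, each extracting a different feature type.
kernels = [np.ones((2, 2, 2)) / 8.0, np.zeros((2, 2, 2))]
kernels[1][0, 0, 0], kernels[1][1, 1, 1] = -1.0, 1.0

def conv3d(vol, k):
    """Valid (no-padding) 3D convolution, stride 1."""
    kd, kh, kw = k.shape
    d, h, w = vol.shape
    out = np.empty((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = (vol[z:z+kd, y:y+kh, x:x+kw] * k).sum()
    return out

# One feature map per kernel, i.e. one per output channel.
feature_maps = [conv3d(volume, k) for k in kernels]
```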
S103. Perform fusion processing on the image features of the multiple three-dimensional images to obtain fusion features.

After the image features of the multiple three-dimensional images are obtained in step S102, these image features, which reflect the type information of the target part from multiple angles, can be fused; the resulting fusion features are both accurate and diverse.

At present, common fusion methods include serial fusion, parallel fusion, selective fusion, transform fusion, and so on.

Serial fusion combines all the image features in a serial manner to form a new image feature, completing the serial fusion of the features.

Parallel fusion combines all the image features in a parallel manner to form a new image feature, completing the parallel fusion of the features.

Selective fusion selects, for each corresponding dimension of the data, one or more optimal values from all the image features, and finally combines all the selected data into a new feature, completing the selective fusion of the features.

Transform fusion puts all the image features together and uses some mathematical method to transform them into a completely new feature representation, completing the transform fusion of the features.

In some embodiments, Fig. 1d is a schematic diagram of feature fusion. As shown in the figure, after the image features of 3D image A and 3D image B are extracted, the feature points at the same layer and the same position of 3D image A and 3D image B are serially fused to obtain new feature points. After repeating this many times, the fusion features of 3D image A and 3D image B are obtained.

In other embodiments, Fig. 1e is another schematic diagram of feature fusion. As shown in the figure, when extracting image features from the multiple three-dimensional images, the voxels at the same layer and the same position of 3D image A and 3D image B are serially fused, and the fusion feature of the voxel is obtained from the fusion result.
S104. Determine, according to the fusion features, the voxel type corresponding to each voxel in the three-dimensional image.

After the fusion features of step S103 are obtained, the probability that the fusion feature corresponding to a voxel belongs to each voxel type can be calculated from the fusion feature, thereby determining the probability of the voxel type corresponding to the voxel in the three-dimensional image.

Because different fusion features have different value ranges, in order to reduce the influence of the value range on the final result, balance the value ranges of the fusion features, and improve the accuracy of identifying the target area, the fusion features need to be normalized in advance so that their values fall into the interval [0, 1].

Commonly used normalization methods include function normalization, per-dimension normalization, rank normalization, and so on.

Function normalization maps the feature values into the interval [0, 1] through a mapping function, for example min-max normalization, which is a linear mapping. In addition, normalization can also be performed by mapping with a nonlinear function such as the log function.

Per-dimension normalization can also use the min-max method, but the maximum and minimum used are those of the category to which the feature belongs, that is, local maxima and minima are used.

Rank normalization ignores the original value range of the feature, sorts the features directly by size, and assigns each feature a new value according to its rank.
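A minimal sketch of two of these normalization methods on made-up fusion-feature values: min-max (function) normalization and rank normalization both map the values onto [0, 1].

```python
import numpy as np

# Illustrative fusion-feature values for a handful of voxels.
features = np.array([4.0, 8.0, 6.0, 2.0])

# Function (min-max) normalization: a linear map onto [0, 1].
minmax = (features - features.min()) / (features.max() - features.min())

# Rank normalization: ignore the original range and assign each value a
# new value based only on its sort order.
ranks = features.argsort().argsort() / (len(features) - 1)
```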
After the probability of the voxel type corresponding to a voxel in the three-dimensional image is determined from the fusion feature through the above steps, the voxel type corresponding to the voxel can be determined from the probability that the voxel belongs to each voxel type. For example, a dictionary can be used to look up the voxel type corresponding to the probability, thereby determining the voxel type corresponding to the voxel in the three-dimensional image.

The dictionary can be obtained from local memory or from external memory; for example, the dictionary can be obtained from a local database, or obtained by communicating with other storage devices over a network.

A voxel type may refer to the type the voxel represents; for example, voxel types may include pathological types, that is, the classification from a pathological point of view of the condition represented by the voxel, focusing on describing the current symptoms.

Table 1 is a schematic diagram of the dictionary format. As shown in the table, in some embodiments, the probabilities of the voxel types determined from the fusion features fall into the ranges 0, (0, x], (x, y), and [y, 1): a probability of 0 corresponds to voxel type A, a probability greater than 0 and at most x corresponds to voxel type B, a probability greater than x and less than y corresponds to voxel type C, and a probability of at least y and less than 1 corresponds to voxel type D.

Table 1

Probability | 0 | (0,x] | (x,y) | [y,1) |
Voxel type | A | B | C | D |
在一些实施例中,可以通过概率图模型(Probabilistic Graphical Model,GPM)对这些融合特征进行优化,得到更加精细的融合特征,使得识别目标区域的精确度提高。
GPM能够从数学的方向解释3D图中每个体素之间的相关(依赖)关系,即可以使用GPM确定三维图像中体素对应的体素类型。
其中,概率图模型是一类用图形模式表达基于概率相关关系的模型的总称。概率图模型结合概率论与图论的知识,利用图来表示与模型有关的变量的联合概率分布。
常用的概率图模型包括最大熵马尔可夫模型(Maximum Entropy Markov Model,MEMM)、隐马尔可夫模型(Hidden Markov Model,HMM)以及条件随机场(conditional random field algorithm,CRF)等等。
一个概率图由结点(nodes)(也被称为端点(vertices))和它们之间的链接(links)(也被称为边(edges)或弧(arcs))组成。在概率图模型中,每个结点表示一个或一组随机变量,链接则表示这些变量之间的概率关系。概率图模型主要分为两种,一种是有向图模型(directed graphical model),也就是贝叶斯网络(Bayesian network)。这种图模型的特点是链接是有方向的。另外一种是无向图(undirected graphical models),或者叫马尔科夫随机场(Markov random fields)。这种模型的链接没有方向性质。
在一些实施例中,GPM在获取步骤S103的融合特征后,也可以根据该融合特征确定三维图像中体素对应的体素类型的概率。
朴素贝叶斯(
Bayes,NB)模型是分类问题中的生成模型(generative model),以联合概率P(x,y)=P(x|y)P(y)建模,运用贝叶斯定理求解后验概率P(y|x)。NB假定输入x的特征向量(x(1),x(2),…,x(j),…,x(n))条件独立(conditional independence),即:
P(x|y)P(y)=P(y)∏P(x(j)|y)
HMM是用于对序列数据X做标注Y的生成模型,用马尔可夫链(Markov chain)对联合概率P(X,Y)建模:
P(X,Y)=∏P(yt|yt-1)P(xt|yt)
然后,通过维特比算法(Viterbi)求解P(Y|X)P(Y|X)的最大值。
逻辑回归分析（Logistic Regression，LR）模型是分类问题中的判别模型（discriminative model），直接用LR函数建模条件概率P(y|x)。实际上，逻辑回归函数是归一化指数函数（softmax）的特殊形式，并且LR等价于最大熵模型（Maximum Entropy Model），它可以写成最大熵的形式：
P_w(y|x)=exp(∑_i w_i f_i(x,y))/Z_w(x)
其中，Z_w(x)为归一化因子，w_i为模型预设的参数，f_i(x,y)为特征函数（feature function），可以描述(x,y)的关系。
CRF便是为了解决标注问题的判别模型,对于每个体素i所具有类别标签xi还有对应的观测值yi,将每个体素作为节点,体素与体素间的关系作为边,即构成了一个条件随机场。通过观测变量yi可以推测体素i对应的类别标签xi。
条件随机场符合吉布斯分布,其中y是观测值:
P(Y=y|I)=exp(-E(y|I))/Z(I)
其中,E(y|I)是能量函数:
E(y|I)=∑_i Ψ_u(y_i)+∑_{i<j} Ψ_p(y_i,y_j)
其中的一元势函数∑_i Ψ_u(y_i)即来自于前端三维卷积神经网络的输出。而二元势函数如下：
Ψ_p(y_i,y_j)=μ(y_i,y_j)∑_{m=1}^{M} ω^(m) k^(m)(f_i,f_j)
二元势函数描述体素与体素之间的关系，鼓励相似体素分配相同的标签，而相差较大的体素分配不同标签。
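上述能量函数可以按体素逐项累加来示意。以下为一个简化的草稿实现（一元势直接取网络输出，二元势用基于体素坐标的单个高斯核代替正文中的多核形式，函数名为自拟）：

```python
import numpy as np

def crf_energy(labels, unary, positions, sigma=1.0):
    """条件随机场能量 E(y) = Σ_i Ψ_u(y_i) + Σ_{i<j} Ψ_p(y_i, y_j) 的简化计算。

    labels:    每个体素的类别标签，长度N
    unary:     一元势，形状(N, L)，示意来自前端网络的输出
    positions: 体素坐标，形状(N, 3)，用于构造高斯核k(f_i, f_j)
    """
    n = len(labels)
    energy = sum(unary[i, labels[i]] for i in range(n))   # 一元项
    for i in range(n):
        for j in range(i + 1, n):                         # 二元项：i<j
            if labels[i] != labels[j]:                    # 相容函数μ：标签不同才惩罚
                d2 = np.sum((positions[i] - positions[j]) ** 2)
                energy += np.exp(-d2 / (2 * sigma ** 2))  # 高斯核：近邻不同标签惩罚更大
    return energy
```

可以看到，相同标签的相邻体素不产生二元惩罚，从而"鼓励相似体素分配相同的标签"。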
图1f是NB、HMM、CRF以及LR之间的关系图，如图所示，LR是NB的条件（判别式）形式，线性链CRF是HMM的条件形式；NB和HMM之间的结构不同，故LR和CRF之间的结构也相应不同。
S105、从三维图像中选择体素类型为预设体素类型的目标体素,得到目标体素的位置信息。
在步骤S104后可以确定3D图像中所有体素的类型,从3D图像中选择体素类型为预设体素类型的体素作为目标体素。其中,预设体素类型可以为所感兴趣的体素类型,例如在医学领域的预设病理类型,预设病理类型例如可以为乳腺肿瘤类型。因此,在本实施例中,预设体素类型可以是乳腺肿瘤类型,相应的,所需识别的目标区域可以为乳腺肿瘤区域。
表2是目标体素的位置信息示例,其中,该目标体素的位置信息可以包括所有预设体素类型的目标体素在3D图像中对应的坐标位置、目标体素编号等信息。
表2
目标体素编号 | 0x01 | 0x02 | 0x03 | 0x04 | 0x05 |
坐标位置 | (a,b,c) | (d,e,f) | (g,h,i) | (j,k,l) | (m,n,o) |
其中,目标体素在3D图像中对应的位置信息可以包括在任一或多个3D图像上对应的位置信息,该位置信息可以是相对于世界坐标系的坐标位置、相对于3D图像内部坐标系的坐标位置等等。
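从体素类型图中筛选预设类型的目标体素并记录其坐标位置，可以用如下代码示意（函数名为自拟，坐标采用3D图像内部坐标系）：

```python
import numpy as np

def find_target_voxels(type_volume, preset_type):
    """从三维体素类型图中选出等于预设体素类型的目标体素，返回坐标列表。"""
    coords = np.argwhere(type_volume == preset_type)   # 每行为一个(x, y, z)坐标
    return [tuple(int(v) for v in c) for c in coords]

vol = np.zeros((2, 2, 2), dtype=int)
vol[1, 0, 1] = 3            # 3为示意的预设体素类型编号（如乳腺肿瘤类型）
targets = find_target_voxels(vol, 3)
```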
S106、基于目标体素的位置信息定位出目标区域。
在步骤S105获取目标体素的位置信息后，可以通过该目标体素的位置信息得到目标区域，再从多方位的角度展示目标区域，从而方便用户的实际观察与使用。
比如,在一些实施例中,成像设备可以根据目标体素的位置信息绘制表格,以表格的形式显示目标体素的位置信息。
比如,在一些实施例中,成像设备可以根据目标体素的位置信息中每个目标体素的坐标位置计算病变区域的体积。
本发明实施例提供的图像区域定位方法可以应用在自动识别磁共振三维图像中肿瘤的场景中,比如,通过本发明实施例提供的图像区域定位方法提取出病患身体某部位的多个磁共振三维图像的图像特征,根据这些图像特征计算融合特征,基于融合特征可以对磁共振三维图像里的肿瘤进行多方位的分析,从而实现自动识别磁共振三维图像中的肿瘤。
需要说明的是,S106的一种可能实现方式可以是多个三维图像中选取目标三维图像,然后,基于目标体素的位置信息,在目标三维图像上对目标区域进行定位。
由上可知,本申请实施例可以获取目标部位的多个三维图像,其中,多个三维图像包括多个不同模态的三维图像;提取多个三维图像的图像特征;对多个三维图像的图像特征进行融合处理,得到融合特征;根据融合特征确定三维图像中体素对应的体素类型;从三维图像中选择体素类型为预设体素类型的目标体素,得到目标体素的位置信息;基于目标体素的位置信息定位出目标区域。本方案可以提取不同模态的三维图像中不同的图像特征(如血液特征、水分子特征、脂肪特征等),可以根据这些图像特征得到包含了多模态信息的融合特征,直接根据融合特征定位出目标区域。由此,该方案可以降低误判概率,从而提升图像区域定位的精确度。
根据上述实施例所描述的方法,以下将举例作进一步详细说明。
在本申请实施例中,将以图像区域定位装置具体集成在电子设备进行说明。
需要说明的是,S102中提取多个三维图像的图像特征的方式可以是对多个三维图像进行预处理操作,得到多个预处理后的三维图像;提取多个预处理后的三维图像的图像特征。在这种情况下,图2a是本申请实施例提供的图像区域定位方法的另一流程示意图,如图2a所示,电子设备可以根据磁共振三维图像定位图中的病理组织区域,电子设备进行区域定位的流程如下:
S201、获取生命体目标部位的多个磁共振三维图像。
获取目标部位的多个磁共振三维图像的具体方法请参考S101中的详细步骤,在此不做赘述。
S202、对多个磁共振三维图像进行预处理操作,得到多个预处理后的磁共振三维图像。
由于获取的多个三维图像（例如本实施例的磁共振三维图像）在尺寸、分辨率等规格上不相同，为了进一步提高病理组织识别的精确度，在获取多个三维图像之后、提取多个三维图像的图像特征之前，可以对多个三维图像进行预处理操作。
在一些实施例中,多个三维图像可能并未处于同一坐标系,为了进一步提高目标区域例如病理组织识别的精确度,可以对多个三维图像进行坐标系的配准操作,配准操作的实现方式可以是:
(1)获取参考坐标系以及多个三维图像的原坐标系。
为了说明体素的位置,必须选取其坐标系。在参考坐标系中,为确定空间中体素的位置,按规定方法选取的有次序的一组数据,即坐标。参考坐标系的种类可以是笛卡尔直角坐标系、平面极坐标系、柱面坐标系和球面坐标系等,参考坐标系具体可以是世界坐标系、相机坐标系、图像坐标系、体素坐标系等等。
参考坐标系是指按预设规则规定坐标的方式，即该问题所采用的坐标系。参考坐标系可以预存在本地内存中，也可以由用户输入。
获取三维图像的原坐标系的方式有多种,比如,在一些实施例中,可以由医学图像处理设备在采集三维图像时标注,在另一实施例中可以接收图像采集设备发送的原坐标系,在另一实施例中可以由用户输入。
比如,在一些实施例中,还可以从本地存储,或者外部存储获取原坐标系;例如,可以从本地图像数据库如医学影像数据存储系统中获取三维图像的原坐标系;或者,通过网络与其他存储设备通信,获取三维图像的原坐标系。
(2)将多个三维图像的原坐标系变换为参考坐标系。
将原坐标系中测得到的体素的原坐标进行坐标系的变换,以得到原坐标在参考坐标系上的表示。目前常用的配准算法按照过程可以分为整体配准和局部配准,比如迭代最近点算法(Iterative Closest Point,ICP)等等。
一对体素可以通过两两配准（pair-wise registration）法进行配准。在一些实施例中，通过应用一个预设三维的旋转矩阵R来使得体素的坐标精确地与参考坐标系进行配准：
(x’,y’,z’)=(x,y,z)*R
其中，(x,y,z)为三维图像的原坐标，(x’,y’,z’)为配准后的坐标，R为三阶的旋转矩阵。
在一些实施例中，使用右手螺旋定则时，旋转矩阵R绕x轴、y轴、z轴旋转角度θ的旋转矩阵分别为：
Rx(θ)：
[1, 0, 0]
[0, cosθ, -sinθ]
[0, sinθ, cosθ]
Ry(θ)：
[cosθ, 0, sinθ]
[0, 1, 0]
[-sinθ, 0, cosθ]
Rz(θ)：
[cosθ, -sinθ, 0]
[sinθ, cosθ, 0]
[0, 0, 1]
其中，绕任意轴旋转则可以分解成绕三个坐标轴旋转的叠加，最终得到的旋转矩阵R便是上述三个矩阵的乘积。
其中,预设三维的旋转矩阵R可以由本领域技术人员预先设定并保存在本地内存中。
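绕三个坐标轴的旋转矩阵及其叠加可以用如下代码示意（此处采用列向量左乘的约定，与正文中行向量右乘的写法互为转置；角度为弧度，函数名为自拟）：

```python
import numpy as np

def rot_x(a):
    """绕x轴旋转角度a的三阶旋转矩阵。"""
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(a):
    """绕y轴旋转角度a的三阶旋转矩阵。"""
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def rot_z(a):
    """绕z轴旋转角度a的三阶旋转矩阵。"""
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def register_point(p, R, origin=(0.0, 0.0, 0.0)):
    """将原坐标p经旋转矩阵R与坐标系原点平移，变换到参考坐标系。"""
    return R @ np.asarray(p, dtype=float) + np.asarray(origin, dtype=float)

# 绕任意轴旋转分解为绕三个坐标轴旋转的叠加（乘积）；零角度时为单位矩阵
R = rot_z(0.0) @ rot_y(0.0) @ rot_x(0.0)
```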
在另一些实施例中,多个三维图像的体素间距不同,为了进一步提高病理组织识别的精确度,可以如下对多个三维图像进行配准操作,配准操作的实现方式可以是:
(1)获取参考体素间距以及多个三维图像的原体素间距。
体素间距（Voxel Spacing）可以描述体素之间的密度，也被称为点间距，具体是指从某一体素中心到相邻体素中心的距离。由于体素间距反映了两个体素之间的空间大小，因此较小的体素间距意味着体素之间的空间较小，即更高的体素密度和更高的空间分辨率。
由于在获取的多个三维图像分辨率可能并不一致,通过配准多个三维图像分辨率可以进一步提高病理组织识别的精确度。
获取参考体素间距的方式有多种,比如,在一些实施例中,参考体素间距可以由用户输入设定。在另一实施例中,可以从多个三维图像中选取一个三维图像的原体素间距作为参考体素间距。在另一实施例中,参考体素间距还可以从本地存储,或者外部存储中获取原体素间距。
获取三维图像的原体素间距的方式也有多种,比如,在一些实施例中,可以由图像采集设备在采集三维图像时标注。在另一实施例中可以接收图像采集设备发送的原体素间距。在另一实施例中可以由用户输入。
比如,在一些实施例中,还可以从本地存储,或者外部存储中获取原体素间距;例如,可以从本地图像数据库如医学影像数据存储系统中获取三维图像的原体素间距;或者,通过网络与其他存储设备通信,获取三维图像的原体素间距。
(2)将多个三维图像的原体素间距变换为参考体素间距。
体素间距的配准可以使用常见的图像配准方法，例如插值法、梯度法、优化法、最大互信息法等等。在一些实施例中，可以使用插值法如最邻近插值法、双线性插值法、三线性插值法等等来进行配准。
基于插值的配准方法其原理是：首先通过重采样使两幅图像缩放到相同的分辨率，在重采样时进行插值操作，即引入新的体素数据，从而完成配准。
在一些实施例中,使用最邻近插值法将体素最邻近的体素的灰度值赋给原体素,从而进行插值操作。
在一些实施例中,利用体素的4个最近点的灰度值并通过线性方程计算得到数值,从而进行插值操作。
在一些实施例中,按新采样点到其各个邻点的距离产生相应的权重,新采样点的灰度值由各邻点的灰度值按权重进行插值。
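以最邻近插值法把原体素间距重采样到参考体素间距的过程可以用如下代码示意（朴素实现，仅为原理演示；实际工程中常用SciPy、SimpleITK等库完成）：

```python
import numpy as np

def resample_nearest(volume, src_spacing, dst_spacing):
    """最邻近插值重采样：把原体素间距src_spacing变换为参考体素间距dst_spacing。"""
    volume = np.asarray(volume)
    src_spacing = np.asarray(src_spacing, dtype=float)
    dst_spacing = np.asarray(dst_spacing, dtype=float)
    # 新尺寸 = 原尺寸 × 原间距 ÷ 参考间距（四舍五入到整数）
    new_shape = np.rint(np.array(volume.shape) * src_spacing / dst_spacing).astype(int)
    out = np.empty(new_shape, dtype=volume.dtype)
    for idx in np.ndindex(*new_shape):
        # 新体素中心映射回原图中最邻近的体素：原索引 ≈ 新索引 × 新间距 ÷ 原间距
        src_idx = tuple(min(int(round(i * d / s)), n - 1)
                        for i, s, d, n in zip(idx, src_spacing, dst_spacing, volume.shape))
        out[idx] = volume[src_idx]
    return out
```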
在另一些实施例中,多个三维图像的尺寸不同,为了进一步提高病理组织识别的精确度,还可以如下对多个三维图像进行配准操作,配准操作的实现方式可以是:
获取参考尺寸大小以及多个三维图像的原尺寸大小;
将多个三维图像的原尺寸大小变换为参考尺寸大小。
其中,由于在获取的多个三维图像尺寸大小可能并不一致,通过配准多个三维图像的尺寸大小可以进一步提高病理组织识别的精确度。
获取参考尺寸大小的方式有多种,比如,在一些实施例中,参考尺寸大小可以由用户输入设定。在另一实施例中,可以从多个三维图像中选取一个三维图像的原尺寸大小作为参考尺寸大小。在另一实施例中,参考尺寸大小还可以从本地存储,或者外部存储中获取。获取三维图像的原尺寸大小的方式也有多种,比如,在一些实施例中,可以由图像采集设备在采集三维图像时标注。在另一实施例中可以接收图像采集设备发送的原尺寸大小。在另一实施例中还可以由用户输入。
需要注意的是，上述三个预处理操作，即对3D图像的坐标系、体素间距、尺寸大小的变换操作，彼此独立，可以按任意顺序实施，也可以同时实施。
S203、提取多个预处理后的磁共振三维图像的图像特征。
在本实施例中可以使用预设的语义分割网络对多个三维图像的图像特征进行提取,从而完成图像的语义分割,将输入图像中的每个像素分配一个语义类别,以得到像素化的密集分类。
常用的语义分割网络可以包括PSPNet、RefineNet、DeepLab、U-net等等。一般的语义分割架构可以是编码器-解码器(encoder-decoder)的网络架构。编码器可以是一个训练好的分类网络,例如视觉几何组(Visual Geometry Group,VGG)、残差网络(Residual Neural Network,ResNet)。这些架构之间的不同主要在于解码器网络。解码器的任务是将编码器学习到的可判别特征从语义上映射到像素空间,以获得密集分类。
语义分割不仅需要在像素级有判别能力，还需要有能将编码器在不同阶段学到的可判别特征映射到像素空间的机制，例如使用跳跃连接、金字塔池化等作为解码机制的一部分。
在一些实施例中，S203具体可以包括：
(1)采用预设的三维卷积核,对三维图像进行卷积处理:
将连续的N帧（frame）2D图片堆叠成一个立方体，然后采用3D卷积核对该立方体进行卷积操作。在这个结构中，卷积层中每一个特征图都会与上一层中多个邻近的连续帧相连，因此可以捕捉相邻层之间的深度信息。
具体地，让1×1×1尺寸的3D卷积核以步长为1在5×5×5的3D图像序列上滑动：先在x轴上由低到高完成y轴中一行的滑动，再在y轴上由低到高完成z轴中一张图片的滑动，最后在z轴上由低到高完成整个3D图像序列的滑动，把每个经停的位置都代入语义分割网络，即可得到5×5×5个位置的类别得分。
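上述滑动过程可以用如下朴素实现示意（仅为原理演示，单通道、valid模式、步长为1，省略了网络的类别得分计算）：

```python
import numpy as np

def conv3d(volume, kernel):
    """朴素的3D卷积：卷积核以步长1在x、y、z三个方向依次滑动。"""
    X, Y, Z = volume.shape
    kx, ky, kz = kernel.shape
    out = np.zeros((X - kx + 1, Y - ky + 1, Z - kz + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            for z in range(out.shape[2]):
                # 每个经停位置：核与对应立方体逐元素相乘并求和
                out[x, y, z] = np.sum(volume[x:x+kx, y:y+ky, z:z+kz] * kernel)
    return out

# 1×1×1的卷积核在5×5×5的3D图像上滑动，得到5×5×5个位置的输出
scores = conv3d(np.ones((5, 5, 5)), np.ones((1, 1, 1)))
```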
其中,预设的3D语义分割模型可以从本地存储的模型集,或者外部存储的模型集中获取。例如,可以通过网络与其他存储设备通信,获取预设的3D语义分割模型。
需要注意的是,一种3D卷积核由于只提供一种权值,故只能从立方体中提取一种类型的特征。一个卷积核得到的特征提取是不充分的,通过添加多个卷积核,可以识别多种特征。通过采用多种卷积核,每个通道对应一个卷积核,3D语义分割模型可以提取3D图像的多种特征。
(2)采用预设的三维反卷积核,对卷积处理后的三维图像进行上采样,得到三维图像的图像特征:
当三维图像输入预设的3D语义分割模型中时，可以在上一步骤将输入的3D图像经过多个卷积层和下采样层后得到热图（heat map），即高维特征图；然后，高维特征图在当前步骤经过多个上采样层，得到图像特征。
其中,采样方式有很多种,如最近邻插值,双线性插值,均值插值,中值插值等方法。在上述上采样和下采样的步骤中均能使用。
S204、对多个磁共振三维图像的图像特征进行融合处理,得到融合特征。
具体实施方式请参考S103中的详细描述,在此不做赘述。
在一些实施例中,S103具体可以包括:
(1)获取图像特征对应的预设特征权重;
该预设特征权重可以保存在预设3D语义分割模型中,预设的3D语义分割模型可以从本地存储的模型集,或者外部存储的模型集中获取。例如,可以通过网络与其他存储设备通信,获取预设的3D语义分割模型。
(2)基于预设特征权重,对多个三维图像的图像特征进行累加和/或累乘处理,得到融合特征。
获取多个三维图像的图像特征后,可以采用预设特征权重对这些从多个角度反映了目标部位中目标区域信息的图像特征进行融合处理,获得的融合特征具有准确性和多样性。
在一些实施例中,图2b是基于预设权重的特征融合示意图,如图所示,分别对3D图像A和3D图像B提取特征后,将3D图像A和3D图像B的同一层、同一位置的特征点依照预设的权重w1、w2进行串行融合,得到新的特征点。重复多次后,即可得到3D图像A和3D图像B的融合特征。
在另一些实施例中,图2c是另一特征融合示意图,如图所示,在步骤S103中对多个三维图像进行图像特征的提取时,对3D图像A和3D图像B的同一层、同一位置的体素依照预设的权重W1、W2进行串行融合,根据融合结果得到该体素的融合特征。
其中,预设特征权重可以由技术人员预先设定并保存在本地内存中。
基于所述预设特征权重,对所述多个三维图像的图像特征进行加权处理的一种可能的实现方式可以是确定多个三维图像中相同位置的体素对应的多个图像特征;对所述多个图像特征进行预设特征权重的加权处理,得到加权处理后的多个图像特征;对所述加权处理后的多个图像特征进行累加操作,得到所述体素的融合特征。
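基于预设特征权重，对相同位置体素的多个图像特征进行加权与累加的融合过程可以用如下代码示意（函数名与权重取值均为示意）：

```python
import numpy as np

def fuse_features(feature_maps, weights):
    """对多个三维图像的图像特征按预设特征权重加权后累加，得到逐体素的融合特征。

    feature_maps: 各模态的特征图列表，形状均相同
    weights:      与特征图一一对应的预设特征权重，如正文中的w1、w2
    """
    assert len(feature_maps) == len(weights)
    fused = np.zeros_like(np.asarray(feature_maps[0], dtype=float))
    for fmap, w in zip(feature_maps, weights):
        fused += w * np.asarray(fmap, dtype=float)   # 相同位置的特征按权重累加
    return fused

# 示意：两个模态的特征图按权重0.5、0.25融合
a = np.full((2, 2, 2), 2.0)
b = np.full((2, 2, 2), 4.0)
fused = fuse_features([a, b], [0.5, 0.25])
```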
S205、根据融合特征确定磁共振三维图像中体素对应的病理类型。
可以理解的是,本实施例对磁共振三维图像进行处理,以定位图中的病理组织区域,故,本实施例的体素类型可以为病理类型,该病理类型可以是指病症在病理角度下的分类,侧重于描述当前症状。比如,肺癌的病理类型可以包括小细胞癌、肺泡癌、支气管腺瘤等等;再比如,乳腺肿瘤的病理类型可以包括乳腺良性肿瘤和乳腺恶性肿瘤;进一步的,乳腺良性肿瘤的病理类型可以包括纤维腺瘤,乳管内乳头状瘤,错构瘤,囊肿等;乳腺恶性肿瘤的病理类型可以包括小叶原位癌、导管内癌、粘液腺癌等等。
通过融合特征确定三维图像中体素对应的病理类型的概率,可以采用字典,查询该概率对应的病理类型,从而确定三维图像中体素对应的病理类型。
S206、从磁共振三维图像中选择病理类型为预设病理类型的目标体素,得到目标体素的位置信息。
具体实施方式请参见S105,在此不做赘述。
S207、从多个磁共振三维图像中选取目标磁共振三维图像:
为了更好地展示S206中获得的目标体素，可以将目标体素所代表的病理组织在三维图像上显示，故可以从多个三维图像中选取目标三维图像，以多角度展示病理组织。
由于多个三维图像可以包括MRA、DWI、ADC、FS以及DCE等不同模态的图像序列,这些图像序列能够产生各具特点的MRI三维图像,不仅能够在三维空间上反映人体解剖形态,而且能够反映人体血流和细胞代谢等生理功能的信息。
从这些不同模态的图像序列中选取一个或多个图像序列作为目标三维图像。
其中,选取的方式有多种。比如,在一些实施例中,根据预设规则选取目标三维图像。比如,在另一些实施例中,由用户从众多不同模态的图像序列中指定一个或多个图像序列作为目标三维图像。比如,在另一些实施例中,通过网络与网络服务器通信,获取网络服务器发送的选择指令,根据该选择指令选取一个或多个图像序列作为目标三维图像。
在一些实施例中，为了更加自由、灵活地显示目标体素所代表的病理组织，还可以将预设3D空图像设为目标三维图像，该预设3D空图像的尺寸、分辨率以及每个像素数值可以由技术人员预先设定，也可以由用户即时设定。其中，预设3D空图像可以保存在本地内存中，也可以通过网络从外部内存中获得，还可以由用户设定后在本地生成。
S208、基于目标体素的位置信息,在目标磁共振三维图像上对目标病理区域进行定位。
需要说明的是,基于目标体素的位置信息,在目标三维图像上对目标区域进行定位的方式可以包括多种,第一种方式可以是:
(1)基于目标体素的位置信息,在目标三维图像上确定对应的体素,得到对应体素。
图2d是在目标MRI 3D图像上确定对应的体素的示意图，如图所示，在目标MRI 3D图像上确定位置信息A所对应的位置，该位置的体素记为体素a；同样的，确定位置信息B所对应的位置，该位置的体素记为体素b。
体素a与体素b即为目标三维图像上的对应体素。
(2)将对应体素的体素值设为预设数值,以标识目标区域:
其中,在S208中需要定位的是目标病理区域,即目标区域为目标病理区域,则利用预设数值标识目标病理区域。该预设数值的类型可以是灰度值,也可以是RGB(Red Green Blue)值、RGBW(Red Green Blue White)等等,可以由本领域的技术人员预先设定,也可以由用户即时设定。
其中，该预设数值的大小可以保存在本地内存中，也可以通过网络从外部内存中获得，还可以由用户设定。
在一些实施例中,为了在目标三维图像上将病理组织的轮廓进行高亮显示,以提高显示图像的可读性,预设数值的类型可以设定为灰度值类型,其数值大小可以为1。
在另一些实施例中,为了在目标三维图像上将病理组织的轮廓进行亮红色显示,从而提升显示图像的可读性,比如,预设数值的类型可以设定为RGB类型,其数值大小可以为#EE0000。
第二种方式可以是:
(1)对所有目标体素的位置信息进行均值计算,得到目标区域的中心点位置信息;
其中,在S208中需要定位的是目标病理区域,即目标区域为目标病理区域,则得到的是目标病理区域的中心点位置信息。对目标体素的位置信息如坐标、相对位置坐标等等进行数值上的均值计算,即可得到所有目标体素的中心位置信息,即病理组织的中心点位置信息。
在一些实施例中，为了使得中心点落于三维图像的体素上，对目标体素的位置信息进行数值上的均值计算后，可以对计算结果进行四舍五入，得到病理组织的中心点位置信息。
(2)基于中心点位置信息,在目标三维图像上定位出目标区域的中心点。
其中,目标区域可以为目标病理区域。在目标三维图像上定位出目标病理区域的中心点的方式有多种,在一些实施例中,可以在显示界面直接显示中心点坐标。在另一实施例中,可以根据中心点位置信息,在3D图像中圈出中心点。
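对所有目标体素的位置信息求均值并四舍五入得到中心点的计算可以用如下代码示意（函数名为自拟）：

```python
import numpy as np

def region_center(coords):
    """对所有目标体素的位置信息做均值计算并四舍五入，得到目标区域中心点。"""
    coords = np.asarray(coords, dtype=float)
    center = coords.mean(axis=0)                   # 各坐标分量的均值
    return tuple(int(round(v)) for v in center)    # 四舍五入，使中心点落在体素上
```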
由上可知，本申请实施例可以获取生命体目标部位的多个磁共振三维图像；对多个磁共振三维图像进行预处理操作，得到多个预处理后的磁共振三维图像；提取多个预处理后的磁共振三维图像的图像特征；对多个磁共振三维图像的图像特征进行融合处理，得到融合特征；根据融合特征确定磁共振三维图像中体素对应的病理类型；从磁共振三维图像中选择病理类型为预设病理类型的目标体素，得到目标体素的位置信息；从多个磁共振三维图像中选取目标磁共振三维图像；基于目标体素的位置信息，在目标磁共振三维图像上对目标病理区域进行定位。本方案可以对不同模态的三维图像进行预处理，并提取这些预处理后的图像所提供的不同的图像特征（如血液特征、水分子特征、脂肪特征等），然后可以根据这些图像特征得到包含了多模态信息的融合特征，直接根据融合特征进行定位，并在目标三维图像上对病理组织的定位区域进行展示。由此，该方案可以提升识别结果的可读性，降低病理组织的误判率，从而提升病理区域识别的精确度。
根据前面实施例所描述的方法,本实施例将提供一种具体的区域定位的应用场景,本实施例以该区域定位的装置具体集成在医疗影像储传系统(Picture Archiving and Communication Systems,PACS)中,对乳腺病理组织的预测进行说明。
在本实施例中，PACS将自动获取病人胸腔部位的多个磁共振三维图像，并提取这些磁共振三维图像的图像特征；然后对这些磁共振三维图像的图像特征进行融合处理，得到融合特征，并根据融合特征确定磁共振三维图像中体素对应的病理类型；再从磁共振三维图像中选择病理类型为乳腺肿瘤类型的目标体素，得到目标体素的位置信息；基于目标体素的位置信息对病人胸腔部位进行病理分析，得到乳腺肿瘤的病理信息。
(一)获取目标部位的多个磁共振三维图像。
PACS是应用在医院影像科室的系统,主要的任务就是把日常产生的各种医学影像,例如磁共振、CT、超声、红外仪、显微仪等设备产生的图像,通过例如模拟、DICOM、网络等各种接口,以数字化的方式海量保存起来,当需要的时候在一定的授权下能够很快的调回使用,同时增加一些辅助诊断管理功能。
在本实施例中,PACS从本地内存中调用目标病人的DCE三维序列以及DWI三维序列。
图3a是多模态MRI 3D图像中DCE以及DWI的某层图像。如图所示，DCE图像可以比较清晰地看出乳房及其内部的乳腺病理组织，以及胸腔及其内部的心脏等内脏；DWI序列仅能较清晰地看出乳房及其内部的乳腺病理组织，由于病理组织与心脏的组成不同，图中胸腔及其内部的心脏为低信号，即黑色。故在本实施例中采用DCE三维序列以及DWI三维序列作为多模态磁共振三维图像进行病理组织预测，可以直接得到乳房及其内部的乳腺病理组织的图像，而胸腔及其内部的心脏将会显示为黑色。因此，胸腔及其内部的心脏不会影响本方案对乳腺病理组织的判断，进而不需要在获取多模态磁共振三维图像后对这些图像进行局部部位的剪裁以减少心脏的影响。
在本实施例中,PACS还将对获取的DCE以及DWI图像进行配准操作,以提升病理组织识别的精确度。
PACS将目标病人注射造影剂之前的DCE三维序列记为DCE_T0,将注射造影剂之后的DCE-MRI三维序列记为DCE_Ti,将DWI三维序列记为DWI_bi。
其中,DCE_Ti中的i表示病人注射造影剂之后第i个时间点的DCE序列。
其中,DWI_bi中的b表示弥散敏感因子,i表示第i个b值的DWI序列,弥散敏感因子越大,病理组织和正常组织之间的对比度越大。
图3b是本实施例提供的具体实施例流程示意图，包括肿瘤预测部分与模型训练部分。如图3b所示，在模型训练部分中，PACS将获取的DWI_bi与DCE_Ti根据获取的DCE_T0进行配准，得到配准好的训练图像，利用这些图像对3D Unet模型进行训练，得到训练好的3D Unet模型。在肿瘤预测部分中，PACS将获取的DWI_bi与DCE_Ti根据获取的DCE_T0进行配准，使得DWI_bi、DCE_Ti以及DCE_T0的分辨率、图像尺寸大小以及坐标系相同，然后将配准的数据输入训练好的3D Unet模型，进行病理区域的定位。以下为详细步骤：
在本实施例中，DCE_Ti中的i取值为3（即DCE_T3）；DWI序列选用b值为800的序列，记为DWI_b800。
PACS以当前世界坐标系作为参考坐标系,将DCE_T0、DCE_T3以及DWI_b800的坐标系配准到当前世界坐标系下。
然后,PACS将目标病人注射造影剂之前的DCE三维序列的体素间距作为参考体素间距,对其他DCE三维序列以及DWI三维序列进行配准。即,使用DCE_T0的体素间距对DCE_T3以及DWI_b800序列的体素间距进行配准。
以下是具体步骤:
(1)DCE_T0配准:
PACS获取的DCE_T0坐标系原点为(x0,y0,z0)=(-182.3,233.1,-35.28)。利用旋转矩阵和坐标系原点，把DCE_T0的坐标转换到世界坐标系(x’,y’,z’)：
(x’,y’,z’)=DCE_T0*R+(x0,y0,z0)
其中，坐标系原点(x0,y0,z0)保存在PACS内存中，当PACS获取DCE_T0图像时，可以从图像中读出。以空间直角坐标系，即右手坐标系，作为当前世界坐标系，故取旋转矩阵R为单位矩阵：
[1,0,0]
[0,1,0]
[0,0,1]
PACS获取的DCE_T0尺寸为(x,y,z)=(448,448,72)体素。其中,DCE_T0在x,y,z方向上的体素间距为(0.84,0.84,1.6),记为参考体素间距。
(2)DCE_T3配准:
由于在实际图像获取过程中,DCE_Ti图像的坐标系参数、尺寸参数以及体素间距一致,即DCE_T0和DCE_T3的参数一致,因此可以认为DCE_T3已经和DCE_T0数据实现了配准,因此,(x’,y’,z’)也是DCE_T3在世界坐标系的下的坐标。
(3)DWI_b800配准:
DWI_b800的坐标系原点(x0,y0,z0)=(-176.1,69.86,-49.61)。DWI_b800的坐标转换到世界坐标系(x’,y’,z’)：
(x’,y’,z’)=DWI_b800*R+(x0,y0,z0)
其中，R为旋转矩阵，坐标系原点(x0,y0,z0)保存在PACS内存中，当PACS获取DWI_b800图像时，可以从图像中读出。
DWI_b800的尺寸是(x,y,z)=(192,96,32)体素，其中x,y,z方向上的体素间距分别是(1.875,1.875,4.8)。参考体素间距为(0.84,0.84,1.6)，由于体素间距配准后的尺寸需要四舍五入到整数，故DWI_b800的x方向的体素长度是192×1.875÷0.84=429。
类似的,DWI_b800的y方向的体素长度是96×1.875÷0.84=214,z方向的体素长度是32×4.8÷1.6=96。
因此,DWI_b800通过3D数据线性插补法变换到(429,214,96)后,需要对尺寸进一步剪裁,即把尺寸(429,214,96)的DWI_b800,填补到(448,448,72)的3D空矩阵中,缺失的地方使用0填补,多出的地方则删除。
以上的DCE_T0、DCE_T3以及DWI_b800均为三维图像,经过以上三步后即得到了配准后相同尺寸大小、相同坐标系的三组磁共振三维图像。
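上述DWI_b800的尺寸换算与零填充/剪裁步骤可以用如下代码示意（体素间距与尺寸均取正文中的数值，函数名为自拟）：

```python
import numpy as np

def registered_shape(shape, src_spacing, ref_spacing):
    """尺寸换算：新长度 = 原长度 × 原间距 ÷ 参考间距，四舍五入到整数。"""
    return tuple(int(round(n * s / r)) for n, s, r in zip(shape, src_spacing, ref_spacing))

def pad_or_crop(volume, target_shape):
    """把配准后的图像填补到目标3D空矩阵：缺失的地方补0，多出的地方删除。"""
    out = np.zeros(target_shape, dtype=volume.dtype)
    slices = tuple(slice(0, min(a, b)) for a, b in zip(volume.shape, target_shape))
    out[slices] = volume[slices]
    return out

# 正文数值：DWI_b800由(192, 96, 32)换算到(429, 214, 96)，再填补/剪裁到(448, 448, 72)
new_shape = registered_shape((192, 96, 32), (1.875, 1.875, 4.8), (0.84, 0.84, 1.6))
```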
(二)提取多个磁共振三维图像的图像特征。
使用2D U-Net模型处理三维图像时，由于其在层间连贯性上表现欠佳，无法很好地利用3D信息，相较于3D U-Net（Three Dimensions U-Net）模型存在预处理繁琐、效率低下以及输出结果不够准确等问题，故在本实施例中使用3D U-Net模型处理三维图像。
图3c为3D U-Net模型的结构示意图,如图所示,3D U-Net模型基于U-Net模型,在该网络模型中所有的二维操作都会被替换为三维操作,例如三维卷积,三维池化,三维上采样等等。与U-Net模型类似,3D U-Net模型也是编码端-解码端的结构,编码端用于分析三维图像的全局信息并且对其进行特征提取与分析,具体地,在本实施例中该3D U-Net模型已事先训练好,其包含及使用如下卷积操作:
a.每一层神经网络都包含了两个三维大小是3*3*3的卷积。
b.批标准化(Batch Normalization,BN)使得网络更好的收敛。
c.线性整流函数(Rectified Linear Unit,ReLU)跟从在每一个卷积后。
d.使用2*2*2三维大小的最大池化（max pooling）进行下采样，其步长（stride）为2。
而与之相对应的,解码端用于修复目标细节,解码端最后一层之前则包含及执行下面的操作:
a.每一层神经网络都包含了一个2*2*2三维大小的反卷积层进行上采样,其步长为2。
b.两个3*3*3的卷积层跟从在每一个反卷积层后。
c.线性整流函数跟从在每一个卷积后。
d.与此同时，需要把编码端相对应的网络层的结果作为解码端的部分输入，从而采集高分辨率特征信息，以便图像可以更好地合成。
对于三维图像,不需要单独输入每个二维切片,而是可以输入整个三维图像到3D U-Net模型中。故将上一步获取的DCE_T0、DCE_T3以及DWI_b800三维图像作为输入数据的3个通道,输入训练好的3D U-net模型,即可提取其相应图像特征。
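编码端2*2*2最大池化与解码端2*2*2反卷积上采样对特征图空间尺寸的影响可以用如下代码粗略示意（仅做尺寸推算，不含实际网络权重；假设卷积使用same填充、尺寸不变，深度取3层为示意值）：

```python
def unet3d_shapes(input_shape, depth=3):
    """推算3D U-Net各层特征图的空间尺寸：
    编码端每层经2*2*2最大池化（步长2）尺寸减半，
    解码端每层经2*2*2反卷积（步长2）尺寸加倍。"""
    shapes = [tuple(input_shape)]
    s = list(input_shape)
    for _ in range(depth):              # 编码端：逐层下采样
        s = [d // 2 for d in s]
        shapes.append(tuple(s))
    for _ in range(depth):              # 解码端：逐层上采样
        s = [d * 2 for d in s]
        shapes.append(tuple(s))
    return shapes

# 以正文中的(448, 448, 72)为输入：最深层为(56, 56, 9)，解码端恢复到(448, 448, 72)
shapes = unet3d_shapes((448, 448, 72), depth=3)
```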
(三)对多个磁共振三维图像的图像特征进行融合处理,得到融合特征。
在3D U-Net模型解码端的最后一层，使用1*1*1的卷积核进行卷积运算，将每个64维的特征向量映射到网络的输出层。在上一步中，提取获得了DCE_T0三维图像的图像特征、DCE_T3三维图像的图像特征以及DWI_b800三维图像的图像特征，这些图像特征在3D U-Net模型解码端的最后一层被映射到网络的输出层，即得到融合特征。
(四)根据融合特征确定磁共振三维图像中体素对应的病理类型。
获取融合特征后,在3D U-Net模型的输出层可以根据该融合特征确定磁共振三维图像中体素对应的病理类型的概率。
在本实施例中，为了降低融合特征的取值范围对最终结果的影响，平衡特征的取值范围，提高识别病理组织的精确度，需要使用最大最小值归一化法对特征的范围事先进行归一化操作，将特征取值归一化到[0,1]区间：
x’=(x-xmin)/(xmax-xmin)
其中，xmax为样本数据的最大值，xmin为样本数据的最小值，x’为归一化后的结果。
根据上述步骤通过融合特征确定磁共振三维图像中体素对应的病理类型的概率后,采用预设的病理字典,查询该概率对应的病理类型,从而确定磁共振三维图像中体素对应的病理类型。
其中,预设的病理字典保存在PACS的本地内存中,通过调用该预设的病理字典,可以确定磁共振三维图像中体素对应的病理类型。
表3是预设的病理字典的格式示例，如表所示，求得的概率取值落在0、(0,x]、(x,y)、[y,1)以及1这五个区间。其中，概率为0对应的病理类型为A，概率大于0且小于等于x对应的病理类型为B，概率大于x且小于y对应的病理类型为C，概率大于等于y且小于1对应的病理类型为D，概率为1对应的病理类型为E。
表3
概率 | 0 | (0,x] | (x,y) | [y,1) | 1 |
病理类型 | 内脏 | 非内脏非肿瘤 | 乳腺良性肿瘤 | 乳腺恶性肿瘤 | E |
(五)从磁共振三维图像中选择病理类型为预设病理类型的目标体素,得到目标体素的位置信息。
在步骤(四)后可以确定磁共振3D图像中所有体素的病理类型,从磁共振3D图像中选择病理类型为乳腺恶性肿瘤类型的体素作为目标体素,表4是目标体素的位置信息示例,其中,该目标体素的位置信息可以包括所有乳腺恶性肿瘤类型的目标体素在3D图像中的坐标值。
表4
目标体素编号 | 0x01 | 0x02 | 0x03 | ... | 0x0N |
坐标值 | (a,b,c) | (d,e,f) | (g,h,i) | ... | (m,n,o) |
(六)基于目标体素的位置信息定位出目标病理区域。
其中,若病理类型为乳腺恶性肿瘤类型,则目标病理区域为乳腺恶性肿瘤区域;若病理类型为乳腺肿瘤类型,则目标病理区域为乳腺肿瘤区域,等等。
在本实施例中,从多个磁共振三维图像中选取DCE_T0图像以及预设3D空图像作为目标磁共振三维图像。
该预设3D空图像的尺寸与分辨率与DCE_T0图像的尺寸与分辨率相同,每个像素的类型均为灰度值、数值大小均为0。其中,该预设3D空图像保存在PACS本地内存中。
根据步骤(五)中获得的目标体素的位置信息,将所有的目标体素的坐标值应用于DCE_T0图像,在DCE_T0图像上得到相应的体素集合,将这些体素的数值设为类型是灰度值、数值是1,以在DCE_T0图像上高亮显示病理组织的轮廓,以提高显示图像的可读性。
类似的,将所有的目标体素的坐标值应用于预设3D空图像,在预设3D空图像上得到相应的体素集合,将这些体素的数值设为类型是灰度值、数值是1,以在预设3D空图像上高亮显示病理组织的轮廓,以提高显示图像的可读性。
然后，将所有目标体素的坐标值进行均值计算，并四舍五入得到病理组织的中心点坐标(X,Y)。根据该中心点坐标，将横坐标为X的一列体素与纵坐标为Y的一行体素（即坐标形如(X,N)与(N,Y)的体素，N为任意值）的数值设为0.8，以在DCE_T0图像以及预设3D空图像上以“十”字形式显示中心点的定位，并在预设3D空图像上显示中心点的数值。
图3d是本实施例提供的输出结果,如图所示,包括了在DCE_T0图像上显示的病理组织轮廓的三视图,以及在预设3D空图像上的病理组织轮廓侧视图,所有病理组织轮廓均为高亮显示,并在所有图像上用“十”字显示出中心点,且在预设3D空图像上标示出中心点的数值。
由上可知，PACS从本地内存中获取病人胸腔部位的多个磁共振三维图像，并提取多个磁共振三维图像的图像特征；然后对多个磁共振三维图像的图像特征进行融合处理，得到融合特征，根据融合特征确定磁共振三维图像中体素对应的病理类型；再从磁共振三维图像中选择病理类型为乳腺肿瘤类型的目标体素，得到目标体素的位置信息，基于目标体素的位置信息定位出乳腺肿瘤区域。在本申请实施例中，由于不同病理组织的生理特征（如含水量、脂肪比例、含血量等）不同，PACS可以提取不同模态的磁共振三维图像所提供的不同的图像特征（如血液特征、水分子特征、脂肪特征等），根据这些图像特征PACS得到包含了多模态信息的融合特征，直接根据融合特征定位出乳腺肿瘤区域。由此，该方案可以降低将其它组织及器官误判为病理组织的概率，从而提升病理区域定位的精确度。
为了更好地实施以上方法,本申请实施例还提供一种图像区域定位装置,该图像区域定位装置具体可以集成在电子设备中,该电子设备可以包括磁共振图像采集设备、磁共振成像设备、医学影像数据处理设备以及医学影像数据存储设备等等。
例如,如图4所示,该图像区域定位装置可以包括获取模块401、提取模块402、融合模块403、分类模块404、筛选模块405以及定位模块406,如下:
获取模块401,获取目标部位的多个三维图像,其中,多个三维图像包括多个不同模态的三维图像;
提取模块402,用于提取多个三维图像的图像特征;
融合模块403,用于对多个三维图像的图像特征进行融合处理,得到融合特征;
分类模块404,用于根据融合特征确定三维图像中体素对应的体素类型;
筛选模块405,用于从三维图像中选择体素类型为预设体素类型的目标体素,得到目标体素的位置信息;
定位模块406,用于基于目标体素的位置信息定位出目标区域。
(1)在一些实施例中,提取模块402可以包括预处理模块和提取子模块,如下:
预处理模块,用于对多个三维图像进行预处理操作,得到多个预处理后的三维图像;
提取子模块,用于提取多个预处理后的三维图像的图像特征。
在一些实施例中,提取子模块,可以具体用于:
获取参考坐标系以及多个三维图像的原坐标系;
将多个三维图像的原坐标系变换为参考坐标系。
在另一些实施例中,提取子模块,可以具体用于:
获取参考体素间距以及多个三维图像的原体素间距;
将多个三维图像的原体素间距变换为参考体素间距。
在另一些实施例中,提取子模块,还可以具体用于:
获取参考尺寸大小以及多个三维图像的原尺寸大小;
将多个三维图像的原尺寸大小变换为参考尺寸大小。
(2)在一些实施例中,融合模块403可以包括权重获取模块和加权模块,如下:
权重获取模块,用于获取图像特征对应的预设特征权重;
加权模块,用于基于预设特征权重,对多个三维图像的图像特征进行加权处理。
在一些实施例中,加权模块可以具体用于:
确定多个三维图像中相同位置的体素对应的多个图像特征;
对多个图像特征进行预设特征权重的加权处理,得到加权处理后的多个图像特征;
对加权处理后的多个图像特征进行累加操作，得到体素的融合特征。
(3)在一些实施例中,分类模块404可以包括,如下:
确定模块,用于确定三维图像中体素对应的融合特征;
概率模块,用于计算体素对应的融合特征属于各个体素类型的概率,得到体素属于各个体素类型的概率;
分类子模块,用于根据各个体素类型的概率,确定体素对应的体素类型。
(4)在一些实施例中,定位模块406可以包括图像选取模块和定位子模块,如下:
图像选取模块,用于从多个三维图像中选取目标三维图像;
定位子模块,用于基于目标体素的位置信息,在目标三维图像上对目标区域进行定位。
在一些实施例中,定位子模块,可以具体用于:
基于目标体素的位置信息,在目标三维图像上确定对应的体素,得到对应体素;
将对应体素的体素值设为预设数值,以标识目标区域。
在另一些实施例中,定位子模块,可以具体用于:
对所有目标体素的位置信息进行均值计算,得到目标区域的中心点位置信息;
基于中心点位置信息,在目标三维图像上定位出目标区域的中心点。
具体实施时,以上各个模块可以作为独立的实体来实现,也可以进行任意组合,作为同一或若干个实体来实现,以上各个模块的具体实施可参见前面的方法实施例,在此不再赘述。
由上可知,本实施例的图像区域定位装置由获取模块401获取目标部位的多个三维图像;提取模块402提取多个三维图像的图像特征;融合模块403对多个三维图像的图像特征进行融合处理,得到融合特征;分类模块404根据融合特征确定三维图像中体素对应的体素类型;筛选模块405从三维图像中选择体素类型为预设体素类型的目标体素,得到目标体素的位置信息;定位模块406基于目标体素的位置信息定位出目标区域。本方案可以提取不同模态的三维图像所提供的不同的图像特征,可以根据这些图像特征得到包含了多模态信息的融合特征,直接根据融合特征定位区域。由此,该方案可以降低误判率,从而提升区域定位的精确度。
此外,本发明实施例还提供一种医学图像处理设备,包括图像采集单元、处理器和存储器,存储器存储有多条指令。该医学图像处理设备可以具有图像采集、分析图像、定位病灶等一体化的功能。
该医学图像采集单元可以用于采集生命体目标部位的多个三维图像;
该存储器可以用于存储图像数据以及多条指令;
该处理器可以用于读取存储器存储的多条指令,来执行以下步骤:
其中,处理器可以从存储器中加载指令,用于获取目标部位的多个三维图像,其中,多个三维图像包括多个不同模态的三维图像;
提取多个三维图像的图像特征;
对多个三维图像的图像特征进行融合处理,得到融合特征;
根据融合特征确定三维图像中体素对应的体素类型;
从三维图像中选择体素类型为预设体素类型的目标体素,得到目标体素的位置信息;
基于目标体素的位置信息定位出目标区域。
在一些实施例中,当执行步骤根据融合特征确定三维图像中体素对应的体素类型时,处理器具体执行确定三维图像中体素对应的融合特征;计算体素对应的融合特征属于各个体素类型的概率,得到体素属于各个体素类型的概率;根据体素属于各个体素类型的概率,确定体素对应的体素类型;
在一些实施例中,当执行步骤对多个三维图像的图像特征进行融合处理时,处理器具体执行获取图像特征对应的预设特征权重;基于预设特征权重,对多个三维图像的图像特征进行加权处理;
在一些实施例中,当执行步骤基于预设特征权重,对多个三维图像的图像特征进行加权处理时,处理器具体执行确定多个三维图像中相同位置的体素对应的多个图像特征;对多个图像特征进行预设特征权重的加权处理,得到加权处理后的多个图像特征;对加权处理后的多个图像特征进行累加操作,得到体素的融合特征;
在一些实施例中,当执行步骤基于目标体素的位置信息定位出目标区域时,处理器具体执行从多个三维图像中选取目标三维图像;基于目标体素的位置信息,在目标三维图像上对目标区域进行定位。
在一些实施例中,当执行步骤基于目标体素的位置信息,在目标三维图像上对目标区域进行定位时,处理器具体执行基于目标体素的位置信息,在目标三维图像上确定对应的体素,得到对应体素;将对应体素的体素值设为预设数值,以标识目标区域。
在一些实施例中,当执行步骤基于目标体素的位置信息,在目标三维图像上对目标区域进行定位时,处理器具体执行对所有目标体素的位置信息进行均值计算,得到目标区域的中心点位置信息;基于中心点位置信息,在目标三维图像上定位出目标区域的中心点。
在一些实施例中,当执行步骤提取多个三维图像的图像特征时,处理器具体执行对多个三维图像进行预处理操作,得到多个预处理后的三维图像;提取多个预处理后的三维图像的图像特征。
在一些实施例中,当执行步骤对多个三维图像进行预处理操作时,处理器具体执行获取参考坐标系以及多个三维图像的原坐标系;将多个三维图像的原坐标系变换为参考坐标系。
需要说明的是,上述实施例提供的图像区域定位装置,还可以用于定位乳腺肿瘤区域,此时,获取模块401,获取目标部位的多个三维图像,其中,所述多个三维图像包括多个不同模态的三维图像;
提取模块402,用于提取所述多个三维图像的图像特征;
融合模块403,对所述多个三维图像的图像特征进行融合处理,得到融合特征;
分类模块404,根据所述融合特征确定所述三维图像中体素对应的体素类型;
筛选模块405,从所述三维图像中选择所述体素类型为乳腺肿瘤类型的目标体素,得到所述目标体素的位置信息;
定位模块406,基于所述目标体素的位置信息定位出乳腺肿瘤区域。
相应的,上述医学图像处理设备的处理器用于读取存储器存储的多条指令,来执行以下步骤:
获取所述目标部位的多个三维图像,其中,所述多个三维图像包括多个不同模态的三维图像;
提取所述多个三维图像的图像特征;对所述多个三维图像的图像特征进行融合处理,得到融合特征;
根据所述融合特征确定所述三维图像中体素对应的体素类型;从所述三维图像中选择所述体素类型为乳腺肿瘤类型的目标体素,得到所述目标体素的位置信息;
基于所述目标体素的位置信息定位出乳腺肿瘤区域。
如图5a所示,其示出了本发明实施例所涉及的医学图像处理设备的内部结构示意图,具体来讲:
该医学图像处理设备可以包括一个或者一个以上处理核心的处理器501、一个或一个以上计算机可读存储介质的存储器502、电源503、输入单元504以及图像采集单元505等部件。本领域技术人员可以理解,图5a中示出的医学图像处理设备结构并不构成对医学图像处理设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。其中:
处理器501是该医学图像处理设备的控制中心,利用各种接口和线路连接整个医学图像处理设备的各个部分,通过运行或执行存储在存储器502内的软件程序和/或模块,以及调用存储在存储器502内的数据,执行医学图像处理设备的各种功能和处理数据,从而对医学图像处理设备进行整体监控。在一些实施例中,处理器501可包括一个或多个处理核心;优选的,处理器501可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器501中。
存储器502可用于存储软件程序以及模块,处理器501通过运行存储在存储器502的软件程序以及模块,从而执行各种功能应用以及数据处理。存储器502可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据医学图像处理设备的使用所创建的数据等。此外,存储器502可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地,存储器502还可以包括存储器控制器,以提供处理器501对存储器502的访问。
医学图像处理设备还包括给各个部件供电的电源503,优选的,电源503可以通过电源管理系统与处理器501逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。电源503还可以包括一个或一个以上的直流或交流电源、再充电系统、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。
该医学图像处理设备还可包括输入单元504,该输入单元504可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。
图像采集单元505包括磁体、梯度子单元和射频子单元等。它们的主要技术性能参数是磁感应强度、磁场均匀度、磁场稳定性、边缘场的空间范围、梯度场的磁感应强度和线性度、射频线圈的灵敏度等,负责磁共振信号的产生、探测与编码,即磁共振三维图像的采集。
图像采集单元505可以在静磁场上叠加一个梯度磁场，并且可以任意改变这个梯度磁场的梯度方向，从而成功进行薄层选择激发和共振频率空间编码。图5b是图像采集单元505的原理示意图，如图所示，图像采集单元505可以包括主磁体、射频子单元、梯度子单元等物理部件。
其中,主磁体用以产生场强,即主磁场。其类型可分为永磁、常导和超导等等。比如,当人的身体或身体的一部分被放入主磁场中时,与人体组织水内的氢核相联系的核自旋极化。
其中，梯度子单元可以产生梯度磁场来产生核磁回波信号，可以进行核磁信号的空间定位编码以及流动液体的流速相位编码、在DWI成像时施加扩散敏感梯度场等等。在一些实施例中，梯度子单元可以包括梯度线圈、梯度放大器、数模转换器、梯度控制器、梯度冷却器等等。
其中，射频子单元负责射频信号的发射、放大与接收，用来激发生命体或非生命体内氢原子核产生磁共振信号并接收该信号。射频子单元可以包括射频发生器、射频放大器以及射频线圈。在一些实施例中，为了使发送的射频信号均匀，医学图像处理设备的射频线圈可以选用正交线圈。在另一些实施例中，为了提高信噪比，可以选用表面线圈。在其它一些实施例中，还可以使用相控阵表面线圈以及一体化相控阵表面线圈等等。
获取生命体或非生命体磁共振三维图像的实际过程可以分为两个步骤。首先是薄层选择激发和空间编码,然后是确定编码容量内所含的有用信息。
在一些实施例中,采用最简单的成像即单个薄层成像,其步骤包括:使待研究薄层中的核选择激发,将由该薄层得到的信息进行二维编码;通过梯度斜率和射频脉冲的宽度,可以测定薄层厚度。
在一些实施例中,单个薄层中的空间编码,可以用二维高分辨频谱学来进行。某薄层中的空间编码方法为先施加相位编码梯度、然后再施加频率编码或读出梯度,施加对象为该薄层中的一系列极化自旋。
具体地，断开薄层选择梯度，并在固定时间周期t内施加第二个正交梯度Gy。核在不同频率下的进动过程，同时决定于它们相对于第二个梯度的位置。相位编码的最终结果即为沿Y方向的距离信息。在相位编码后把该梯度断开，然后施加与前二个梯度都正交的第三个梯度Gx，并且只在选定的适当时间t_x施加并进行编码。适当不断改变频率数值，就能够最终提供出沿X轴的空间编码。只要逐渐增加相位编码梯度的数值，这个过程就可以反复进行。
尽管未示出,医学图像处理设备还可以包括显示单元以及冷却系统等,在此不再赘述。
该医学图像处理设备具体可以包括一台或多台仪器。
在一些实施例中,该医学图像处理设备具体可以由一台仪器构成,例如核磁共振仪、核磁共振医学图像处理设备等等。比如,医用磁共振成像设备由中,处理器501、存储器502、电源503、输入单元504以及图像采集单元505嵌入在该医用磁共振成像设备中。
在另一些实施例中,该医学图像处理设备具体还可以由多台仪器构成,例如核磁共振图像采集系统。比如,在核磁共振图像采集系统中,图像采集单元505嵌入核磁共振图像采集系统的核磁共振仪床中,处理器501、存储器502、电源503以及输入单元504嵌入控制台中。
以上各个操作的具体实施可参见前面的实施例,在此不再赘述。
由上可知,本实施例的图像区域定位装置可以由处理器501获取目标部位的多个三维图像;提取多个三维图像的图像特征;然后处理器501对多个三维图像的图像特征进行融合处理,得到融合特征;处理器501根据融合特征确定三维图像中体素对应的体素类型;从三维图像中选择体素类型为预设体素类型的目标体素,得到目标体素的位置信息;基于目标体素的位置信息定位出目标区域。本方案处理器501可以提取不同模态的三维图像所提供的不同的图像特征,可以根据这些图像特征得到包含了多模态信息的融合特征,直接根据融合特征定位区域。由此,该方案可以降低误判率,从而提升目标区域定位的精确度。
本领域普通技术人员可以理解,上述实施例的各种方法中的全部或部分步骤可以通过指令来完成,或通过指令控制相关的硬件来完成,该指令可以存储于一计算机可读存储器中,并由处理器进行加载和执行。
为此,本申请实施例提供一种存储器,其中存储有多条指令,该指令能够被处理器进行加载,以执行本申请实施例所提供的任一种图像区域定位方法中的步骤。例如,该指令可以执行如下步骤:
获取目标部位的多个三维图像,其中,多个三维图像包括多个不同模态的三维图像;
提取多个三维图像的图像特征;
对多个三维图像的图像特征进行融合处理,得到融合特征;
根据融合特征确定三维图像中体素对应的体素类型;
从三维图像中选择体素类型为预设体素类型的目标体素,得到目标体素的位置信息;
基于目标体素的位置信息定位出目标区域。
其中,该存储器可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、磁盘或光盘等。
由于该存储器中所存储的指令,可以执行本申请实施例所提供的任一种图像区域定位方法中的步骤,因此,可以实现本申请实施例所提供的任一种图像区域定位方法所能实现的有益效果,详见前面的实施例,在此不再赘述。
以上对本申请实施例所提供的一种图像区域定位方法、装置和医学图像处理设备进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上,本说明书内容不应理解为对本申请的限制。
Claims (22)
- 一种图像区域定位方法,包括:获取目标部位的多个三维图像,其中,所述多个三维图像包括多个不同模态的三维图像;提取所述多个三维图像的图像特征;对所述多个三维图像的图像特征进行融合处理,得到融合特征;根据所述融合特征确定所述三维图像中体素对应的体素类型;从所述三维图像中选择所述体素类型为预设体素类型的目标体素,得到所述目标体素的位置信息;基于所述目标体素的位置信息定位出目标区域。
- 如权利要求1所述图像区域定位方法,根据所述融合特征确定所述三维图像中体素对应的体素类型,包括:确定所述三维图像中体素对应的融合特征;计算所述体素对应的融合特征属于各个体素类型的概率,得到所述体素属于所述各个体素类型的概率;根据所述体素属于所述各个体素类型的概率,确定所述体素对应的体素类型。
- 如权利要求1所述图像区域定位方法,对所述多个三维图像的图像特征进行融合处理,包括:获取图像特征对应的预设特征权重;基于所述预设特征权重,对所述多个三维图像的图像特征进行加权处理。
- 如权利要求3所述图像区域定位方法,基于所述预设特征权重,对所述多个三维图像的图像特征进行加权处理,包括:确定所述多个三维图像中相同位置的体素对应的多个图像特征;对所述多个图像特征进行预设特征权重的加权处理,得到加权处理后的多个图像特征;对所述加权处理后的多个图像特征进行累加操作,得到所述体素的融合特征。
- 如权利要求1所述图像区域定位方法,基于所述目标体素的位置信息定位出目标区域,包括:从所述多个三维图像中选取目标三维图像;基于所述目标体素的位置信息,在所述目标三维图像上对所述目标区域进行定位。
- 如权利要求5所述图像区域定位方法,基于所述目标体素的位置信息,在所述目标三维图像上对所述目标区域进行定位,包括:基于所述目标体素的位置信息,在所述目标三维图像上确定对应的体素,得到对应体素;将所述对应体素的体素值设为预设数值,以标识所述目标区域。
- 如权利要求5所述图像区域定位方法,基于所述目标体素的位置信息,在所述目标三维图像上对所述目标区域进行定位,包括:对所述所有目标体素的位置信息进行均值计算,得到所述目标区域的中心点位置信息;基于所述中心点位置信息,在所述目标三维图像上定位出所述目标区域的中心点。
- 如权利要求1所述图像区域定位方法,提取所述多个三维图像的图像特征,包括:对所述多个三维图像进行预处理操作,得到多个预处理后的三维图像;提取所述多个预处理后的三维图像的图像特征。
- 如权利要求8所述图像区域定位方法,对所述多个三维图像进行预处理操作,包括:获取参考坐标系以及所述多个三维图像的原坐标系;将所述多个三维图像的原坐标系变换为所述参考坐标系。
- 如权利要求8所述图像区域定位方法,对所述多个三维图像进行预处理操作,包括:获取参考体素间距以及所述多个三维图像的原体素间距;将所述多个三维图像的原体素间距变换为所述参考体素间距。
- 如权利要求8所述图像区域定位方法,对所述多个三维图像进行预处理操作,包括:获取参考尺寸大小以及所述多个三维图像的原尺寸大小;将所述多个三维图像的原尺寸大小变换为所述参考尺寸大小。
- 一种图像区域定位装置,包括:获取模块,获取目标部位的多个三维图像,其中,所述多个三维图像包括多个不同模态的三维图像;提取模块,用于提取所述多个三维图像的图像特征;融合模块,对所述多个三维图像的图像特征进行融合处理,得到融合特征;分类模块,根据所述融合特征确定所述三维图像中体素对应的体素类型;筛选模块,从所述三维图像中选择所述体素类型为预设体素类型的目标体素,得到所述目标体素的位置信息;定位模块,基于所述目标体素的位置信息定位出目标区域。
- 一种医学图像处理设备,所述医学图像处理设备包括医学图像采集单元、处理器和存储器,其中:所述医学图像采集单元用于采集生命体目标部位的多个三维图像;所述存储器用于存储图像数据以及多条指令;所述处理器用于读取存储器存储的多条指令,来执行以下步骤:获取所述目标部位的多个三维图像,其中,所述多个三维图像包括多个不同模态的三维图像;提取所述多个三维图像的图像特征;对所述多个三维图像的图像特征进行融合处理,得到融合特征;根据所述融合特征确定所述三维图像中体素对应的体素类型;从所述三维图像中选择所述体素类型为预设体素类型的目标体素,得到所述目标体素的位置信息;基于所述目标体素的位置信息定位出目标区域。
- 如权利要求13所述医学图像处理设备,其特征在于,当执行步骤根据所述融合特征确定所述三维图像中体素对应的体素类型时,所述处理器具体执行以下步骤:确定所述三维图像中体素对应的融合特征;计算所述体素对应的融合特征属于各个体素类型的概率,得到所述体素属于所述各个体素类型的概率;根据所述体素属于所述各个体素类型的概率,确定所述体素对应的体素类型;
- 如权利要求13所述医学图像处理设备,其特征在于,当执行步骤对所述多个三维图像的图像特征进行融合处理时,所述处理器具体执行以下步骤:获取图像特征对应的预设特征权重;基于所述预设特征权重,对所述多个三维图像的图像特征进行加权处理。
- 一种图像区域定位方法,包括:获取目标部位的多个三维图像,其中,所述多个三维图像包括多个不同模态的三维图像;提取所述多个三维图像的图像特征;对所述多个三维图像的图像特征进行融合处理,得到融合特征;根据所述融合特征确定所述三维图像中体素对应的体素类型;从所述三维图像中选择所述体素类型为乳腺肿瘤类型的目标体素,得到所述目标体素的位置信息;基于所述目标体素的位置信息定位出乳腺肿瘤区域。
- 如权利要求16所述图像区域定位方法,基于所述目标体素的位置信息定位出乳腺肿瘤区域,包括:从所述多个三维图像中选取目标三维图像;基于所述目标体素的位置信息,在所述目标三维图像上对所述乳腺肿瘤区域进行定位。
- 如权利要求17所述图像区域定位方法,基于所述目标体素的位置信息,在所述目标三维图像上对所述乳腺肿瘤区域进行定位,包括:基于所述目标体素的位置信息,在所述目标三维图像上确定对应的体素,得到对应体素;将所述对应体素的体素值设为预设数值,以标识所述乳腺肿瘤区域。
- 如权利要求17所述图像区域定位方法,基于所述目标体素的位置信息,在所述目标三维图像上对所述乳腺肿瘤区域进行定位,包括:对所述所有目标体素的位置信息进行均值计算,得到所述乳腺肿瘤区域的中心点位置信息;基于所述中心点位置信息,在所述目标三维图像上定位出所述乳腺肿瘤区域的中心点。
- 一种图像区域定位装置,包括:获取模块,获取目标部位的多个三维图像,其中,所述多个三维图像包括多个不同模态的三维图像;提取模块,用于提取所述多个三维图像的图像特征;融合模块,对所述多个三维图像的图像特征进行融合处理,得到融合特征;分类模块,根据所述融合特征确定所述三维图像中体素对应的体素类型;筛选模块,从所述三维图像中选择所述体素类型为乳腺肿瘤类型的目标体素,得到所述目标体素的位置信息;定位模块,基于所述目标体素的位置信息定位出乳腺肿瘤区域。
- 一种医学图像处理设备,所述医学图像处理设备包括医学图像采集单元、处理器和存储器,其中:所述医学图像采集单元用于采集生命体目标部位的多个三维图像;所述存储器用于存储图像数据以及多条指令;所述处理器用于读取存储器存储的多条指令,来执行以下步骤:获取所述目标部位的多个三维图像,其中,所述多个三维图像包括多个不同模态的三维图像;提取所述多个三维图像的图像特征;对所述多个三维图像的图像特征进行融合处理,得到融合特征;根据所述融合特征确定所述三维图像中体素对应的体素类型;从所述三维图像中选择所述体素类型为乳腺肿瘤类型的目标体素,得到所述目标体素的位置信息;基于所述目标体素的位置信息定位出乳腺肿瘤区域。
- 一种计算机可读存储介质,其上存储有计算机可读指令,当所述计算机可读指令被计算机的处理器执行时,使计算机执行权利要求1-11或16-19中的任一项所述的方法。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/222,471 US12067725B2 (en) | 2019-03-08 | 2021-04-05 | Image region localization method, image region localization apparatus, and medical image processing device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910175745.4 | 2019-03-08 | ||
CN201910175745.4A CN109978838B (zh) | 2019-03-08 | 2019-03-08 | 图像区域定位方法、装置和医学图像处理设备 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/222,471 Continuation US12067725B2 (en) | 2019-03-08 | 2021-04-05 | Image region localization method, image region localization apparatus, and medical image processing device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020182033A1 true WO2020182033A1 (zh) | 2020-09-17 |
Family
ID=67078246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/077746 WO2020182033A1 (zh) | 2019-03-08 | 2020-03-04 | 图像区域定位方法、装置和医学图像处理设备 |
Country Status (3)
Country | Link |
---|---|
US (1) | US12067725B2 (zh) |
CN (2) | CN110458813B (zh) |
WO (1) | WO2020182033A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113298907A (zh) * | 2021-06-22 | 2021-08-24 | 上饶师范学院 | 一种基于伽马核范数和总变分的核磁图像重建方法 |
CN113674254A (zh) * | 2021-08-25 | 2021-11-19 | 上海联影医疗科技股份有限公司 | 医学图像异常点识别方法、设备、电子装置和存储介质 |
US11527056B2 (en) | 2020-02-28 | 2022-12-13 | Alibaba Group Holding Limited | Image and data processing methods and apparatuses |
CN118096775A (zh) * | 2024-04-29 | 2024-05-28 | 红云红河烟草(集团)有限责任公司 | 烟支外观质量的多维表征与数字化特征测评方法及装置 |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110458813B (zh) * | 2019-03-08 | 2021-03-02 | 腾讯科技(深圳)有限公司 | 图像区域定位方法、装置和医学图像处理设备 |
CN110349170B (zh) * | 2019-07-13 | 2022-07-08 | 长春工业大学 | 一种全连接crf级联fcn和k均值脑肿瘤分割算法 |
CN110349151B (zh) * | 2019-07-16 | 2021-12-03 | 科大讯飞华南人工智能研究院(广州)有限公司 | 一种目标识别方法及装置 |
CN110363168A (zh) * | 2019-07-19 | 2019-10-22 | 山东浪潮人工智能研究院有限公司 | 一种基于卷积神经网络的三维立体图识别系统 |
CN110427954A (zh) * | 2019-07-26 | 2019-11-08 | 中国科学院自动化研究所 | 基于肿瘤影像的多区域的影像组学特征提取方法 |
CN110458903B (zh) * | 2019-07-29 | 2021-03-02 | 北京大学 | 一种编码脉冲序列的图像处理方法 |
CN110533639B (zh) * | 2019-08-02 | 2022-04-15 | 杭州依图医疗技术有限公司 | 一种关键点定位方法及装置 |
CN110634132A (zh) * | 2019-08-30 | 2019-12-31 | 浙江大学 | 基于深度学习的3d ct影像自动生成肺结核量化诊断报告的方法 |
CN111047605B (zh) * | 2019-12-05 | 2023-04-07 | 西北大学 | 一种脊椎ct分割网络模型的构建方法及分割方法 |
CN111068313B (zh) * | 2019-12-05 | 2021-02-19 | 腾讯科技(深圳)有限公司 | 一种应用中的场景更新控制方法、装置及存储介质 |
US11817204B2 (en) * | 2019-12-09 | 2023-11-14 | Case Western Reserve University | Specialized computer-aided diagnosis and disease characterization with a multi-focal ensemble of convolutional neural networks |
CN111144449B (zh) * | 2019-12-10 | 2024-01-19 | 东软集团股份有限公司 | 图像处理方法、装置、存储介质及电子设备 |
CN111210423B (zh) * | 2020-01-13 | 2023-04-07 | 浙江杜比医疗科技有限公司 | Nir图像的乳房轮廓提取方法、系统及装置 |
CN111325743A (zh) * | 2020-03-05 | 2020-06-23 | 北京深睿博联科技有限责任公司 | 基于联合征象的乳腺x射线影像分析方法和装置 |
CN111358484B (zh) * | 2020-03-23 | 2021-12-24 | 广州医科大学附属第一医院(广州呼吸中心) | 核医学肺灌注显像定量分析方法、分析设备及存储介质 |
CN111340139B (zh) * | 2020-03-27 | 2024-03-05 | 中国科学院微电子研究所 | 一种图像内容复杂度的判别方法及装置 |
CN112330674B (zh) * | 2020-05-07 | 2023-06-30 | 南京信息工程大学 | 一种基于脑部mri三维图像置信度的自适应变尺度卷积核方法 |
CN111568422B (zh) * | 2020-05-20 | 2023-12-01 | 科大讯飞股份有限公司 | 影像质量评估方法、指标间关系的获取方法及相关设备 |
CN112270749B (zh) * | 2020-11-20 | 2022-08-12 | 甘州区人民医院 | 一种3d打印盆骨的制造方法 |
CN112815493A (zh) * | 2021-01-11 | 2021-05-18 | 珠海格力电器股份有限公司 | 一种空调控制方法、装置、存储介质及空调 |
CN112767415B (zh) * | 2021-01-13 | 2024-07-30 | 深圳瀚维智能医疗科技有限公司 | 胸部扫查区域自动确定方法、装置、设备及存储介质 |
CN112862786B (zh) * | 2021-02-10 | 2022-08-05 | 昆明同心医联科技有限公司 | Cta影像数据处理方法、装置及存储介质 |
CN113033398B (zh) * | 2021-03-25 | 2022-02-11 | 深圳市康冠商用科技有限公司 | 一种手势识别方法、装置、计算机设备及存储介质 |
CN113066081B (zh) * | 2021-04-15 | 2023-07-18 | 哈尔滨理工大学 | 一种基于三维mri图像的乳腺肿瘤分子亚型检测方法 |
CN113192031B (zh) * | 2021-04-29 | 2023-05-30 | 上海联影医疗科技股份有限公司 | 血管分析方法、装置、计算机设备和存储介质 |
CN113256614B (zh) * | 2021-06-22 | 2021-10-01 | 国家超级计算天津中心 | 一种医学影像处理系统 |
CN113312442B (zh) * | 2021-07-30 | 2021-11-09 | 景网技术有限公司 | 一种智慧城市电子地图生成方法和系统 |
CN113763536A (zh) * | 2021-09-03 | 2021-12-07 | 济南大学 | 一种基于rgb图像的三维重建方法 |
CN113838132B (zh) * | 2021-09-22 | 2023-08-04 | 中国计量大学 | 一种基于卷积神经网络的单分子定位方法 |
CN114299009A (zh) * | 2021-12-27 | 2022-04-08 | 杭州佳量医疗科技有限公司 | 基于医学图像的消融区域确定方法、设备及存储介质 |
CN114494183B (zh) * | 2022-01-25 | 2024-04-02 | 哈尔滨医科大学附属第一医院 | 一种基于人工智能的髋臼半径自动测量方法及系统 |
CN114565815B (zh) * | 2022-02-25 | 2023-11-03 | 包头市迪迦科技有限公司 | 一种基于三维模型的视频智能融合方法及系统 |
CN115474992A (zh) * | 2022-09-21 | 2022-12-16 | 数坤(上海)医疗科技有限公司 | 进针位置确定方法、装置、电子设备及可读存储介质 |
CN115294160B (zh) * | 2022-10-08 | 2022-12-16 | 长春理工大学 | 面向脊柱影像的轻量化分割网络及其构建方法和应用 |
CN115375712B (zh) * | 2022-10-25 | 2023-03-17 | 西南科技大学 | 一种基于双边学习分支实现实用的肺部病变分割方法 |
WO2024123484A1 (en) * | 2022-12-06 | 2024-06-13 | Simbiosys, Inc. | Multi-tissue segmentation and deformation for breast cancer surgery |
CN116016829A (zh) * | 2022-12-28 | 2023-04-25 | 杭州海康慧影科技有限公司 | 一种图像显示方法、装置及系统 |
CN116863146B (zh) * | 2023-06-09 | 2024-03-08 | 强联智创(北京)科技有限公司 | 用于对血管瘤特征进行提取的方法、设备及存储介质 |
CN117132729B (zh) * | 2023-07-24 | 2024-08-27 | 清华大学 | 多模态精细乳腺模型设计方法、装置、设备及介质 |
CN117481672B (zh) * | 2023-10-25 | 2024-06-25 | 深圳医和家智慧医疗科技有限公司 | 一种乳腺组织硬化初期快速筛查智能方法 |
CN118212660B (zh) * | 2024-05-22 | 2024-07-23 | 四川省医学科学院·四川省人民医院 | 一种基于图像识别的智能坐浴系统 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107945168A (zh) * | 2017-11-30 | 2018-04-20 | 上海联影医疗科技有限公司 | 一种医学图像的处理方法及医学图像处理系统 |
CN109166157A (zh) * | 2018-07-05 | 2019-01-08 | 重庆邮电大学 | 一种三维mri脑部医学影像彩色化方法 |
CN109978838A (zh) * | 2019-03-08 | 2019-07-05 | 腾讯科技(深圳)有限公司 | 图像区域定位方法、装置和医学图像处理设备 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006114003A1 (en) * | 2005-04-27 | 2006-11-02 | The Governors Of The University Of Alberta | A method and system for automatic detection and segmentation of tumors and associated edema (swelling) in magnetic resonance (mri) images |
KR102444968B1 (ko) * | 2014-06-12 | 2022-09-21 | 코닌클리케 필립스 엔.브이. | 의료 영상 처리 장치 및 방법 |
CN104574334A (zh) * | 2015-01-12 | 2015-04-29 | 北京航空航天大学 | 一种利用模糊度量和形态学交替算子的红外与可见光图像融合方法 |
CN105512661B (zh) * | 2015-11-25 | 2019-02-26 | 中国人民解放军信息工程大学 | 一种基于多模态特征融合的遥感影像分类方法 |
US10140708B2 (en) * | 2016-01-21 | 2018-11-27 | Riverside Research Institute | Method for gestational age estimation and embryonic mutant detection |
US10152821B2 (en) * | 2016-08-19 | 2018-12-11 | Siemens Healthcare Gmbh | Segmented volume rendering with color bleeding prevention |
CN106909778B (zh) * | 2017-02-09 | 2019-08-27 | 北京市计算中心 | 一种基于深度学习的多模态医学影像识别方法及装置 |
CN106960221A (zh) * | 2017-03-14 | 2017-07-18 | 哈尔滨工业大学深圳研究生院 | 一种基于光谱特征和空间特征融合的高光谱图像分类方法及系统 |
US11361868B2 (en) * | 2017-08-16 | 2022-06-14 | The Johns Hopkins University | Abnormal tissue detection via modal upstream data fusion |
CN109377496B (zh) * | 2017-10-30 | 2020-10-02 | 北京昆仑医云科技有限公司 | 用于分割医学图像的系统和方法及介质 |
CN108062753B (zh) * | 2017-12-29 | 2020-04-17 | 重庆理工大学 | 基于深度对抗学习的无监督域自适应脑肿瘤语义分割方法 |
CN109035261B (zh) * | 2018-08-09 | 2023-01-10 | 北京市商汤科技开发有限公司 | 医疗影像处理方法及装置、电子设备及存储介质 |
CN109410185B (zh) * | 2018-10-10 | 2019-10-25 | 腾讯科技(深圳)有限公司 | 一种图像分割方法、装置和存储介质 |
- 2019-03-08 CN CN201910684018.0A patent/CN110458813B/zh active Active
- 2019-03-08 CN CN201910175745.4A patent/CN109978838B/zh active Active
- 2020-03-04 WO PCT/CN2020/077746 patent/WO2020182033A1/zh active Application Filing
- 2021-04-05 US US17/222,471 patent/US12067725B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107945168A (zh) * | 2017-11-30 | 2018-04-20 | Shanghai United Imaging Healthcare Co., Ltd. | A medical image processing method and medical image processing system |
CN109166157A (zh) * | 2018-07-05 | 2019-01-08 | Chongqing University of Posts and Telecommunications | A colorization method for three-dimensional MRI brain medical images |
CN109978838A (zh) * | 2019-03-08 | 2019-07-05 | Tencent Technology (Shenzhen) Co., Ltd. | Image region positioning method and apparatus, and medical image processing device |
CN110458813A (zh) * | 2019-03-08 | 2019-11-15 | Tencent Technology (Shenzhen) Co., Ltd. | Image region positioning method and apparatus, and medical image processing device |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11527056B2 (en) | 2020-02-28 | 2022-12-13 | Alibaba Group Holding Limited | Image and data processing methods and apparatuses |
US12056743B2 (en) | 2020-02-28 | 2024-08-06 | Alibaba Group Holding Limited | Image and data processing methods and apparatuses |
CN113298907A (zh) * | 2021-06-22 | 2021-08-24 | Shangrao Normal University | An MRI image reconstruction method based on gamma nuclear norm and total variation |
CN113298907B (zh) * | 2021-06-22 | 2022-09-13 | Shangrao Normal University | An MRI image reconstruction method based on gamma nuclear norm and total variation |
CN113674254A (zh) * | 2021-08-25 | 2021-11-19 | Shanghai United Imaging Healthcare Co., Ltd. | Medical image anomaly point identification method, device, electronic apparatus, and storage medium |
CN113674254B (zh) * | 2021-08-25 | 2024-05-14 | Shanghai United Imaging Healthcare Co., Ltd. | Medical image anomaly point identification method, device, electronic apparatus, and storage medium |
CN118096775A (zh) * | 2024-04-29 | 2024-05-28 | Hongyun Honghe Tobacco (Group) Co., Ltd. | Multi-dimensional characterization and digital feature evaluation method and apparatus for cigarette appearance quality |
Also Published As
Publication number | Publication date |
---|---|
US12067725B2 (en) | 2024-08-20 |
CN110458813A (zh) | 2019-11-15 |
CN109978838B (zh) | 2021-11-30 |
CN110458813B (zh) | 2021-03-02 |
CN109978838A (zh) | 2019-07-05 |
US20210225027A1 (en) | 2021-07-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020182033A1 (zh) | Image region positioning method and apparatus, and medical image processing device | |
CN109872364B (zh) | Image region positioning method and apparatus, storage medium, and medical image processing device | |
US11967072B2 (en) | Three-dimensional object segmentation of medical images localized with object detection | |
Zhang et al. | ME‐Net: multi‐encoder net framework for brain tumor segmentation | |
Ma et al. | Thyroid diagnosis from SPECT images using convolutional neural network with optimization | |
Pan et al. | Disease-image-specific learning for diagnosis-oriented neuroimage synthesis with incomplete multi-modality data | |
Dou et al. | Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks | |
Xu et al. | Contrast agent-free synthesis and segmentation of ischemic heart disease images using progressive sequential causal GANs | |
Ammar et al. | Automatic cardiac cine MRI segmentation and heart disease classification | |
Chetty et al. | A low resource 3D U-Net based deep learning model for medical image analysis | |
US20220222873A1 (en) | Devices and process for synthesizing images from a source nature to a target nature | |
Bonmati et al. | Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalizing neural network | |
Feng et al. | Convolutional neural network‐based pelvic floor structure segmentation using magnetic resonance imaging in pelvic organ prolapse | |
CN115953420B (zh) | Deep learning network model and medical image segmentation method, apparatus, and system | |
CN111462146A (zh) | A multimodal medical image registration method based on spatiotemporal agents | |
CN115147600A (zh) | GBM multimodal MR image segmentation method based on classifier weight transformer | |
Yan et al. | Label image constrained multiatlas selection | |
Geng et al. | Encoder-decoder with dense dilated spatial pyramid pooling for prostate MR images segmentation | |
Zhang et al. | Rapid surface registration of 3D volumes using a neural network approach | |
CN114119354A (zh) | Medical image registration training and usage method, system, and apparatus | |
CN118037791A (zh) | Construction method and application of a multimodal three-dimensional medical image segmentation and registration model | |
Hung et al. | CSAM: A 2.5D Cross-Slice Attention Module for Anisotropic Volumetric Medical Image Segmentation | |
Raza et al. | CycleGAN with mutual information loss constraint generates structurally aligned CT images from functional EIT images | |
Longuefosse et al. | Lung CT Synthesis Using GANs with Conditional Normalization on Registered Ultrashort Echo-Time MRI | |
Wang et al. | Automatic right ventricular segmentation for cine cardiac magnetic resonance images based on a new deep atlas network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20769493; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20769493; Country of ref document: EP; Kind code of ref document: A1 |