CN110475505A - Automatic segmentation using fully convolutional networks - Google Patents
Automatic segmentation using fully convolutional networks
- Publication number
- CN110475505A (application CN201880020558.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- heart
- training
- machine learning
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/10—Segmentation; Edge detection
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06T7/0012—Biomedical image inspection
- G06T7/11—Region-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/143—Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30004—Biomedical image processing
- G06T2207/30048—Heart; Cardiac
Abstract
Systems and methods for automatic segmentation of anatomical structures (for example, the heart). A convolutional neural network (CNN) can autonomously segment portions of an anatomical structure represented by image data (such as 3D MRI data). The CNN uses two paths, a contracting path and an expanding path. In at least some embodiments, the expanding path includes fewer convolution operations than the contracting path. The systems and methods also autonomously compute an image intensity threshold that distinguishes blood from the papillary and trabecular muscles inside the endocardial contour, and autonomously apply that threshold to define a contour or mask delineating the boundary of the papillary and trabecular muscles. The systems and methods also use a trained CNN model to compute contours or masks delineating the endocardium and epicardium, and use the computed contours or masks to anatomically localize pathologies or functional characteristics of the myocardium.
Description
Technical field
The present disclosure relates generally to the automatic segmentation of anatomical structures.
Background
Magnetic resonance imaging (MRI) is commonly used for cardiac imaging to assess patients with known or suspected heart disease. In particular, cardiac MRI's ability to accurately capture high-resolution cine images of the heart makes it possible to quantify indices relevant to heart failure and similar conditions. From these high-resolution images, the volumes of the relevant anatomical regions of the heart (such as the ventricles and the myocardium) can be measured manually or with the aid of semi-automated or fully automated software.
A cardiac MRI cine sequence consists of one or more spatial slices, each containing multiple time points (for example, 20) spanning the entire cardiac cycle. Typically, some subset of the following views is captured as separate series: the short-axis (SAX) view, which consists of a stack of slices along the long axis of the left ventricle, each slice lying in the plane of the left ventricle's short axis, orthogonal to the ventricle's long axis; the 2-chamber (2CH) view, a long-axis (LAX) view showing the left ventricle and left atrium, or the right ventricle and right atrium; the 3-chamber (3CH) view, a LAX view showing the left ventricle, left atrium, and aorta, or the right ventricle, right atrium, and aorta; and the 4-chamber (4CH) view, a LAX view showing the left ventricle, left atrium, right ventricle, and right atrium.
Depending on the acquisition type, these views may be captured directly at the scanner (for example, steady-state free precession (SSFP) MRI), or they may be constructed by multiplanar reconstruction (MPR) of a volume acquired in a different orientation (for example, axial, sagittal, or coronal, as in 4D Flow MRI). The SAX view has multiple spatial slices, typically covering the entire volume of the heart, whereas the 2CH, 3CH, and 4CH views usually have only a single spatial slice. All series are cines, with multiple time points covering the complete cardiac cycle.
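The slice-and-timepoint layout described above can be captured in a small container type. The following is an illustrative sketch only; the class name and fields are hypothetical and not part of the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CineSeries:
    """One cine series: a stack of spatial slices, each imaged at every
    time point of the cardiac cycle (e.g. 20 points per cycle)."""
    view: str                 # e.g. "SAX", "2CH", "3CH", "4CH"
    frames: np.ndarray        # shape (n_slices, n_timepoints, height, width)
    slice_gap_mm: float       # distance between adjacent spatial slices
    pixel_spacing_mm: tuple   # (row spacing, column spacing) in mm per pixel

    @property
    def n_slices(self) -> int:
        return self.frames.shape[0]

    @property
    def n_timepoints(self) -> int:
        return self.frames.shape[1]

# A SAX series covers the heart with many slices; a 4CH view has one slice.
sax = CineSeries("SAX", np.zeros((12, 20, 64, 64)), 8.0, (1.4, 1.4))
four_ch = CineSeries("4CH", np.zeros((1, 20, 64, 64)), 8.0, (1.4, 1.4))
```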
Several important measurements of cardiac function depend on accurate measurement of ventricular volumes. For example, the ejection fraction (EF) represents the fraction of blood pumped out of the left ventricle (LV) with each heartbeat. An abnormally low EF reading is commonly associated with heart failure. Measuring EF depends on the volume of the LV blood pool at both end-systole, when the LV is maximally contracted, and end-diastole, when the LV is maximally dilated.
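As a concrete sketch of the relationship just described (the function name and the example volumes are illustrative, not from the patent):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction in percent, from the end-diastolic volume (EDV,
    maximum dilation) and the end-systolic volume (ESV, maximum
    contraction) of the LV blood pool."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# e.g. EDV = 120 mL, ESV = 50 mL gives an EF of roughly 58%
ef = ejection_fraction(120.0, 50.0)
```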
To measure the volume of the LV, the ventricle is usually segmented in the SAX view. A radiologist reviewing a case may first determine the end-systolic (ES) and end-diastolic (ED) time points by manually cycling through the time points of an individual slice and identifying the points of maximum ventricular contraction and dilation, respectively. Having identified these two time points, the radiologist then draws a contour around the LV on every slice of the SAX series in which the ventricle is visible.
Once the contours have been constructed, the ventricular area in each slice can be calculated by summing the pixels within the contour and multiplying by the pixel spacing in the x and y directions (for example, in millimeters per pixel). The total ventricular volume can then be determined by summing the areas across the spatial slices and multiplying by the inter-slice distance (for example, in millimeters (mm)), yielding a volume in cubic millimeters. Other methods of integrating the slice areas can also be applied to determine the total volume, for example a variant of Simpson's rule, which approximates the discrete integral using quadratic segments rather than straight line segments. Volumes are typically computed at ES and ED, and the ejection fraction and similar measurements can be determined from these volumes.
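The area and volume arithmetic described above can be sketched as follows. The function names are illustrative; the Simpson variant integrates the slice-area profile with quadratic segments and therefore needs an even number of inter-slice intervals:

```python
import numpy as np

def slice_area_mm2(mask: np.ndarray, row_mm: float, col_mm: float) -> float:
    """Area of one slice's contour: the pixel count inside the contour
    times the pixel spacing in each direction (mm per pixel)."""
    return float(mask.sum()) * row_mm * col_mm

def volume_mm3(areas_mm2, slice_gap_mm: float) -> float:
    """Simple slab summation: each slice area is treated as constant
    across the inter-slice gap."""
    return float(np.sum(areas_mm2)) * slice_gap_mm

def volume_simpson_mm3(areas_mm2, slice_gap_mm: float) -> float:
    """Composite Simpson's rule over the slice-area profile, using
    quadratic rather than piecewise-constant segments."""
    a = np.asarray(areas_mm2, dtype=float)
    n = len(a) - 1  # number of inter-slice intervals
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of intervals")
    return slice_gap_mm / 3.0 * (
        a[0] + a[-1] + 4.0 * a[1:-1:2].sum() + 2.0 * a[2:-1:2].sum())

# Five slices, each a 3x3-pixel contour at 1.5 mm spacing, slices 8 mm apart
areas = [slice_area_mm2(np.ones((3, 3), dtype=bool), 1.5, 1.5)] * 5
```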
To measure the LV blood pool volume, the radiologist typically constructs contours along the LV endocardium (the inner wall of the myocardium) on roughly 10 spatial slices at each of the two time points (ES and ED), for a total of roughly 20 contours. Although semi-automated contouring tools exist (for example, based on active contour or "snakes" algorithms), these still generally require some manual adjustment of the contours, especially for images with noise or artifacts. The entire process of constructing these contours can take 10 minutes or longer, much of it spent on manual adjustment. Exemplary LV endocardial contours are shown in images 100a through 100k of Figure 1, which show the contours at a single time point across the entire SAX stack. From 100a to 100k, the slices proceed from the apex of the left ventricle to its base.
Although the description above concerns measurement of the LV blood pool (via the LV endocardial contour), the same volume measurement is generally also needed for the right ventricular (RV) blood pool, to assess functional pathology in the right ventricle. Additionally, it is sometimes necessary to measure the mass of the myocardium (the heart muscle), which requires drawing contours of the epicardium (the outer surface of the myocardium). Even with semi-automated tools, an experienced radiologist may spend 10 minutes or longer constructing and correcting each of these four contours (LV endocardium, LV epicardium, RV endocardium, RV epicardium). Constructing all four contours may take 30 minutes or longer.
The most obvious consequence of this laborious process is that interpreting cardiac MRI studies is expensive. Another important consequence is that contour-based measurements are generally not performed unless required, which limits the diagnostic information that can be extracted from each cardiac MRI study. Fully automated contour generation and volume measurement would therefore offer significant benefits, both in reducing the radiologist's workload and in increasing the amount of diagnostic information that can be extracted from each study.
Limitations of active contour-based methods
The most basic way to create ventricular contours is to complete the process manually, using a polygon or spline drawing tool, without any automated algorithms. In this case, the user might, for example, construct a freehand drawing of the ventricular contour, or place spline control points that are then connected by a smooth spline curve. After the initial contour is constructed, and depending on the software's user interface, the user typically has the ability to modify the contour, for example by moving, adding, or deleting control points, or by dragging spline segments.
To reduce the burden of this process, most software packages that support ventricular segmentation include semi-automated segmentation tools. One algorithm for semi-automated ventricular segmentation is the "snakes" algorithm (more formally known as "active contours"). See Kass, M., Witkin, A., and Terzopoulos, D. (1988), "Snakes: Active contour models," International Journal of Computer Vision, 1(4), 321–331. The snakes algorithm produces a deformable spline that is constrained to the intensity gradients surrounding it in the image via an energy-minimization method. In effect, the method attempts to constrain the contour to high-gradient regions (edges) in the image, while also minimizing regions of high directional gradient ("bends," or curvature) in the contour itself. The optimal result is a smooth contour that tightly encloses the edges in the image. Image 200 of Figure 2 shows an exemplary successful result of the snakes algorithm on the left ventricular endocardium in a 4D Flow cardiac study, illustrating the LV endocardial contour 202.
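As a minimal sketch of the energy-minimization idea (this is not the patent's method; the greedy 3x3 neighborhood search below is a common simplification of the original formulation, and all names are illustrative):

```python
import numpy as np

def snake_energy(grad_mag, pts, alpha=0.5, beta=0.5):
    """Total energy of a closed contour: the internal terms penalize
    stretching (first difference) and bending (second difference); the
    external term rewards sitting on strong image gradients."""
    d1 = np.roll(pts, -1, axis=0) - pts
    d2 = np.roll(pts, -1, axis=0) - 2.0 * pts + np.roll(pts, 1, axis=0)
    internal = alpha * (d1 ** 2).sum() + beta * (d2 ** 2).sum()
    ys = np.clip(np.round(pts[:, 0]).astype(int), 0, grad_mag.shape[0] - 1)
    xs = np.clip(np.round(pts[:, 1]).astype(int), 0, grad_mag.shape[1] - 1)
    external = -grad_mag[ys, xs].sum()
    return internal + external

def greedy_snake(grad_mag, pts, iters=20, alpha=0.5, beta=0.5):
    """Greedy optimization: move each control point to the position in
    its 3x3 neighborhood that lowers the total energy, if any does."""
    pts = pts.astype(float).copy()
    for _ in range(iters):
        for i in range(len(pts)):
            best_e = snake_energy(grad_mag, pts, alpha, beta)
            best = pts[i].copy()
            for dy in (-1.0, 0.0, 1.0):
                for dx in (-1.0, 0.0, 1.0):
                    orig = pts[i].copy()
                    pts[i] = orig + np.array([dy, dx])
                    e = snake_energy(grad_mag, pts, alpha, beta)
                    if e < best_e:
                        best_e, best = e, pts[i].copy()
                    pts[i] = orig
            pts[i] = best
    return pts
```

Because each point move is accepted only if it lowers the total energy, the energy is non-increasing over iterations, but, as discussed below, such greedy optimization can stall in local minima.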
Although the snakes algorithm is common, and although modifying a contour it produces is much faster than generating one from scratch, the algorithm has several notable drawbacks. In particular, the snakes algorithm requires a "seed": the seed contour that the algorithm refines must be set by the user or by a heuristic. Moreover, the snakes algorithm only understands local context. Its cost function generally assigns credit when the contour overlaps an edge in the image; however, there is no way to tell the algorithm which detected edges are the desired ones. For example, it makes no distinction between the boundary of the endocardium and the boundaries of other anatomical entities (for example, the other ventricle, the lungs, or the liver). The algorithm is therefore highly dependent on predictable anatomy and on correct seed placement.
Furthermore, the snakes algorithm is greedy. Its energy function is usually optimized with a greedy algorithm such as gradient descent, which iteratively moves the free parameters along the gradient direction of the cost function. However, gradient descent, like many similar optimization algorithms, is prone to getting trapped in local minima of the cost function. This manifests as contours that latch onto false edges in the image, such as imaging artifacts or the edges between the blood pool and the papillary muscles. In addition, the snakes algorithm has a very small representation space. It typically has only tens of adjustable parameters, and so lacks the capacity to represent the wide variety of images that may need to be segmented. Many different factors influence the captured image of a given ventricle, including anatomy (for example, ventricle size, shape, pathology, surgical history, papillary muscles), imaging protocol (for example, contrast agent, pulse sequence, scanner type, quality and type of receiver coil, patient positioning, image resolution), and other factors (for example, motion artifacts). Given the diversity of recorded images and the small number of adjustable parameters, the snakes algorithm performs well only in a small number of "well-behaved" cases.
Despite these and other shortcomings, the popularity of the snakes algorithm stems largely from the fact that it requires no specific "training," which makes it relatively easy to implement. However, the snakes algorithm cannot adapt to more complex cases.
The challenge of excluding the papillary muscles from the blood pool
The papillary muscles are muscles inside the endocardium of both the left and right ventricles. They keep the mitral and tricuspid valves closed when pressure on the valves increases during ventricular contraction. Figure 3 shows illustrative SSFP MRI images 300a (end-diastole) and 300b (end-systole), illustrating the papillary muscles and the myocardium of the left ventricle. Note that at end-diastole (image 300a), the main challenge is distinguishing the papillary muscles from the blood pool in which they sit, while at end-systole (image 300b), the main challenge is distinguishing the papillary muscles from the myocardium.
When segmenting the ventricular blood pool (manually or automatically), the papillary muscles may be either included in the contour or excluded from it. Note that the contour around the blood pool is commonly referred to as the "endocardial contour" whether the papillary muscles are included or excluded. In the latter case, the term "endocardial" is not strictly accurate, since the contour does not follow the true endocardial surface; nevertheless, the term "endocardial contour" is used for convenience.
Endocardial contours are typically created on each image in the SAX stack in order to measure the volume of blood in the ventricle. Consequently, the most accurate measurement of blood volume is obtained if the papillary muscles are excluded from the endocardial contour. However, because the muscles are numerous and small, excluding them when constructing contours manually requires particular care, which significantly increases the burden of the process. For this reason, when constructing manual contours, the papillary muscles are usually included in the endocardial contour, leading to a slight overestimate of ventricular blood volume. Technically, this measures the sum of the blood pool volume and the papillary muscle volume.
Automated or semi-automated tools can accelerate the process of excluding papillary muscles from the endocardial contour, but they come with important caveats. The snake algorithm (discussed above) is not suitable for excluding papillary muscles at end-diastole, because its canonical formulation can only contour a single connected region without holes. Although the algorithm can be adapted to handle holes within a contour, the papillary muscles are much smaller than the blood pool, so the algorithm must be re-tuned to handle small and large connected regions simultaneously. In short, the standard snake algorithm cannot be used to segment the blood pool while excluding the papillary muscles at end-diastole.
At end-systole, when most of the papillary muscle mass abuts the myocardium, the snake algorithm will by default exclude most of the papillary muscles from the endocardial contour, and cannot include them (because there is little or no intensity boundary between the papillary muscles and the myocardium). Thus, in its standard form, the snake algorithm can only include the papillary muscles at end-diastole and can only exclude them at end-systole, which leads to inconsistent measurements of blood pool volume across the cardiac cycle. This is a major limitation of the snake algorithm, and without substantial user correction it can preclude clinical use of its output.
Another semi-automated method for constructing blood pool contours uses a "flood fill" algorithm. Under the flood fill algorithm, the user selects an initial seed point, and all pixels that are connected to the seed point and whose intensity gradient and distance from the seed point do not exceed thresholds are included in the selected mask. Although flood fill, like the snake algorithm, requires the segmented region to be connected, flood fill has the advantage of allowing connected regions to contain holes. Therefore, because the papillary muscles can be distinguished from the blood pool by their intensity, the flood fill algorithm can be formulated, either through dynamic user input or in a hard-coded manner, to exclude the papillary muscles from the segmentation. Flood fill can be used to include the papillary muscles in the endocardial segmentation at end-diastole; at end-systole, however, because most of the papillary muscles are connected to the myocardium (rendering the two regions nearly indistinguishable), flood fill cannot be used to include the papillary muscles in the endocardial segmentation.
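The flood fill procedure described above can be sketched in a few lines (an illustrative example, not part of the patent text; a simple intensity difference from the seed value stands in for the gradient criterion, and the toy image is hypothetical):

```python
from collections import deque

def flood_fill(image, seed, intensity_tol, max_dist):
    """Return the set of pixels connected to `seed` whose intensity differs
    from the seed intensity by at most `intensity_tol` and whose Chebyshev
    distance from the seed does not exceed `max_dist`."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    seed_val = image[sr][sc]
    mask, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in mask
                    and abs(image[nr][nc] - seed_val) <= intensity_tol
                    and max(abs(nr - sr), abs(nc - sc)) <= max_dist):
                mask.add((nr, nc))
                queue.append((nr, nc))
    return mask

# Bright blood pool (9s) containing a dark "papillary muscle" pixel (2):
# the dark pixel fails the intensity test, leaving a hole in the region.
img = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 2, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 0, 0, 0],
]
pool = flood_fill(img, (1, 1), intensity_tol=1, max_dist=10)
```

On the toy image, the pixel at (2, 2) is excluded from the filled blood pool even though it is surrounded by it, mirroring the papillary muscle exclusion behavior described above.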
Besides the inability to distinguish papillary muscles from myocardium at end-systole, the major drawback of flood fill is that, although it can significantly reduce the workload of the segmentation process relative to fully manual segmentation, it still requires substantial user input to dynamically determine the flood fill gradient and distance thresholds. Applicants have found that, although accurate segmentations can be constructed using a flood fill tool, constructing segmentations of clinically acceptable accuracy still requires extensive manual adjustment.
Challenges of segmenting basal slices in the short-axis view
Cardiac segmentations are typically constructed on short-axis, or SAX, stack images. One major drawback of segmenting on the SAX stack is that the SAX planes are nearly parallel to the planes of the mitral and tricuspid valves. This has two consequences. First, the valves are difficult to distinguish in SAX stack slices. Second, given that the SAX stack is not perfectly parallel to the valve plane, at least one slice near the base of the heart will lie partly in the ventricle and partly in the atrium.
Images 400a and 400b of Fig. 4 show exemplary instances in which the left ventricle and left atrium are found in a single slice. If the clinician fails to consult the current SAX slice projected onto the corresponding LAX view, it may not be apparent that the SAX slice spans both ventricle and atrium. Moreover, even if the LAX view is available, because the ventricle and atrium have similar signal intensity, it may be difficult to judge where the valve lies on the SAX slice, and therefore difficult to judge where the ventricular segmentation should end. Consequently, segmentation near the base of the heart is one of the main sources of error in ventricular segmentation.
Landmarks
In the 4D Flow workflow of a cardiac imaging application, the user may need to define landmarks at different locations in the heart in order to view different cardiac views (e.g., 2CH, 3CH, 4CH, SAX) and to segment the ventricles. The landmarks needed to segment the left ventricle and view the 2CH, 3CH, and 4CH left-heart views include the LV apex, the mitral valve, and the aortic valve. The landmarks needed to segment the RV and view the corresponding views include the RV apex, the tricuspid valve, and the pulmonary valve.
An existing method for locating landmarks on 3D T1-weighted MRI is described in "Regressing Heatmaps for Multiple Landmark Localization using CNNs" by Payer, Christian, Horst Bischof and Martin Urschler, Proc Medical Image Computing & Computer Assisted Intervention (MICCAI) 2016, Springer Verlag. The method developed in that paper is referred to herein as "Landmark Detect." Landmark Detect builds on two notable components. First, it uses a modification of the U-Net neural network, as discussed in "U-net: Convolutional networks for biomedical image segmentation" by Ronneberger, Olaf, Philipp Fischer and Thomas Brox, International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, Springer International Publishing, 2015. Second, the landmarks are encoded during training using Gaussian functions with arbitrarily chosen standard deviations. The Landmark Detect neural network 500 of Fig. 5 differs from U-Net in using average pooling layers in place of max pooling layers.
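The Gaussian landmark encoding used during training can be illustrated with a minimal 2D sketch (illustrative only; the function name, grid size, and standard deviation are hypothetical, the latter being a free parameter in the Payer et al. scheme):

```python
import math

def gaussian_heatmap(shape, center, sigma):
    """Encode a landmark as a 2D Gaussian heatmap: each pixel holds
    exp(-d^2 / (2*sigma^2)), where d is its distance to the landmark."""
    rows, cols = shape
    cy, cx = center
    return [[math.exp(-((r - cy) ** 2 + (c - cx) ** 2) / (2 * sigma ** 2))
             for c in range(cols)] for r in range(rows)]

# A landmark at (2, 3) on a 5x7 grid; the heatmap peaks at the landmark
# and decays smoothly away from it.
hm = gaussian_heatmap((5, 7), (2, 3), sigma=1.0)
```

The network then regresses these smooth target maps rather than single-pixel targets, which gives a dense, well-behaved training signal around each landmark.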
One limitation of Landmark Detect is the lack of handling for missing landmarks: it assumes that every landmark is present in every image. Another limitation is the absence of a hyperparameter search beyond kernel and layer sizes. A further limitation is the use of fixed, non-learned upsampling layer parameters. In addition, Landmark Detect relies on a limited preprocessing strategy, namely removing the mean of the 3D image (i.e., centering the input data).
Accordingly, there is a need for systems and methods that address some or all of the above disadvantages.
Summary of the invention
A machine learning system may be summarized as including: at least one non-transitory processor-readable storage medium that stores at least one of processor-executable instructions or data; and at least one processor communicably coupled to the at least one non-transitory processor-readable storage medium, the at least one processor: receiving learning data including a plurality of batches of labeled image sets, each image set including image data representative of an anatomical structure and including at least one label that identifies a region of a particular part of the anatomical structure depicted in each image of the image set; training a fully convolutional neural network (CNN) model to segment at least a part of the anatomical structure utilizing the received learning data; and storing the trained CNN model in the at least one non-transitory processor-readable storage medium of the machine learning system. The CNN model may include a contracting path and an expanding path; the contracting path may include a number of convolutional layers and a number of pooling layers, each pooling layer preceded by at least one convolutional layer, and the expanding path may include a number of convolutional layers and a number of upsampling layers, each upsampling layer preceded by at least one convolutional layer and including a transpose convolution operation that performs upsampling and interpolation with a learned kernel. Following each upsampling layer, the CNN model may include a concatenation of feature maps from a corresponding layer in the contracting path through a skip connection. The image data may be representative of a heart during one or more time points throughout a cardiac cycle. The image data may include ultrasound data or visible light photograph data. The CNN model may include a contracting path which may include a first convolutional layer having between 1 and 2000 feature maps. The CNN model may include a number of convolutional layers, and each convolutional layer may include a convolution kernel of size 3×3 and a stride of 1. The CNN model may include a number of pooling layers, and each pooling layer may include a 2×2 max pooling layer having a stride of 2. The CNN model may include four pooling layers and four upsampling layers. The CNN model may include a number of convolutional layers, and the CNN model may pad the input to each convolutional layer using a zero padding operation. The CNN model may include a plurality of nonlinear activation function layers.
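As a rough illustration of how these architectural choices fit together, the following sketch tracks spatial sizes through such a network (a sketch under the stated assumptions of "same" zero padding, 2×2/stride-2 pooling, and four pooling and four upsampling layers; not code from the patent):

```python
def unet_spatial_sizes(size, n_pools=4):
    """Track the spatial side length through a U-Net-style contracting path
    (zero-padded 3x3/stride-1 convs keep the size; 2x2/stride-2 max pools
    halve it) and the mirrored expanding path (each upsampling doubles it)."""
    down = [size]
    for _ in range(n_pools):
        size //= 2          # 2x2 max pool, stride 2
        down.append(size)
    up = []
    for _ in range(n_pools):
        size *= 2           # transpose-convolution upsampling
        up.append(size)
    return down, up

down, up = unet_spatial_sizes(256)
# down == [256, 128, 64, 32, 16]; up == [32, 64, 128, 256]
```

Because zero padding keeps every convolution size-preserving, each expanding-path level matches the size of its contracting-path counterpart, which is what allows the skip-connection feature maps to be concatenated directly.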
The at least one processor may augment the learning data via modification of at least some of the image data in the plurality of batches of labeled image sets.
The at least one processor may modify at least some of the image data in the plurality of batches of labeled image sets according to at least one of: a horizontal flip, a vertical flip, a shear amount, a shift amount, a zoom amount, a rotation amount, a brightness level, or a contrast level.
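As a minimal illustration of one such modification (illustrative only), note that an augmentation such as a horizontal flip must be applied identically to an image and to its label mask so the labels stay aligned:

```python
def hflip(image):
    """Horizontal flip: reverse each row. Applied to both the image and its
    label mask so that labels remain aligned after augmentation."""
    return [row[::-1] for row in image]

img  = [[1, 2, 3],
        [4, 5, 6]]
mask = [[0, 0, 1],
        [0, 1, 1]]
aug_img, aug_mask = hflip(img), hflip(mask)
```

Shears, shifts, zooms, rotations, and intensity or contrast changes would follow the same pattern, with the geometric transforms applied to image and mask together.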
The CNN model may include a plurality of hyperparameters stored in the at least one non-transitory processor-readable storage medium, and the at least one processor may: configure the CNN model according to a plurality of configurations, each configuration including a different combination of values for the hyperparameters; for each of the plurality of configurations, validate the accuracy of the CNN model; and select at least one configuration based at least in part on the accuracies determined by the validations.
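The configuration search described above amounts to trying combinations of hyperparameter values and keeping the one that validates best; a minimal sketch (the search space and the scoring function here are hypothetical stand-ins for actually training and validating the CNN under each configuration):

```python
from itertools import product

def select_configuration(hyperparams, validate):
    """Try every combination of hyperparameter values, score each with a
    validation function, and return the highest-scoring configuration."""
    best_cfg, best_acc = None, float("-inf")
    for values in product(*hyperparams.values()):
        cfg = dict(zip(hyperparams.keys(), values))
        acc = validate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc

# Hypothetical search space and a stand-in validation score (a real system
# would train and validate the CNN model for each configuration).
space = {"learning_rate": [1e-3, 1e-4], "pool_layers": [3, 4, 5]}
fake_validate = lambda cfg: (cfg["learning_rate"] == 1e-4) + (cfg["pool_layers"] == 4)
best, acc = select_configuration(space, fake_validate)
```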
The at least one processor may, for each image set, identify whether the image set is missing a label for any of a plurality of parts of the anatomical structure; and for image sets identified as missing at least one label, modify a training loss function to account for the identified missing labels. The image data may include volumetric images, and each label may include a volumetric label mask or contour. Each convolutional layer of the CNN model may include a convolution kernel of size N×N×K pixels, where N and K are positive integers. Each convolutional layer of the CNN model may include a convolution kernel of size N×M pixels, where N and M are positive integers. The image data may be representative of a heart during one or more time points throughout a cardiac cycle, wherein a subset of the plurality of batches of labeled image sets includes labels that exclude the papillary muscles. For each processed image, the CNN model may utilize data from at least one image that is at least one of: spatially adjacent to the processed image, or temporally adjacent to the processed image. For each processed image, the CNN model may utilize data from at least one image that is spatially adjacent to the processed image and data from at least one image that is temporally adjacent to the processed image. For each processed image, the CNN model may utilize at least one of temporal information or phase information. The image data may include at least one of steady-state free precession (SSFP) magnetic resonance imaging (MRI) data or 4D Flow MRI data.
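The modification of the training loss function to account for missing labels, described above, can be sketched as masking the missing classes out of a per-class cross-entropy (an illustrative simplification; the specific loss used by the system is not prescribed here):

```python
import math

def masked_loss(probability_maps, label_masks, labels_present):
    """Mean per-pixel binary cross-entropy, summed only over the classes
    whose ground-truth labels exist for this image set; classes flagged as
    missing are masked out so they contribute nothing to the loss."""
    total, count = 0.0, 0
    for cls, present in enumerate(labels_present):
        if not present:
            continue  # missing label: exclude this class from the loss
        for p_row, t_row in zip(probability_maps[cls], label_masks[cls]):
            for p, t in zip(p_row, t_row):
                p = min(max(p, 1e-7), 1 - 1e-7)  # clamp for log stability
                total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
                count += 1
    return total / max(count, 1)

# Two classes over a 1x2 image; class 1's label is missing, so only class 0
# (predicted perfectly here) contributes, giving a near-zero loss.
probs = [[[1.0, 0.0]], [[0.5, 0.5]]]
truth = [[[1,   0  ]], [[0,   0  ]]]
loss = masked_loss(probs, truth, labels_present=[True, False])
```

Without the mask, the uncertain predictions for the missing class would be penalized against an all-background placeholder, which is exactly the corruption the loss modification avoids.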
A method of operating a machine learning system, which may include at least one non-transitory processor-readable storage medium that stores at least one of processor-executable instructions or data, and at least one processor communicably coupled to the at least one non-transitory processor-readable storage medium, may be summarized as including: receiving, by the at least one processor, learning data including a plurality of batches of labeled image sets, each image set including image data representative of an anatomical structure, and each image set including at least one label that identifies a region of a particular part of the anatomical structure depicted in each image of the image set; training, by the at least one processor, a fully convolutional neural network (CNN) model to segment at least a part of the anatomical structure utilizing the received learning data; and storing, by the at least one processor, the trained CNN model in the at least one non-transitory processor-readable storage medium of the machine learning system. Training the CNN model may include training a CNN model including a contracting path and an expanding path; the contracting path may include a number of convolutional layers and a number of pooling layers, each pooling layer preceded by at least one convolutional layer, and the expanding path may include a number of convolutional layers and a number of upsampling layers, each upsampling layer preceded by at least one convolutional layer and may include a transpose convolution operation that performs upsampling and interpolation with a learned kernel. Training the CNN model may include training the CNN model to segment at least a part of the anatomical structure utilizing the received learning data, and following each upsampling layer, the CNN model may include a concatenation of feature maps from a corresponding layer in the contracting path through a skip connection. Receiving the learning data may include receiving image data representative of a heart during one or more time points throughout a cardiac cycle. Training the CNN model may include training the CNN model to segment at least a part of the anatomical structure utilizing the received learning data, and the CNN model may include a contracting path which may include a first convolutional layer having between 1 and 2000 feature maps. Training the CNN model may include training a CNN model which may include a number of convolutional layers to segment at least a part of the anatomical structure utilizing the received learning data, and each convolutional layer may include a convolution kernel of size 3×3 and a stride of 1. Training the CNN model may include training a CNN model which may include a number of pooling layers to segment at least a part of the anatomical structure utilizing the received learning data, and each pooling layer may include a 2×2 max pooling layer having a stride of 2.
Training the CNN model may include training the CNN model to segment at least a part of the anatomical structure utilizing the received learning data, and the CNN model may include four pooling layers and four upsampling layers.
Training the CNN model may include training a CNN model which may include a number of convolutional layers to segment at least a part of the anatomical structure utilizing the received learning data, and the CNN model may pad the input to each convolutional layer using a zero padding operation.
Training the CNN model may include training the CNN model to segment at least a part of the anatomical structure utilizing the received learning data, and the CNN model may include a plurality of nonlinear activation function layers.
The method may further include augmenting, by the at least one processor, the learning data via modification of at least some of the image data in the plurality of batches of labeled image sets.
The method may further include modifying, by the at least one processor, at least some of the image data in the plurality of batches of labeled image sets according to at least one of: a horizontal flip, a vertical flip, a shear amount, a shift amount, a zoom amount, a rotation amount, a brightness level, or a contrast level.
The CNN model may include a plurality of hyperparameters stored in the at least one non-transitory processor-readable storage medium, and the method may further include: configuring, by the at least one processor, the CNN model according to a plurality of configurations, each configuration including a different combination of values for the hyperparameters; for each of the plurality of configurations, validating, by the at least one processor, the accuracy of the CNN model; and selecting, by the at least one processor, at least one configuration based at least in part on the accuracies determined by the validations.
The method may further include, for each image set, identifying, by the at least one processor, whether the image set is missing a label for any of a plurality of parts of the anatomical structure; and for image sets identified as missing at least one label, modifying, by the at least one processor, a training loss function to account for the identified missing labels. Receiving the learning data may include receiving image data which may include volumetric images, and each label may include a volumetric label mask or contour.
Training the CNN model may include training a CNN model which may include a number of convolutional layers to segment at least a part of the anatomical structure utilizing the received learning data, and each convolutional layer of the CNN model may include a convolution kernel of size N×N×K pixels, where N and K are positive integers.
Training the CNN model may include training a CNN model which may include a number of convolutional layers to segment at least a part of the anatomical structure utilizing the received learning data, and each convolutional layer of the CNN model may include a convolution kernel of size N×M pixels, where N and M are positive integers. Receiving the learning data may include receiving image data representative of a heart during one or more time points throughout a cardiac cycle, and a subset of the plurality of batches of labeled image sets may include labels that exclude the papillary muscles. Training the CNN model may include training the CNN model to segment at least a part of the anatomical structure utilizing the received learning data, and for each processed image, the CNN model may utilize data from at least one image that is at least one of: spatially adjacent to the processed image, or temporally adjacent to the processed image. Training the CNN model may include training the CNN model to segment at least a part of the anatomical structure utilizing the received learning data, and for each processed image, the CNN model utilizes data from at least one image that is spatially adjacent to the processed image and data from at least one image that is temporally adjacent to the processed image. Training the CNN model may include training the CNN model to segment at least a part of the anatomical structure utilizing the received learning data, and for each processed image, the CNN model may utilize at least one of temporal information or phase information. Receiving the learning data may include receiving image data which may include at least one of steady-state free precession (SSFP) magnetic resonance imaging (MRI) data or 4D Flow MRI data.
A machine learning system may be summarized as including: at least one non-transitory processor-readable storage medium that stores at least one of processor-executable instructions or data; and at least one processor communicably coupled to the at least one non-transitory processor-readable storage medium, the at least one processor: receiving image data representative of an anatomical structure; processing the received image data through a fully convolutional neural network (CNN) model to generate, for each pixel of each image of the image data, a probability for each of a plurality of classes, each class corresponding to one of a plurality of parts of the anatomical structure represented by the image data; for each image of the image data, generating a probability map for each of the plurality of classes utilizing the generated per-class probabilities; and storing the generated probability maps in the at least one non-transitory processor-readable storage medium. The CNN model may include a contracting path and an expanding path; the contracting path may include a number of convolutional layers and a number of pooling layers, each pooling layer preceded by at least one convolutional layer, and the expanding path may include a number of convolutional layers and a number of upsampling layers, each upsampling layer preceded by at least one convolutional layer and may include a transpose convolution operation that performs upsampling and interpolation with a learned kernel. The image data may be representative of a heart during one or more time points throughout a cardiac cycle.
The at least one processor may, based at least in part on the generated probability maps, autonomously cause an indication of at least one of the plurality of parts of the anatomical structure to be displayed on a display. The at least one processor may post-process the processed image data to ensure that at least one physical constraint is met. The image data may be representative of a heart during one or more time points throughout a cardiac cycle, and the at least one physical constraint may include at least one of: the myocardial volume is the same at all time points, or the right ventricle and the left ventricle cannot overlap. For each image of the image data, the at least one processor may convert the plurality of probability maps into a label mask by setting the class of each pixel to the class having the highest probability. For each image of the image data, the at least one processor may set the class of a pixel to a background class when the probabilities of all classes for that pixel are below a determined threshold. For each image of the image data, the at least one processor may set the class of a pixel to the background class when the pixel is not part of the largest connected region of the class with which the pixel is associated. The at least one processor may convert each label mask of the image data into a respective spline contour. The at least one processor may autonomously cause the generated contours to be displayed with the image data on a display. The at least one processor may receive a user modification of at least one of the displayed contours, and store the modified contour in the at least one non-transitory processor-readable storage medium. The at least one processor may utilize the generated contours to determine the volume of at least one of the plurality of parts of the anatomical structure. The anatomical structure may include a heart, and the at least one processor may utilize the generated contours to determine the volume of at least one of the plurality of parts of the heart at a plurality of time points of a cardiac cycle. The at least one processor may autonomously determine which of the plurality of time points of the cardiac cycle correspond to end-systole and end-diastole of the cardiac cycle based on the time points determined to have the minimum volume and the maximum volume, respectively. The at least one processor may cause the determined volume of at least one of the plurality of parts of the anatomical structure to be displayed on a display. The image data may include volumetric images. Each convolutional layer of the CNN model may include a convolution kernel of size N×N×K pixels, where N and K are positive integers.
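The conversion of per-class probability maps into a label mask, including the background-threshold rule described above, can be sketched as follows (illustrative only; the background class index and threshold value are hypothetical):

```python
def probability_maps_to_label_mask(prob_maps, threshold, background=-1):
    """For each pixel, pick the class with the highest probability; if no
    class probability reaches `threshold`, assign the background class."""
    n_classes = len(prob_maps)
    rows, cols = len(prob_maps[0]), len(prob_maps[0][0])
    mask = []
    for r in range(rows):
        row = []
        for c in range(cols):
            probs = [prob_maps[k][r][c] for k in range(n_classes)]
            best = max(range(n_classes), key=lambda k: probs[k])
            row.append(best if probs[best] >= threshold else background)
        mask.append(row)
    return mask

# Two class maps over a 1x3 image: pixel 0 favors class 0, pixel 1 favors
# class 1, and pixel 2 is too uncertain for either, so it becomes background.
maps = [[[0.9, 0.2, 0.3]],
        [[0.1, 0.8, 0.3]]]
mask = probability_maps_to_label_mask(maps, threshold=0.5)
```

The largest-connected-region rule would then be applied to the resulting mask as a further post-processing pass, reassigning stray pixels of each class to background.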
A method of operating a machine learning system, which may include at least one non-transitory processor-readable storage medium that stores at least one of processor-executable instructions or data, and at least one processor communicably coupled to the at least one non-transitory processor-readable storage medium, may be summarized as including: receiving, by the at least one processor, image data representative of an anatomical structure; processing, by the at least one processor, the received image data through a fully convolutional neural network (CNN) model to generate, for each pixel of each image of the image data, a probability for each of a plurality of classes, each class corresponding to one of a plurality of parts of the anatomical structure represented by the image data; for each image of the image data, generating, by the at least one processor, a probability map for each of the plurality of classes utilizing the generated per-class probabilities; and storing, by the at least one processor, the generated probability maps in the at least one non-transitory processor-readable storage medium. Processing the received image data through the CNN model may include processing the received image data through a CNN model which may include a contracting path and an expanding path; the contracting path may include a number of convolutional layers and a number of pooling layers, each pooling layer preceded by at least one convolutional layer, and the expanding path may include a number of convolutional layers and a number of upsampling layers, each upsampling layer preceded by at least one convolutional layer and including a transpose convolution operation that performs upsampling and interpolation with a learned kernel. Receiving the image data may include receiving image data representative of a heart during one or more time points throughout a cardiac cycle.
The method may further include, based at least in part on the generated probability maps, autonomously causing, by the at least one processor, an indication of at least one of the plurality of parts of the anatomical structure to be displayed on a display.
The method may further include post-processing, by the at least one processor, the processed image data to ensure that at least one physical constraint is met. Receiving the image data may include receiving image data representative of a heart during one or more time points throughout a cardiac cycle, and the at least one physical constraint may include at least one of: the volume of the myocardium is the same at all time points, or the right ventricle and the left ventricle cannot overlap.
The method may further include, for each image of the image data, converting, by the at least one processor, the plurality of probability maps into a label mask by setting the class of each pixel to the class having the highest probability.
The method may further include, for each image of the image data, setting, by the at least one processor, the class of a pixel to a background class when the probabilities of all classes for that pixel are determined to be below a threshold.
The method may further include, for each image of the image data, setting, by the at least one processor, the class of a pixel to the background class when the pixel is not part of the largest connected region of the class with which the pixel is associated.
The method may further include converting, by the at least one processor, each label mask of the image data into a respective spline contour.
The method may further include causing, by the at least one processor, the generated contours to be displayed with the image data on a display.
The method may further include receiving, by the at least one processor, a user modification of at least one of the displayed contours; and storing, by the at least one processor, the modified contour in the at least one non-transitory processor-readable storage medium.
The method may further include determining, by the at least one processor, the volume of at least one of the plurality of parts of the anatomical structure utilizing the generated contours.
The anatomical structure may include a heart, and the method may further include determining, by the at least one processor, the volume of at least one of the plurality of parts of the heart at a plurality of time points of a cardiac cycle utilizing the generated contours.
The method may further include autonomously determining, by the at least one processor, which of the plurality of time points of the cardiac cycle correspond to end-systole and end-diastole of the cardiac cycle based on the time points determined to have the minimum volume and the maximum volume, respectively.
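The determination of end-systole and end-diastole from per-phase volumes reduces to locating the minimum and maximum of the volume curve over the cardiac cycle (a minimal sketch with hypothetical volume values):

```python
def find_es_ed(volumes):
    """Identify end-systole as the time point of minimum ventricular volume
    and end-diastole as the time point of maximum volume."""
    es = min(range(len(volumes)), key=volumes.__getitem__)
    ed = max(range(len(volumes)), key=volumes.__getitem__)
    return es, ed

# Hypothetical LV volumes (mL) across ten cardiac phases.
vols = [120, 110, 90, 70, 55, 60, 80, 100, 115, 119]
es, ed = find_es_ed(vols)
```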
The method may further include causing, by the at least one processor, the determined volume of at least one of the plurality of parts of the anatomical structure to be displayed on a display. Receiving the image data may include receiving volumetric image data. Processing the received image data through the CNN model may include processing the received image data through a CNN model in which each convolutional layer may include a convolution kernel of size N×N×K pixels, where N and K are positive integers.
A machine learning system may be summarized as including: at least one non-transitory processor-readable storage medium that stores at least one of processor-executable instructions or data; and at least one processor communicably coupled to the at least one non-transitory processor-readable storage medium, the at least one processor: receiving a plurality of sets of 3D MRI images, each image set of the plurality of image sets representative of an anatomical structure of a patient; receiving a plurality of annotations for the plurality of sets of 3D MRI images, each annotation indicative of a landmark of the anatomical structure of the patient depicted in a corresponding image; training a convolutional neural network (CNN) model to predict the locations of the plurality of landmarks utilizing the 3D MRI images; and storing the trained CNN model in the at least one non-transitory processor-readable storage medium of the machine learning system. The at least one processor may train a fully convolutional neural network (CNN) model to predict the locations of the plurality of landmarks utilizing the 3D MRI images. The at least one processor may train a CNN model whose output is one or more sets of spatial coordinates, each of the one or more sets of spatial coordinates identifying the location of one of the plurality of landmarks. The CNN model may include a contracting path followed by one or more fully connected layers. The CNN model may include a contracting path and an expanding path; the contracting path may include a number of convolutional layers and a number of pooling layers, each pooling layer preceded by one or more convolutional layers, and the expanding path may include a number of convolutional layers and a number of upsampling layers, each upsampling layer preceded by one or more convolutional layers and including a transpose convolution operation that performs upsampling and interpolation with a learned kernel.
For each of the one or more landmarks of the anatomical structure, the at least one processor may define a 3D label map based at least in part on the received sets of 3D MRI images and the received plurality of annotations, each 3D label map encoding the likelihood that a landmark is located at a particular location in the 3D label map, and the at least one processor may train the CNN model to segment the one or more landmarks utilizing the 3D MRI images and the generated 3D label maps. The images in each of the plurality of sets may be representative of a patient's heart at different corresponding time points of a cardiac cycle, and each annotation may be indicative of a landmark of the patient's heart depicted in the corresponding image.
The at least one processor may: receive a set of 3D MRI images; process the received 3D MRI images through the CNN model to detect at least one of the one or more landmarks; and cause at least one of the detected landmarks to be displayed on a display. The at least one processor may process the received 3D MRI images through the CNN model and output at least one of points or a label map. The at least one processor may process the received 3D MRI images through the CNN model at a plurality of time points to detect at least one of the plurality of landmarks, and cause at least one of the landmarks detected at the plurality of time points to be displayed on a display. The CNN model may utilize phase information associated with the received 3D MRI images.
A method of operating a machine learning system, which may include at least one non-transitory processor-readable storage medium that stores at least one of processor-executable instructions or data, and at least one processor communicatively coupled to the at least one non-transitory processor-readable storage medium, may be summarized as including: receiving, by the at least one processor, a plurality of sets of 3D MRI images, each image set of the plurality representing the anatomical structure of a patient; receiving, by the at least one processor, a plurality of annotations for the plurality of 3D image sets, each annotation indicating a landmark of the patient's anatomical structure depicted in the corresponding image; training, by the at least one processor, a convolutional neural network (CNN) model to predict the locations of the plurality of landmarks utilizing the 3D MRI images; and storing, by the at least one processor, the trained CNN model in the at least one non-transitory processor-readable storage medium of the machine learning system. Training the CNN model may include training a fully convolutional neural network (CNN) model to predict the locations of the plurality of landmarks utilizing the 3D MRI images. Training the CNN model may include training a CNN model that outputs one or more sets of spatial coordinates, each of the one or more sets of spatial coordinates identifying the location of one of the plurality of landmarks. Training the CNN model may include training a CNN model that includes a contracting path followed by one or more fully connected layers. Training the CNN model may include training a CNN model that includes a contracting path and an expanding path; the contracting path may include a plurality of convolutional layers and a plurality of pooling layers, each pooling layer preceded by at least one convolutional layer; and the expanding path may include a plurality of convolutional layers and a plurality of upsampling layers, each upsampling layer preceded by at least one convolutional layer and including a transpose convolution operation that performs upsampling and interpolation with a learned kernel.
The method may further include: for each landmark of the plurality of landmarks of the anatomical structure, defining, by the at least one processor, a 3D label map based at least in part on the received sets of 3D MRI images and the received plurality of annotations, each 3D label map encoding a likelihood that the landmark is located at a particular location in the 3D label map. Receiving the plurality of sets of 3D MRI images may include receiving a plurality of sets of 3D MRI images in which the images of each set represent the patient's heart at a different respective time point of a cardiac cycle, and each annotation may indicate a landmark of the patient's heart depicted in the corresponding image.
The method may further include: receiving, by the at least one processor, a set of 3D MRI images; processing, by the at least one processor, the received 3D MRI images through the CNN model to detect at least one of the plurality of landmarks; and causing, by the at least one processor, at least one of the plurality of detected landmarks to be displayed on a display.
The method may further include: processing, by the at least one processor, the received 3D MRI images through the CNN model to detect at least one of the plurality of landmarks at a plurality of time points; and causing, by the at least one processor, at least one of the plurality of landmarks detected at the plurality of time points to be displayed on a display.
Training the CNN model may include training a CNN model that utilizes phase information associated with the received 3D MRI images.
A medical image analysis system may be summarized as including: at least one non-transitory processor-readable storage medium that stores at least one of processor-executable instructions or data, cardiac MRI image data, and initial contours or masks describing the endocardium and epicardium of a heart; and at least one processor communicatively coupled to the at least one non-transitory processor-readable storage medium. In operation, the at least one processor: accesses the cardiac MRI image data and a series of initial contours or masks; autonomously computes an image intensity threshold that separates blood from papillary and trabecular muscle regions inside the endocardial contour; and autonomously applies the image intensity threshold to define contours or masks describing the boundaries of the papillary and trabecular muscles. To compute the image intensity threshold, the at least one processor may compare a distribution of intensity values inside the endocardial contour with a distribution of intensity values in the region between the endocardial contour and the epicardial contour. The at least one processor may compute each distribution of intensity values using a kernel density estimate of the empirical intensity distribution. The at least one processor may determine the image intensity threshold as the pixel intensity at the intersection of a first and a second probability distribution function, the first probability distribution function being for the group of pixels inside the endocardial contour and the second probability distribution function being for the group of pixels in the region between the endocardial contour and the epicardial contour. The initial contours or masks describing the endocardium may include the papillary and trabecular muscles inside the endocardial contour. The at least one processor may compute connected components of the blood pool region and discard one or more of the computed connected components from the blood pool region. The at least one processor may convert connected components discarded from the blood pool region into papillary and trabecular muscle regions. The at least one processor may discard from the blood pool region all connected components other than the largest connected component of the blood pool region. The at least one processor may allow the computed contours or masks describing the boundaries of the papillary and trabecular muscles to be edited by a user.
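As a concrete illustration of the threshold computation described above, the sketch below estimates the two intensity distributions with kernel density estimates and takes the threshold at the intensity where the two densities cross. This is a minimal sketch, not the patented implementation; the function name and the use of SciPy's `gaussian_kde` are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

def intensity_threshold(blood_vals, myo_vals):
    """Return the intensity where the two kernel density estimates cross.

    blood_vals: intensities of pixels inside the endocardial contour (assumed input)
    myo_vals:   intensities of pixels between the endo- and epicardial contours
    """
    kde_blood = gaussian_kde(blood_vals)
    kde_myo = gaussian_kde(myo_vals)
    lo = min(blood_vals.min(), myo_vals.min())
    hi = max(blood_vals.max(), myo_vals.max())
    grid = np.linspace(lo, hi, 1000)
    diff = kde_blood(grid) - kde_myo(grid)
    # Sign changes of the density difference are intersections of the two PDFs
    crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
    # Blood is typically brighter than myocardium in SSFP, so prefer the
    # crossing that lies between the two distribution means
    means = sorted([blood_vals.mean(), myo_vals.mean()])
    for idx in crossings:
        if means[0] <= grid[idx] <= means[1]:
            return grid[idx]
    return grid[crossings[0]] if len(crossings) else 0.5 * (lo + hi)
```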
A machine learning system may be summarized as including: at least one non-transitory processor-readable storage medium that stores at least one of processor-executable instructions or data, medical imaging data of a heart, and a trained convolutional neural network (CNN) model; and at least one processor communicatively coupled to the at least one non-transitory processor-readable storage medium. In operation, the at least one processor: utilizes the trained CNN model to compute contours or masks describing the endocardium and epicardium of the heart depicted in the medical imaging data; and utilizes the computed contours or masks to anatomically localize pathologies or functional characteristics of the myocardium. The at least one processor may compute ventricular insertion points, at which the right ventricular wall attaches to the left ventricle. The at least one processor may compute the ventricular insertion points based on the proximity of a contour or mask describing the left ventricular epicardium to a contour or mask describing one or both of the right ventricular endocardium or the right ventricular epicardium.
The at least one processor may compute ventricular insertion points in one or more two-dimensional cardiac images based on two points in the cardiac image at which the left ventricular epicardial boundary and one or both of the right ventricular endocardial boundary or the right ventricular epicardial boundary begin to diverge. The at least one processor may compute the ventricular insertion points based on intersections between acquired long-axis views of the left ventricle and a depiction of the left ventricular epicardium. The at least one processor may compute at least one ventricular insertion point based on an intersection between the left ventricular epicardial contour and the 3-chamber long-axis plane. The at least one processor may compute at least one ventricular insertion point based on an intersection between the left ventricular epicardial contour and the 4-chamber long-axis plane. The at least one processor may compute at least one ventricular insertion point based on an intersection between the 3-chamber long-axis plane and one or both of the right ventricular epicardial contour or the right ventricular endocardial contour. The at least one processor may compute at least one ventricular insertion point based on an intersection between the 4-chamber long-axis plane and one or both of the right ventricular epicardial contour or the right ventricular endocardial contour.
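The two-dimensional variant described above, in which the insertion points are the two points where the LV epicardial boundary and an RV boundary begin to diverge, can be sketched as a simple proximity test between contours. The function name, the distance tolerance, and the representation of contours as point arrays are all assumptions made for illustration.

```python
import numpy as np

def insertion_points(lv_epi, rv_contour, tol=2.0):
    """Return LV-epicardial points where the RV wall attaches/detaches.

    lv_epi, rv_contour: (N, 2) arrays of contour points in pixel coordinates.
    A point is flagged when contact with the RV contour (nearest distance
    below tol) starts or ends along the LV epicardial boundary.
    """
    # Distance from each LV-epi point to the nearest RV contour point
    d = np.linalg.norm(lv_epi[:, None, :] - rv_contour[None, :, :], axis=2).min(axis=1)
    touching = d < tol
    # Transitions between touching and non-touching mark the insertion points
    trans = np.nonzero(touching != np.roll(touching, 1))[0]
    return lv_epi[trans]
```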
The at least one processor may allow a user to manually annotate the locations of one or more ventricular insertion points. The at least one processor may use the combination of the contours and the ventricular insertion points to present the anatomical localization of myocardial pathologies or functional characteristics in a standardized format. The standardized format may be one or both of the 16- or 17-segment model of the myocardium. The medical imaging data of the heart may be one or more of functional cardiac images, myocardial delayed-enhancement images, or myocardial perfusion images. The medical imaging data of the heart may be cardiac magnetic resonance images.
The trained CNN model may have been trained on annotated cardiac images of the same image type as will be used for inference with the trained CNN model. The trained CNN model may have been trained on one or more of functional cardiac images, myocardial delayed-enhancement images, or myocardial perfusion images. The data on which the trained CNN model was trained may be cardiac magnetic resonance images. The trained CNN model may have been trained on annotated cardiac images of a different image type than will be used for inference with the trained CNN model. The trained CNN model may have been trained on one or more of functional cardiac images, myocardial delayed-enhancement images, or myocardial perfusion images. The data on which the trained CNN model was trained may be cardiac magnetic resonance images.
The at least one processor may fine-tune the trained CNN model on data of the same type as will be used by the CNN model for inference. To fine-tune the trained CNN model, the at least one processor may retrain some or all of the layers of the trained CNN model. The at least one processor may apply post-processing to the contours or masks describing the endocardium and epicardium of the heart to minimize the amount of non-myocardial tissue present in the cardiac region identified as myocardium. To post-process the contours or masks, the at least one processor may apply a morphological operation to the cardiac region identified as myocardium to reduce its area. The morphological operation may include one or more of erosion or dilation. To post-process the contours or masks, the at least one processor may modify the threshold applied to the probability map predicted by the trained CNN model so as to identify as myocardium only those pixels for which the trained CNN model indicates a probability of belonging to the myocardium that is above the threshold. The threshold at which probability map values are converted to class labels may be greater than 0.5. To post-process the contours or masks, the at least one processor may move the vertices of the contour describing the myocardium toward or away from the center of a ventricle of the heart in order to reduce the area of the identified myocardium. The myocardial pathologies or functional characteristics may include one or more of myocardial scarring, myocardial infarction, coronary artery stenosis, or perfusion characteristics.
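The two mask-based post-processing steps just described, raising the probability threshold above the default 0.5 and shrinking the mask with a morphological operation, can be sketched as follows. This is a minimal sketch under assumed inputs, not the patented implementation; the function name and SciPy's `binary_erosion` are illustrative choices.

```python
import numpy as np
from scipy import ndimage

def postprocess_myocardium(prob_map, threshold=0.7, erode_iters=1):
    """Reduce non-myocardial tissue in the region identified as myocardium.

    prob_map:    2D array of per-pixel probabilities that a pixel is
                 myocardium, as produced by the trained CNN (assumed input).
    threshold:   raised above 0.5 so only high-confidence pixels are kept.
    erode_iters: number of binary-erosion passes used to shrink the mask.
    """
    mask = prob_map > threshold                      # keep confident pixels only
    mask = ndimage.binary_erosion(mask, iterations=erode_iters)
    return mask
```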
Brief Description of the Drawings
In the drawings, identical reference numbers identify similar elements or acts. The sizes and relative positions of elements in the drawings are not necessarily drawn to scale. For example, the shapes of various elements and angles are not necessarily drawn to scale, and some of these elements may be arbitrarily enlarged and positioned to improve drawing legibility. Further, the particular shapes of the elements as drawn are not necessarily intended to convey any information regarding the actual shape of the particular elements, and may have been selected solely for ease of recognition in the drawings.
Figure 1 is an example of multiple LV endocardium segmentations at a single time point throughout a SAX (short-axis) stack. From left to right and top to bottom, the slices progress from the apex of the left ventricle to the base of the left ventricle.
Figure 2 is an example of an LV endocardial contour generated with a snake algorithm.
Figure 3 shows two SSFP images illustrating the ventricles, myocardium, and papillary muscles inside the left ventricular endocardium. The SSFP image on the left shows end-diastole, and the SSFP image on the right shows end-systole.
Figure 4 shows two images illustrating the challenge of distinguishing the ventricles from the atria on basal slices in the SAX plane.
Figure 5 is the U-Net network architecture used for landmark detection.
Figure 6 is a diagram of the DeepVentricle network architecture according to one illustrated embodiment, with two convolutional layers per pooling layer and four pooling/upsampling operations.
Figure 7 is a flow chart of building a Lightning Memory-Mapped Database (LMDB) for training with SSFP data, according to one illustrated embodiment.
Figure 8 is a flow chart of a pipelined process for training the convolutional neural network model, according to one illustrated embodiment.
Figure 9 is a flow chart illustrating the process of an inference pipeline for SSFP data, according to one illustrated embodiment.
Figure 10 is a screenshot of SSFP inference results in the application for the LV endocardium at one time point and slice index.
Figure 11 is a screenshot of SSFP inference results in the application for the LV epicardium at one time point and slice index.
Figure 12 is a screenshot of SSFP inference results in the application for the RV endocardium at one time point and slice index.
Figure 13 is a screenshot of SSFP computed parameters in the application derived from the automatically segmented ventricles.
Figure 14 is a screenshot depicting two-chamber, three-chamber, and four-chamber views with parallel lines indicating the planes of the SAX stack.
Figure 15 is a screenshot depicting two-chamber, three-chamber, and four-chamber views in the left panel, showing a series of segmentation planes that are not parallel to the right ventricle. The right panel depicts the reconstructed image of the plane highlighted in the two-chamber, three-chamber, and four-chamber views.
Figure 16 is a screenshot illustrating segmentation of the RV. The points on the contour (right panel) define a spline curve and are stored in the database. The contour is projected onto the LAX views (left panel).
Figure 17 is a screenshot illustrating the same segmented RV slice as Figure 16, but with each of the two-chamber, three-chamber, and four-chamber views slightly rotated to highlight the segmentation plane with a depth effect.
Figure 18 is a schematic diagram of building a Lightning Memory-Mapped Database (LMDB) for training with 4D Flow data, according to one illustrated embodiment.
Figure 19 is a diagram showing a multiplanar reconstruction (top left) generated from the SAX plane, along with the available labels and image data: the RV Endo mask (top right), the LV Epi mask (bottom left), and the LV Endo mask (bottom right). These masks can be stored in a single array, and can be stored in the LMDB together with the image under a single unique key.
Figure 20 is a diagram similar to Figure 19, except that the LV Epi mask is absent.
Figure 21 is a flow chart illustrating the inference pipeline for 4D Flow, according to one illustrated embodiment.
Figure 22 is a screenshot depicting inference in the application for the LV endocardium for a 4D Flow study.
Figure 23 is a screenshot depicting inference in the application for the LV epicardium for a 4D Flow study.
Figure 24 is a screenshot depicting inference in the application for the RV endocardium for a 4D Flow study.
Figure 25 is a screenshot illustrating localization of the left ventricular apex (LVA) using the network in the application, according to one illustrated embodiment.
Figure 26 is a screenshot illustrating localization of the right ventricular apex (RVA) using the network in the application, according to one illustrated embodiment.
Figure 27 is a screenshot illustrating localization of the mitral valve (MV) using the network in the application, according to one illustrated embodiment.
Figure 28 is a flow chart illustrating a process for building the training database, according to one illustrated embodiment.
Figure 29 is a diagram illustrating an image with the location of a landmark on the image encoded as a Gaussian.
Figure 30 is a flow chart of the preprocessing pipeline for images and landmarks, according to one illustrated embodiment.
Figure 31 is a set of screenshots depicting an example of a preprocessed input image for a patient and the encoded mitral valve landmark. From left to right, the sagittal, axial, and coronal views are shown.
Figure 32 is a set of screenshots depicting an example of a preprocessed input image for a patient and the encoded tricuspid valve landmark. From left to right, the sagittal, axial, and coronal views are shown.
Figure 33 is a diagram illustrating the predicted landmark locations output by the network.
Figure 34 is an example image illustrating flow information superimposed on an anatomical image.
Figure 35 is a block diagram of an illustrative processor-based device for implementing one or more of the functions described herein, according to one non-limiting illustrated embodiment.
Figure 36 is a diagram of a fully convolutional encoder-decoder architecture with skip connections, which uses an expanding path that is smaller than the contracting path.
Figure 37 is a bar chart comparing the relative absolute volume error (RAVE) between FastVentricle and DeepVentricle for each of left ventricle (LV) Endo, LV Epi, and right ventricle (RV) Endo at ED (left column) and ES (right column).
Figure 38 shows a random input (left), optimized with gradient descent for DeepVentricle and FastVentricle (middle) to fit the label maps (right; RV Endo in red, LV Endo in cyan, LV Epi in blue).
Figure 39 shows examples of network predictions for different slices and time points of a low-RAVE study for DeepVentricle and FastVentricle.
Figure 40 is an image illustrating the relevant portions of cardiac anatomy as seen on a cardiac MRI.
Figure 41 is an image illustrating an endocardial contour that includes the papillary muscles in its interior.
Figure 42 is an image illustrating a blood pool or endocardial contour that excludes the papillary muscles from its interior.
Figure 43 is a flow chart depicting one embodiment of a process for delineating the papillary and trabecular muscles.
Figure 44 is a flow chart of one embodiment of computing the papillary and trabecular muscle intensity threshold.
Figure 45 illustrates computing the overlap of the pixel distributions between the blood pool and the myocardium.
Figure 46 is a flow chart of one embodiment of a process for identifying and displaying myocardial defects.
Figure 47 is an image illustrating the locations of the ventricular insertion points.
Detailed Description
In the following description, certain specific details are set forth in order to provide a thorough understanding of the various disclosed embodiments. However, one skilled in the relevant art will recognize that embodiments may be practiced without one or more of these specific details, or with other methods, components, materials, and the like. In other instances, well-known structures associated with computer systems, server computers, and/or communications networks have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the embodiments.
Unless the context requires otherwise, throughout the specification and the claims that follow, the word "comprising" is synonymous with "including," and is inclusive or open-ended (i.e., does not exclude additional, unrecited elements or method acts).
Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents ("one or more" or "at least one") unless the context clearly dictates otherwise. It should also be noted that the term "or" is generally employed in its sense including "and/or" unless the context clearly dictates otherwise.
The headings and Abstract of the Disclosure provided herein are for convenience only and do not interpret the scope or meaning of the embodiments.
Automatic Ventricular Segmentation for SSFP
DeepVentricle Architecture
Figure 6 shows a convolutional neural network (CNN) architecture 600 for ventricular segmentation of cardiac SSFP studies, referred to herein as DeepVentricle. The network 600 includes two paths: on the left is a contracting path 602, which includes convolutional layers 606 and pooling layers 608, and on the right is an expanding path 604, which includes upsampling or transpose convolutional layers 610 and convolutional layers 606.
The number of free parameters in the network 600 determines the entropy capacity of the model, which is essentially the amount of information the model can memorize. A significant fraction of these free parameters reside in the convolutional kernels of each layer of the network 600. The network 600 is configured such that after each pooling layer 608, the number of feature maps doubles and the spatial resolution is halved. After each upsampling layer 610, the number of feature maps is halved and the spatial resolution doubles. With this scheme, the number of feature maps in every layer of the network 600 can be fully described by the number in the first layer (e.g., 1 to 2000 feature maps). In at least some embodiments, the number of feature maps in the first layer is 128. Using additional feature maps was found to improve the precision of the model, at a moderate cost of increased computational complexity, memory usage, and disk usage of the trained model. Other values for the number of initial feature maps may also be sufficient, depending on the desired tradeoff between the amount of training data and available computational resources on the one hand and model performance on the other.
In at least some embodiments, the network 600 includes two convolutional layers 606 before each pooling layer 608, with convolutional kernels of size 3×3 and stride 1. Different combinations of these parameters (number of layers, convolution kernel size, convolution stride) can also be used. Based on a hyperparameter search, four pooling and upsampling operations were found to work best for the data examined, although the results were only moderately sensitive to this number.
When input images are convolved without applying any padding (the absence of padding is referred to as "valid" padding), convolutions larger than 1×1 naturally reduce the size of the output feature map, since only (image_size − conv_size + 1) convolutions fit across a given image. With valid padding, for an input image of 572×572 pixels, the output segmentation map is only 388×388 pixels. Segmenting a full image therefore requires a tiling approach, and the borders of the original image cannot be segmented. In the network 600 according to at least some embodiments of the present disclosure, zero padding of width (conv_size − 2) is applied before each convolution so that the segmentation map always has the same resolution as the input (referred to as "same" padding). Valid padding could also be used.
Downsampling the feature maps with pooling operations may be an important step for learning higher-level abstract features by means of convolutions that have a wider field of view in the space of the original image. In at least some embodiments, the network 600 downsamples images after each group of convolutions using a 2×2 max pooling operation with stride 2. Learned downsampling, i.e., convolving the input volume with 2×2 convolutions of stride 2, could also be used, but would increase computational complexity. In general, different combinations of pooling size and stride can also be used.
When performing pixel-wise segmentation in a fully convolutional network, the activation volumes need to be upsampled to restore them to their original resolution. To increase the resolution of the activation volumes in the network, some systems may use an upsampling operation, followed by a 2×2 convolution, then a concatenation of the feature maps from the corresponding contracting layer via a skip connection, and finally two 3×3 convolutions. In at least some embodiments, the network 600 replaces the upsampling and 2×2 convolution with a single transpose convolutional layer 610, which performs upsampling and interpolation with a learned kernel, improving the ability of the model to resolve fine details. This operation is followed by the skip-connection concatenation, shown in Figure 6 by the block arrows from the contracting path 602 to the expanding path 604. After concatenation, two 3×3 convolutional layers are applied.
In at least some embodiments, rectified linear units (ReLU) are used for all activations following convolutions. Other nonlinear activation functions, including PReLU (parametric ReLU) and ELU (exponential linear unit), could also be used.
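The doubling/halving scheme described above fully determines the feature-map counts and spatial resolutions at every scale of the network. The sketch below traces those shapes under the stated defaults (128 initial feature maps, four pooling/upsampling operations) and an assumed 256-pixel square input; it is a shape trace for illustration, not the patented implementation.

```python
def deepventricle_shapes(input_size=256, num_init_filters=128, num_pool=4):
    """Trace (feature_maps, resolution) at each scale of the network.

    Contracting path: feature maps double and resolution halves after each
    2x2 stride-2 max pool.  Expanding path: each transpose convolution
    halves the feature maps and doubles the resolution.
    """
    shapes = []
    f, s = num_init_filters, input_size
    for _ in range(num_pool):              # contracting path
        shapes.append((f, s))              # two 3x3 convs at this scale
        f, s = f * 2, s // 2               # 2x2 max pool, stride 2
    shapes.append((f, s))                  # bridge (bottom of the "U")
    for _ in range(num_pool):              # expanding path
        f, s = f // 2, s * 2               # transpose conv upsampling
        shapes.append((f, s))              # concat skip, two 3x3 convs
    return shapes
```

With "same" padding throughout, the final scale matches the input resolution, so the output probability maps are pixel-aligned with the image.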
Model Hyperparameters
Model hyperparameters can be stored in at least one non-transitory processor-readable storage medium (e.g., a configuration file) that is read during training. The parameters that describe the model may include:
Num_pooling_layers: the total number of pooling (and upsampling) layers;
Pooling_type: the type of pooling operation to use (e.g., max pooling);
Num_init_filters: the number of filters (convolutional kernels) in the first layer;
Num_conv_layers: the number of convolutional layers between each pooling operation;
Conv_kernel_size: the edge length of the convolutional kernels, in pixels;
Dropout_prob: the probability that the activation of a particular node in the network is set to zero on a given forward/backward pass of a batch;
Border_mode: the method for zero-padding the input feature maps before convolution;
Activation: the nonlinear activation function used after each convolution;
Weight_init: the method for initializing the weights in the network;
Batch_norm: whether to use batch normalization after each nonlinearity in the downsampling/contracting portion of the network;
Batch_norm_momentum: the momentum in the batch normalization computation of the mean and standard deviation on a per-feature basis;
Down_trainable: whether to allow the downsampling portion of the network to learn upon seeing new data;
Bridge_trainable: whether to allow the bridge convolutions to learn;
Up_trainable: whether to allow the upsampling portion of the network to learn; and
Out_trainable: whether to allow the final convolution that produces the pixel-wise probabilities to learn.
The parameters that describe the training data to be used may include:
Crop_frac: the fractional size of the images in the LMDB relative to the original images;
Height: the height of the images, in pixels; and
Width: the width of the images, in pixels.
The parameters that describe the data augmentation used during training may include:
Horizontal_flip: whether to randomly flip the input/label pair in the horizontal direction;
Vertical_flip: whether to randomly flip the input/label pair in the vertical direction;
Shear_amount: the positive/negative limit by which to shear the image/label pair;
Shift_amount: the maximum fraction by which to shift the image/label pair;
Zoom_amount: the maximum fractional value by which to zoom in on the image/label pair;
Rotation_amount: the positive/negative limit by which to rotate the image/label pair;
Zoom_warping: whether to use zooming and warping together;
Brightness: the positive/negative limit by which to change the image brightness;
Contrast: the positive/negative limit by which to change the image contrast; and
Alpha, beta: the first and second parameters describing the strength of the elastic deformation.
The parameters that describe training include:
Batch_size: the number of examples shown to the network on each forward/backward pass;
Max_epoch: the maximum number of iterations through the data;
Optimizer_name: the name of the optimization function to use;
Optimizer_lr: the value of the learning rate;
Objective: the objective function to use;
Early_stopping_monitor: the parameter to monitor in order to determine when to stop model training; and
Early_stopping_patience: the number of epochs to wait after the early_stopping_monitor value has stopped improving before stopping model training.
To select the best model, a random search can be performed over these hyperparameters, and the model with the highest validation accuracy can be chosen.
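A random search of this kind can be sketched as follows. The search space values and function names are hypothetical; the parameter names mirror those listed above, and `train_and_validate` stands in for a full training run that returns a validation accuracy.

```python
import random

# Hypothetical search space; names mirror the hyperparameters listed above
SEARCH_SPACE = {
    "num_pooling_layers": [3, 4, 5],
    "num_init_filters": [32, 64, 128],
    "conv_kernel_size": [3, 5],
    "dropout_prob": [0.0, 0.25, 0.5],
    "optimizer_lr": [1e-2, 1e-3, 1e-4],
    "batch_norm": [True, False],
}

def sample_config(rng):
    """Draw one random hyperparameter configuration."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def random_search(train_and_validate, num_trials=20, seed=0):
    """Keep the configuration that achieves the highest validation accuracy."""
    rng = random.Random(seed)
    best_cfg, best_acc = None, float("-inf")
    for _ in range(num_trials):
        cfg = sample_config(rng)
        acc = train_and_validate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```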
Training Database
In at least some embodiments, a Lightning Memory-Mapped Database (LMDB) can be used to store the preprocessed image/segmentation mask pairs for training. This database structure has a number of advantages over other ways of storing training data. These advantages include: key lookups are lexicographic, and therefore fast; image/segmentation mask pairs are stored in the format required for training, so they need no further preprocessing at training time; and reading an image/segmentation mask pair is a computationally very inexpensive operation.
Training data could also be stored in various other formats, including named files on disk and real-time generation of masks from a per-image ground-truth database. These approaches can achieve the same results, although they may slow down the training process.
A new LMDB can be created for each unique set of inputs/targets used to train a model. This ensures that training is never slowed down by image preprocessing.
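The two properties the text relies on, lexicographically sortable keys and pairs serialized in their final training format, can be illustrated without the LMDB bindings themselves. The sketch below uses only NumPy and the standard library; the key scheme and `npz` serialization are assumptions, not the patented format (in practice the resulting bytes would be written to an LMDB under the key).

```python
import io
import numpy as np

def make_key(series_id, slice_idx, time_idx):
    """Zero-padded keys sort lexicographically, which LMDB lookups rely on."""
    return f"{series_id}_{slice_idx:04d}_{time_idx:04d}".encode()

def serialize_example(image, mask):
    """Pack a preprocessed image/mask pair into the bytes stored under one key."""
    buf = io.BytesIO()
    np.savez(buf, image=image.astype(np.float32), mask=mask.astype(np.uint8))
    return buf.getvalue()

def deserialize_example(raw):
    """Cheap read path: recover the pair with no further preprocessing."""
    data = np.load(io.BytesIO(raw))
    return data["image"], data["mask"]
```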
Handling of Missing Data During Training
Unlike previous models that addressed a segmentation task with only two classes (foreground and background), the SSFP model disclosed herein attempts to distinguish four classes, namely background, LV endocardium, LV epicardium and RV endocardium. To achieve this, the network output may include three probability maps, one for each non-background class. During training, the ground truth binary mask for each of the three classes is supplied to the network together with the pixel data. The network loss may be determined as the sum of the losses for the three classes. If any of the three ground truth masks is missing for an image (meaning that no data exists and the ground truth is a blank mask), that mask may be ignored when computing the loss.
Missing ground truth data is thus explicitly accounted for in the training process. For example, the network can be trained on an image for which the LV endocardial contour is defined even if the positions of the LV epicardial and RV endocardial contours are unknown. A more basic architecture that cannot account for missing data could only be trained on the subset (e.g., 20%) of training images for which all three contour types are defined. Reducing the amount of training data in this way would lead to a significant decrease in accuracy. Therefore, by explicitly correcting the loss function to account for missing data, the full amount of training data can be used, allowing the network to learn a more powerful function.
Creation of the training database
Fig. 7 shows a process 700 for creating the SSFP LMDB. At 702, contour information is extracted from an SSFP ground truth database 704. These contours are stored in the ground truth database 704 as dictionaries of contour X and Y positions, associated with a particular SSFP slice position and time point. At 706, pixel data from the corresponding SSFP DICOM (Digital Imaging and Communications in Medicine) images 708 is matched with Boolean masks created from this information. At 710, the system preprocesses the images and masks by normalizing the images, cropping the images/masks, and resizing the images/masks. In at least some embodiments, the MRIs are normalized so that their mean value is zero and so that the 1st and 99th percentiles of a batch of images fall at -0.5 and 0.5, i.e., their "usable range" falls between -0.5 and 0.5. Images may be cropped and resized so that the ventricle contours occupy a larger proportion of the image. This yields more total foreground class pixels, makes it easier to resolve fine details of the ventricles (especially corners), and helps the model converge, all with less computing power.
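A minimal sketch of such a normalization is shown below. Mean-centering and scaling by the 1st-to-99th percentile range places those percentiles at approximately ±0.5 only when the intensity distribution is roughly symmetric; the patent does not specify the exact formula, so this is one plausible realization.

```python
import numpy as np

def normalize_mri(img):
    """Center an image at zero mean and scale so that the 1st and 99th
    percentiles land at approximately -0.5 and 0.5."""
    p1, p99 = np.percentile(img, [1, 99])
    return (img - img.mean()) / (p99 - p1)

rng = np.random.default_rng(0)
img = normalize_mri(rng.normal(loc=200.0, scale=40.0, size=(256, 256)))
```

After the transform, the percentile spread is exactly 1.0 (percentiles are equivariant under affine maps), and for near-symmetric data the two percentiles straddle zero at about ±0.5.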
At 712, the unique key for the SSFP LMDB is defined as the concatenation of the series instance UID and the SOP instance UID. At 714, the image and mask metadata, including time point, slice index and LMDB key, are stored in a data frame. At 716, for each key, the normalized, cropped and resized image and the cropped and resized mask are stored in the LMDB.
DeepVentricle training
Fig. 8 shows a process 800 illustrating model training. In at least some embodiments, Keras, an open source wrapper built on TensorFlow, may be used to train the model. However, the same results could be achieved using raw TensorFlow, Theano, Caffe, Torch, MXNet, MATLAB or other tensor math libraries.
In at least some embodiments, the data set may be split into a training set, a validation set and a test set. The training set is used for the model gradient updates, the validation set is used to evaluate the model on "held out" data during training (e.g., for early stopping), and the test set is not used at all during training.
At 802, training is invoked. At 804, image and mask data are read from the LMDB training set one batch at a time. At 806, in at least some embodiments, the images and masks are distorted according to the distortion hyperparameters stored in the model hyperparameter file, as described above. At 808, the batch is processed through the network. At 810, the loss/gradients are computed. At 812, the weights are updated according to the specified optimizer and optimizer learning rate. In at least some embodiments, a per-pixel cross-entropy loss function and the Adam update rule may be used to compute the loss and update the weights.
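The per-pixel cross-entropy computed at 810 might be sketched as follows. This is a simplified numpy illustration of the standard definition, not the Keras implementation the embodiment describes.

```python
import numpy as np

def pixelwise_cross_entropy(logits, labels):
    """Mean per-pixel cross-entropy. logits: (H, W, C); labels: (H, W) ints."""
    z = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    log_softmax = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    h, w = labels.shape
    picked = log_softmax[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -picked.mean()

logits = np.zeros((2, 2, 3))          # uniform predictions over 3 classes
labels = np.array([[0, 1], [2, 0]])
loss = pixelwise_cross_entropy(logits, labels)  # = log(3) for uniform logits
```

The gradient of this loss with respect to the logits is what the Adam update rule consumes at 812.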
At 814, the system may determine whether the epoch is complete. If the epoch is not complete, the process returns to act 804 to read another batch of training data. At 816, if the epoch is complete, metrics are computed on the validation set. For example, such metrics may include validation loss, validation accuracy, accuracy relative to a naive model that only predicts the majority class, f1 score, precision and recall.
At 818, the validation loss may be monitored to determine whether the model has improved. At 820, if the model has indeed improved, the weights of the model at that point may be saved. At 822, the early stopping counter may be reset to zero, and training of another epoch may begin at 804. Metrics other than the validation loss (such as validation accuracy) could alternatively be used to evaluate model performance. At 824, if the model has not improved after the epoch, the early stopping counter is incremented by 1. At 826, if the counter has not reached its limit, training of another epoch begins at 804. At 828, if the counter has reached its limit, training of the model stops. This "early stopping" approach is used to prevent overfitting, but there are other methods of preventing overfitting, such as using a smaller model, increasing the level of dropout, or L2 regularization.
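The early stopping counter logic of acts 818-828 might be sketched as follows; the class and parameter names are illustrative, not taken from the patent.

```python
class EarlyStopping:
    """Minimal early-stopping counter of the kind described above."""
    def __init__(self, patience):
        self.patience = patience
        self.best_loss = float("inf")
        self.counter = 0
        self.best_weights = None

    def update(self, val_loss, weights):
        """Record one epoch's validation loss; return True to stop training."""
        if val_loss < self.best_loss:       # model improved: save and reset
            self.best_loss = val_loss
            self.best_weights = weights
            self.counter = 0
            return False
        self.counter += 1                    # no improvement this epoch
        return self.counter >= self.patience

stopper = EarlyStopping(patience=2)
history = [1.0, 0.8, 0.9, 0.85]              # validation loss per epoch
stopped = [stopper.update(l, weights=None) for l in history]
```

Training stops only after `patience` consecutive epochs without improvement, and the weights saved at 820 are those of the best epoch, not the last one.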
The data in the test set is not used when training the model. Data from the test set may be used to show examples of segmentations, but this information is not used to train models or to rank models relative to one another.
Inference
Inference is the process of using a trained model to make predictions on new data. In at least some embodiments, a network application (or "web app") may be used for inference. Fig. 9 shows an illustrative pipeline or process 900 by which predictions may be made on a new SSFP study. At 902, after the user has loaded a study in the network application, the user may invoke the inference service (for example, by clicking a "generate missing contours" icon), which automatically generates any missing (not yet constructed) contours. Such contours may include, for example, LV Endo, LV Epi or RV Endo. In at least some embodiments, inference may be invoked automatically when the study is loaded in the application by the user or when the study is first uploaded to the server by the user. If inference is performed at upload time, the predictions may be stored at that time in a non-transitory processor-readable storage medium, but are not displayed until the user opens the study.
The inference service is responsible for loading the model, generating the contours and displaying them to the user. After inference is invoked at 902, the images are sent to the inference server at 904. At 906, the production model or network used by the inference service is loaded onto the inference server. The network may be selected in advance from a corpus of models trained during the hyperparameter search. The network may be selected based on a trade-off between accuracy, memory usage and execution speed. Alternatively, the user could select between a "fast" and an "accurate" model via a user preference option.
At 908, a batch of images is processed at a time by the inference server. At 910, the images are preprocessed using the same parameters (e.g., normalization, cropping) used during training, as discussed above. In at least some embodiments, inference-time distortion is applied, and the inference results are averaged over, for example, 10 distorted copies of each input image. This produces inference results that are robust to small variations in brightness, contrast, orientation, etc.
Inference is performed at the slice positions and time points in the requested batch. At 912, a forward pass through the network is computed. For a given image, the result is, for each pixel, the probability of each class generated during the forward pass, which yields a set of probability maps, one per class, with values from 0 to 1. The probability maps are converted into a single label mask by setting the class of each pixel to the class with the highest mapped probability.
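The conversion of probability maps into a label mask at 912, combined with the below-0.5 background rule described for the post-processing at 914, might be sketched as follows (an illustrative numpy sketch under those two rules only):

```python
import numpy as np

def probs_to_label_mask(prob_maps):
    """prob_maps: (C, H, W) probability maps for the non-background classes.
    Returns an (H, W) label mask in which 0 is background and class c is
    labeled c + 1. A pixel is set to background when no class probability
    reaches 0.5."""
    labels = prob_maps.argmax(axis=0) + 1          # best non-background class
    labels[prob_maps.max(axis=0) < 0.5] = 0        # all probabilities below 0.5
    return labels

probs = np.array([[[0.9, 0.2]],     # class 1 (e.g., LV endo)
                  [[0.05, 0.3]],    # class 2 (e.g., LV epi)
                  [[0.05, 0.1]]])   # class 3 (e.g., RV endo)
mask = probs_to_label_mask(probs)
```

The first pixel is confidently class 1; the second pixel's best probability (0.3) is below 0.5, so it is assigned to background.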
At 914, the system may perform post-processing. For example, in at least some embodiments, if all class probabilities for a pixel are below 0.5, the pixel class for that pixel is set to background. In addition, to remove spuriously predicted pixels, any pixel in the label map that does not belong to the largest connected region of its class may be converted to background. In at least some embodiments, outliers and phantom pixels may be removed by comparing segmentation maps that are adjacent in time and space. Alternatively, because a givenptr ventricle may occasionally appear as two distinct connected regions in a single slice, for example because the RV is non-convex near the base of the heart, multiple connected regions may be allowed, but smaller regions, or regions far from the centroid of all detected regions across slice positions and time points, may be removed.
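The largest-connected-region rule mentioned above might be sketched as the following pure-Python breadth-first search over a binary class mask (an illustrative sketch, using 4-connectivity as an assumption; the patent does not specify the connectivity):

```python
from collections import deque

def keep_largest_region(mask):
    """Keep only the largest 4-connected region of foreground (1) pixels,
    converting all other foreground pixels to background (0)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] == 1 and not seen[i][j]:
                region, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:                       # flood-fill one region
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           mask[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(region) > len(best):
                    best = region
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],   # the single pixel at (1, 3) is a spurious region
        [0, 0, 0, 0]]
clean = keep_largest_region(mask)
```

In production this would more likely use a vectorized connected-component labeler, but the rule itself is as shown: every foreground pixel outside the largest region is set to background.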
In at least some embodiments, post-processing may be performed at 914 to satisfy one or more physical constraints. For example, post-processing may ensure that the myocardial volume is the same at all time points. To achieve this, the system may dynamically adjust the threshold used to binarize the endocardium and epicardium probability maps before they are converted into contours. For example, the threshold may be adjusted so that the discrepancy in the reported myocardial volume is minimized using nonlinear least squares. As another example of a physical constraint, the post-processing may ensure that the RV and LV do not overlap. To achieve this, the system may allow any given pixel to belong to only one class, namely the class with the highest inferred probability. The user may have configuration options to enable or disable selected constraints.
At 916, if not all batches have been processed, a new batch is added to the processing pipeline at 908, until inference has been performed at all slice positions and all time points.
In at least some embodiments, once the label masks have been constructed, the masks may be converted into spline contours for ease of review, user interaction and database storage. The first step converts a mask into polygons by tracing all the pixels on the mask boundary. The polygons are then converted into a set of spline control points using the corner detection algorithm of A. Rosenfeld and J.S. Weszka, "An improved method of angle detection on digital curves", IEEE Transactions on Computers, C-24(9): 940-941, September 1975. A typical polygon derived from one of these masks will have hundreds of vertices. Corner detection attempts to reduce this to a set of about 16 spline control points. This reduces storage requirements and yields a smoother-looking segmentation.
At 918, these splines are stored in the database and displayed to the user in the network application. If the user modifies a spline, the database may be updated with the modified spline.
In at least some embodiments, volumes are computed by constructing a volumetric mesh from all the vertices at a given time point. The vertices are ordered by slice within the 3D volume. An open cubic spline is generated connecting the first vertex of each contour, another spline connects the second vertex of each contour, and so on, until a cylindrical grid of vertices defining a mesh is obtained. The internal volume of the polygonal mesh is then computed. Based on the computed volumes, the time points representing end-systole and end-diastole are independently determined based on the minimum and maximum volume, respectively, and these time points are labeled for the user.
Figures 10, 11 and 12 show example images 1000, 1100 and 1200 of the inference results in the application for an LV Endo contour 1002, an LV Epi contour 1102 and an RV Endo contour 1202, respectively, each at a single time point and slice position.
While the contours (e.g., contours 1002, 1102 and 1202) are displayed to the user, the system computes and displays to the user the ventricular volumes at ED and ES and a number of computed measurements. An example interface 1300 showing several computed measurements is shown in Figure 13. In at least some embodiments, these measurements include: stroke volume (SV) 1302, which is the volume of blood ejected from the ventricle in one cardiac cycle; ejection fraction (EF) 1304, which is the fraction of the blood pool ejected from the ventricle in one cardiac cycle; cardiac output (CO) 1306, which is the average rate at which blood leaves the ventricle; ED mass 1308, which is the myocardial mass (i.e., epicardium minus endocardium) of the ventricle at end-diastole; and ES mass 1310, which is the myocardial mass of the ventricle at end-systole.
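The volume-derived measurements above follow the standard textbook formulas; the patent does not give explicit equations, so the sketch below uses those standard definitions.

```python
def cardiac_measurements(edv_ml, esv_ml, heart_rate_bpm):
    """Compute stroke volume, ejection fraction and cardiac output from
    the end-diastolic volume (EDV), end-systolic volume (ESV) and heart
    rate, using standard definitions."""
    sv = edv_ml - esv_ml                 # stroke volume, mL
    ef = sv / edv_ml                     # ejection fraction, fraction of EDV
    co = sv * heart_rate_bpm / 1000.0    # cardiac output, L/min
    return sv, ef, co

sv, ef, co = cardiac_measurements(edv_ml=120.0, esv_ml=50.0, heart_rate_bpm=70)
# sv = 70.0 mL, ef ≈ 0.583, co = 4.9 L/min
```

The EDV and ESV inputs here are the maximum and minimum mesh volumes determined as described above; the ED and ES masses are computed separately from the epicardial and endocardial volumes.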
For 4D Flow data, the same DeepVentricle architecture, hyperparameter search method and training database approach as for the SSFP data described above can be used. Training a 4D Flow model may be identical to the SSFP procedure discussed above, but the LMDB creation and the inference may differ for the 4D Flow embodiment.
Construction of the 4D Flow training database
Whereas SSFP DICOM files are acquired and stored in the SAX orientation, 4D Flow DICOMs are acquired and stored as axial slices. To construct a SAX multiplanar reconstruction (MPR) of the data, the user may need to place relevant landmarks for the left heart and/or the right heart. These landmarks are then used to define, for each ventricle, a unique SAX plane bounded by the ventricular apex and the valve. Figure 14 shows a set of SAX planes 1400 (also referred to as a SAX stack) for the LV, where the SAX planes are parallel in each of a two-chamber view 1402, a three-chamber view 1404 and a four-chamber view 1406.
If desired, the application may also allow the user to have non-parallel SAX planes. Figure 15 shows a set of views 1500 of a SAX stack in which the segmentation planes for the RV are not parallel, for a two-chamber view 1502, a three-chamber view 1504, a four-chamber view 1506 and a reconstructed image 1508. This is because it is slightly easier to segment the ventricle when the segmentation plane does not intersect the valve plane and is parallel to it. However, this is not necessary, and accurate results can be obtained without using this feature.
As shown in images 1600 and 1700 in Figures 16 and 17, respectively, the image data in each SAX plane is segmented on the multiplanar reconstruction. The points 1602 on the contour 1604 in image 1606 define the spline and are the content stored in the database. The contour 1604 is projected into a two-chamber LAX view 1608, a three-chamber LAX view 1610 and a four-chamber LAX view 1612. Figure 17 shows images 1702, 1704, 1706 and 1708, in which the same slice as in Figure 16 is segmented, but each of the two-chamber view 1704, three-chamber view 1706 and four-chamber view 1708 is slightly rotated to highlight the segmentation plane with a depth effect.
Figure 18 shows a process 1800 for constructing a training LMDB from the clinicians' annotations. The 4D Flow annotations may be stored in a MongoDB 1802. At 1804 and 1806, the system extracts contours and landmarks, respectively. Contours are stored as a series of (x, y, z) points defining a contour spline. Landmarks are stored as a single four-dimensional coordinate (x, y, z, t) for each landmark.
At 1808, in order to convert a contour into a Boolean mask, the system computes a rotation matrix to rotate the contour points into the x-y plane. The system may also define a sampling grid, i.e., a set of (x, y, z) points in the original plane of the contour. The system rotates the contour and the sampling grid by the same rotation matrix so that they lie in an x-y plane. Determining which points of the sampling grid now lie within the 2D vertices defining the contour is then a simple task, namely a straightforward 2D polygon computational geometry problem.
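The rotation-then-containment step might be sketched as follows. This is an illustrative sketch under two assumptions not stated in the patent: the rotation is built with Rodrigues' formula (mapping the contour's plane normal onto the z-axis), and containment is tested by ray casting.

```python
import numpy as np

def rotation_to_xy(normal):
    """Rotation matrix taking the unit vector `normal` onto the z-axis
    (Rodrigues' rotation formula)."""
    n = normal / np.linalg.norm(normal)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    c = float(n @ z)
    if np.allclose(v, 0):                     # already aligned (or opposite)
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1 + c)

def points_in_polygon(pts, poly):
    """Ray-casting point-in-polygon test for 2D points."""
    inside = np.zeros(len(pts), dtype=bool)
    for k, (x, y) in enumerate(pts):
        hit = False
        for i in range(len(poly)):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
            if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                hit = not hit
        inside[k] = hit
    return inside

# A unit-square "contour" tilted out of the x-y plane, plus two grid points.
R = rotation_to_xy(np.array([1.0, 0.0, 1.0]))   # maps that normal onto z
square_2d = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
tilted_square = square_2d @ R                    # tilt (direction immaterial)
grid = np.array([[0.5, 0.5, 0.0], [2.0, 2.0, 0.0]]) @ R
# Rotate contour and sampling grid back with the SAME matrix, then test in 2D.
flat_square = tilted_square @ R.T
flat_grid = grid @ R.T
mask = points_in_polygon(flat_grid[:, :2], flat_square[:, :2])
```

Because contour and grid are rotated by the same matrix, their relative geometry is preserved exactly, and the 3D containment question reduces to a 2D one.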
The 4D Flow DICOMs are stored in a database 1810. At 1812, the system uses the landmark annotations from act 1806 and the 4D Flow DICOMs from the database 1810 to define and generate images along a SAX stack. In general, this SAX stack is different from the original SAX stack in which the ground truth contours were defined. The system defines the stack to be orthogonal to the line connecting the left ventricular apex (LVA) and the mitral valve (MV). Other appropriate combinations of landmarks, such as the right ventricular apex (RVA) and the tricuspid valve (TV), could work similarly.
In at least some embodiments, the system defines the number of slices between the LVA and the MV to be, for example, 14, because this is similar to the number of slices in most SSFP SAX stacks. A different number of slices could also be used. More slices would increase the diversity of the training set, although the actual disk size may grow faster than the diversity. The expected results are insensitive to the exact number of slices.
Four slices may be appended to the SAX stack beyond the LVA, and four additional slices beyond the MV. This ensures that the entire ventricle is within the SAX stack. The results are likely insensitive to the exact number of additional slices used. By ensuring that the aortic valve (AV) is positioned to the right of the line connecting the LVA and the MV, the SAX stack may be oriented such that the RV is always on the left side of the image (as is conventional in cardiac MR). Although consistency of orientation is likely important to the results obtained, the exact choice of direction is arbitrary.
At 1814, in at least some embodiments, in order to simplify and speed up training and inference, all available contours for a given study are interpolated onto a single non-curved SAX stack. Once the planes of the SAX stack are defined, linear interpolation is set up for each ventricle and time point described by the original sampling grid, i.e., a series of (x, y, z) points and their corresponding masks. The system then interpolates the ground truth masks from the original SAX stack onto the study's common SAX stack. One such example is shown in view 1900 of Figure 19, which shows a multiplanar reconstruction 1902, an RV Endo mask 1904, an LV Epi mask 1906 and an LV Endo mask 1908. A sentinel value is used in the interpolated ground truth masks to indicate when a label is missing. One such example visualization 2000 is shown in Figure 20, which shows a multiplanar reconstruction 2002, an RV Endo mask 2004, a missing LV Epi mask 2006 and an LV Endo mask 2008.
In at least some embodiments, rather than projecting the ground truth masks onto a common SAX stack, the masks may be projected onto axial planes, with training and inference performed in the axial plane. This may achieve similar accuracy, but may result in a slight loss of resolution, since the inferred contours would need to be projected back onto the SAX stack for display in the application's user interface.
At 1816, the system performs preprocessing operations. For example, the preprocessing acts may include normalizing the images, cropping the images/masks, and resizing the images/masks.
At 1818, the unique key the system defines for the 4D Flow LMDB is a 32-character hash of the string combination of the time index, slice index, side ("right" or "left"), layer ("endo" or "epi"), upload ID, workspace ID (a unique identifier for one person's annotations) and workflow_key (a unique identifier for the workflow in which a given user completed the work). Alternatively, any of many other unique keys for each image/mask pair could be used. At 1820, the system stores the image and mask metadata, including time point, slice index and LMDB key, in a data frame. At 1822, the normalized, cropped and resized image and the cropped and resized mask are stored in the LMDB for each key.
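Such a 32-character key might be sketched as follows. The patent does not name the hash function; MD5 is used here only because its hex digest happens to be 32 characters, and the field separator is likewise an assumption.

```python
import hashlib

def lmdb_key(time_idx, slice_idx, side, layer, upload_id, workspace_id,
             workflow_key):
    """32-character hash of the concatenated key fields described above."""
    fields = [time_idx, slice_idx, side, layer, upload_id, workspace_id,
              workflow_key]
    joined = "|".join(str(f) for f in fields)    # separator is illustrative
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

key = lmdb_key(3, 7, "left", "endo", "upload-01", "ws-42", "wf-99")
```

Any deterministic hash of these fields yields one key per image/mask pair; keys differ whenever any field differs.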
DeepVentricle inference on 4D Flow data
As with the SSFP DeepVentricle inference discussed above, a network application may be used for inference on 4D Flow data. Figure 21 shows the pipeline of a process 2100 by which the system makes predictions on a new 4D Flow study. At 2102, after the user has loaded a study in the network application, the user may invoke the inference service through a pipeline similar to the inference pipeline described above and shown in Figure 9. The landmarks have been defined either manually or automatically (for example, by the automatic landmark localization algorithm discussed below).
The positions of the landmarks are used to construct the standard LV SAX stack on which inference is performed. As described above, the SAX stack is constructed in the same way as during training. At 2104, the metadata required to describe each MPR in the SAX stack is computed from the positions of the landmarks. The plane of each MPR is fully defined by a point in the plane and the normal of the plane, but in this embodiment the system also uses the vector connecting the mitral valve and the aortic valve to ensure that the image orientation is correct, i.e., that the right ventricle is located on the left side of the image. Another set of landmarks, such as the mitral valve and the tricuspid valve, could also suffice to ensure that the right ventricle is on the left side of the image.
At 2106, the MPR metadata is then passed to the computation servers, which hold a distributed version of the data (each compute node holds several time points of data). At 2108, each node renders the requested MPRs at the time points available to it. At 2110, the generated MPR images, together with their metadata (which includes time point, orientation, position and slice index), are then distributed evenly by time point across multiple inference servers. At 2112, the network is loaded onto each inference node.
At 2114, a batch of images is processed at a time by each inference node. At 2116, the images are preprocessed. At 2118, the forward pass is computed. At 2120, the predictions are post-processed and spline contours are constructed in the same manner as in the SSFP embodiment described above.
At 2122, after all batches have been processed, the generated splines are forwarded back to the network server, where they are joined with the inference results from the other inference nodes. If a contour is missing, the network server ensures that the volume is continuous by interpolating between adjacent slices (that is, no contour is missing in the middle of the volume). At 2124, the network server saves the contours in the database, and the contours are then presented to the user through the network application. If the user edits a spline, the updated version of the spline may be stored in the database alongside the original automatically generated version. In at least some embodiments, comparing the manually edited contours with their original automatically generated versions can be used to retrain or fine-tune the model on only those inference results that required manual correction.
Figures 22, 23 and 24 show images 2200, 2300 and 2400 of inference results for LV Endo (contour 2202), LV Epi (contour 2302) and RV Endo (contour 2402), respectively, each at a single time point and slice position. As with SSFP, the computed volumes at ED and ES and a number of computed measurements can be presented to the user (see Figure 13).
Three-dimensional end-to-end convolutional architecture
Another approach to end-to-end ventricle segmentation uses volumetric images, volumetric masks and 3D convolution kernels throughout. The description and operation of this embodiment closely follow the SSFP embodiment discussed above, but with some key differences. Therefore, for brevity, the following discussion focuses mainly on those differences.
The DeepVentricle architecture for this embodiment is almost identical to that discussed above, except that the convolution kernels are (N × M × K) pixels rather than only (N × M) pixels, where N, M and K are positive integers that may be equal to or different from each other. The model parameters are also similar, but in order to fully describe the shape of the volumetric input images, it may be necessary to describe the depth component added to the training data.
As with the other embodiments, a training LMDB is used for this embodiment. The LMDB for this embodiment may be constructed in a manner similar to the 4D Flow embodiment discussed above. However, for this embodiment, more slices are used to define the SAX stack, so that the slice spacing between adjacent slices is similar to the pixel spacing in the x and y directions (that is, the pixel spacing is nearly three-dimensionally isotropic). Anisotropic pixel spacing could achieve similar results, as long as the ratio between pixel spacings remains constant across all studies. The SAX MPRs and masks are then sorted spatially by slice, and these slices are concatenated into one coherent volumetric image. Model training proceeds according to the same pipeline described above with reference to Fig. 8.
The inference pipeline is also very similar to that of the 4D Flow embodiment. In this embodiment, however, adjacent MPRs need to be concatenated into one volumetric image before inference.
Excluding papillary muscles
Another embodiment of the DeepVentricle automatic segmentation model is one in which only the blood pool of the ventricle is segmented and the papillary muscles are left out. In practice, because the papillary muscles are very small and irregularly shaped, for convenience they are typically included in the segmented region. The architecture, hyperparameters and training database of this embodiment, which excludes the papillary muscles from the blood pool, are similar to those of the SSFP embodiment described above. However, in this embodiment, the ground truth segmentation database includes left and right ventricular endocardium annotations that exclude the papillary muscles rather than including them.
Because segmentations that exclude the papillary muscles from the endocardial contours are onerous to create, the amount of training data may be significantly smaller than the amount of available segmentations that do not exclude the papillary muscles. To compensate for this, a convolutional neural network may first be trained on data in which the papillary muscles are included in the endocardium segmentations and excluded from the epicardium segmentations. This enables the network to learn the general size and shape of each class being segmented. The network is then fine-tuned on the smaller set of data in which the papillary muscles are excluded from the segmentations. The result is a segmentation model that segments the same classes as before but excludes the papillary muscles from the endocardium. This yields more accurate measurements of ventricular blood pool volume than previously obtainable when the papillary muscles were included in the endocardial contours.
Automatic incorporation of other views of the volume
Conventional image classification or segmentation neural network architectures operate on one image at a time: possibly a multichannel (e.g., RGB) image, possibly a volumetric image. The standard 2D approach consists of the network operating on one slice at a time from a 3D volume. In this case, the information for a single slice comes only from the data in the slice being classified or segmented. The problem with this approach is that the context around that time point or slice is not integrated into the inference for the slice of interest. The standard 3D approach uses 3D kernels and incorporates volumetric information to make volumetric predictions. However, this approach is very slow and requires considerable computing resources for training and inference.
Some hybrid approaches, discussed below, can be used to optimize the trade-off between memory/compute and the spatiotemporal context available to the model. Spatial context is particularly useful for segmenting the ventricles near the base of the heart, where the mitral and tricuspid valves are difficult to distinguish on a single 2D slice. Temporal context, along with enforcing consistency of the segmentation, can be useful for all parts of the segmentation.
In a first approach, the problem is treated as a 2D problem in which a single slice is predicted at a time, and adjacent slices (in space, in time, or both) are treated as additional "channels" of the image. For example, at time t=5 and slice=10, a 9-channel image can be constructed in which the data for the following time/slice combinations are bundled into 9 channels: t=4, slice=9; t=4, slice=10; t=4, slice=11; t=5, slice=9; t=5, slice=10; t=5, slice=11; t=6, slice=9; t=6, slice=10; and t=6, slice=11. In this configuration, the network runs with 2D convolutions but combines data from nearby spatial and temporal positions, synthesizing the information through standard neural network techniques, which construct feature maps from learned linear combinations of the input channels convolved with the kernels.
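The 9-channel construction described above might be sketched as follows (an illustrative numpy sketch; the channel ordering shown is an assumption):

```python
import numpy as np

def neighborhood_channels(volume, t, s):
    """Bundle the 3x3 neighborhood of (time, slice) combinations around
    (t, s) into a channel axis, as in the 9-channel example above.
    volume has shape (T, S, H, W); the result has shape (H, W, 9)."""
    channels = [volume[t + dt, s + ds]
                for dt in (-1, 0, 1)
                for ds in (-1, 0, 1)]
    return np.stack(channels, axis=-1)

volume = np.arange(10 * 20 * 4 * 4, dtype=float).reshape(10, 20, 4, 4)
img = neighborhood_channels(volume, t=5, s=10)   # shape (4, 4, 9)
```

The resulting 9-channel image is fed to an ordinary 2D convolutional network; the first convolution layer's learned kernel weights determine how the neighboring slices and time points are mixed.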
A second approach is tailored to some of the complexities of cardiac MRI, although it could be used in any situation in which orthogonal (or oblique) planes of data are acquired. In standard SSFP cardiac MRI, the short axis (SAX) stack is acquired together with one or more long axis (LAX) planes. The LAX planes are orthogonal to the SAX stack, and the LAX planes typically have significantly higher spatial resolution in the direction along the long axis of the left ventricle. That is, a LAX image constructed via an MPR of the SAX stack has worse resolution than a native LAX image, because the inter-slice spacing of the SAX stack is much coarser than the pixel spacing in the LAX plane. Because of the higher spatial resolution in the long axis direction, the valves are much easier to see in the LAX images than in the SAX images.
Therefore, a two-stage ventricle segmentation model can be used. In the first stage, the ventricle is segmented in one or more LAX planes. Because of the high spatial resolution of these images, this segmentation can be very accurate. The disadvantage is that the LAX segmentation consists only of individual planes rather than a volume. If this LAX segmentation is projected onto the SAX stack, the LAX segmentation appears as a line on each SAX image. This line can be constructed accurately if segmentations from multiple LAX views are assembled (e.g., 2CH, 3CH, 4CH; see the heading "Interface for defining valve planes for manual LV/RV volumes" below). The line can be used to constrain the SAX segmentation, which is generated by a different model operating on the SAX images. The SAX segmentation model takes the raw SAX DICOM data and the projected lines predicted by the LAX model as inputs to make its predictions. The predicted LAX lines are especially helpful for guiding and constraining the SAX prediction model near the base of the heart and the valve planes, where the segmentation is usually ambiguous when the SAX stack is viewed alone.
This technique can be used for any cardiac imaging, including 4D Flow, in which the entire volume is acquired at once (and SAX and LAX images are not acquired separately), with the advantage that, despite the two chained models, only 2D kernels are needed.
Automatic incorporation of temporal or flow information of the volume
An SSFP cine study contains 4 dimensions of data (3 spatial, 1 temporal), and a 4D Flow study contains 5 dimensions of data (3 spatial, 1 temporal, 4 information channels). The 4 information channels are anatomy (i.e., signal intensity), x-axis phase, y-axis phase and z-axis phase. The simplest way to build a model is to use only the signal intensity at each 3D spatial point, without including the temporal information or, for 4D Flow, the flow information. This simple model takes a 3D data cube of shape (x, y, z) as input.
To take advantage of all the available data, at least some embodiments also incorporate the time and phase data. This is particularly useful for at least the following reasons. First, because the motion of the heart generally follows a predictable pattern over the cardiac cycle, the relative motion of pixels can specifically help identify anatomical regions. Second, roughly 20 time points are usually recorded per cardiac cycle, which means the heart moves only slightly between frames. Knowing that predictions should change only slightly from frame to frame can be used as a way to regularize the model output. Third, flow information can be used to locate structures, such as valves, that have fairly regular flow patterns alternating between low and high flow.
To incorporate the time data, time can be added to the intensity data as an additional "channel." In this embodiment, the model then takes as input either a 3D data blob of shape (X, Y, NTIMES) or a 4D data blob of shape (X, Y, Z, NTIMES), where NTIMES is the number of time points included. This can be all time points, or several time points near a time point of interest. If all time points are included, the data may need to be padded with "wrap-around" time points, because the time axis represents the cardiac cycle and is essentially periodic. The model can then involve 2D/3D convolutions, with the time points serving as additional "channels" of the data, or it can include 3D/4D convolutions. In the former case, the output can be the 2D/3D prediction at the single time point of interest. In the latter case, the output can be 3D/4D and may include data at the same time points included in the input.
The phase data acquired in 4D Flow can be combined in a similar way, using each direction of the phase (x, y, z) as an additional channel of the input data, or using only the phase magnitude as a single additional channel. Without time but with all three flow components, the input has shape (X, Y, Z, 4), where 4 represents the image intensity and the three phase components. With time, the shape is (X, Y, Z, NTIMES, 4). In such an embodiment, the model therefore performs 4D or 5D convolutions.
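The channel layout described above can be sketched as follows. This is a minimal illustration, not from the disclosure (the function and variable names are our own), of how the anatomy volume and the three phase components might be stacked into the (X, Y, Z, 4) or (X, Y, Z, NTIMES, 4) inputs:

```python
import numpy as np

def assemble_4dflow_input(intensity, phase_x, phase_y, phase_z):
    """Stack anatomy (signal intensity) and the three velocity-phase
    components as channels. Each input array has shape (X, Y, Z) for a
    single time point, or (X, Y, Z, NTIMES) for a full cardiac cycle;
    the output gains a trailing channel axis of size 4."""
    return np.stack([intensity, phase_x, phase_y, phase_z], axis=-1)

# Single time point: (X, Y, Z) -> (X, Y, Z, 4)
vols = [np.zeros((8, 8, 4)) for _ in range(4)]
print(assemble_4dflow_input(*vols).shape)   # (8, 8, 4, 4)

# Full cycle: (X, Y, Z, NTIMES) -> (X, Y, Z, NTIMES, 4)
cine = [np.zeros((8, 8, 4, 20)) for _ in range(4)]
print(assemble_4dflow_input(*cine).shape)   # (8, 8, 4, 20, 4)
```

A model consuming the second tensor would then apply convolutions over the four leading axes, with the last axis treated as channels.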
Automated 4D Flow landmark detection
The systems and methods discussed herein can also automatically detect the locations of multiple cardiac landmarks in 3D MRI. The system handles MRIs from different cohorts, with different positions, orientations, and appearances of the imaged heart. Moreover, the system effectively handles the problem of learning from a database with incomplete annotations. More specifically, the system solves the problem of detecting every landmark in an image when only some landmarks have been located on each input volumetric image of the training set.
In general, the pipeline is an end-to-end machine learning algorithm that automatically outputs the positions of the desired landmarks from the raw 3D image. Advantageously, the pipeline requires no preprocessing or prior knowledge from the user. In addition, the landmarks detected in the volumetric image can be used to project the image along the 2CH, 3CH, 4CH, and SAX views, so that these views can be constructed automatically, without user intervention.
First, a first embodiment of the solution is discussed. In this embodiment, the cardiac landmarks are located using a neural network with many layers. The architecture is three-dimensional (3D) and uses 3D convolutions. This description focuses on detecting three left-ventricular landmarks (the LV apex, the mitral valve, and the aortic valve) and three right-ventricular landmarks (the RV apex, the tricuspid valve, and the pulmonary valve). Note, however, that the method can be used to detect more or different cardiac landmarks with comparable results, provided those annotations are available as part of the ground truth.
Similar to the previously described DeepVentricle architecture, the landmark detection method of the present disclosure is based on a convolutional neural network. The information needed for landmark detection is extracted from a database of clinical images and their annotations (i.e., the positions of the landmarks). Figures 25, 26, and 27 show images 2500, 2600, and 2700 of three patients, in which the left-ventricular apex, the mitral valve, and the right-ventricular apex, respectively, have been located using a web application (such as the web application discussed above). Note the absence of annotations for the aortic, pulmonary, and tricuspid valves in this example.
The data processing pipeline is described first. This part details the method for building the database of annotated images and the specific method used to encode the positions of the landmarks. Second, the architecture of the machine learning method is presented, showing how the network converts an input 3D image into a prediction of the landmark positions. Third, how the model is trained on the available data is described. Finally, the inference pipeline is described in detail, illustrating how the neural network is applied to previously unseen images to predict the locations of all six landmarks.
Data processing pipeline
The proposed machine learning method uses a database of 4D Flow data containing three-dimensional (3D) magnetic resonance images (MRIs) of the heart, stored as series of two-dimensional (2D) DICOM images. Typically, about 20 3D volumetric images are acquired over a single cardiac cycle, each corresponding to a snapshot of the heartbeat. The raw database therefore corresponds to 3D images of different patients at different time phases. Each 3D MRI may carry a series of landmark annotations, from zero to six landmarks, placed by users of the web application. If a landmark annotation exists, it is stored as a vector of coordinates (x, y, z, t), representing the position (x, y, z) of the landmark in the 3D MRI corresponding to time point t.
Figure 28 shows a process 2800 by which the 2D DICOM slices 2802 of a 4D Flow image can later be processed together with the annotations 2804 stored in a MongoDB database.
At 2806, the landmark coordinates are extracted from the MongoDB database. Then, at 2808, a 3D MRI is extracted from the series of 2D DICOM images by stacking the 2D DICOM images from a single time point according to their positions along the z-axis (stacking the 2D images along the depth dimension to build a 3D volume). This yields a volumetric 3D image representing a complete view of the heart. An LMDB is built from the 3D images annotated with the position of at least one landmark. This means that images without any ground-truth landmarks are not included in the LMDB.
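The slice-stacking step at 2808 might look like the following sketch. The helper name is our own, and we assume a per-slice z position is available (e.g., from each slice's DICOM ImagePositionPatient tag):

```python
import numpy as np

def stack_slices(slices, z_positions):
    """Build a 3D volume from 2D slices by sorting them along the z-axis.
    `slices` is a list of 2D arrays from one time point; `z_positions`
    gives each slice's position along the slice axis."""
    order = np.argsort(z_positions)
    return np.stack([slices[i] for i in order], axis=-1)  # (X, Y, Z)

# Toy example: three 4x4 slices provided out of order.
slices = [np.full((4, 4), z) for z in (3.0, 1.0, 2.0)]
vol = stack_slices(slices, [3.0, 1.0, 2.0])
print(vol.shape)        # (4, 4, 3)
print(vol[0, 0, :])     # [1. 2. 3.]  -- slices reordered by z position
```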
At 2810, a label map is defined to encode the annotation information in a form that the neural network used in later stages can understand. The position of a landmark is encoded by the likelihood, at each location of the 3D volume, that the landmark is at that location. To this end, a 3D Gaussian probability distribution is constructed, centered at the ground-truth landmark position, with a standard deviation corresponding to the inter-rater variability of that kind of landmark across all of the observed training data.
To understand inter-rater variability, consider one specific landmark, for example the LV apex. For each study in which the LV apex was annotated by multiple users, or "raters," the standard deviation of the LV apex coordinates across all users is computed. Repeating this process for each landmark defines the standard deviation of the Gaussian used to encode that landmark. This process allows the parameter to be set in a principled way. Among the advantages of this approach, note that the standard deviation is different for each landmark and depends on the difficulty of localizing that landmark. Specifically, harder landmarks have a larger Gaussian standard deviation in the target probability map. In addition, the standard deviation differs along the x, y, and z axes, reflecting that, due to the anatomy of the heart and/or the resolution of the image, the uncertainty may be larger along one direction than another.
Note that alternative strategies can also be used to define the standard deviations (e.g., arbitrary values, or a parameter search) with comparable results. Figure 29 illustrates this transformation, from a landmark position identified by the crosshair 2902 in the 2D view 2904 to a Gaussian 2906 evaluated on the view 2908.
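The Gaussian label encoding could be sketched as follows. The function name and toy sizes are illustrative, and the per-axis standard deviations stand in for the inter-rater values described above:

```python
import numpy as np

def gaussian_label_map(shape, center, sigma):
    """Encode one landmark as a 3D Gaussian volume: value 1 at the
    ground-truth position, falling off with per-axis standard deviations
    sigma = (sx, sy, sz) taken from inter-rater variability."""
    grids = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    d2 = sum(((g - c) / s) ** 2 for g, c, s in zip(grids, center, sigma))
    return np.exp(-0.5 * d2)

# Anisotropic sigma: localization is less certain in-plane than along z.
lm = gaussian_label_map((16, 16, 8), center=(8, 8, 4), sigma=(2.0, 2.0, 1.0))
print(np.unravel_index(lm.argmax(), lm.shape))  # (8, 8, 4) -- peak at truth
```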
At 2812, once the 3D volumes have been defined for both the MRI and the label map, the images are processed. In general, the goal is to standardize the size and appearance of the images for future training.
Figure 30 shows a process 3000 for the preprocessing pipeline. At 3006 and 3008, the 3D MRI 3002 and the label map 3004 are each resized to a predefined size n_x × n_y × n_z, so that all MRIs can be fed to the same neural network. At 3010, the MRI pixel intensities are clipped between the 1st and 99th percentiles. This means that pixel intensities saturate at the intensity values corresponding to the 1st and 99th percentiles, which removes aberrant pixel intensities that may be caused by artifacts. At 3012, the intensities are then scaled to the range 0 to 1. At 3014, the intensity histogram is then normalized using contrast-limited adaptive histogram equalization, to maximize the contrast within the image and minimize intensity variations within the image (such variations may be caused, for example, by magnetic field inhomogeneities). Finally, at 3016, the image is centered to have zero mean. Other strategies can be used for image intensity standardization, such as normalizing the variance of the input to 1, and can produce similar results. The pipeline produces preprocessed images 3018 and labels 3020 that can be fed to the network.
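A minimal sketch of the intensity-normalization steps (percentile clipping, scaling to [0, 1], zero-mean centering) follows. Resizing and the adaptive histogram equalization step (e.g., via scikit-image's `equalize_adapthist`) are omitted for brevity, and the helper name is our own:

```python
import numpy as np

def preprocess(volume):
    """Clip intensities at the 1st and 99th percentiles, rescale the
    clipped range to [0, 1], then center the volume to zero mean.
    (Resizing and CLAHE, described in the pipeline, are omitted.)"""
    lo, hi = np.percentile(volume, [1, 99])
    v = np.clip(volume, lo, hi)           # saturate outlier intensities
    v = (v - lo) / (hi - lo + 1e-8)       # scale to [0, 1]
    return v - v.mean()                   # zero-mean centering

rng = np.random.default_rng(0)
vol = rng.normal(100.0, 25.0, size=(16, 16, 8))
p = preprocess(vol)
print(abs(p.mean()) < 1e-9)               # True -- centered
print(p.max() - p.min() <= 1.0 + 1e-6)    # True -- unit dynamic range
```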
Figures 31 and 32 show example images 3100 and 3200 of preprocessed 3D MRIs and encoded labels for two patients. Specifically, Figure 31 shows the sagittal view 3102, axial view 3104, and coronal view 3106 of one patient's preprocessed input image and encoded mitral valve landmark, and Figure 32 shows the sagittal view 3202, axial view 3204, and coronal view 3206 of another patient's preprocessed input image and encoded mitral valve landmark. As shown in Figures 31 and 32, the localization uncertainty is larger for the tricuspid valve than for the mitral valve. Moreover, the uncertainty differs between axes.
Returning to Figure 28, at 2814 an upload ID is defined as the key identifying the (MRI, label map) pair stored in the training LMDB database at 2816. Finally, at 2818, the (MRI, label map) pair is written to the LMDB.
Network structure
As described above, a deep neural network is used to detect the landmarks. The network takes the preprocessed 3D MRI as input and outputs six 3D label maps, one per landmark. The architecture used in this embodiment is similar or identical to the architecture described above. The network consists of two symmetric paths: a contracting path and an expanding path (see Fig. 6).
Because not all landmarks are available in the training data, the systems and methods of the present disclosure advantageously handle the missing information in the labels while still being able to predict all landmarks simultaneously.
The network for landmark detection differs from the DeepVentricle embodiment discussed above in three main respects. First, the architecture is three-dimensional: the network processes a 3D MRI in a single pass and produces a 3D label map for each landmark. Second, the network predicts six classes, one per landmark. Third, the parameters selected after the hyperparameter search may differ from the DeepVentricle parameters and are chosen specifically for the problem at hand. In addition, the standard deviation used to define the label maps, discussed above, is treated as a hyperparameter. The output of the network is a 3D map that encodes where the landmark is located. High values of the map can correspond to likely landmark positions, and low values to unlikely landmark positions.
Training
The following discussion describes how the LMDB database of 3D MRI and label-map pairs is used to train the deep neural network. The overall goal is to tune the parameters of the network so that it can predict the positions of the cardiac landmarks on previously unseen images. A flow chart of the training process is shown in Figure 8 and described above.
The training database can be split into a training set used to train the model, a validation set used to quantify the quality of the model, and a test set. The split places all images from a single patient in the same group, which guarantees that the model is not validated on patients used for training. Data from the test set are not used while training the model. Data from the test set can be used to show examples of landmark localization, but that information is not used to train the models or to rank them relative to one another.
During training, the parameters of the neural network are updated using the gradient of the loss. In at least some embodiments, weighting the loss in the regions where the landmarks are located can provide faster convergence of the network. More precisely, when computing the loss, a larger weight can be applied to the image regions close to the landmarks than to the rest of the image. As a result, the network converges more quickly. However, an unweighted loss can also achieve good results, albeit with a longer training time.
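One possible form of such a weighted loss is sketched below, with the per-voxel weight growing with the target Gaussian value so that errors near the landmark count more. The exact weighting scheme and the `boost` value are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def weighted_mse(pred, target, boost=10.0):
    """Mean squared error with extra weight near the landmark: the weight
    is 1 everywhere, plus `boost` scaled by the target Gaussian value,
    so voxels close to the landmark peak dominate the loss."""
    w = 1.0 + boost * target
    return float(np.sum(w * (pred - target) ** 2) / np.sum(w))

# A prediction that misses the landmark entirely is penalized more
# heavily by the weighted loss than by the plain per-voxel MSE.
target = np.zeros((4, 4, 4)); target[2, 2, 2] = 1.0
pred = np.zeros_like(target)
print(weighted_mse(pred, target) > float(np.mean((pred - target) ** 2)))  # True
```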
Inference
Given a new image, the positions of the landmarks are obtained by inference, after preprocessing the image in a manner similar to that described above with reference to Figure 28. More precisely, the image can be resized, clipped, scaled, histogram-equalized, and centered. The network outputs one map per landmark, i.e., a total of six 3D maps in the case of six landmarks. These maps describe the probability of finding each landmark at each particular location. Alternatively, the maps can be considered to encode an inverse distance function to the ground-truth position of the landmark (that is, high values encode small distances and low values encode large distances).
In this way, the position 3302 of each landmark can be determined by searching for the maximum value of the neural network's output, as shown in the chart 3300 of Figure 33. This position is then projected into the space of the original, unpreprocessed 3D input MRI for the final landmark localization (for example, undoing any spatial distortions applied to the volume during inference). Note that several other strategies can be used to convert the label map into landmark position coordinates. For example, the label map can be used as a 3D probability density over the expected position; note that taking the maximum corresponds to taking the mode of that density. Alternatively, the probability estimate can first be smoothed before selecting the maximum or the expected value as the position.
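The maximum-value readout and the projection back to the original image space might be sketched as follows. The simple per-axis scale-and-offset stands in for undoing whatever resizing was applied during preprocessing, and the names are our own:

```python
import numpy as np

def landmark_from_heatmap(heatmap, scale=(1.0, 1.0, 1.0), offset=(0, 0, 0)):
    """Take the voxel of maximum predicted probability as the landmark,
    then map it back toward the original (unpreprocessed) image space
    via a per-axis scale and offset."""
    idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return tuple(i * s + o for i, s, o in zip(idx, scale, offset))

# Toy heatmap peaked at voxel (3, 5, 2); the original volume was twice
# as large along each axis before preprocessing.
hm = np.zeros((8, 8, 8)); hm[3, 5, 2] = 0.9
print(landmark_from_heatmap(hm, scale=(2.0, 2.0, 2.0)))  # (6.0, 10.0, 4.0)
```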
Data collection
In at least some embodiments, the dataset consists of clinical studies previously uploaded by users of the web application. Users may have placed annotations on different images. As mentioned previously, the dataset is split into a training set, a validation set, and a test set.
The neural network can be trained using the pipeline described above and shown in Figure 8. Batches of data extracted from the training set are fed sequentially to the neural network. The gradient of the loss between the network's predictions and the ground-truth landmark positions is computed and backpropagated to update the network's internal parameters. As described above, the other model hyperparameters (such as network size and shape) are selected using a hyperparameter search.
Model accessibility
The trained model can be stored on a server as part of a cloud service. At inference time, the model can be loaded on multiple servers to detect landmarks at several time points in parallel. This process is similar to the method described above and shown in Figure 9 for DeepVentricle.
User interaction
When a cardiac MRI is uploaded to the web application, the user can select a "view" button under the "cardiac" section. This opens a new panel to the right of the image with a "locate landmarks" button. Selecting this button then automatically locates the six landmarks on each 3D image at every time point. A list of the located landmarks is visible in the right panel. Selecting a landmark's name brings the focus of the image to the predicted position of that landmark, allowing the user to make any modifications deemed necessary. Once satisfied, the user can select a standard-views button to construct the standard 2-chamber, 3-chamber, 4-chamber, and SAX views of the heart.
In at least some embodiments, the acquired 3D images are a 4D Flow sequence. This means that the phase of the signal is also acquired and can be used to quantify the velocity of blood flow in the heart and arteries, as shown in the image 3400 of Figure 34, which illustrates four different views. This information can be used to locate the different landmarks of the heart. In this case, the previously described models can be extended to include the flow information.
Image preprocessing
In 4D Flow, flow velocity information can be acquired at each acquisition time point for each patient. To make full use of this information, the standard deviation along the time axis can be computed at each voxel of the 3D image. The magnitude of this standard deviation correlates with how much the blood flow at that pixel varies over the course of a heartbeat. The standard-deviation image is then normalized according to the normalization pipeline described earlier: resizing, clipping, scaling, histogram equalization, and centering. Note that several other methods can be considered for encoding the temporal information of the flow data. For example, the Fourier transform of the 4D signal can be computed along the last dimension, and the signal can be encoded using various frequency bins. More generally, the entire time series can be fed to the network, at the cost of additional computation and memory.
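The per-voxel temporal standard deviation could be computed as in the following sketch, assuming the flow magnitude is available as an (X, Y, Z, NTIMES) array (names are illustrative):

```python
import numpy as np

def flow_std_volume(flow_mag):
    """Collapse the time axis of a per-voxel flow-magnitude series
    (X, Y, Z, NTIMES) into one 3D volume of temporal standard deviations.
    Large values mark voxels whose flow varies strongly over a heartbeat,
    e.g. near valves."""
    return flow_mag.std(axis=-1)

# One pulsatile voxel in an otherwise static toy volume.
t = np.linspace(0, 2 * np.pi, 20)
flow = np.zeros((4, 4, 4, 20))
flow[1, 1, 1, :] = np.sin(t)
std_vol = flow_std_volume(flow)
print(np.unravel_index(std_vol.argmax(), std_vol.shape))  # (1, 1, 1)
```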
Network extension
The input of the neural network can also be extended with additional channels. More precisely, a four-dimensional (4D) tensor can be defined in which the last dimension encodes the anatomical image intensity and the flow magnitude or velocity components as separate channels. The network described above can be extended to accept this tensor as input, which requires extending the first layer to accept a 4D tensor. The subsequent steps of network training, inference, and user interaction remain similar to those previously described.
In at least some embodiments, the automatic localization of the cardiac landmarks can be achieved by directly predicting the coordinates (x, y, z) of the different landmarks. For this purpose, a different network architecture can be used. This alternative network can consist of a contracting path followed by several fully connected layers, with a vector of length 3 holding the (x, y, z) coordinates as the output for each landmark. This is a regression network rather than a segmentation network. Note that, unlike a segmentation network, a regression network has no expanding path. Other architectures with the same output format can also be used. In at least some embodiments, if 4D data (x, y, z, time) are provided as input, time can also be included in the output as a fourth dimension.
Assuming time is not incorporated, the output of the network is 18 scalars, corresponding to the three coordinates of each of the six landmarks in the input image. Such an architecture can be trained in a manner similar to the previously described landmark detector. The only required update is a reformulation of the loss, to account for the change in the network's output format (a spatial point in this embodiment, as opposed to the probability map used in the first embodiment). One reasonable loss function is the L2 (squared) distance between the network output and the ground-truth landmark coordinates, but other loss functions can also be used, provided the loss correlates with the quantity of interest, i.e., the distance error.
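The L2 loss on the 18-scalar output might look like this sketch; shapes and names are our own:

```python
import numpy as np

def coord_l2_loss(pred, truth):
    """Squared-Euclidean loss between predicted and ground-truth landmark
    coordinates. `pred` and `truth` have shape (6, 3): one (x, y, z)
    triple per landmark, 18 scalars in total."""
    return float(np.sum((np.asarray(pred) - np.asarray(truth)) ** 2))

truth = np.zeros((6, 3))
pred = np.zeros((6, 3))
pred[0] = (1.0, 2.0, 2.0)            # one landmark predicted 3 units away
print(coord_l2_loss(pred, truth))    # 9.0
```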
The first landmark detection embodiment discussed above can also be extended with a second neural network serving as a discriminator. The discriminator network can be trained to distinguish good landmark positions, i.e., ground truth, from bad positions that are not ground truth. In that case, the initial network of the embodiment can be used to generate several landmark proposals for each type of landmark, for example by using all local maxima of the predicted landmark probability distribution. The discriminator network can then score each proposal, for example by applying a classification architecture to a high-resolution patch surrounding the proposed landmark. The proposal with the highest probability of being a ground-truth landmark is then produced as the output. This embodiment may help select the correct landmark position in ambiguous cases, for example when noise or artifacts are present.
Another approach to detecting cardiac landmarks uses reinforcement learning. In this different framework, an agent is considered that walks through the 3D image. The agent is first placed at the center of the image and then follows a policy until it reaches the position of the landmark. The policy represents the agent's decision process at each step: move left, right, up, down, forward, or backward. The policy can be learned with a deep neural network that approximates the Bellman equation of the state-action value function Q using the Q-learning algorithm. One Q function can then be learned for each landmark to be detected.
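For intuition, the Bellman update that such a deep network would approximate can be written in tabular form. This toy sketch (the state and action encodings are our own) shows the underlying Q-learning update rule, not the embodiment's network:

```python
import numpy as np

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning (Bellman) update. States index grid
    positions; actions are the six moves (left/right/up/down/fwd/back)."""
    target = reward + gamma * np.max(q[next_state])
    q[state, action] += alpha * (target - q[state, action])
    return q

q = np.zeros((4, 6))                 # 4 toy states, 6 actions
q = q_update(q, state=0, action=2, reward=1.0, next_state=1)
print(round(q[0, 2], 3))             # 0.1 -- value moves toward the target
```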
In at least some embodiments, a neural network can be used to directly predict the parameters defining the position and orientation of the planes of the standard views. For example, a network can be trained to compute the 3D rotation angles, translation, and scaling needed to move the original pixel grid to a long-axis view. Separate models can be trained to predict the different transformations, or a single model can be used to output multiple views.
Interface for defining valve planes for manual LV/RV volumes
To segment the left and right ventricles more accurately, it can be advantageous to identify the positions and orientations of the heart valves. In at least some embodiments of the ventricular segmentation interface described above, the user is able to use the available long-axis views to mark points lying on a valve plane. The valve plane is determined from these input points by performing a regression to find the plane of best fit. The normal of the plane is set to point away from the apex of the ventricle. Once the plane is defined, any portion of the volume lying on its positive side is subtracted from the total volume of the ventricle. This ensures that no portion outside the valve is included when determining the ventricular volume.
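The best-fit valve plane and its apex-oriented normal could be computed as in the following sketch, a least-squares fit via SVD (the function name and return convention are ours):

```python
import numpy as np

def fit_valve_plane(points, apex):
    """Least-squares plane through user-placed valve points (SVD on the
    centered coordinates); the normal is flipped, if needed, so that it
    points away from the ventricular apex. Returns (centroid, unit normal).
    Voxels on the positive side of this plane would be excluded from the
    ventricular volume."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                       # direction of least variance
    if np.dot(normal, centroid - np.asarray(apex)) < 0:
        normal = -normal                  # orient away from the apex
    return centroid, normal

# Four coplanar points on the plane z = 5, apex below at z = 0.
pts = [(0, 0, 5), (1, 0, 5), (0, 1, 5), (1, 1, 5)]
c, n = fit_valve_plane(pts, apex=(0.5, 0.5, 0.0))
print(n[2] > 0.999)                       # True -- normal points away (+z)
```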
Illustrative processor-based device
Figure 35 shows an environment 3500 comprising a processor-based device 3504 suitable for implementing the various functionality described herein. Although not required, some portions of the embodiments will be described in the general context of processor-executable instructions or logic, such as program application modules, objects, or macros executed by one or more processors. Those skilled in the relevant art will recognize that the described embodiments, as well as other embodiments, can be practiced with various processor-based system configurations, including handheld devices such as smartphones and tablet computers, wearable devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, personal computers ("PCs"), network PCs, minicomputers, mainframe computers, and the like.
The processor-based device 3504 may include one or more processors 3506, a system memory 3508, and a system bus 3510 that couples various system components, including the system memory 3508, to the processors 3506. The processor-based device 3504 will at times be referred to herein in the singular, but this is not intended to limit the embodiments to a single system, since in some embodiments more than one system or other networked computing devices will be involved. Non-limiting examples of commercially available systems include, but are not limited to, ARM processors from a variety of manufacturers, Core microprocessors from Intel Corporation, U.S.A., Sparc microprocessors from Sun Microsystems, Inc., PowerPC microprocessors from IBM, PA-RISC series microprocessors from Hewlett-Packard Company, and 68xxx series microprocessors from Motorola Corporation.
The processor 3506 may be any logic processing unit, such as one or more central processing units (CPUs), microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), etc. Unless described otherwise, the construction and operation of the various blocks shown in Figure 35 are of conventional design. As a result, such blocks need not be described in further detail herein, as they will be understood by those skilled in the relevant art.
The system bus 3510 can employ any known bus structure or architecture, including a memory bus with a memory controller, a peripheral bus, and a local bus. The system memory 3508 includes read-only memory ("ROM") 3512 and random access memory ("RAM") 3515. A basic input/output system ("BIOS") 3516, which can form part of the ROM 3512, contains basic routines that help transfer information between elements within the processor-based device 3504, such as during start-up. Some embodiments may employ separate buses for data, instructions, and power.
The processor-based device 3504 may also include one or more solid-state memories, for instance flash memory or a solid-state drive, which provides nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the processor-based device 3504. Although not depicted, the processor-based device 3504 can employ other nontransitory computer- or processor-readable media, for example a hard disk drive, an optical disk drive, or a memory card media drive.
Program modules can be stored in the system memory 3508, such as an operating system 3530, one or more application programs 3532, other programs or modules 3534, drivers 3536, and program data 3538.
The application programs 3532 may, for example, include panning/scrolling logic 3532a. Such panning/scrolling logic may include, but is not limited to, logic that determines when and/or where a pointer (e.g., a finger, a stylus, a cursor) enters a user interface element that includes a region having a central portion and at least one margin. Such panning/scrolling logic may also include, but is not limited to, logic that determines the direction and the rate at which at least one element of the user interface element should appear to move, and that updates the display to cause the at least one element to appear to move in the determined direction at the determined rate. The panning/scrolling logic 3532a may, for example, be stored as one or more executable instructions. The panning/scrolling logic 3532a may include processor- and/or machine-executable logic or instructions to generate user interface objects using data that characterizes movement of a pointer, for example data from a touch-sensitive display or from a computer mouse or trackball or other user interface device.
The system memory 3508 may also include communications programs 3540, for example a server and/or a Web client or browser for permitting the processor-based device 3504 to access and exchange data with other systems, such as user computing systems, Web sites on the Internet, corporate intranets, or other networks as described below. The communications program 3540 in the depicted embodiment is markup-language based, such as Hypertext Markup Language (HTML), Extensible Markup Language (XML), or Wireless Markup Language (WML), and operates with markup languages that use syntactically delimited characters added to the data of a document to represent the structure of the document. A number of servers and/or Web clients or browsers are commercially available, such as those from Mozilla Corporation of California and Microsoft of Washington.
While shown in Figure 35 as being stored in the system memory 3508, the operating system 3530, application programs 3532, other programs/modules 3534, drivers 3536, program data 3538, and the server and/or communications program 3540 (e.g., browser) can be stored on any of a variety of other nontransitory processor-readable media (e.g., hard disk drive, optical disk drive, SSD, and/or flash memory).
A user can enter commands and information via a pointer, for example through input devices such as a touch screen 3548 via a finger 3544a or a stylus 3544b, or via a computer mouse or trackball 3544c that controls a cursor. Other input devices can include a microphone, joystick, game pad, tablet, scanner, biometric scanning device, etc. These and other input devices (i.e., "I/O devices") are connected to the processor 3506 through an interface 3546 such as a touch-screen controller and/or a universal serial bus ("USB") interface that couples user input to the system bus 3510, although other interfaces such as a parallel port, a game port, a wireless interface, or a serial port may be used. The touch screen 3548 can be coupled to the system bus 3510 via a video interface 3550, such as a video adapter, to receive image data or image information for display via the touch screen 3548. Although not shown, the processor-based device 3504 can include other output devices, such as speakers, vibrators, haptic actuators, etc.
The processor-based device 3504 can operate in a networked environment using one or more of the logical connections to communicate with one or more remote computers, servers, and/or devices via one or more communications channels, for example one or more networks 3514a, 3514b. These logical connections can facilitate any known method of permitting computers to communicate, such as through one or more LANs and/or WANs, such as the Internet and/or a cellular communications network. Such networking environments are well known in wired and wireless enterprise-wide computer networks, intranets, extranets, the Internet, and other types of communications networks, including telecommunications networks, cellular networks, paging networks, and other mobile networks.
When used in a networking environment, the processor-based device 3504 may include one or more wired or wireless communications interfaces 3552a, 3552b (e.g., cellular radios, WI-FI radios, Bluetooth radios) for establishing communications over the network, for instance the Internet 3514a or a cellular network 3514b.
In a networked environment, program modules, application programs, or data, or portions thereof, can be stored in a server computing system (not shown). Those skilled in the relevant art will recognize that the network connections shown in Figure 35 are only some examples of ways of establishing communications between computers, and other connections may be used, including wireless connections.
For convenience, the processor(s) 3506, system memory 3508, and network and communications interfaces 3552a, 3552b are illustrated as communicably coupled to each other via the system bus 3510, thereby providing connectivity between the above-described components. In alternative implementations of the processor-based device 3504, the above-described components may be communicably coupled in a different manner than illustrated in Figure 35. For example, one or more of the above-described components may be directly coupled to other components, or may be coupled to each other via intermediary components (not shown). In some implementations, system bus 3510 is omitted, and the components are coupled directly to each other using suitable connections.
FastVentricle
Cardiac magnetic resonance (CMR) imaging is commonly used to assess cardiac function and structure. One disadvantage of CMR is that its post-processing is cumbersome. Without automation, precise assessment of cardiac function by CMR typically requires an annotator to spend tens of minutes per case manually contouring ventricular structures. By generating contour suggestions that can be lightly modified by the annotator, the time required to contour each patient can be reduced. Fully convolutional networks (FCNs), a variant of convolutional neural networks, have been used to rapidly advance the state of the art in automated segmentation, making FCNs a natural choice for ventricular segmentation. However, FCNs are limited by their computational cost, which increases the monetary cost and degrades the user experience of production systems. To overcome this drawback, we have developed the FastVentricle architecture, an FCN architecture for ventricular segmentation based on the recently developed ENet architecture. FastVentricle is 4x faster and runs with 6x less memory than the previous state-of-the-art ventricular segmentation architecture, while still maintaining excellent clinical accuracy.
FastVentricle Introduction
Patients with known or suspected cardiovascular disease often receive cardiac MRI to assess cardiac function. These scans are annotated with ventricular contours in order to calculate cardiac volumes at end-systole (ES) and end-diastole (ED). From the cardiac volumes, relevant diagnostic quantities such as ejection fraction and myocardial mass can be computed. Manually contouring each case can take upwards of 30 minutes, so radiologists commonly use automation tools to help accelerate the process.
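The relationship between the two chamber volumes and the ejection fraction mentioned above is a simple ratio. As a minimal illustration (the disclosure does not prescribe a particular implementation, so the function name and units are assumptions):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction as a percentage: the fraction of the
    end-diastolic volume (EDV) ejected each beat,
    EF = 100 * (EDV - ESV) / EDV."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```

For example, an EDV of 120 mL and an ESV of 50 mL give an ejection fraction of about 58%.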
Figure 36 shows a schematic diagram of a fully convolutional encoder-decoder architecture with skip connections, which utilizes an expanding path smaller than its contracting path.
Active contour models are heuristics-based segmentation methods that have previously been used for ventricular segmentation. See Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. International Journal of Computer Vision (1988) 321-331; Zhu, W. et al.: A geodesic-active-contour-based variational model for short-axis cardiac MRI segmentation. Int. Journal of Computer Math. 90(1) (2013). However, active-contour-based methods not only perform poorly on low-contrast images, but are also sensitive to initialization and hyperparameter values. More recently, deep learning methods for segmentation have been established as the state of the art through the use of fully convolutional networks (FCNs). See Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE CVPR. (2015) 3431-3440. The general design idea behind FCNs is to use a downsampling path to learn relevant features at a variety of spatial scales, then to combine those features for a pixel-by-pixel prediction using an upsampling path (see Figure 36). DeconvNet pioneered the use of a symmetric contracting-expanding architecture for more detailed segmentation, at the cost of longer training and inference times and greater demands on computational resources. See Noh, H., Hong, S., Han, B.: Learning deconvolution network for semantic segmentation. In: Proceedings of the IEEE ICCV. (2015) 1520-1528. U-Net, originally developed for the biomedical community, which typically has fewer training images and requires higher resolution, added skip connections between the contracting and expanding paths to preserve detail. See Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer (2015) 234-241.
One drawback of a fully symmetric architecture, with a one-to-one correspondence between downsampling and upsampling layers, is that it can be slow, especially for large input images. The alternative ENet FCN is an asymmetric architecture optimized for speed. See Paszke, A., Chaurasia, A., et al.: ENet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147 (2016). ENet uses early downsampling with only a small number of feature maps to reduce the input size. This improves speed, since most of the network's computational load occurs while the image is at full resolution, with minimal impact on accuracy, since much of the visual information at this stage is redundant. In addition, the ENet authors show that the main purpose of the expanding path in an FCN is to upsample and fine-tune the details learned by the contracting path, rather than to learn complex upsampling features; accordingly, ENet utilizes an expanding path smaller than its contracting path. ENet also uses bottleneck modules, convolutions with small receptive fields that project feature maps into a lower-dimensional space in which larger kernels can be applied. See He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE CVPR. (2016) 770-778. The bottlenecks also contain residual connections as in the paper of He, K., cited immediately above. ENet also uses a path parallel to the bottleneck path, containing only zero or more pooling layers, to pass information directly from high-resolution layers to low-resolution layers. Finally, throughout the network, ENet utilizes a variety of inexpensive convolution operations. In addition to the more expensive n × n convolutions, ENet also uses cheaper asymmetric (1 × n and n × 1) convolutions and dilated convolutions. See Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122 (2015).
Deep learning has been successfully applied to ventricular segmentation. See Avendi, M. et al.: A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI. MedIA 30 (2016); Tran, P.V.: A fully convolutional neural network for cardiac segmentation in short-axis MRI. arXiv preprint arXiv:1604.00494 (2016). Here, we propose FastVentricle, an ENet variant with UNet-style skip connections, for segmenting LV Endo, LV Epi, and RV Endo. More specifically, we add the option of skip connections from the contracting path to the expanding path wherever the image sizes are similar. In detail, we add skip connections between the output of the initial block and the input of Section 5, and between the output of Section 1 and the input of Section 4 (for the names of the sections, see Paszke, above). In this disclosure, we compare FastVentricle against a previous UNet variant (DeepVentricle). Lau, H.K. et al.: DeepVentricle: Automated cardiac MRI ventricle segmentation using deep learning. Conference on Machine Intelligence in Medical Imaging (2016). We find that inference with FastVentricle requires much less time and memory than inference with DeepVentricle, and that FastVentricle achieves segmentation accuracy comparable to DeepVentricle.
FastVentricle Methods
Training data.We use 1143 short axle film steady state free precession (SSFP) scan databases, the database quilt
Annotation is a part of cooperative institution's standard clinical nursing, with training and the model for verifying us.We will count in chronological order
According to being divided into 80% for training, 10% for verifying, and 10% is used as reservation collection.What is discussed in the section part of the disclosure is all
Experiment uses verifying collection.The types of profiles of annotation includes LV Endo, LV Epi and RV Endo.To scanning result at ED and ES
It is annotated.Profile is annotated with different frequencies;The scanning result of 96% (1097) has LV Endo profile, 22% (247)
Scanning result has LV Epi profile, and the scanning result of 85% (966) has RV Endo profile.
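The chronological 80/10/10 split described above can be sketched as follows; the `date` key on each scan record is an illustrative assumption, not a field named in the disclosure:

```python
def chronological_split(scans, train_frac=0.8, val_frac=0.1):
    """Sort scans by acquisition date, then take the earliest 80% for
    training, the next 10% for validation, and the remainder as the
    hold-out set."""
    ordered = sorted(scans, key=lambda s: s["date"])
    n = len(ordered)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (ordered[:n_train],
            ordered[n_train:n_train + n_val],
            ordered[n_train + n_val:])
```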
Training.In at least some embodiments, we use the Keras deep learning packet with TensorFlow as
Our all models are realized and trained in rear end, although other deep learning packets are also enough.Referring to Chollet, F.:
Keras.https://github.com/fchollet/keras(2015);Abadi,M.,Agarwal,A.,Barham,P.,
Brevdo, E., Chen, Z., Citro, C, Corrado, G.S., Davis, A., Dean, J., Devin, M. etc.:
Tensorflow:Large-scale machine learning on heterogeneous distributed
systems.arXiv preprint arXiv:1603.04467(2016).We modify the intersection entropy loss pixel-by-pixel of standard,
The true value annotation lacked in data set to explain us.We, which have abandoned, carries out calculated loss to the image for lacking true value
Ingredient;The known ingredient lost of our backpropagation true value.This enables us to instruct complete training dataset
Practice, including having the sequence of missing profile.In at least some embodiments, according to Adam Policy Updates weight.With reference to
Kingma,D.,Ba,J.:Adam:A method for stochastic optimization.arXiv preprint
arXiv:1412.6980(2014).In at least some embodiments, we monitor that precision is pixel-by-pixel to determine when model is received
It holds back.For more different models, we are using opposite absolute volume error (RAVE), because volume accuracy is for accurately sending
Raw diagnosis amount is vital.RAVE is defined as | Vpred-Vtruth |/Vtruth, wherein Vtruth is true value volume,
Vpred is the volume that segmentation mask calculating is predicted according to the 2D of collection.Using opposite measurement may insure to give Children heart and
The equal weight of human adult heart.Frustum approximation can be used, volume is calculated according to segmentation mask.
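The RAVE definition and the frustum approximation above can be sketched as follows. The disclosure gives only the RAVE formula; the per-slice frustum formula below (treating adjacent short-axis slice areas as the faces of a conical frustum) is one common reading of "frustum approximation", not the only one:

```python
import math

def frustum_volume(slice_areas, slice_spacing):
    """Sum conical-frustum volumes between each pair of adjacent
    short-axis slice areas: V = h/3 * (A1 + A2 + sqrt(A1 * A2))."""
    return sum(slice_spacing / 3.0 * (a1 + a2 + math.sqrt(a1 * a2))
               for a1, a2 in zip(slice_areas[:-1], slice_areas[1:]))

def rave(v_pred, v_truth):
    """Relative absolute volume error: |Vpred - Vtruth| / Vtruth."""
    return abs(v_pred - v_truth) / v_truth
```

For a stack of equal slice areas (a cylinder), the frustum sum reduces to area times height, as expected.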
Data prediction.In at least some embodiments, we are standardized all MRI, so that a collection of image
The the 1st and the 99th percentile fall in -0.5 and 0.5, i.e. their " usable range " falls in -0.5 to 0.5.Other standards side
Case, such as self-adapting histogram equilibrium and feasible.We cut and adjust image size so that ventricle profile occupy compared with
The image of large scale;Actually cutting and be sized factor is hyper parameter.Cutting image will increase prospect (ventricle) class and occupy
Image scaled, to be easier to differentiate fine detail and model is helped to restrain.
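A minimal sketch of the percentile normalization described above, assuming the percentiles are computed over the whole batch at once:

```python
import numpy as np

def normalize_batch(images):
    """Linearly rescale intensities so the batch's 1st and 99th
    percentiles land at -0.5 and +0.5."""
    p1, p99 = np.percentile(images, [1, 99])
    return (images - p1) / (p99 - p1) - 0.5
```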
Figure 37 shows box plots comparing the relative absolute volume error (RAVE) between FastVentricle and DeepVentricle for each of LV Endo, LV Epi, and RV Endo at ED (left column) and ES (right column). The center line of each box denotes the median RAVE, and the ends of the box show the 25th (Q1) and 75th (Q3) percentiles of the distribution. The whiskers are defined according to the Matplotlib defaults.
Hyperparameter search. We use random hyperparameter search to fine-tune the ENet and UNet network architectures. See Bergstra, J., Bengio, Y.: Random search for hyper-parameter optimization. Journal of Machine Learning Research 13 (Feb) (2012) 281-305. In practice, for each of the UNet and ENet architectures, we i) run models with random hyperparameter sets for a fixed number of epochs, ii) select from the resulting corpus of models the N models with the highest validation accuracy (where N is a predetermined small integer), and iii) select the final model from the N candidates based on the lowest mean RAVE. In at least some implementations, the hyperparameters of the UNet architecture include the use of batch normalization, the dropout probability, the number of convolutional layers, the number of initial filters, and the number of pooling layers. In at least some implementations, the hyperparameters of the ENet architecture include the kernel size of the asymmetric convolutions, the number of repetitions of Section 2 of the network, the number of initial bottleneck modules, the number of initial filters, the projection ratio, the dropout probability, and whether to use skip connections (for details on these parameters, see Paszke, cited above). For both architectures, in at least some implementations, the hyperparameters also include the batch size, learning rate, crop ratio, and image size.
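The three-step search procedure (random sampling, filtering by validation accuracy, final selection by mean RAVE) might be sketched as below. The dictionary-of-lists search space and the `train_and_eval` callback are illustrative assumptions; in practice each trial would train a full network:

```python
import random

def random_hyperparameter_search(space, train_and_eval, n_trials=20, top_n=3):
    """i) train models with randomly drawn hyperparameter sets,
    ii) keep the top-N by validation pixel accuracy,
    iii) return the candidate with the lowest mean RAVE.
    `train_and_eval(cfg)` must return (val_accuracy, mean_rave)."""
    trials = []
    for _ in range(n_trials):
        cfg = {name: random.choice(choices) for name, choices in space.items()}
        val_acc, mean_rave = train_and_eval(cfg)
        trials.append((val_acc, mean_rave, cfg))
    candidates = sorted(trials, key=lambda t: t[0], reverse=True)[:top_n]
    return min(candidates, key=lambda t: t[1])[2]
```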
FastVentricle Results
Note that these results describe a single implementation of FastVentricle, and different results can be achieved using different design parameters.
Volumetric error analysis. Figure 37 shows box plots of RAVE comparing DeepVentricle and FastVentricle for each combination of ventricular structure (LV Endo, LV Epi, RV Endo) and phase (ES, ED), with sample sizes as noted in Table 2 below. We find that the performance of the models is very similar across structures and phases. Specifically, the median RAVE values are: i) for LV Endo, 4.5% for DeepVentricle and 5.5% for FastVentricle; ii) for LV Epi, 5.6% for DeepVentricle and 4.2% for FastVentricle; iii) for RV Endo, 7.6% for DeepVentricle and 9.0% for FastVentricle. For both models, ES is the more difficult phase because the regions to be segmented are smaller, and RV Endo is the most difficult structure for both models because of its more complex shape. Although training was performed only on ED and ES annotations, we can run visually plausible inference at all time points. Figure 39 shows examples of network predictions at different slices and time points for low-RAVE studies for DeepVentricle and FastVentricle. In particular, Figure 39 shows DeepVentricle and FastVentricle predictions for a low-RAVE healthy patient (top) and a patient with hypertrophic cardiomyopathy (bottom). RV Endo is marked in red, LV Endo in green, and LV Epi in blue. The x-axis of the grid corresponds to the time indices sampled across the cardiac cycle, and the y-axis corresponds to the slice indices sampled from the apex (low slice index) to the base (high slice index). Model performance at the apex and center of the ventricles is better than at the base, because the ground-truth delineation near the valve plane (which separates the ventricles from the atria) tends to be ambiguous. In addition, segmentation is often better at ED than at ES, because the chambers are smaller at ES, and when the heart contracts, the dark papillary muscles tend to blend with the myocardium.
Finally, we note that the 5 ENet models from our hyperparameter search with the best validation-set accuracy all use skip connections, demonstrating the value of skip connections for this problem.
| | DeepVentricle | FastVentricle |
| Mean RAVE | 0.089 | 0.093 |
| Inference GPU time per sample (ms) | 31 | 7 |
| GPU initialization time (s) | 1.3 | 13.3 |
| Number of parameters | 19,249,059 | 755,529 |
| GPU memory required for inference (MB) | 1,800 | 270 |
| Size of weights file (MB) | 220 | 10 |
Table 1: Accuracy, model speed, and computational complexity of DeepVentricle and FastVentricle. The inference time per sample and the GPU memory required for inference are calculated with a batch size of 16.
Statistical analysis. We measure the statistical significance of the differences between the RAVE distributions of DeepVentricle and FastVentricle for those combinations of phase and anatomical structure for which we have ground-truth annotations. We use the Wilcoxon-Mann-Whitney test, using the SciPy 0.17.0 implementation with default parameters, to evaluate the null hypothesis that the RAVE distributions of DeepVentricle and FastVentricle are equal. Table 2 shows the results. We find no statistical evidence that one model is better, since the smallest measured p-value is 0.1.
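The disclosure uses SciPy's implementation with default parameters; purely for illustration, here is a hedged, pure-Python sketch of the underlying U statistic itself (a p-value would additionally require the normal approximation or exact tables):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y:
    the count of pairs (xi, yj) with xi > yj, counting ties as 1/2."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u
```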
Computational complexity and inference speed. To be clinically and commercially viable, any automated algorithm should be faster than manual annotation and lightweight enough to be easy to use. As shown in Table 1, we find that this implementation of FastVentricle is approximately 4x faster than DeepVentricle and runs inference with one-sixth of the memory. Because the model contains more layers, FastVentricle takes longer to initialize before it is ready to run inference. In a production setting, however, the model need only be initialized once when the server is configured, so this additional cost is inconsequential.
Internal representation. Neural networks can be inconvenient because of their black-box nature, i.e., it is difficult to "see inside" them and understand why they make certain predictions. This is especially problematic in a medical setting, since doctors prefer to use tools they can understand. We follow Mordvintsev, A. et al.: Deep Dream, https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html (2015), accessed 2017-01-17, to visualize the function that DeepVentricle "looks for" when running inference. Starting from random noise as the model "input" and a ground-truth segmentation mask as the target, we run backpropagation to update the pixel values of the input image so as to minimize the loss. Figure 38 shows the results of this optimization for DeepVentricle and FastVentricle. We find that, like a doctor, the models are confident in their predictions when the endocardium is very bright and has very high contrast with the epicardium. The models appear to have learned to ignore the anatomy surrounding the heart. We also note that the optimized input for DeepVentricle is less noisy than the optimized input for FastVentricle, possibly because the DeepVentricle model is larger and uses skip connections at the full resolution of the input image. DeepVentricle also appears to "imagine" structures that look like papillary muscles inside the ventricles.
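The visualization procedure above (fix the trained network, gradient-descend on the input pixels) can be illustrated at toy scale. The linear "network" below is purely a stand-in for the trained FCN, chosen so the gradient can be written in closed form; the real experiment backpropagates through DeepVentricle or FastVentricle:

```python
import numpy as np

def optimize_input(weights, target, steps=500, lr=0.1, seed=0):
    """Start from random noise and gradient-descend on the *input* so a
    fixed model (here simply x -> x @ weights) matches the target,
    minimizing squared error, mirroring the mask-fitting optimization."""
    rng = np.random.default_rng(seed)
    x = 0.1 * rng.standard_normal(weights.shape[0])
    for _ in range(steps):
        err = x @ weights - target        # forward pass
        x -= lr * 2.0 * (weights @ err)   # gradient of ||x @ W - t||^2 w.r.t. x
    return x
```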
Table 2: U statistics and p-values from the Wilcoxon-Mann-Whitney test, with corresponding sample sizes, used to compare DeepVentricle and FastVentricle on the validation set for each combination of phase and ventricular anatomical structure. Given the available data, we see no statistically significant difference between DeepVentricle and FastVentricle.
Figure 38 shows a random input (left) that is optimized using gradient descent for DeepVentricle and FastVentricle (middle) to fit a label map (right; RV Endo in red, LV Endo in cyan, LV Epi in blue). The generated images have many of the qualities the networks "look for" when making predictions, such as high contrast between the endocardium and the epicardium and the presence of papillary muscles.
FastVentricle Discussion
Performance.Although precision may be model most important attribute, algorithm execution speed pair when making clinical decision
It is also most important in the positive user experience of maintenance and minimum architecture cost.In our scope of experiment, Wo Menfa
There is no statistical significant difference between existing DeepVentricle and the precision of 4 speed FastVentricle.This shows
FastVentricle can replace DeepVentricle without having an adverse effect in clinical setting.
FastVentricle Conclusion
We have shown that a new ENet-based FCN with skip connections (FastVentricle) can be used to segment cardiac anatomy quickly and efficiently. Our algorithm, trained on a sparsely annotated database, provides clinicians with LV Endo, LV Epi, and RV Endo contours for the calculation of important diagnostic quantities such as ejection fraction and myocardial mass. FastVentricle is 4x faster and runs with 6x less memory than the previous state of the art.
Papillary and trabecular muscle segmentation
When assessing the left ventricle with cardiac magnetic resonance, two main structures are of primary importance: the myocardium and the blood pool of the ventricle (i.e., the blood inside the cardiac ventricle). Between these two structures lie the papillary and trabecular muscles, small muscles inside the cardiac ventricle that are adjacent to both the myocardium and the blood pool. Different institutions have different policies on whether the papillary and trabecular muscles should be included in the volume of the blood pool when assessing ventricular blood pool volume. Technically, to assess blood volume, the papillary and trabecular muscles should be excluded from the contour delimiting the blood pool. However, because of the relatively regular shape of the inner boundary of the myocardium and the relatively irregular shape of the papillary and trabecular muscles, for convenience the contour delimiting the blood pool is often taken to be the inner boundary of the myocardium. In that case, the volume of the papillary and trabecular muscles is included in the blood pool volume, leading to a slight overestimation of the blood pool volume.
Figure 40 is an image 4000 showing the relevant portions of cardiac anatomy, including the myocardium 4002 surrounding the left ventricle. The blood pool 4004 of the left ventricle is also shown. The contour 4006 of the epicardium (i.e., the outer surface of the heart), referred to herein as the epicardial contour 4006, defines the outer boundary of the myocardium of the left ventricle. The endocardial contour 4008 (i.e., the surface separating the blood pool of the left ventricle from the myocardium), referred to herein as the endocardial contour, defines the inner boundary of the left ventricular myocardium. Note that in Figure 40, the endocardial contour 4008 includes the papillary and trabecular muscles 4010 in its interior. Excluding the papillary and trabecular muscles 4010 from the interior of the endocardial contour 4008 would also be valid.
Figure 41 is an image 4100 illustrating the case in which the papillary and trabecular muscles 4110 are included in the interior of the endocardial contour 4108. The myocardium 4102 surrounding the left ventricle is also shown. Assuming the endocardial contour 4108 constitutes the boundary of the blood pool 4104, the measured volume of the blood pool will be slightly overestimated, since that volume also includes the papillary and trabecular muscles 4110.
Figure 42 is an image 4200 illustrating the alternative case in which the papillary and trabecular muscles 4210 are excluded from the endocardial contour 4208. The myocardium 4202 surrounding the left ventricle and the blood pool 4204 of the left ventricle are also shown. In this case, the estimate of the blood pool volume will be more accurate, but the contour 4208 is considerably more tortuous and, if drawn by hand, more cumbersome to delineate.
An automated system for delineating the endocardial boundary while excluding the papillary and trabecular muscles from the contour interior would be highly useful, since such a system would allow the volume of the blood pool to be measured with minimal annotator effort. However, such a system needs to rely on sophisticated image processing techniques to ensure that the delineation of the contour is insensitive to variations in human anatomy and in the magnetic resonance (MR) acquisition parameters.
Myocardial property identification and localization
Many types of studies can be performed with cardiac magnetic resonance, each of which can assess a different aspect of cardiac anatomy or function. Non-contrast steady-state free precession (SSFP) imaging is used to visualize anatomy for quantifying cardiac function. Gadolinium-contrast-based perfusion imaging is used to identify biomarkers of coronary artery stenosis. Gadolinium-contrast-based late gadolinium enhancement imaging is also used to assess myocardial infarction. In all of these imaging protocols, and in others, the anatomical orientations and the need for contours tend to be similar. Images are typically acquired in both a short-axis orientation (in which the imaging plane is parallel to the short axis of the left ventricle) and a long-axis orientation (in which the imaging plane is parallel to the long axis of the left ventricle). In all three imaging protocols, contours delineating the myocardium and the blood pool are used to assess different components of cardiac function and anatomy.
Although the imaging protocols for each of these types of imaging differ, the relative contrast between the blood pool and the myocardium is mostly consistent (the myocardium is darker than the blood pool); therefore, a single convolutional neural network (CNN) model can be used to delineate the myocardium and blood pool in all three imaging protocols. Using one CNN that operates equally on data from all of these imaging protocols, rather than a separate CNN for each imaging protocol, simplifies the management, deployment, and execution of the CNN model in practice. However, the CNN needs to be validated with ground-truth-contour-annotated image data for all of the imaging protocols in which it is expected to work.
Papillary and trabecular muscle segmentation
Figure 43 shows one implementation of a process 4300 for automatically delineating papillary and trabecular muscles from the ventricular blood pool. Initially, cardiac MRI image data 4302 and initial contours 4304 delineating the inner and outer boundaries of the myocardium are both available. The papillary and trabecular muscles are located inside the initial left ventricular endocardial contour; that is, they are included in the blood pool and excluded from the myocardium. From the contours, masks delimiting the myocardium and the blood pool (including the papillary and trabecular muscles) are computed at 4306. In at least some implementations, the masks delimiting the myocardium and blood pool are available at the start of process 4300 and need not be computed from the initial contours 4304.
At 4308, the intensity threshold that will be used to delineate the blood pool relative to the papillary and trabecular muscles is then computed. At least one implementation of the intensity threshold calculation is described below with reference to the method 4400 shown in Figure 44.
At 4310, the intensity threshold is then applied to the pixels in the blood pool mask. Those pixels comprise the blood pool and the papillary and trabecular muscles. After thresholding, pixels of high signal intensity are assigned to the blood pool class, and pixels of low signal intensity are assigned to the papillary and trabecular muscle class.
At 4312, in at least some implementations, connected component analysis is used to determine the largest connected component of the blood pool class pixels. Pixels that are part of the blood pool class because of their high signal intensity, but that are not part of the largest connected component of blood pool pixels, are considered holes within the papillary and trabecular muscles, and are converted to the papillary and trabecular muscle class.
In at least some implementations, the resulting boundary separating the papillary and trabecular muscles from the blood pool and the myocardium is then computed at 4314, and stored or displayed to the user. In at least some implementations, the pixels determined to be part of the blood pool are summed to determine the net volume of the ventricular blood pool. This volume can then be stored, displayed to the user, or used for subsequent calculations, such as cardiac ejection fraction.
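Steps 4310 to 4314 can be sketched in miniature as follows. The 4-connectivity breadth-first labeling is my own choice for a self-contained example; a library routine such as `scipy.ndimage.label` would serve equally well:

```python
from collections import deque

import numpy as np

def split_blood_pool(image, pool_mask, threshold):
    """Inside the blood-pool mask, pixels above the intensity threshold go
    to the blood-pool class (4310); only the largest 4-connected bright
    component is kept as blood pool, and remaining bright 'holes' are
    reassigned to the papillary/trabecular class (4312)."""
    bright = (image > threshold) & pool_mask
    labels = np.zeros(image.shape, dtype=int)
    sizes = {}
    next_label = 0
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            if bright[i, j] and labels[i, j] == 0:
                next_label += 1
                sizes[next_label] = 0
                labels[i, j] = next_label       # label on enqueue = visited
                queue = deque([(i, j)])
                while queue:
                    a, b = queue.popleft()
                    sizes[next_label] += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = a + da, b + db
                        if (0 <= u < image.shape[0] and 0 <= v < image.shape[1]
                                and bright[u, v] and labels[u, v] == 0):
                            labels[u, v] = next_label
                            queue.append((u, v))
    if not sizes:  # no bright pixels: everything is papillary/trabecular
        return np.zeros_like(pool_mask), pool_mask.copy()
    biggest = max(sizes, key=sizes.get)
    blood = labels == biggest
    papillary = pool_mask & ~blood
    return blood, papillary
```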
Figure 44 shows one example implementation of a process 4400 for computing the image intensity threshold. It should be appreciated that other methods can be used to compute the image intensity threshold. Initially, cardiac MRI image data 4402 and masks 4404 representing the myocardium and the blood pool are both available. The blood pool mask includes the blood pool and the papillary and trabecular muscles. The masks may be inferred from contours delineating the myocardium, or may be obtained by other methods. The papillary and trabecular muscles are included in the blood pool mask (see Figure 41).
At 4406, the pixel intensity distributions of the myocardium and the blood pool are computed. For each of the two distributions, a kernel density estimate of the pixel intensities can be computed at 4408. If the data are approximately normally distributed, Silverman's rule of thumb can be used to determine the kernel bandwidth for the density estimate. See, e.g., Silverman, Bernard W., Density estimation for statistics and data analysis, Volume 26, CRC Press, 1986. Alternatively, other bandwidths can be used based on the distribution of the data.
At 4410, the pixel intensity at which the density estimates overlap (i.e., the intensity at which the probability of drawing a given pixel intensity from the myocardial pixel intensity distribution equals the probability of drawing that pixel from the blood pool distribution) is then computed. This pixel intensity can be chosen as the intensity threshold for separating blood pool pixels from papillary and trabecular muscle pixels.
Figure 45 is a graph 4500 illustrating the overlap 4410 between the pixel intensity distributions of the blood pool and the myocardium. Example distributions of pixel intensities in the myocardium 4502 and in the blood pool 4504 are shown. The y-axis represents the probability distribution function, and the x-axis represents pixel intensity in arbitrary units. In the illustrated implementation, the threshold for separating the blood pool from the papillary and trabecular muscles is the overlap location 4506 between the two distributions 4502 and 4504.
Myocardial property identification and localization
Figure 46 shows one implementation of a process 4600 that uses a pre-trained CNN model to identify and localize myocardial properties. Initially, cardiac image data 4602 and a pre-trained CNN model 4604 are available. In at least one implementation of process 4600, the cardiac image data is a short-axis magnetic resonance (MR) acquisition, but other imaging planes (e.g., long-axis) and other imaging modalities (e.g., computed tomography or ultrasound) would work similarly. In at least some implementations, the trained CNN model 4604 has been trained on data of the same type as the cardiac image data 4602 (e.g., the same imaging modality, the same contrast agent injection protocol, and the same MR pulse sequence, if applicable). In other implementations, the trained CNN model 4604 has been trained on data of a different type from the cardiac image data 4602. In some implementations, the data on which the CNN model 4604 was trained are data from functional cardiac magnetic resonance imaging (e.g., via a non-contrast SSFP imaging sequence), and the cardiac image data are data from a cardiac perfusion or late myocardial enhancement study.
In at least some embodiments, the CNN model 4604 is trained on data of a different type than the cardiac image data 4602, and is then fine-tuned on data of the same type as the cardiac image data 4602 (that is, by retraining some or all of the layers, while some weights may be kept fixed).
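The freeze-and-retrain idea behind fine-tuning can be illustrated with a toy update step. The layer dictionary, `frozen` flag, and plain gradient-descent update below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def fine_tune_step(layers, grads, lr=0.01):
    """One fine-tuning update: layers flagged 'frozen' keep their
    pre-trained weights fixed, while the remaining layers are
    retrained with a gradient step."""
    for name, layer in layers.items():
        if not layer["frozen"]:
            # Retrain this layer on the new data type.
            layer["W"] = layer["W"] - lr * grads[name]
        # Frozen layers are skipped, so their weights stay fixed.
    return layers
```

In practice a deep-learning framework would mark parameters as non-trainable instead, but the effect is the same: only a chosen subset of layers adapts to the new imaging data.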
The trained CNN model 4604 is used for inference at 4606 to obtain the inner and outer myocardial contours. In at least some embodiments, the CNN model 4604 first generates one or more probability maps, which are then converted into contours. In at least some embodiments, the contours are post-processed at 4608 to minimize the probability that tissue which is not part of the myocardium is included in the region delineated as myocardium. This post-processing can take various forms. For example, the post-processing may include applying a morphological operation (such as morphological erosion) to the image region identified as myocardium to reduce its area. The post-processing may additionally or alternatively include modifying the threshold applied to the probability map output by the trained CNN model, so that the region identified as myocardium is limited to those CNN outputs for which the probability map indicates the region is highly likely to be myocardium. The post-processing may additionally or alternatively include moving the vertices of the contour delineating the myocardium toward or away from the center of a ventricle of the heart to reduce the area identified as myocardium, or any combination of the above.
In at least some embodiments, the post-processing is applied to a mask delineating the cardiac region rather than to a contour.
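Two of the mask-based post-processing options described above, a raised probability threshold followed by morphological erosion, can be sketched with SciPy. The function name and parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def postprocess_myocardium(prob_map, threshold=0.7, erode_iters=1):
    """Sketch of act 4608 on a mask: keep only pixels the CNN rates
    as highly likely myocardium, then erode the mask to shrink the
    delineated region."""
    # Raised threshold: only confident myocardium pixels survive.
    mask = prob_map > threshold
    # Morphological erosion (default 3x3 cross structuring element).
    mask = binary_erosion(mask, iterations=erode_iters)
    return mask
```

Both steps make the resulting region strictly smaller, which matches the stated goal of minimizing non-myocardial tissue inside the delineated myocardium.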
In at least some embodiments, a ventricular insertion point, at which the right ventricular wall attaches to the left ventricle, is determined at 4610. In at least some embodiments, the insertion point is specified manually by a user of the software. In other embodiments, the insertion point is computed automatically.
Figure 47 is an image 4700 showing the ventricular insertion points. A left ventricular epicardial contour 4702 and a right ventricular contour 4704 are shown. The right ventricular contour may be a right ventricular endocardial contour or a right ventricular epicardial contour. A lower insertion point 4706 and an upper insertion point 4708 are indicated. In at least one embodiment, an automated system for identifying the insertion points (e.g., act 4610 of Figure 46) analyzes the distance between the right ventricular contour 4704 and the left ventricular epicardial contour 4702. The insertion point locations 4706 and 4708 are defined as the positions at which the two contours diverge.
In at least one other embodiment, the insertion point locations 4706 and 4708 are defined as the intersections of one of the left ventricular epicardial contour 4702 or the right ventricular contour 4704 with the planes defining the left 2-chamber (left ventricle and left atrium) view and the left 3-chamber (left ventricle, left atrium, and aorta) view.
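The divergence rule for automatic insertion-point detection can be sketched as a nearest-distance scan along the right ventricular contour. Contours are assumed to be (N, 2) arrays of pixel coordinates, and the contact tolerance `tol` is an assumed parameter:

```python
import numpy as np

def insertion_points(rv_contour, lv_epi_contour, tol=2.0):
    """Illustrative divergence test: for each RV contour point, find
    the distance to the nearest LV epicardial point, then report the
    RV points where the contours transition between contact and
    separation (distance crossing `tol`)."""
    # Pairwise distances, reduced to nearest-LV distance per RV point.
    d = np.linalg.norm(
        rv_contour[:, None, :] - lv_epi_contour[None, :, :], axis=2
    ).min(axis=1)
    close = d <= tol
    # Indices where the contact state flips: candidate insertion points.
    edges = np.where(np.diff(close.astype(int)) != 0)[0]
    return rv_contour[edges]
```

On a closed short-axis slice this typically yields two transition points, corresponding to the lower 4706 and upper 4708 insertion points.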
At 4612, once the myocardium has been delineated, myocardial regions are localized and quantified. Any potential pathology of the myocardium (such as infarction) or feature (such as a perfusion characteristic) may be quantified. Note that act 4612 of Figure 46 may be performed at any stage of the process and, in at least some embodiments, may occur before determining the one or more contours (e.g., act 4606) and the insertion points (e.g., act 4610). In at least one embodiment of the system, the user manually detects and delineates a region of interest (e.g., a region of relative enhancement). In other embodiments of the system, the region of interest is detected, delineated, or both by an automated system (such as a CNN) or another image processing technique, including but not limited to any of the image processing techniques discussed in: Karim, Rashed, et al., "Evaluation of current algorithms for segmentation of scar tissue from late gadolinium enhancement cardiovascular magnetic resonance of the left atrium: an open-access grand challenge," Journal of Cardiovascular Magnetic Resonance 15.1 (2013): 105. Quantification of a defect, such as a hyperintensity or hypointensity, or of a biologically inferred quantity, such as absolute myocardial perfusion, is then performed. See, e.g., [Christian 2004] Christian, Timothy F., et al., "Absolute myocardial perfusion in canines measured by using dual-bolus first-pass MR imaging," Radiology 232.3 (2004): 677-684. In at least some embodiments, rather than detecting a specific defect, a characteristic (e.g., perfusion) of the entire myocardial region is assessed.
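One common way to quantify a hyperintensity defect, shown here purely as an illustration rather than as the method this disclosure requires, is the n-SD rule: myocardial pixels brighter than the mean of a remote (healthy) myocardial region plus a few standard deviations are counted as enhanced. The function name and masks below are assumptions:

```python
import numpy as np

def scar_fraction(image, myo_mask, remote_mask, n_sd=2.0):
    """Fraction of the delineated myocardium whose intensity exceeds
    remote mean + n_sd * remote standard deviation (the n-SD rule)."""
    remote = image[remote_mask]
    thresh = remote.mean() + n_sd * remote.std()
    # Enhanced pixels must lie inside the myocardium delineation.
    enhanced = (image > thresh) & myo_mask
    return enhanced.sum() / myo_mask.sum()
```

The same masking pattern supports region-wide assessment: replacing the threshold count with, e.g., a mean perfusion value over `myo_mask` evaluates a characteristic of the entire myocardial region.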
At 4614, once the defects or other features of the myocardium have been determined, in at least some embodiments the myocardial contours and insertion points are used to divide the myocardium into a standard format, such as a 17-segment model. See, e.g., Cerqueira, Manuel D., et al., "Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart," Circulation 105.4 (2002): 539-542. In those embodiments, the defects or myocardial features are then localized using the standard format. In at least some embodiments, the resulting features are displayed to the user on a display 4616.
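How an insertion point anchors the standard format can be illustrated for a single basal slice: a myocardial point's angle around the LV center, measured from the anterior insertion point, is binned into one of six 60-degree basal segments. The counterclockwise direction and the 1..6 numbering are illustrative assumptions; the AHA convention in Cerqueira et al. fixes the actual segment names:

```python
import numpy as np

def basal_segment(point, lv_center, anterior_insertion):
    """Bin a myocardial point into one of six 60-degree basal
    segments, with segment 1 starting at the anterior insertion
    point. Points are (x, y) pairs."""
    # Reference angle of the insertion point about the LV center.
    ref = np.arctan2(*(np.asarray(anterior_insertion) - lv_center)[::-1])
    ang = np.arctan2(*(np.asarray(point) - lv_center)[::-1])
    # Angle measured from the insertion point, wrapped to [0, 2*pi).
    rel = (ang - ref) % (2.0 * np.pi)
    return int(rel // (np.pi / 3.0)) + 1
```

Mid slices use the same six bins, apical slices four 90-degree bins, and the apex forms segment 17, giving the full 17-segment model.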
The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, schematics, and examples. Insofar as such block diagrams, schematics, and examples contain one or more functions and/or operations, those skilled in the art will appreciate that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, the present subject matter may be implemented via Application Specific Integrated Circuits (ASICs). However, those skilled in the art will recognize that the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more controllers (e.g., microcontrollers), as one or more programs running on one or more processors (e.g., microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of ordinary skill in the art in light of this disclosure.
Those of skill in the art will recognize that many of the methods or algorithms set out herein may employ additional acts, may omit some acts, and/or may execute acts in a different order than the order specified.
In addition, those skilled in the art will appreciate that the mechanisms taught herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of signal bearing media include, but are not limited to, recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, and computer memory.
The various embodiments described above can be combined to provide further embodiments. To the extent that they are not inconsistent with the specific teachings and definitions herein, all of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications, and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety, including but not limited to: U.S. Provisional Patent Application No. 61/571,908, filed July 7, 2011; U.S. Patent Application No. 14/118,964, filed November 20, 2013; PCT Patent Application No. PCT/US2012/045575, filed July 5, 2012; U.S. Provisional Patent Application No. 61/928,702, filed January 17, 2014; U.S. Patent Application No. 15/112,130, filed July 15, 2016; PCT Patent Application No. PCT/US2015/011851, filed January 16, 2015; U.S. Provisional Patent Application No. 62/260,565, filed November 29, 2015; U.S. Provisional Patent Application No. 62/415,203, filed October 31, 2016; U.S. Provisional Patent Application No. 62/415,666, filed November 1, 2016; PCT Patent Application No. PCT/US2016/064028, filed November 29, 2016; and U.S. Provisional Patent Application No. 62/451,482, filed January 27, 2017. Aspects of the embodiments can be modified, if necessary, to employ the systems, circuits, and designs of the various patents, applications, and publications to provide yet further embodiments.
These and other changes can be made to the embodiments in light of the above detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
Claims (58)
1. A machine learning system, comprising:
at least one nontransitory processor-readable storage medium that stores at least one of processor-executable instructions or data; and
at least one processor communicably coupled to the at least one nontransitory processor-readable storage medium, the at least one processor:
receives learning data comprising a plurality of batches of labeled image sets, each image set comprising image data representative of an anatomical structure and including at least one label which identifies the region of a particular part of the anatomical structure depicted in each image of the image set;
trains a fully convolutional neural network (CNN) model to segment at least part of the anatomical structure utilizing the received learning data; and
stores the trained CNN model in at least one nontransitory processor-readable storage medium of the machine learning system.
2. The machine learning system of claim 1, wherein the CNN model comprises a contracting path and an expanding path, the contracting path comprising a number of convolutional layers and a number of pooling layers, each pooling layer preceded by at least one convolutional layer, and the expanding path comprising a number of convolutional layers and a number of upsampling layers, each upsampling layer preceded by at least one convolutional layer and comprising a transpose convolution operation which performs upsampling and interpolation with a learned kernel.
3. The machine learning system of claim 2, wherein the contracting path comprises first and second parallel paths, one of the first and second parallel paths comprising a number of convolutional layers and a number of pooling layers, each pooling layer preceded by at least one convolutional layer, and the other of the first and second parallel paths comprising only zero or more pooling layers.
4. The machine learning system of claim 2, wherein initial layers of the contracting path downsample the learning data, and layers subsequent to the initial layers comprise a higher ratio of convolution operations to downsampling operations than the initial layers.
5. The machine learning system of claim 2, wherein the expanding path comprises fewer convolution operations than the contracting path.
6. The machine learning system of claim 2, wherein residual connections are present between each pair of layers operating at the same spatial scale.
7. The machine learning system of claim 2, wherein the convolutions comprise dense MxM convolutions, cascaded Nx1 and 1xN convolutions, and dilated convolutions, where 1≤M≤11 and 3≤N≤11.
8. The machine learning system of claim 2, wherein the CNN model comprises skip connections between layers in the contracting path and the expanding path whose image sizes are compatible.
9. The machine learning system of claim 8, wherein the skip connections comprise concatenations of feature maps of the CNN model.
10. The machine learning system of claim 8, wherein the skip connections are residual connections which add or subtract the values of feature maps of the CNN model.
11. A method of operating a machine learning system, the machine learning system comprising at least one nontransitory processor-readable storage medium that stores at least one of processor-executable instructions or data, and at least one processor communicably coupled to the at least one nontransitory processor-readable storage medium, the method comprising:
receiving, by the at least one processor, learning data comprising a plurality of batches of labeled image sets, each image set comprising image data representative of an anatomical structure and including at least one label which identifies the region of a particular part of the anatomical structure depicted in each image of the image set;
training, by the at least one processor, a fully convolutional neural network (CNN) model to segment at least part of the anatomical structure utilizing the received learning data; and
storing, by the at least one processor, the trained CNN model in at least one nontransitory processor-readable storage medium of the machine learning system.
12. The method of claim 11, wherein training the CNN model comprises training a CNN model comprising a contracting path and an expanding path, the contracting path comprising a number of convolutional layers and a number of pooling layers, each pooling layer preceded by at least one convolutional layer, and the expanding path comprising a number of convolutional layers and a number of upsampling layers, each upsampling layer preceded by at least one convolutional layer and comprising a transpose convolution operation which performs upsampling and interpolation with a learned kernel.
13. The method of claim 12, wherein training the CNN model comprises training a CNN model whose contracting path comprises first and second parallel paths, one of the first and second parallel paths comprising a number of convolutional layers and a number of pooling layers, each pooling layer preceded by at least one convolutional layer, and the other of the first and second parallel paths comprising only zero or more pooling layers.
14. The method of claim 12, wherein training the CNN model comprises training a CNN model whose contracting path has initial layers that downsample the learning data, the layers subsequent to the initial layers comprising a higher ratio of convolution operations to downsampling operations than the initial layers.
15. The method of claim 12, wherein training the CNN model comprises training a CNN model whose expanding path comprises fewer convolution operations than its contracting path.
16. The method of claim 12, wherein training the CNN model comprises training a CNN model that includes residual connections between each pair of layers operating at the same spatial scale.
17. The method of claim 12, wherein training the CNN model comprises training a CNN model whose convolutions comprise dense MxM convolutions, cascaded Nx1 and 1xN convolutions, and dilated convolutions, where 1≤M≤11 and 3≤N≤11.
18. The method of claim 12, wherein training the CNN model comprises training a CNN model that includes skip connections between layers in the contracting path and the expanding path whose image sizes are compatible.
19. The method of claim 18, wherein training the CNN model comprises training a CNN model whose skip connections comprise concatenations of feature maps of the CNN model.
20. The method of claim 18, wherein training the CNN model comprises training a CNN model whose skip connections are residual connections which add or subtract the values of feature maps of the CNN model.
21. A medical image analysis system, comprising:
at least one nontransitory processor-readable storage medium that stores at least one of: processor-executable instructions or data, cardiac MRI image data, and initial contours or masks delineating the endocardium and epicardium of a heart; and
at least one processor communicably coupled to the at least one nontransitory processor-readable storage medium, wherein, in operation, the at least one processor:
accesses the cardiac MRI image data and the series of initial contours or masks;
autonomously computes an image intensity threshold that distinguishes blood from the papillary and trabecular muscles inside the endocardial contour; and
autonomously applies the image intensity threshold to define contours or masks delineating the boundaries of the papillary and trabecular muscles.
22. The medical image analysis system of claim 21, wherein, to compute the image intensity threshold, the at least one processor compares the distribution of intensity values within the endocardial contour to the distribution of intensity values in the region between the endocardial contour and the epicardial contour.
23. The medical image analysis system of claim 22, wherein the at least one processor uses a kernel density estimate of the empirical intensity distributions to compute each of the distributions of intensity values.
24. The medical image analysis system of claim 22, wherein the at least one processor determines the image intensity threshold as the pixel intensity at the intersection of first and second probability distribution functions, the first probability distribution function being for the group of pixels within the endocardial contour and the second probability distribution function being for the group of pixels in the region between the endocardial contour and the epicardial contour.
25. The medical image analysis system of claim 21, wherein the initial contour or mask delineating the endocardium of the heart includes the papillary and trabecular muscles inside the endocardial contour.
26. The medical image analysis system of claim 21, wherein the at least one processor computes the connected components of the blood pool region and discards one or more of the computed connected components from the blood pool region.
27. The medical image analysis system of claim 26, wherein the at least one processor converts the connected components discarded from the blood pool region into papillary and trabecular muscle regions.
28. The medical image analysis system of claim 26, wherein the at least one processor discards from the blood pool region all connected components other than the largest connected component of the blood pool region.
29. The medical image analysis system of claim 21, wherein the at least one processor allows the computed contours or masks delineating the boundaries of the papillary and trabecular muscles to be edited by a user.
30. A machine learning system, comprising:
at least one nontransitory processor-readable storage medium that stores at least one of: processor-executable instructions or data, medical imaging data of a heart, and a trained convolutional neural network (CNN) model; and
at least one processor communicably coupled to the at least one nontransitory processor-readable storage medium, wherein, in operation, the at least one processor:
computes, using the trained CNN model, contours or masks delineating the endocardium and epicardium of the heart in the medical imaging data; and
anatomically localizes a pathology or functional feature of the myocardium using the computed contours or masks.
31. The machine learning system of claim 30, wherein the at least one processor computes a ventricular insertion point at which the right ventricular wall attaches to the left ventricle.
32. The machine learning system of claim 31, wherein the at least one processor computes the ventricular insertion point based on the proximity of a contour or mask delineating the left ventricular epicardium to a contour or mask delineating one or both of the right ventricular endocardium or the right ventricular epicardium.
33. The machine learning system of claim 32, wherein the at least one processor computes the ventricular insertion point in one or more two-dimensional cardiac images based on the two points in the cardiac images at which the left ventricular epicardial boundary begins to diverge from one or both of the right ventricular endocardial boundary or the right ventricular epicardial boundary.
34. The machine learning system of claim 31, wherein the at least one processor computes the ventricular insertion point based on an acquired long-axis view of the left ventricle and a delineation of the left ventricular epicardium therein.
35. The machine learning system of claim 34, wherein the at least one processor computes at least one ventricular insertion point based on the intersection between the left ventricular epicardial contour and the left 3-chamber long-axis plane.
36. The machine learning system of claim 34, wherein the at least one processor computes at least one ventricular insertion point based on the intersection between the left ventricular epicardial contour and the left 4-chamber long-axis plane.
37. The machine learning system of claim 34, wherein the at least one processor computes at least one ventricular insertion point based on the intersection between the left 3-chamber long-axis plane and one or both of the right ventricular epicardial contour or the right ventricular endocardial contour.
38. The machine learning system of claim 34, wherein the at least one processor computes at least one ventricular insertion point based on the intersection between the left 4-chamber long-axis plane and one or both of the right ventricular epicardial contour or the right ventricular endocardial contour.
39. The machine learning system of claim 30, wherein the at least one processor allows a user to manually delineate the locations of one or more of the ventricular insertion points.
40. The machine learning system of claim 30, wherein the at least one processor uses a combination of the contours and the ventricular insertion points to present the anatomical localization of the pathology or functional feature of the myocardium in a standardized format.
41. The machine learning system of claim 40, wherein the standardized format is one or both of a 16-segment or 17-segment model of the myocardium.
42. The machine learning system of claim 30, wherein the medical imaging data of the heart is one or more of functional cardiac images, myocardial delayed enhancement images, or myocardial perfusion images.
43. The machine learning system of claim 42, wherein the medical imaging data of the heart are cardiac magnetic resonance images.
44. The machine learning system of claim 30, wherein the trained CNN model has been trained on annotated cardiac images of the same type as the images on which the trained CNN model will be used for inference.
45. The machine learning system of claim 44, wherein the trained CNN model has been trained on one or more of functional cardiac images, myocardial delayed enhancement images, or myocardial perfusion images.
46. The machine learning system of claim 45, wherein the data on which the trained CNN model was trained are cardiac magnetic resonance images.
47. The machine learning system of claim 30, wherein the trained CNN model has been trained on annotated cardiac images of a different type than the images on which the trained CNN model will be used for inference.
48. The machine learning system of claim 47, wherein the trained CNN model has been trained on one or more of functional cardiac images, myocardial delayed enhancement images, or myocardial perfusion images.
49. The machine learning system of claim 48, wherein the data on which the trained CNN model was trained may be cardiac magnetic resonance images.
50. The machine learning system of claim 30, wherein the at least one processor fine-tunes the trained CNN model based on data of the same type as the data on which the CNN model will be used for inference.
51. The machine learning system of claim 50, wherein, to fine-tune the trained CNN model, the at least one processor retrains some or all of the layers of the trained CNN model.
52. The machine learning system of claim 30, wherein the at least one processor applies post-processing to the contours or masks delineating the endocardium and epicardium of the heart to minimize the amount of non-myocardial tissue present in the cardiac region identified as myocardium.
53. The machine learning system of claim 52, wherein, to post-process the contours or masks, the at least one processor applies a morphological operation to the cardiac region identified as myocardium to reduce its area.
54. The machine learning system of claim 53, wherein the morphological operation comprises one or more of erosion or dilation.
55. The machine learning system of claim 52, wherein, to post-process the contours or masks, the at least one processor modifies the threshold applied to the probability map predicted by the trained CNN model so as to identify as myocardium only those pixels for which the trained CNN model indicates that the probability of the pixel being part of the myocardium is above the threshold.
56. The machine learning system of claim 55, wherein the threshold for converting probability map values to class labels is greater than 0.5.
57. The machine learning system of claim 52, wherein, to post-process the contours or masks, the at least one processor moves the vertices of the contour delineating the myocardium toward or away from the center of a ventricle of the heart to reduce the area of the identified myocardium.
58. The machine learning system of claim 30, wherein the pathology or functional feature of the myocardium comprises one or more of myocardial scarring, myocardial infarction, coronary artery stenosis, or a perfusion characteristic.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762451482P | 2017-01-27 | 2017-01-27 | |
US62/451,482 | 2017-01-27 | ||
PCT/US2018/015222 WO2018140596A2 (en) | 2017-01-27 | 2018-01-25 | Automated segmentation utilizing fully convolutional networks |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110475505A true CN110475505A (en) | 2019-11-19 |
CN110475505B CN110475505B (en) | 2022-04-05 |
Family
ID=62977998
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880020558.2A Expired - Fee Related CN110475505B (en) | 2017-01-27 | 2018-01-25 | Automatic segmentation using full convolution network |
Country Status (5)
Country | Link |
---|---|
US (3) | US10600184B2 (en) |
EP (1) | EP3573520A4 (en) |
JP (1) | JP2020510463A (en) |
CN (1) | CN110475505B (en) |
WO (1) | WO2018140596A2 (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110739050A (en) * | 2019-12-20 | 2020-01-31 | 深圳大学 | left ventricle full parameter and confidence degree quantification method |
CN110991408A (en) * | 2019-12-19 | 2020-04-10 | 北京航空航天大学 | Method and device for segmenting white matter high signal based on deep learning method |
CN111401373A (en) * | 2020-03-04 | 2020-07-10 | 武汉大学 | Efficient semantic segmentation method based on packet asymmetric convolution |
CN111466894A (en) * | 2020-04-07 | 2020-07-31 | 上海尽星生物科技有限责任公司 | Ejection fraction calculation method and system based on deep learning |
CN111666972A (en) * | 2020-04-28 | 2020-09-15 | 清华大学 | Liver case image classification method and system based on deep neural network |
CN111739000A (en) * | 2020-06-16 | 2020-10-02 | 山东大学 | System and device for improving left ventricle segmentation accuracy of multiple cardiac views |
CN111898211A (en) * | 2020-08-07 | 2020-11-06 | 吉林大学 | Intelligent vehicle speed decision method based on deep reinforcement learning and simulation method thereof |
CN111928794A (en) * | 2020-08-04 | 2020-11-13 | 北京理工大学 | Closed fringe compatible single interference diagram phase method and device based on deep learning |
CN111968112A (en) * | 2020-09-02 | 2020-11-20 | 广州海兆印丰信息科技有限公司 | CT three-dimensional positioning image acquisition method and device and computer equipment |
CN112085162A (en) * | 2020-08-12 | 2020-12-15 | 北京师范大学 | Magnetic resonance brain tissue segmentation method and device based on neural network, computing equipment and storage medium |
CN112734770A (en) * | 2021-01-06 | 2021-04-30 | 中国人民解放军陆军军医大学第二附属医院 | Multi-sequence fusion segmentation method for cardiac nuclear magnetic images based on multilayer cascade |
CN112785592A (en) * | 2021-03-10 | 2021-05-11 | 河北工业大学 | Medical image depth segmentation network based on multiple expansion paths |
CN112932535A (en) * | 2021-02-01 | 2021-06-11 | 杜国庆 | Medical image segmentation and detection method |
CN113327224A (en) * | 2020-02-28 | 2021-08-31 | 通用电气精准医疗有限责任公司 | System and method for automatic field of view (FOV) bounding |
WO2021183473A1 (en) * | 2020-03-09 | 2021-09-16 | Nanotronics Imaging, Inc. | Defect detection system |
WO2021191692A1 (en) * | 2020-03-27 | 2021-09-30 | International Business Machines Corporation | Annotation of digital images for machine learning |
CN113674235A (en) * | 2021-08-15 | 2021-11-19 | 上海立芯软件科技有限公司 | Low-cost photoetching hotspot detection method based on active entropy sampling and model calibration |
CN113838001A (en) * | 2021-08-24 | 2021-12-24 | 内蒙古电力科学研究院 | Ultrasonic full-focus image defect processing method and device based on nuclear density estimation |
CN113838068A (en) * | 2021-09-27 | 2021-12-24 | 深圳科亚医疗科技有限公司 | Method, apparatus and storage medium for automatic segmentation of myocardial segments |
TWI792055B (en) * | 2020-09-25 | 2023-02-11 | 國立勤益科技大學 | Establishing method of echocardiography judging model with 3d deep learning, echocardiography judging system with 3d deep learning and method thereof |
WO2023226793A1 (en) * | 2022-05-23 | 2023-11-30 | 深圳微创心算子医疗科技有限公司 | Mitral valve opening distance detection method, and electronic device and storage medium |
Families Citing this family (195)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015109254A2 (en) | 2014-01-17 | 2015-07-23 | Morpheus Medical, Inc. | Apparatus, methods and articles for four dimensional (4d) flow magnetic resonance imaging |
US10331852B2 (en) | 2014-01-17 | 2019-06-25 | Arterys Inc. | Medical imaging and efficient sharing of medical imaging information |
US10871536B2 (en) | 2015-11-29 | 2020-12-22 | Arterys Inc. | Automated cardiac volume segmentation |
CA3036754A1 (en) | 2016-10-27 | 2018-05-03 | Progenics Pharmaceuticals, Inc. | Network for medical image analysis, decision support system, and related graphical user interface (gui) applications |
US10663711B2 (en) | 2017-01-04 | 2020-05-26 | Corista, LLC | Virtual slide stage (VSS) method for viewing whole slide images |
US10600184B2 (en) | 2017-01-27 | 2020-03-24 | Arterys Inc. | Automated segmentation utilizing fully convolutional networks |
US10580131B2 (en) * | 2017-02-23 | 2020-03-03 | Zebra Medical Vision Ltd. | Convolutional neural network for segmentation of medical anatomical images |
CN106887225B (en) * | 2017-03-21 | 2020-04-07 | 百度在线网络技术(北京)有限公司 | Acoustic feature extraction method and device based on convolutional neural network and terminal equipment |
US10699412B2 (en) * | 2017-03-23 | 2020-06-30 | Petuum Inc. | Structure correcting adversarial network for chest X-rays organ segmentation |
GB201705876D0 (en) | 2017-04-11 | 2017-05-24 | Kheiron Medical Tech Ltd | Recist |
GB201705911D0 (en) * | 2017-04-12 | 2017-05-24 | Kheiron Medical Tech Ltd | Abstracts |
US10261903B2 (en) | 2017-04-17 | 2019-04-16 | Intel Corporation | Extend GPU/CPU coherency to multi-GPU cores |
GB201706149D0 (en) * | 2017-04-18 | 2017-05-31 | King's College London | System and method for medical imaging |
US11468286B2 (en) * | 2017-05-30 | 2022-10-11 | Leica Microsystems Cms Gmbh | Prediction guided sequential data learning method |
US10699410B2 (en) * | 2017-08-17 | 2020-06-30 | Siemens Healthcare GmbH | Automatic change detection in medical images |
KR20200129168A (en) * | 2017-09-27 | 2020-11-17 | 구글 엘엘씨 | End to end network model for high resolution image segmentation |
US10891723B1 (en) | 2017-09-29 | 2021-01-12 | Snap Inc. | Realistic neural network based image style transfer |
EP3471054B1 (en) * | 2017-10-16 | 2022-02-09 | Siemens Healthcare GmbH | Method for determining at least one object feature of an object |
US10783640B2 (en) * | 2017-10-30 | 2020-09-22 | Beijing Keya Medical Technology Co., Ltd. | Systems and methods for image segmentation using a scalable and compact convolutional neural network |
US11551353B2 (en) | 2017-11-22 | 2023-01-10 | Arterys Inc. | Content based image retrieval for lesion analysis |
JP6545887B2 (en) * | 2017-11-24 | 2019-07-17 | キヤノンメディカルシステムズ株式会社 | Medical data processing apparatus, magnetic resonance imaging apparatus, and learned model generation method |
US10973486B2 (en) | 2018-01-08 | 2021-04-13 | Progenics Pharmaceuticals, Inc. | Systems and methods for rapid neural network-based image segmentation and radiopharmaceutical uptake determination |
WO2019147767A1 (en) * | 2018-01-24 | 2019-08-01 | Rensselaer Polytechnic Institute | 3-d convolutional autoencoder for low-dose ct via transfer learning from a 2-d trained network |
US10595727B2 (en) * | 2018-01-25 | 2020-03-24 | Siemens Healthcare Gmbh | Machine learning-based segmentation for cardiac medical imaging |
US10885630B2 (en) | 2018-03-01 | 2021-01-05 | Intuitive Surgical Operations, Inc. | Systems and methods for segmentation of anatomical structures for image-guided surgery |
US11024025B2 (en) * | 2018-03-07 | 2021-06-01 | University Of Virginia Patent Foundation | Automatic quantification of cardiac MRI for hypertrophic cardiomyopathy |
US11537428B2 (en) | 2018-05-17 | 2022-12-27 | Spotify Ab | Asynchronous execution of creative generator and trafficking workflows and components therefor |
US20190355372A1 (en) | 2018-05-17 | 2019-11-21 | Spotify Ab | Automated voiceover mixing and components therefor |
US11403663B2 (en) * | 2018-05-17 | 2022-08-02 | Spotify Ab | Ad preference embedding model and lookalike generation engine |
GB2574372B (en) * | 2018-05-21 | 2021-08-11 | Imagination Tech Ltd | Implementing Traditional Computer Vision Algorithms As Neural Networks |
WO2019226270A1 (en) * | 2018-05-21 | 2019-11-28 | Corista, LLC | Multi-sample whole slide image processing in digital pathology via multi-resolution registration and machine learning |
CN111819580A (en) * | 2018-05-29 | 2020-10-23 | 谷歌有限责任公司 | Neural architecture search for dense image prediction tasks |
US11854703B2 (en) | 2018-06-11 | 2023-12-26 | Arterys Inc. | Simulating abnormalities in medical images with generative adversarial networks |
CA3103538A1 (en) * | 2018-06-11 | 2019-12-19 | Socovar, Societe En Commandite | System and method for determining coronary artery tissue type based on an OCT image and using trained engines |
WO2019239155A1 (en) | 2018-06-14 | 2019-12-19 | Kheiron Medical Technologies Ltd | Second reader suggestion |
EP3598344A1 (en) * | 2018-07-19 | 2020-01-22 | Nokia Technologies Oy | Processing sensor data |
CN109087298B (en) * | 2018-08-17 | 2020-07-28 | 电子科技大学 | Alzheimer's disease MRI image classification method |
KR102174379B1 (en) * | 2018-08-27 | 2020-11-04 | 주식회사 딥바이오 | System and method for medical diagnosis using neural network performing segmentation |
US11164067B2 (en) * | 2018-08-29 | 2021-11-02 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems, methods, and apparatuses for implementing a multi-resolution neural network for use with imaging intensive applications including medical imaging |
CN109345538B (en) * | 2018-08-30 | 2021-08-10 | 华南理工大学 | Retinal vessel segmentation method based on convolutional neural network |
US10303980B1 (en) * | 2018-09-05 | 2019-05-28 | StradVision, Inc. | Learning method, learning device for detecting obstacles and testing method, testing device using the same |
JP7213412B2 (en) * | 2018-09-12 | 2023-01-27 | 学校法人立命館 | MEDICAL IMAGE EXTRACTION APPARATUS, MEDICAL IMAGE EXTRACTION METHOD, AND COMPUTER PROGRAM |
EP3624056B1 (en) * | 2018-09-13 | 2021-12-01 | Siemens Healthcare GmbH | Processing image frames of a sequence of cardiac images |
CN109308695A (en) * | 2018-09-13 | 2019-02-05 | 镇江纳兰随思信息科技有限公司 | Based on the cancer cell identification method for improving U-net convolutional neural networks model |
WO2020056299A1 (en) * | 2018-09-14 | 2020-03-19 | Google Llc | Deep reinforcement learning-based techniques for end to end robot navigation |
CN109242863B (en) * | 2018-09-14 | 2021-10-26 | 北京市商汤科技开发有限公司 | Ischemic stroke image region segmentation method and device |
CN109272512B (en) * | 2018-09-25 | 2022-02-15 | 南昌航空大学 | Method for automatically segmenting left ventricle inner and outer membranes |
US20200104678A1 (en) * | 2018-09-27 | 2020-04-02 | Google Llc | Training optimizer neural networks |
CN109559315B (en) * | 2018-09-28 | 2023-06-02 | 天津大学 | Water surface segmentation method based on multipath deep neural network |
CN109410318B (en) * | 2018-09-30 | 2020-09-08 | 先临三维科技股份有限公司 | Three-dimensional model generation method, device, equipment and storage medium |
WO2020077202A1 (en) * | 2018-10-12 | 2020-04-16 | The Medical College Of Wisconsin, Inc. | Medical image segmentation using deep learning models trained with random dropout and/or standardized inputs |
CN109446951B (en) | 2018-10-16 | 2019-12-10 | 腾讯科技(深圳)有限公司 | Semantic segmentation method, device and equipment for three-dimensional image and storage medium |
US11651584B2 (en) * | 2018-10-16 | 2023-05-16 | General Electric Company | System and method for memory augmented domain adaptation |
CN109509203B (en) * | 2018-10-17 | 2019-11-05 | 哈尔滨理工大学 | A kind of semi-automatic brain image dividing method |
WO2020081909A1 (en) * | 2018-10-19 | 2020-04-23 | The Climate Corporation | Machine learning techniques for identifying clouds and cloud shadows in satellite imagery |
WO2020080623A1 (en) * | 2018-10-19 | 2020-04-23 | 삼성전자 주식회사 | Method and apparatus for ai encoding and ai decoding of image |
US11720997B2 (en) | 2018-10-19 | 2023-08-08 | Samsung Electronics Co., Ltd. | Artificial intelligence (AI) encoding device and operating method thereof and AI decoding device and operating method thereof |
WO2020080765A1 (en) | 2018-10-19 | 2020-04-23 | Samsung Electronics Co., Ltd. | Apparatuses and methods for performing artificial intelligence encoding and artificial intelligence decoding on image |
KR102525578B1 (en) | 2018-10-19 | 2023-04-26 | 삼성전자주식회사 | Method and Apparatus for video encoding and Method and Apparatus for video decoding |
WO2020080698A1 (en) | 2018-10-19 | 2020-04-23 | 삼성전자 주식회사 | Method and device for evaluating subjective quality of video |
WO2020080873A1 (en) | 2018-10-19 | 2020-04-23 | Samsung Electronics Co., Ltd. | Method and apparatus for streaming data |
WO2020080665A1 (en) | 2018-10-19 | 2020-04-23 | Samsung Electronics Co., Ltd. | Methods and apparatuses for performing artificial intelligence encoding and artificial intelligence decoding on image |
KR102312337B1 (en) * | 2018-10-19 | 2021-10-14 | 삼성전자주식회사 | AI encoding apparatus and operating method for the same, and AI decoding apparatus and operating method for the same |
CN109508647A (en) * | 2018-10-22 | 2019-03-22 | 北京理工大学 | A kind of spectra database extended method based on generation confrontation network |
JP2022506135A (en) * | 2018-10-30 | 2022-01-17 | アレン インスティテュート | Segmentation of 3D intercellular structures in microscopic images using iterative deep learning flows that incorporate human contributions |
CN109448006B (en) * | 2018-11-01 | 2022-01-28 | 江西理工大学 | Attention-based U-shaped dense connection retinal vessel segmentation method |
WO2020093042A1 (en) * | 2018-11-02 | 2020-05-07 | Deep Lens, Inc. | Neural networks for biomedical image analysis |
CN109523077B (en) * | 2018-11-15 | 2022-10-11 | 云南电网有限责任公司 | Wind power prediction method |
CN113591750A (en) * | 2018-11-16 | 2021-11-02 | 北京市商汤科技开发有限公司 | Key point detection method and device, electronic equipment and storage medium |
CN110009640B (en) * | 2018-11-20 | 2023-09-26 | 腾讯科技(深圳)有限公司 | Method, apparatus and readable medium for processing cardiac video |
WO2020108009A1 (en) * | 2018-11-26 | 2020-06-04 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method, system, and computer-readable medium for improving quality of low-light images |
TW202022713A (en) * | 2018-12-05 | 2020-06-16 | 宏碁股份有限公司 | Method and system for evaluating cardiac status, electronic device and ultrasonic scanning device |
CN109711411B (en) * | 2018-12-10 | 2020-10-30 | 浙江大学 | Image segmentation and identification method based on capsule neurons |
CN111309800A (en) * | 2018-12-11 | 2020-06-19 | 北京京东尚科信息技术有限公司 | Data storage and reading method and device |
US10943352B2 (en) * | 2018-12-17 | 2021-03-09 | Palo Alto Research Center Incorporated | Object shape regression using wasserstein distance |
US10740901B2 (en) * | 2018-12-17 | 2020-08-11 | Nvidia Corporation | Encoder regularization of a segmentation model |
EP3671660A1 (en) * | 2018-12-20 | 2020-06-24 | Dassault Systèmes | Designing a 3d modeled object via user-interaction |
CN111401512B (en) | 2019-01-03 | 2024-06-04 | 三星电子株式会社 | Method and system for convolution in neural networks with variable expansion rate |
TW202040518A (en) | 2019-01-07 | 2020-11-01 | 瑞典商艾西尼診斷公司 | Systems and methods for platform agnostic whole body image segmentation |
CN109584254B (en) * | 2019-01-07 | 2022-12-20 | 浙江大学 | Heart left ventricle segmentation method based on deep full convolution neural network |
CN109872325B (en) * | 2019-01-17 | 2022-11-15 | 东北大学 | Full-automatic liver tumor segmentation method based on two-way three-dimensional convolutional neural network |
CN109903292A (en) * | 2019-01-24 | 2019-06-18 | 西安交通大学 | A kind of three-dimensional image segmentation method and system based on full convolutional neural networks |
CN109949334B (en) * | 2019-01-25 | 2022-10-04 | 广西科技大学 | Contour detection method based on deep reinforced network residual error connection |
WO2020154664A1 (en) * | 2019-01-25 | 2020-07-30 | The Johns Hopkins University | Predicting atrial fibrillation recurrence after pulmonary vein isolation using simulations of patient-specific magnetic resonance imaging models and machine learning |
CN109886159B (en) * | 2019-01-30 | 2021-03-26 | 浙江工商大学 | Face detection method under non-limited condition |
US10373027B1 (en) * | 2019-01-30 | 2019-08-06 | StradVision, Inc. | Method for acquiring sample images for inspecting label among auto-labeled images to be used for learning of neural network and sample image acquiring device using the same |
US11544572B2 (en) * | 2019-02-15 | 2023-01-03 | Capital One Services, Llc | Embedding constrained and unconstrained optimization programs as neural network layers |
DE102019203024A1 (en) * | 2019-03-06 | 2020-09-10 | Robert Bosch Gmbh | Padding method for a convolutional neural network |
CN109949318B (en) * | 2019-03-07 | 2023-11-14 | 西安电子科技大学 | Full convolution neural network epileptic focus segmentation method based on multi-modal image |
EP3716201A1 (en) * | 2019-03-25 | 2020-09-30 | Siemens Healthcare GmbH | Medical image enhancement |
CN110009619A (en) * | 2019-04-02 | 2019-07-12 | 清华大学深圳研究生院 | A kind of image analysis method based on fluorescence-encoded liquid phase biochip |
CN110101401B (en) * | 2019-04-18 | 2023-04-07 | 浙江大学山东工业技术研究院 | Liver contrast agent digital subtraction angiography method |
CN110111313B (en) * | 2019-04-22 | 2022-12-30 | 腾讯科技(深圳)有限公司 | Medical image detection method based on deep learning and related equipment |
US11534125B2 (en) | 2019-04-24 | 2022-12-27 | Progenics Pharmaceuticals, Inc. | Systems and methods for automated and interactive analysis of bone scan images for detection of metastases |
EP3959683A1 (en) | 2019-04-24 | 2022-03-02 | Progenics Pharmaceuticals, Inc. | Systems and methods for interactive adjustment of intensity windowing in nuclear medicine images |
CN110047073B (en) * | 2019-05-05 | 2021-07-06 | 北京大学 | X-ray weld image defect grading method and system |
CN110969182A (en) * | 2019-05-17 | 2020-04-07 | 丰疆智能科技股份有限公司 | Convolutional neural network construction method and system based on farmland image |
CN112396169B (en) * | 2019-08-13 | 2024-04-02 | 上海寒武纪信息科技有限公司 | Operation method, device, computer equipment and storage medium |
US11328430B2 (en) * | 2019-05-28 | 2022-05-10 | Arizona Board Of Regents On Behalf Of Arizona State University | Methods, systems, and media for segmenting images |
CN112102221A (en) * | 2019-05-31 | 2020-12-18 | 深圳市前海安测信息技术有限公司 | 3D UNet network model construction method and device for detecting tumor and storage medium |
CN110298366B (en) * | 2019-07-05 | 2021-05-04 | 北华航天工业学院 | Crop distribution extraction method and device |
US20210015438A1 (en) * | 2019-07-16 | 2021-01-21 | Siemens Healthcare Gmbh | Deep learning for perfusion in medical imaging |
CN110599499B (en) * | 2019-08-22 | 2022-04-19 | 四川大学 | MRI image heart structure segmentation method based on multipath convolutional neural network |
CN110517241A (en) * | 2019-08-23 | 2019-11-29 | 吉林大学第一医院 | Method based on the full-automatic stomach fat quantitative analysis of NMR imaging IDEAL-IQ sequence |
CN110619641A (en) * | 2019-09-02 | 2019-12-27 | 南京信息工程大学 | Automatic segmentation method of three-dimensional breast cancer nuclear magnetic resonance image tumor region based on deep learning |
US10957031B1 (en) * | 2019-09-06 | 2021-03-23 | Accenture Global Solutions Limited | Intelligent defect detection from image data |
CN110598784B (en) * | 2019-09-11 | 2020-06-02 | 北京建筑大学 | Machine learning-based construction waste classification method and device |
JP7408325B2 (en) * | 2019-09-13 | 2024-01-05 | キヤノン株式会社 | Information processing equipment, learning methods and programs |
ES2813777B2 (en) | 2019-09-23 | 2023-10-27 | Quibim S L | METHOD AND SYSTEM FOR THE AUTOMATIC SEGMENTATION OF WHITE MATTER HYPERINTENSITIES IN BRAIN MAGNETIC RESONANCE IMAGES |
JP2022549669A (en) * | 2019-09-24 | 2022-11-28 | カーネギー メロン ユニバーシティ | System and method for analyzing medical images based on spatio-temporal data |
CN110675411B (en) * | 2019-09-26 | 2023-05-16 | 重庆大学 | Cervical squamous intraepithelial lesion recognition algorithm based on deep learning |
US11564621B2 (en) | 2019-09-27 | 2023-01-31 | Progenics Pharmaceuticals, Inc. | Systems and methods for artificial intelligence-based image analysis for cancer assessment |
US11900597B2 (en) | 2019-09-27 | 2024-02-13 | Progenics Pharmaceuticals, Inc. | Systems and methods for artificial intelligence-based image analysis for cancer assessment |
US11544407B1 (en) | 2019-09-27 | 2023-01-03 | Progenics Pharmaceuticals, Inc. | Systems and methods for secure cloud-based medical image upload and processing |
US11331056B2 (en) * | 2019-09-30 | 2022-05-17 | GE Precision Healthcare LLC | Computed tomography medical imaging stroke model |
US11545266B2 (en) | 2019-09-30 | 2023-01-03 | GE Precision Healthcare LLC | Medical imaging stroke model |
US11640552B2 (en) * | 2019-10-01 | 2023-05-02 | International Business Machines Corporation | Two stage training to obtain a best deep learning model with efficient use of computing resources |
WO2021064585A1 (en) * | 2019-10-01 | 2021-04-08 | Chevron U.S.A. Inc. | Method and system for predicting permeability of hydrocarbon reservoirs using artificial intelligence |
WO2021067833A1 (en) * | 2019-10-02 | 2021-04-08 | Memorial Sloan Kettering Cancer Center | Deep multi-magnification networks for multi-class image segmentation |
CN112365504A (en) * | 2019-10-29 | 2021-02-12 | 杭州脉流科技有限公司 | CT left ventricle segmentation method, device, equipment and storage medium |
US11232859B2 (en) * | 2019-11-07 | 2022-01-25 | Siemens Healthcare Gmbh | Artificial intelligence for basal and apical slice identification in cardiac MRI short axis acquisitions |
KR20210056179A (en) | 2019-11-08 | 2021-05-18 | 삼성전자주식회사 | AI encoding apparatus and operating method for the same, and AI decoding apparatus and operating method for the same |
US11423544B1 (en) | 2019-11-14 | 2022-08-23 | Seg AI LLC | Segmenting medical images |
US10762629B1 (en) | 2019-11-14 | 2020-09-01 | SegAI LLC | Segmenting medical images |
CN110910368B (en) * | 2019-11-20 | 2022-05-13 | 佛山市南海区广工大数控装备协同创新研究院 | Injector defect detection method based on semantic segmentation |
CN110930383A (en) * | 2019-11-20 | 2020-03-27 | 佛山市南海区广工大数控装备协同创新研究院 | Injector defect detection method based on deep learning semantic segmentation and image classification |
CN111161292B (en) * | 2019-11-21 | 2023-09-05 | 合肥合工安驰智能科技有限公司 | Ore scale measurement method and application system |
EP3828828A1 (en) | 2019-11-28 | 2021-06-02 | Robovision | Improved physical object handling based on deep learning |
CN111179149B (en) * | 2019-12-17 | 2022-03-08 | Tcl华星光电技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN111144486B (en) * | 2019-12-27 | 2022-06-10 | 电子科技大学 | Heart nuclear magnetic resonance image key point detection method based on convolutional neural network |
CN111260705B (en) * | 2020-01-13 | 2022-03-15 | 武汉大学 | Prostate MR image multi-task registration method based on deep convolutional neural network |
CN111402203B (en) * | 2020-02-24 | 2024-03-01 | 杭州电子科技大学 | Fabric surface defect detection method based on convolutional neural network |
CN111311737B (en) * | 2020-03-04 | 2023-03-10 | 中南民族大学 | Three-dimensional modeling method, device and equipment for heart image and storage medium |
CN111281387B (en) * | 2020-03-09 | 2024-03-26 | 中山大学 | Segmentation method and device for left atrium and atrial scar based on artificial neural network |
US11810303B2 (en) * | 2020-03-11 | 2023-11-07 | Purdue Research Foundation | System architecture and method of processing images |
US11763456B2 (en) | 2020-03-11 | 2023-09-19 | Purdue Research Foundation | Systems and methods for processing echocardiogram images |
CN111340816A (en) * | 2020-03-23 | 2020-06-26 | 沈阳航空航天大学 | Image segmentation method based on double-U-shaped network framework |
CN111462060A (en) * | 2020-03-24 | 2020-07-28 | 湖南大学 | Method and device for detecting standard section image in fetal ultrasonic image |
US11704803B2 (en) * | 2020-03-30 | 2023-07-18 | The Board Of Trustees Of The Leland Stanford Junior University | Methods and systems using video-based machine learning for beat-to-beat assessment of cardiac function |
US20210319539A1 (en) * | 2020-04-13 | 2021-10-14 | GE Precision Healthcare LLC | Systems and methods for background aware reconstruction using deep learning |
US11386988B2 (en) | 2020-04-23 | 2022-07-12 | Exini Diagnostics Ab | Systems and methods for deep-learning-based segmentation of composite images |
US11321844B2 (en) | 2020-04-23 | 2022-05-03 | Exini Diagnostics Ab | Systems and methods for deep-learning-based segmentation of composite images |
US20210330285A1 (en) * | 2020-04-28 | 2021-10-28 | EchoNous, Inc. | Systems and methods for automated physiological parameter estimation from ultrasound image sequences |
CN111652886B (en) * | 2020-05-06 | 2022-07-22 | 哈尔滨工业大学 | Liver tumor segmentation method based on improved U-net network |
CN112330674B (en) * | 2020-05-07 | 2023-06-30 | 南京信息工程大学 | Self-adaptive variable-scale convolution kernel method based on brain MRI three-dimensional image confidence coefficient |
US11532084B2 (en) | 2020-05-11 | 2022-12-20 | EchoNous, Inc. | Gating machine learning predictions on medical ultrasound images via risk and uncertainty quantification |
US11523801B2 (en) | 2020-05-11 | 2022-12-13 | EchoNous, Inc. | Automatically identifying anatomical structures in medical images in a manner that is sensitive to the particular view in which each image is captured |
CN111739028A (en) * | 2020-05-26 | 2020-10-02 | 华南理工大学 | Nail region image acquisition method, system, computing device and storage medium |
EP4158540A4 (en) * | 2020-06-02 | 2024-07-03 | Cape Analytics Inc | Method for property feature segmentation |
WO2021255514A1 (en) * | 2020-06-15 | 2021-12-23 | Universidade Do Porto | Padding method for convolutional neural network layers adapted to perform multivariate time series analysis |
US11693919B2 (en) * | 2020-06-22 | 2023-07-04 | Shanghai United Imaging Intelligence Co., Ltd. | Anatomy-aware motion estimation |
CN111798462B (en) * | 2020-06-30 | 2022-10-14 | 电子科技大学 | Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image |
CN111754534B (en) * | 2020-07-01 | 2024-05-31 | 杭州脉流科技有限公司 | CT left ventricle short axis image segmentation method, device, computer equipment and storage medium based on deep neural network |
US11216960B1 (en) * | 2020-07-01 | 2022-01-04 | Alipay Labs (singapore) Pte. Ltd. | Image processing method and system |
US11721428B2 (en) | 2020-07-06 | 2023-08-08 | Exini Diagnostics Ab | Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions |
EP3940629A1 (en) * | 2020-07-13 | 2022-01-19 | Koninklijke Philips N.V. | Image intensity correction in magnetic resonance imaging |
CN112001887B (en) * | 2020-07-20 | 2021-11-09 | 南通大学 | Full convolution genetic neural network method for infant brain medical record image segmentation |
CN111738268B (en) * | 2020-07-22 | 2023-11-14 | 浙江大学 | Semantic segmentation method and system for high-resolution remote sensing image based on random block |
WO2022061840A1 (en) * | 2020-09-27 | 2022-03-31 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for generating radiation therapy plan |
US20220114699A1 (en) * | 2020-10-09 | 2022-04-14 | The Regents Of The University Of California | Spatiotemporal resolution enhancement of biomedical images |
US11601661B2 (en) * | 2020-10-09 | 2023-03-07 | Tencent America LLC | Deep loop filter by temporal deformable convolution |
US11688517B2 (en) | 2020-10-30 | 2023-06-27 | Guerbet | Multiple operating point false positive removal for lesion identification |
US11694329B2 (en) | 2020-10-30 | 2023-07-04 | International Business Machines Corporation | Logistic model to determine 3D z-wise lesion connectivity |
US11749401B2 (en) | 2020-10-30 | 2023-09-05 | Guerbet | Seed relabeling for seed-based segmentation of a medical image |
US11587236B2 (en) | 2020-10-30 | 2023-02-21 | International Business Machines Corporation | Refining lesion contours with combined active contour and inpainting |
US11436724B2 (en) | 2020-10-30 | 2022-09-06 | International Business Machines Corporation | Lesion detection artificial intelligence pipeline computing system |
US11688063B2 (en) | 2020-10-30 | 2023-06-27 | Guerbet | Ensemble machine learning model architecture for lesion detection |
US11636593B2 (en) | 2020-11-06 | 2023-04-25 | EchoNous, Inc. | Robust segmentation through high-level image understanding |
CN112634243B (en) * | 2020-12-28 | 2022-08-05 | 吉林大学 | Image classification and recognition system based on deep learning under strong interference factors |
CN112651987B (en) * | 2020-12-30 | 2024-06-18 | 内蒙古自治区农牧业科学院 | Method and system for calculating coverage of grasslands of sample side |
CN112750106B (en) * | 2020-12-31 | 2022-11-04 | 山东大学 | Nuclear staining cell counting method based on incomplete marker deep learning, computer equipment and storage medium |
CN112712527A (en) * | 2020-12-31 | 2021-04-27 | 山西三友和智慧信息技术股份有限公司 | Medical image segmentation method based on DR-Unet104 |
CN112767413B (en) * | 2021-01-06 | 2022-03-15 | 武汉大学 | Remote sensing image depth semantic segmentation method integrating region communication and symbiotic knowledge constraints |
EP4060612A1 (en) | 2021-03-17 | 2022-09-21 | Robovision | Improved orientation detection based on deep learning |
EP4060608A1 (en) | 2021-03-17 | 2022-09-21 | Robovision | Improved vision-based measuring |
US20220335615A1 (en) * | 2021-04-19 | 2022-10-20 | Fujifilm Sonosite, Inc. | Calculating heart parameters |
CN112989107B (en) * | 2021-05-18 | 2021-07-30 | 北京世纪好未来教育科技有限公司 | Audio classification and separation method and device, electronic equipment and storage medium |
CN113469948B (en) * | 2021-06-08 | 2022-02-25 | 北京安德医智科技有限公司 | Left ventricle segment identification method and device, electronic equipment and storage medium |
EP4343708A1 (en) * | 2021-07-12 | 2024-03-27 | Shanghai United Imaging Healthcare Co., Ltd. | Method and apparatus for training machine learning models, computer device, and storage medium |
US11875559B2 (en) * | 2021-07-12 | 2024-01-16 | Obvio Health Usa, Inc. | Systems and methodologies for automated classification of images of stool in diapers |
CN113284074B (en) * | 2021-07-12 | 2021-12-07 | 中德(珠海)人工智能研究院有限公司 | Method and device for removing target object of panoramic image, server and storage medium |
CN113808143B (en) * | 2021-09-06 | 2024-05-17 | 沈阳东软智能医疗科技研究院有限公司 | Image segmentation method and device, readable storage medium and electronic equipment |
CN113838027A (en) * | 2021-09-23 | 2021-12-24 | 杭州柳叶刀机器人有限公司 | Method and system for obtaining target image element based on image processing |
US20230126963A1 (en) * | 2021-10-25 | 2023-04-27 | Analogic Corporation | Prediction of extrema of respiratory motion and related systems, methods, and devices |
WO2023075480A1 (en) * | 2021-10-28 | 2023-05-04 | 주식회사 온택트헬스 | Method and apparatus for providing clinical parameter for predicted target region in medical image, and method and apparatus for screening medical image for labeling |
US20230162493A1 (en) * | 2021-11-24 | 2023-05-25 | Riverain Technologies Llc | Method for the automatic detection of aortic disease and automatic generation of an aortic volume |
CN114240951B (en) * | 2021-12-13 | 2023-04-07 | 电子科技大学 | Black box attack method of medical image segmentation neural network based on query |
CN114549448B (en) * | 2022-02-17 | 2023-08-11 | 中国空气动力研究与发展中心超高速空气动力研究所 | Complex multi-type defect detection evaluation method based on infrared thermal imaging data analysis |
WO2023183486A1 (en) * | 2022-03-23 | 2023-09-28 | University Of Southern California | Deep-learning-driven accelerated mr vessel wall imaging |
WO2023235653A1 (en) * | 2022-05-30 | 2023-12-07 | Northwestern University | Panatomic imaging derived 4d hemodynamics using deep learning |
CN115100123A (en) * | 2022-06-10 | 2022-09-23 | 北京理工大学 | Brain extraction method combining UNet and active contour model |
CN115471659B (en) * | 2022-09-22 | 2023-04-25 | 北京航星永志科技有限公司 | Training method and segmentation method of semantic segmentation model and electronic equipment |
KR20240057761A (en) * | 2022-10-25 | 2024-05-03 | 주식회사 온택트헬스 | Method for providing information of echocardiography images and device usinng the same |
CN116912489B (en) * | 2023-06-26 | 2024-06-21 | 天津师范大学 | Medical image segmentation method and system based on Fourier priori knowledge |
CN117036376B (en) * | 2023-10-10 | 2024-01-30 | 四川大学 | Lesion image segmentation method and device based on artificial intelligence and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130345580A1 (en) * | 2009-04-06 | 2013-12-26 | Sorin Crm S.A.S. | Reconstruction of a surface electrocardiogram from an endocardial electrogram using non-linear filtering |
CN105474219A (en) * | 2013-08-28 | 2016-04-06 | 西门子公司 | Systems and methods for estimating physiological heart measurements from medical images and clinical data |
CN105828870A (en) * | 2013-12-19 | 2016-08-03 | 心脏起搏器股份公司 | System for measuring an electrical characteristic of tissue to identify a neural target for a therapy |
WO2017091833A1 (en) * | 2015-11-29 | 2017-06-01 | Arterys Inc. | Automated cardiac volume segmentation |
Family Cites Families (107)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2776844B2 (en) | 1988-11-30 | 1998-07-16 | 株式会社日立製作所 | Magnetic resonance imaging system |
US5115812A (en) | 1988-11-30 | 1992-05-26 | Hitachi, Ltd. | Magnetic resonance imaging method for moving object |
JP3137378B2 (en) | 1991-09-25 | 2001-02-19 | 株式会社東芝 | Magnetic resonance imaging equipment |
JP3452400B2 (en) | 1994-08-02 | 2003-09-29 | 株式会社日立メディコ | Magnetic resonance imaging equipment |
JP3501182B2 (en) | 1994-11-22 | 2004-03-02 | 株式会社日立メディコ | Magnetic resonance imaging device capable of calculating flow velocity images |
WO1997029437A1 (en) | 1996-02-09 | 1997-08-14 | Sarnoff Corporation | Method and apparatus for training a neural network to detect and classify objects with uncertain training data |
US6324532B1 (en) | 1997-02-07 | 2001-11-27 | Sarnoff Corporation | Method and apparatus for training a neural network to detect objects in an image |
WO2000067185A1 (en) | 1999-05-05 | 2000-11-09 | Healthgram, Inc. | Portable health record |
US6711433B1 (en) | 1999-09-30 | 2004-03-23 | Siemens Corporate Research, Inc. | Method for providing a virtual contrast agent for augmented angioscopy |
US8166381B2 (en) | 2000-12-20 | 2012-04-24 | Heart Imaging Technologies, Llc | Medical image management system |
US6934698B2 (en) | 2000-12-20 | 2005-08-23 | Heart Imaging Technologies Llc | Medical image management system |
DE10117685C2 (en) | 2001-04-09 | 2003-02-27 | Siemens Ag | Process for processing objects of a standardized communication protocol |
US7139417B2 (en) | 2001-08-14 | 2006-11-21 | Ge Medical Systems Global Technology Company Llc | Combination compression and registration techniques to implement temporal subtraction as an application service provider to detect changes over time to medical imaging |
US7158692B2 (en) | 2001-10-15 | 2007-01-02 | Insightful Corporation | System and method for mining quantitative information from medical images
US7355597B2 (en) | 2002-05-06 | 2008-04-08 | Brown University Research Foundation | Method, apparatus and computer program product for the interactive rendering of multivalued volume data with layered complementary values |
GB0219408D0 (en) | 2002-08-20 | 2002-09-25 | Mirada Solutions Ltd | Computation of contour
ATE470199T1 (en) * | 2003-04-24 | 2010-06-15 | Koninklijke Philips Electronics N.V. | Non-invasive determination of left ventricular volume
US7254436B2 (en) | 2003-05-30 | 2007-08-07 | Heart Imaging Technologies, Llc | Dynamic magnetic resonance angiography |
US7805177B2 (en) | 2004-06-23 | 2010-09-28 | M2S, Inc. | Method for determining the risk of rupture of a blood vessel |
US7292032B1 (en) | 2004-09-28 | 2007-11-06 | General Electric Company | Method and system of enhanced phase suppression for phase-contrast MR imaging |
US7127095B2 (en) | 2004-10-15 | 2006-10-24 | The Brigham And Women's Hospital, Inc. | Factor analysis in medical imaging |
EP1659511A1 (en) | 2004-11-18 | 2006-05-24 | Cedara Software Corp. | Image archiving system and method for handling new and legacy archives |
US7736313B2 (en) | 2004-11-22 | 2010-06-15 | Carestream Health, Inc. | Detecting and classifying lesions in ultrasound images |
US8000768B2 (en) | 2005-01-10 | 2011-08-16 | Vassol Inc. | Method and system for displaying blood flow |
US20070061460A1 (en) | 2005-03-24 | 2007-03-15 | Jumpnode Systems, LLC | Remote access
US7567707B2 (en) | 2005-12-20 | 2009-07-28 | Xerox Corporation | Red eye detection and correction |
CN101243980B (en) | 2006-12-04 | 2010-12-22 | Toshiba Corporation | X-ray computed tomographic apparatus and medical image processing apparatus
US7764846B2 (en) | 2006-12-12 | 2010-07-27 | Xerox Corporation | Adaptive red eye correction |
US8369590B2 (en) * | 2007-05-21 | 2013-02-05 | Cornell University | Method for segmenting objects in images |
US8098918B2 (en) | 2007-09-21 | 2012-01-17 | Siemens Corporation | Method and system for measuring left ventricle volume |
US7806843B2 (en) | 2007-09-25 | 2010-10-05 | Marin Luis E | External fixator assembly |
JP5191787B2 (en) | 2008-04-23 | 2013-05-08 | Hitachi Medical Corporation | X-ray CT system
WO2009142167A1 (en) | 2008-05-22 | 2009-11-26 | Hitachi Medical Corporation | Magnetic resonance imaging device and blood vessel image acquisition method
FR2932599A1 (en) | 2008-06-12 | 2009-12-18 | Eugene Franck Maizeroi | METHOD AND DEVICE FOR IMAGE PROCESSING, IN PARTICULAR FOR PROCESSING MEDICAL IMAGES FOR DETERMINING 3D VOLUMES
US8379961B2 (en) | 2008-07-03 | 2013-02-19 | Nec Laboratories America, Inc. | Mitotic figure detector and counter system and method for detecting and counting mitotic figures |
WO2010038138A1 (en) | 2008-09-30 | 2010-04-08 | University Of Cape Town | Fluid flow assessment |
JP5422171B2 (en) | 2008-10-01 | 2014-02-19 | Toshiba Corporation | X-ray diagnostic imaging equipment
US8148984B2 (en) | 2008-10-03 | 2012-04-03 | Wisconsin Alumni Research Foundation | Method for magnitude constrained phase contrast magnetic resonance imaging |
US8301224B2 (en) | 2008-10-09 | 2012-10-30 | Siemens Aktiengesellschaft | System and method for automatic, non-invasive diagnosis of pulmonary hypertension and measurement of mean pulmonary arterial pressure |
EP2355702A1 (en) | 2008-11-13 | 2011-08-17 | Avid Radiopharmaceuticals, Inc. | Histogram-based analysis method for the detection and diagnosis of neurodegenerative diseases |
US20100158332A1 (en) | 2008-12-22 | 2010-06-24 | Dan Rico | Method and system of automated detection of lesions in medical images |
US8457373B2 (en) | 2009-03-16 | 2013-06-04 | Siemens Aktiengesellschaft | System and method for robust 2D-3D image registration |
US10303986B2 (en) | 2009-04-07 | 2019-05-28 | Kayvan Najarian | Automated measurement of brain injury indices using brain CT images, injury data, and machine learning |
US8527251B2 (en) | 2009-05-01 | 2013-09-03 | Siemens Aktiengesellschaft | Method and system for multi-component heart and aorta modeling for decision support in cardiac disease |
JP4639347B1 (en) | 2009-11-20 | 2011-02-23 | Boku-Undo Co., Ltd. | Writing instrument
US20110182493A1 (en) | 2010-01-25 | 2011-07-28 | Martin Huber | Method and a system for image annotation |
US8805048B2 (en) | 2010-04-01 | 2014-08-12 | Mark Batesole | Method and system for orthodontic diagnosis |
JP5926728B2 (en) * | 2010-07-26 | 2016-05-25 | Kjaya, LLC | Visualization adapted for direct use by physicians
KR101883258B1 (en) | 2010-08-13 | 2018-07-31 | Smith & Nephew, Inc. | Detection of anatomical landmarks
US8897519B2 (en) | 2010-09-28 | 2014-11-25 | Siemens Aktiengesellschaft | System and method for background phase correction for phase contrast flow images |
US8374414B2 (en) | 2010-11-05 | 2013-02-12 | The Hong Kong Polytechnic University | Method and system for detecting ischemic stroke |
CN103262120B (en) | 2010-12-09 | 2017-03-22 | Koninklijke Philips Electronics N.V. | Volumetric rendering of image data
US8600476B2 (en) | 2011-04-21 | 2013-12-03 | Siemens Aktiengesellschaft | Patient support table control system for use in MR imaging |
WO2013001410A2 (en) | 2011-06-27 | 2013-01-03 | Koninklijke Philips Electronics N.V. | Anatomical tagging of findings in image data of serial studies |
US9513357B2 (en) | 2011-07-07 | 2016-12-06 | The Board Of Trustees Of The Leland Stanford Junior University | Comprehensive cardiovascular analysis with volumetric phase-contrast MRI |
US9585568B2 (en) | 2011-09-11 | 2017-03-07 | Steven D. Wolff | Noninvasive methods for determining the pressure gradient across a heart valve without using velocity data at the valve orifice |
US8837800B1 (en) | 2011-10-28 | 2014-09-16 | The Board Of Trustees Of The Leland Stanford Junior University | Automated detection of arterial input function and/or venous output function voxels in medical imaging |
US8682049B2 (en) | 2012-02-14 | 2014-03-25 | Terarecon, Inc. | Cloud-based medical image processing system with access control |
US9014781B2 (en) | 2012-04-19 | 2015-04-21 | General Electric Company | Systems and methods for magnetic resonance angiography |
US9165360B1 (en) | 2012-09-27 | 2015-10-20 | Zepmed, Llc | Methods, systems, and devices for automated analysis of medical scans |
US9495752B2 (en) | 2012-09-27 | 2016-11-15 | Siemens Product Lifecycle Management Software Inc. | Multi-bone segmentation for 3D computed tomography |
WO2014120953A1 (en) | 2013-01-31 | 2014-08-07 | The Regents Of The University Of California | Method for accurate and robust cardiac motion self-gating in magnetic resonance imaging |
AU2014271202B2 (en) | 2013-05-19 | 2019-12-12 | Commonwealth Scientific And Industrial Research Organisation | A system and method for remote medical diagnosis |
US9741116B2 (en) * | 2013-08-29 | 2017-08-22 | Mayo Foundation For Medical Education And Research | System and method for boundary classification and automatic polyp detection |
US9406142B2 (en) | 2013-10-08 | 2016-08-02 | The Trustees Of The University Of Pennsylvania | Fully automatic image segmentation of heart valves using multi-atlas label fusion and deformable medial modeling |
US9700219B2 (en) | 2013-10-17 | 2017-07-11 | Siemens Healthcare Gmbh | Method and system for machine learning based assessment of fractional flow reserve |
US9668699B2 (en) | 2013-10-17 | 2017-06-06 | Siemens Healthcare Gmbh | Method and system for anatomical object detection using marginal space deep neural networks |
US20150139517A1 (en) | 2013-11-15 | 2015-05-21 | University Of Iowa Research Foundation | Methods And Systems For Calibration |
JP6301133B2 (en) | 2014-01-14 | 2018-03-28 | Canon Medical Systems Corporation | Magnetic resonance imaging system
WO2015109254A2 (en) | 2014-01-17 | 2015-07-23 | Morpheus Medical, Inc. | Apparatus, methods and articles for four dimensional (4d) flow magnetic resonance imaging |
US9430829B2 (en) | 2014-01-30 | 2016-08-30 | Case Western Reserve University | Automatic detection of mitosis using handcrafted and convolutional neural network features |
KR20150098119A (en) | 2014-02-19 | 2015-08-27 | Samsung Electronics Co., Ltd. | System and method for removing false positive lesion candidate in medical image
US20150324690A1 (en) | 2014-05-08 | 2015-11-12 | Microsoft Corporation | Deep Learning Training System |
US9928588B2 (en) | 2014-05-15 | 2018-03-27 | Brainlab Ag | Indication-dependent display of a medical image |
KR20160010157A (en) | 2014-07-18 | 2016-01-27 | Samsung Electronics Co., Ltd. | Apparatus and method for 3D computer-aided diagnosis based on dimension reduction
US9707400B2 (en) | 2014-08-15 | 2017-07-18 | Medtronic, Inc. | Systems, methods, and interfaces for configuring cardiac therapy |
US20160203263A1 (en) | 2015-01-08 | 2016-07-14 | Imbio | Systems and methods for analyzing medical images and creating a report |
KR101974769B1 (en) | 2015-03-03 | 2019-05-02 | Nantomics, LLC | Ensemble-based research recommendation system and method
US10115194B2 (en) | 2015-04-06 | 2018-10-30 | IDx, LLC | Systems and methods for feature detection in retinal images |
US9633306B2 (en) | 2015-05-07 | 2017-04-25 | Siemens Healthcare Gmbh | Method and system for approximating deep neural networks for anatomical object detection |
US10176408B2 (en) | 2015-08-14 | 2019-01-08 | Elucid Bioimaging Inc. | Systems and methods for analyzing pathologies utilizing quantitative imaging |
JP6450053B2 (en) | 2015-08-15 | 2019-01-09 | Salesforce.com, Inc. | Three-dimensional (3D) convolution with 3D batch normalization
US9569736B1 (en) | 2015-09-16 | 2017-02-14 | Siemens Healthcare Gmbh | Intelligent medical image landmark detection |
US9792531B2 (en) | 2015-09-16 | 2017-10-17 | Siemens Healthcare Gmbh | Intelligent multi-scale medical image landmark detection |
US10192129B2 (en) | 2015-11-18 | 2019-01-29 | Adobe Systems Incorporated | Utilizing interactive deep learning to select objects in digital visual media |
JP7110098B2 (en) | 2015-12-18 | 2022-08-01 | The Regents of the University of California | Interpretation and quantification of features of urgency in cranial computed tomography
US10163028B2 (en) | 2016-01-25 | 2018-12-25 | Koninklijke Philips N.V. | Image data pre-processing |
DE102016204225B3 (en) | 2016-03-15 | 2017-07-20 | Friedrich-Alexander-Universität Erlangen-Nürnberg | Method for automatic recognition of anatomical landmarks and device |
CN105825509A (en) | 2016-03-17 | 2016-08-03 | 电子科技大学 | Cerebral vessel segmentation method based on 3D convolutional neural network |
US9886758B2 (en) | 2016-03-31 | 2018-02-06 | International Business Machines Corporation | Annotation of skin image using learned feature representation |
CN205665697U (en) | 2016-04-05 | 2016-10-26 | Chen Jinmin | Medical video recognition and diagnosis system based on cellular neural networks or convolutional neural networks
CN106127725B (en) | 2016-05-16 | 2019-01-22 | Beijing University of Technology | Millimeter-wave radar cloud image segmentation method based on multi-resolution CNN
CN106096632A (en) | 2016-06-02 | 2016-11-09 | Harbin Institute of Technology | Ventricular function index prediction method based on deep learning and MRI images
CN106096616A (en) | 2016-06-08 | 2016-11-09 | West China Hospital, Sichuan University | Magnetic resonance image feature extraction and classification method based on deep learning
US9589374B1 (en) | 2016-08-01 | 2017-03-07 | 12 Sigma Technologies | Computer-aided diagnosis system for medical images using deep convolutional neural networks |
US10582907B2 (en) * | 2016-10-31 | 2020-03-10 | Siemens Healthcare Gmbh | Deep learning based bone removal in computed tomography angiography |
US10600184B2 (en) | 2017-01-27 | 2020-03-24 | Arterys Inc. | Automated segmentation utilizing fully convolutional networks |
US10373313B2 (en) | 2017-03-02 | 2019-08-06 | Siemens Healthcare Gmbh | Spatially consistent multi-scale anatomical landmark detection in incomplete 3D-CT data |
BR112019022447A2 (en) | 2017-04-27 | 2020-06-09 | Bober Miroslaw | system and method for automated funduscopic image analysis |
WO2018222755A1 (en) | 2017-05-30 | 2018-12-06 | Arterys Inc. | Automated lesion detection, segmentation, and longitudinal identification |
CN107341265B (en) | 2017-07-20 | 2020-08-14 | Northeastern University | Breast image retrieval system and method fusing deep features
US11551353B2 (en) | 2017-11-22 | 2023-01-10 | Arterys Inc. | Content based image retrieval for lesion analysis |
US10902591B2 (en) | 2018-02-09 | 2021-01-26 | Case Western Reserve University | Predicting pathological complete response to neoadjuvant chemotherapy from baseline breast dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) |
KR101952887B1 (en) | 2018-07-27 | 2019-06-11 | Kim Ye-hyun | Method for predicting anatomical landmarks and device for predicting anatomical landmarks using the same
KR102575569B1 (en) | 2018-08-13 | 2023-09-07 | Suzhou Lekin Semiconductor Co., Ltd. | Semiconductor device
JP7125312B2 (en) | 2018-09-07 | 2022-08-24 | Fujifilm Healthcare Corporation | Magnetic resonance imaging apparatus, image processing apparatus, and image processing method
US10646156B1 (en) | 2019-06-14 | 2020-05-12 | Cycle Clarity, LLC | Adaptive image processing in assisted reproductive imaging modalities |
- 2018
- 2018-01-25 US US15/879,732 patent/US10600184B2/en active Active
- 2018-01-25 CN CN201880020558.2A patent/CN110475505B/en not_active Expired - Fee Related
- 2018-01-25 US US15/879,742 patent/US10902598B2/en active Active
- 2018-01-25 JP JP2019540646A patent/JP2020510463A/en active Pending
- 2018-01-25 EP EP18745114.1A patent/EP3573520A4/en not_active Withdrawn
- 2018-01-25 WO PCT/US2018/015222 patent/WO2018140596A2/en unknown
- 2020
- 2020-02-25 US US16/800,922 patent/US20200193603A1/en not_active Abandoned
Non-Patent Citations (2)
Title |
---|
ARXIV.ORG: "A Deep Neural Network Architecture for", Cornell University Library * |
STIAAN WIEHMAN et al.: "Semantic Segmentation of Bioimages Using Convolutional Neural Networks", IEEE * |
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110991408A (en) * | 2019-12-19 | 2020-04-10 | Beihang University | Method and device for white matter hyperintensity segmentation based on deep learning
CN110991408B (en) * | 2019-12-19 | 2022-09-06 | Beihang University | Method and device for white matter hyperintensity segmentation based on deep learning
CN110739050B (en) * | 2019-12-20 | 2020-07-28 | Shenzhen University | Left ventricle full-parameter and confidence quantification method
CN110739050A (en) * | 2019-12-20 | 2020-01-31 | Shenzhen University | Left ventricle full-parameter and confidence quantification method
CN113327224A (en) * | 2020-02-28 | 2021-08-31 | GE Precision Healthcare LLC | System and method for automatic field of view (FOV) bounding
CN111401373A (en) * | 2020-03-04 | 2020-07-10 | Wuhan University | Efficient semantic segmentation method based on grouped asymmetric convolution
CN111401373B (en) * | 2020-03-04 | 2022-02-15 | Wuhan University | Efficient semantic segmentation method based on grouped asymmetric convolution
TWI798655B (en) * | 2020-03-09 | 2023-04-11 | 美商奈米創尼克影像公司 | Defect detection system |
US11416711B2 (en) | 2020-03-09 | 2022-08-16 | Nanotronics Imaging, Inc. | Defect detection system |
WO2021183473A1 (en) * | 2020-03-09 | 2021-09-16 | Nanotronics Imaging, Inc. | Defect detection system |
WO2021191692A1 (en) * | 2020-03-27 | 2021-09-30 | International Business Machines Corporation | Annotation of digital images for machine learning |
US11205287B2 (en) | 2020-03-27 | 2021-12-21 | International Business Machines Corporation | Annotation of digital images for machine learning |
CN111466894A (en) * | 2020-04-07 | 2020-07-31 | Shanghai Jinxing Biotechnology Co., Ltd. | Ejection fraction calculation method and system based on deep learning
CN111666972A (en) * | 2020-04-28 | 2020-09-15 | Tsinghua University | Liver case image classification method and system based on deep neural network
CN111739000A (en) * | 2020-06-16 | 2020-10-02 | Shandong University | System and device for improving left ventricle segmentation accuracy of multiple cardiac views
CN111739000B (en) * | 2020-06-16 | 2022-09-13 | Shandong University | System and device for improving left ventricle segmentation accuracy of multiple cardiac views
CN111928794A (en) * | 2020-08-04 | 2020-11-13 | Beijing Institute of Technology | Closed-fringe-compatible single-interferogram phase retrieval method and device based on deep learning
CN111928794B (en) * | 2020-08-04 | 2022-03-11 | Beijing Institute of Technology | Closed-fringe-compatible single-interferogram phase retrieval method and device based on deep learning
CN111898211A (en) * | 2020-08-07 | 2020-11-06 | Jilin University | Intelligent vehicle speed decision method based on deep reinforcement learning and simulation method thereof
CN112085162A (en) * | 2020-08-12 | 2020-12-15 | Beijing Normal University | Magnetic resonance brain tissue segmentation method and device based on neural network, computing equipment and storage medium
CN112085162B (en) * | 2020-08-12 | 2024-02-09 | Beijing Normal University | Magnetic resonance brain tissue segmentation method and device based on neural network, computing equipment and storage medium
CN111968112A (en) * | 2020-09-02 | 2020-11-20 | Guangzhou Haizhao Yinfeng Information Technology Co., Ltd. | CT three-dimensional positioning image acquisition method and device and computer equipment
CN111968112B (en) * | 2020-09-02 | 2023-12-26 | Guangzhou Haizhao Yinfeng Information Technology Co., Ltd. | CT three-dimensional positioning image acquisition method and device and computer equipment
TWI792055B (en) * | 2020-09-25 | 2023-02-11 | National Chin-Yi University of Technology | Establishing method of echocardiography judging model with 3D deep learning, echocardiography judging system with 3D deep learning and method thereof
CN112734770A (en) * | 2021-01-06 | 2021-04-30 | Second Affiliated Hospital of Army Medical University of the Chinese People's Liberation Army | Multi-sequence fusion segmentation method for cardiac nuclear magnetic images based on multilayer cascade
CN112734770B (en) * | 2021-01-06 | 2022-11-25 | Second Affiliated Hospital of Army Medical University of the Chinese People's Liberation Army | Multi-sequence fusion segmentation method for cardiac nuclear magnetic images based on multilayer cascade
CN112932535A (en) * | 2021-02-01 | 2021-06-11 | Du Guoqing | Medical image segmentation and detection method
CN112932535B (en) * | 2021-02-01 | 2022-10-18 | Du Guoqing | Medical image segmentation and detection method
CN112785592A (en) * | 2021-03-10 | 2021-05-11 | Hebei University of Technology | Medical image depth segmentation network based on multiple dilated paths
CN113674235B (en) * | 2021-08-15 | 2023-10-10 | Shanghai Lixin Software Technology Co., Ltd. | Low-cost lithography hotspot detection method based on active entropy sampling and model calibration
CN113674235A (en) * | 2021-08-15 | 2021-11-19 | Shanghai Lixin Software Technology Co., Ltd. | Low-cost lithography hotspot detection method based on active entropy sampling and model calibration
CN113838001A (en) * | 2021-08-24 | 2021-12-24 | Inner Mongolia Electric Power Research Institute | Ultrasonic full-focus image defect processing method and device based on kernel density estimation
CN113838001B (en) * | 2021-08-24 | 2024-02-13 | Inner Mongolia Electric Power Research Institute | Ultrasonic full-focus image defect processing method and device based on kernel density estimation
CN113838068A (en) * | 2021-09-27 | 2021-12-24 | Shenzhen Keya Medical Technology Co., Ltd. | Method, apparatus and storage medium for automatic segmentation of myocardial segments
WO2023226793A1 (en) * | 2022-05-23 | 2023-11-30 | Shenzhen MicroPort Xinsuanzi Medical Technology Co., Ltd. | Mitral valve opening distance detection method, and electronic device and storage medium
Also Published As
Publication number | Publication date |
---|---|
US20200193603A1 (en) | 2020-06-18 |
EP3573520A2 (en) | 2019-12-04 |
WO2018140596A2 (en) | 2018-08-02 |
US10600184B2 (en) | 2020-03-24 |
JP2020510463A (en) | 2020-04-09 |
WO2018140596A3 (en) | 2018-09-07 |
US10902598B2 (en) | 2021-01-26 |
US20180218497A1 (en) | 2018-08-02 |
US20180218502A1 (en) | 2018-08-02 |
EP3573520A4 (en) | 2020-11-04 |
CN110475505B (en) | 2022-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110475505A (en) | Automatic segmentation utilizing fully convolutional networks | |
CN108603922A (en) | Automated cardiac volume segmentation | |
Khan et al. | Deep neural architectures for medical image semantic segmentation | |
Zotti et al. | Convolutional neural network with shape prior applied to cardiac MRI segmentation | |
US9968257B1 (en) | Volumetric quantification of cardiovascular structures from medical imaging | |
Khened et al. | Densely connected fully convolutional network for short-axis cardiac cine MR image segmentation and heart diagnosis using random forest | |
US20220028085A1 (en) | Method and system for providing an at least 3-dimensional medical image segmentation of a structure of an internal organ | |
Biffi et al. | Explainable anatomical shape analysis through deep hierarchical generative models | |
Yang et al. | A deep learning segmentation approach in free‐breathing real‐time cardiac magnetic resonance imaging | |
Wang et al. | MMNet: A multi-scale deep learning network for the left ventricular segmentation of cardiac MRI images | |
Yan et al. | Cine MRI analysis by deep learning of optical flow: Adding the temporal dimension | |
Shoaib et al. | An overview of deep learning methods for left ventricle segmentation | |
Azarmehr et al. | Neural architecture search of echocardiography view classifiers | |
Laumer et al. | Weakly supervised inference of personalized heart meshes based on echocardiography videos | |
Li et al. | Cardiac MRI segmentation with focal loss constrained deep residual networks | |
Baumgartner et al. | Fully convolutional networks in medical imaging: applications to image enhancement and recognition | |
Li et al. | Medical image segmentation with generative adversarial semi-supervised network | |
CN117649422B (en) | Training method of multi-modal image segmentation model and multi-modal image segmentation method | |
Hao et al. | MUE-CoT: multi-scale uncertainty entropy-aware co-training framework for left atrial segmentation | |
Khan | A Novel Deep Learning-Based Framework for Context-Aware Semantic Segmentation in Medical Imaging | |
US20230196557A1 (en) | Late Gadolinium Enhancement Analysis for Magnetic Resonance Imaging | |
Lane | A computer vision pipeline for fully automated echocardiogram interpretation | |
Shi | Deep learning in sequential data analysis | |
Galati | Cardiac Image Segmentation: towards better reliability and generalization. | |
Jaffré | Deep learning-based segmentation of the aorta from dynamic 2D magnetic resonance images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20220405 ||