CN111739000A - System and device for improving left ventricle segmentation accuracy of multiple cardiac views - Google Patents
- Publication number
- CN111739000A (application CN202010547296.4A)
- Authority
- CN
- China
- Prior art keywords
- segmentation
- training
- data set
- data
- improving
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- All classifications fall under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
- G06T7/0012—Image analysis; inspection of images; biomedical image inspection
- G06T7/10—Segmentation; edge detection
- G06T2207/10132—Image acquisition modality: ultrasound image
- G06T2207/20081—Special algorithmic details: training; learning
- G06T2207/20084—Special algorithmic details: artificial neural networks [ANN]
- G06T2207/30048—Subject of image: biomedical image processing; heart; cardiac
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Image Processing (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a deep-learning-based system and device for improving the accuracy of left ventricle segmentation across multiple cardiac views, comprising: a data acquisition module configured to acquire echocardiogram picture data from a plurality of different views to form an original image data set, and to acquire an echocardiogram to be processed as data to be segmented; a preprocessing module configured to preprocess the original image data set into an experimental data set; a training module configured to construct a deep neural network training model, input the experimental data set into the training model for training, and, when the loss function value in the training model no longer decreases, stop training and save the model parameters; and a data processing module configured to input the echocardiogram to be processed into the trained model with the saved parameters to obtain the segmentation result for the cardiac endocardium and epicardium. The system improves the training accuracy of image processing for different cardiac chambers and thereby improves the cardiac view segmentation accuracy.
Description
Technical Field
The invention belongs to the technical field of medical detection, and particularly relates to a deep-learning-based system and device for improving left ventricle segmentation accuracy across multiple cardiac views.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With the development of medical technology, a wide variety of medical image data is generated, and how to use these data correctly, rapidly, and to the fullest extent for disease diagnosis has become a major research focus.
Machine learning techniques enable researchers to develop complex models that classify or predict various abnormalities and diseases, or that identify and segment medical lesions, and these techniques have gradually matured. Deep learning is a newer branch of machine learning, motivated by building neural networks that model the structure of the human brain and that interpret data by simulating its mechanisms. In recent years, therefore, more and more researchers have applied such techniques to pattern recognition, classification, and segmentation in medical image processing.
In clinical applications, echocardiography is an important tool for physicians to assess the condition of the heart, and features such as the motion of the left ventricle in an echocardiogram are a primary basis for diagnosing heart disease. By segmenting the left ventricle, important clinical indicators such as the ejection fraction can be calculated. Clinical ultrasound diagnosis mainly uses the apical two-chamber, apical three-chamber, and apical four-chamber views of the echocardiogram, all of which contain complete left ventricle information; however, because the positions probed by the ultrasound transducer differ, the shape of the left ventricle differs across views. Echocardiograms also contain considerable noise, and existing segmentation algorithms cannot segment the left ventricle accurately. It is therefore important to provide a means of improving the segmentation accuracy of this anatomical structure: doing so would greatly reduce the physician's workload and improve diagnostic accuracy.
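The ejection fraction mentioned above is derived from the end-diastolic and end-systolic left-ventricular volumes. As a minimal sketch of that standard clinical formula (the formula is common clinical knowledge, not something claimed by the patent):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction (%) from end-diastolic (EDV) and
    end-systolic (ESV) left-ventricular volumes in millilitres."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    # fraction of blood ejected per beat, expressed as a percentage
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```

For example, an EDV of 120 ml and an ESV of 50 ml gives an ejection fraction of roughly 58%, within the normal clinical range.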
Disclosure of Invention
To solve the above problems, the invention provides a left ventricular myocardium segmentation system and device based on a deep learning segmentation method.
In a first aspect, the present invention provides a system for improving accuracy of left ventricle segmentation of multiple cardiac views based on deep learning, including:
a data acquisition module configured to: acquiring picture data of the echocardiograms of a plurality of different views to form an original image data set; acquiring an echocardiogram to be processed as data to be segmented;
a pre-processing module configured to: preprocessing an original image data set to form an experimental data set;
a training module configured to: construct a deep neural network training model, input the experimental data set into the training model for training, and, when the loss function value in the training model no longer decreases, stop training and save the model parameters;
a data processing module configured to: input the data to be segmented into the trained model with the saved parameters to obtain the segmentation result for the cardiac endocardium and epicardium.
In a second aspect, the invention further provides a device for improving left ventricle segmentation accuracy of multiple cardiac views based on deep learning, comprising: a RetinaNet network and the system for improving cardiac view segmentation accuracy described in the first aspect. The data to be segmented are input into the RetinaNet network to obtain view identification results and left ventricle detection results for the different cardiac views, and the detection results are then input into the segmentation network of the system for segmentation.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention provides a deep-learning-based left ventricular myocardium segmentation system for different cardiac views. Using the preprocessing module and the training module to build the experimental data set and process the data to be segmented, it segments the endocardium and epicardium of the myocardium automatically, without manual delineation by a physician, and thereby reduces the physician's workload.
2. The training module combines segmentation with detection, improving the training accuracy of image processing for different ventricles and hence the cardiac view segmentation accuracy.
3. The preprocessing module reduces the influence of noise in the echocardiogram, and, through the trained model, the segmentation network can segment the left ventricle accurately, improving segmentation precision.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a schematic illustration of the formation of an experimental data set in example 1 of the present invention;
fig. 2 is a schematic diagram of a system for improving accuracy of left ventricle segmentation for multiple cardiac views based on deep learning according to embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of an apparatus for improving the accuracy of left ventricle segmentation of multiple cardiac views based on deep learning according to the present invention.
Detailed Description
the invention is further described with reference to the following figures and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example 1
In order to solve the above problems, as shown in fig. 1-3, the present invention provides a left ventricular myocardium segmentation system and device based on a deep learning segmentation method, and the system can automatically segment left ventricular myocardium under different views.
A system for improving accuracy of left ventricular segmentation for multiple cardiac views based on deep learning, comprising:
a data acquisition module configured to: acquiring picture data of the echocardiograms of a plurality of different views to form an original image data set; acquiring an echocardiogram to be processed as data to be segmented;
a pre-processing module configured to: preprocessing an original image data set to form an experimental data set;
a training module configured to: construct a deep neural network training model, input the experimental data set into the training model for training, and, when the loss function value in the training model no longer decreases, stop training and save the model parameters;
a data processing module configured to: input the data to be segmented into the trained model with the saved parameters to obtain the segmentation result for the cardiac endocardium and epicardium.
Further, the deep neural network training model is a deep convolutional neural network comprising a plurality of convolution kernels; its parameters include the parameter values of those convolution kernels, and the loss function is a pixel-wise cross-entropy loss function.
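The pixel-wise cross-entropy loss named here can be sketched directly in NumPy; this is a hedged illustration of the loss itself, not of the patent's network or training code:

```python
import numpy as np

def pixelwise_cross_entropy(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean per-pixel cross-entropy.

    probs:  (H, W, C) softmax outputs, summing to 1 over the C classes.
    labels: (H, W) integer class map (e.g. 0 = background, 1 = myocardium).
    """
    h, w = labels.shape
    eps = 1e-12  # guard against log(0)
    # pick the predicted probability of the true class at every pixel
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(p_true + eps).mean())
```

In a real training loop this value is what is monitored for the "loss no longer decreases" stopping criterion.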
Further, the training model comprises a segmentation network, and the experimental data set is input into the segmentation network for training; the segmentation network comprises a potential expression extraction module, a full convolution connection module and a segmentation sub-network module which are sequentially in communication connection.
Further, the full convolution connecting module is a neural network composed of a plurality of convolution kernels, and the parameters of the neural network include parameter values of the plurality of convolution kernels.
Furthermore, the latent representation (potential expression) extraction module has two input ports: one connected to the experimental data set and one connected to the ground-truth segmentation labels of the images to be trained. The sparse features extracted by this module can represent the input information. The images to be trained are echocardiograms annotated with segmentation labels.
Further, the segmentation sub-network module adopts FCN, UNet, or SegNet. The SegNet segmentation network comprises an encoder and a decoder; the encoder network uses the first 13 layers of the VGG network, and each encoder layer corresponds to one decoder layer. The encoder consists of several convolutional layers, batch-normalization layers, ReLU activations, and pooling layers; the pooling layers use 2x2 max-pooling, which causes a substantial loss of boundary detail.
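The effect of 2x2 max-pooling on boundary detail can be demonstrated with a toy NumPy sketch, independent of any particular SegNet implementation:

```python
import numpy as np

def max_pool_2x2(x: np.ndarray) -> np.ndarray:
    """Non-overlapping 2x2 max-pooling; H and W must be even."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A one-pixel-wide boundary survives pooling only as a coarse block:
# this is the loss of boundary detail the SegNet encoder suffers.
img = np.zeros((4, 4))
img[1, :] = 1.0           # thin horizontal edge
print(max_pool_2x2(img))  # -> [[1. 1.] [0. 0.]]: the edge is now 2 px thick
```

SegNet mitigates this by storing the max-pooling indices in the encoder and reusing them for upsampling in the decoder.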
Further, the preprocessing the raw image data set includes:
(1) removing patient, hospital, and scanning information from the original image data set, retaining only the fan-shaped area of the ultrasound image;
(2) resizing the image while maintaining the aspect ratio of the original ultrasound image.
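Step (2) can be sketched as a letterbox-style resize in NumPy; the 256-pixel target size and nearest-neighbour sampling are assumptions for illustration, not values fixed by the patent:

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 256) -> np.ndarray:
    """Resize a grayscale image to size x size while keeping its aspect
    ratio, padding the remainder with zeros (nearest-neighbour sampling).
    The target size is an assumed hyperparameter."""
    h, w = img.shape
    scale = size / max(h, w)
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    # map each output pixel back to its nearest source pixel
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows[:, None], cols[None, :]]
    out = np.zeros((size, size), dtype=img.dtype)
    out[:nh, :nw] = resized  # zero padding fills the rest
    return out
```

Padding rather than stretching is what preserves the original aspect ratio, so the ventricle's shape is not distorted before segmentation.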
Further, the image data are apical two-chamber, apical three-chamber, and apical four-chamber image data.
Further, forming the original image data set includes: labeling the picture data and combining the picture data of the different echocardiograms.
Further, the labeling is: the ventricular epicardium in the different views of the exported pictures is annotated and recorded in the form "picture name, view type, ventricle contour coordinates".
Further, labeling the ventricular epicardium means delineating the epicardial contours in the ultrasound images of the different views.
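A record in the form "picture name, view type, ventricle contour coordinates" might be parsed as follows; the comma delimiter and space-separated x/y pairs are assumptions, since the patent only fixes the field order:

```python
def parse_label(line: str):
    """Parse one annotation record of the assumed form
    'picture name, view type, x1 y1 x2 y2 ...'."""
    name, view, coords = (part.strip() for part in line.split(",", 2))
    nums = [float(v) for v in coords.split()]
    if len(nums) % 2:
        raise ValueError("contour coordinates must come in x/y pairs")
    # pair up consecutive numbers into (x, y) contour points
    contour = list(zip(nums[0::2], nums[1::2]))
    return name, view, contour
```

For example, `parse_label("p001.png, apical four-chamber, 10 20 30 40")` yields the picture name, the view type, and the contour `[(10.0, 20.0), (30.0, 40.0)]`.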
Example 2
A deep learning based cardiac multiview left ventricular myocardium segmentation system, comprising:
an acquisition module: taking the patient as the unit, acquire apical two-chamber and apical four-chamber echocardiogram medical images, annotate the left ventricular myocardium contours in the different views, and assemble them into an original image data set;
a preprocessing module: preprocessing the data set to obtain an experimental data set;
a training module: input the experimental data set into a deep-learning RetinaNet network to obtain cardiac view identification results and left ventricle detection results, and input the detection results into the segmentation network to obtain the segmentation result.
Further, the method for producing the original image data set specifically comprises:
taking the patient as the unit, first export apical two-chamber, apical three-chamber, and apical four-chamber pictures from the echocardiograms, and delineate the endocardial and epicardial contours of the left ventricle in the ultrasound images of the different views;
combine the picture information of multiple patients into an img file, and form a label file from the annotation information in the form "picture name, view type, left ventricle contour coordinates"; the img file and the label file together constitute the original image data set.
The segmentation network is specifically as follows:
the segmentation network mainly comprises three sub-modules, namely latent representation extraction, full convolution connection, and a segmentation sub-network, connected in a cascade structure;
the latent representation extraction module has two inputs in the training stage, connected respectively to the original input data and to the annotated segmentation images;
the full convolution connection module cascades the input image with the latent representation vector;
the segmentation sub-network may reuse an existing image segmentation network.
Specifically, apical two-chamber and apical four-chamber echocardiogram medical images are collected per patient, and the endocardial and epicardial contours of the left ventricle in the different views are annotated to produce the original image data set.
And preprocessing the original image data set to obtain an experimental data set.
The experimental data set is input into the segmentation network for training; when the loss function value of the training model no longer decreases, model training stops and the model parameters are saved.
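The stopping criterion "loss no longer decreases" is typically implemented with a patience window; a minimal sketch, in which the patience value and minimum-improvement threshold are assumptions:

```python
class EarlyStopper:
    """Stop training once the loss has not improved for `patience`
    consecutive epochs, mirroring the 'loss no longer decreases'
    criterion above (the patience window is an assumed detail)."""

    def __init__(self, patience: int = 3, min_delta: float = 0.0):
        self.patience, self.min_delta = patience, min_delta
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, loss: float) -> bool:
        """Record one epoch's loss; return True when training should stop."""
        if loss < self.best - self.min_delta:
            self.best, self.bad_epochs = loss, 0  # improvement: save parameters here
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

Saving the model parameters at each improvement, as the patent describes, means the checkpoint on disk is always the best model seen so far.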
The device also comprises a data processing module: the ultrasound image to be processed is input into the training model, and the saved model parameters are loaded to obtain the segmentation result for the left ventricular endocardium and epicardium.
First, echocardiogram images are acquired with the corresponding equipment, one echocardiogram image per subject, with the support of hospital data. After acquisition, the collected images are processed in a preprocessing stage to produce the experimental data set.
The process of constructing the experimental data set is shown in fig. 1 and comprises: three parts of data acquisition, data annotation and data preprocessing;
the data acquisition comprises acquiring an echocardiogram image of a patient, and exporting data sets of the two chambers of the apex, the three chambers of the apex and the four chambers of the apex into a picture format by taking the patient as a unit.
Data annotation manual annotation of left ventricular epicardium in different views in the derived picture was recorded as "picture name view type left ventricular contour coordinates".
The data preprocessing mainly comprises the following steps:
(1) removing irrelevant information, retaining only the fan-shaped area of the ultrasound image;
(2) resizing the image while maintaining the aspect ratio of the original ultrasound image.
and inputting the experimental data set into a RetinaNet network to obtain different heart view identification results and left ventricle detection results, and inputting the detection results into a segmentation network for segmentation, so that the segmentation precision is improved.
The invention provides a deep-learning-based left ventricular myocardium segmentation system for different cardiac views, which segments the endocardium and epicardium of the myocardium automatically, without manual delineation by a physician, thereby reducing the physician's workload.
In other embodiments, the invention also provides:
an apparatus for improving left ventricle segmentation accuracy of multiple cardiac views based on deep learning, comprising a RetinaNet network and the system for improving cardiac view segmentation accuracy described in the above embodiments. As shown in FIG. 3, the data to be segmented are input into the RetinaNet network to obtain view identification results and left ventricle detection results for the different cardiac views; the detection results are then input into the segmentation network of the system for segmentation, achieving high-accuracy segmentation of the left ventricular endocardium and epicardium.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, they do not limit the scope of the invention; those skilled in the art will understand that various modifications and variations can be made, without inventive effort, on the basis of the technical solution of the invention.
Claims (10)
1. A system for improving accuracy of left ventricular segmentation in multiple cardiac views based on deep learning, comprising:
a data acquisition module configured to: acquiring picture data of the echocardiograms of a plurality of different views to form an original image data set; acquiring an echocardiogram to be processed as data to be segmented;
a pre-processing module configured to: preprocessing an original image data set to form an experimental data set;
a training module configured to: build a training model, input an experimental data set into the training model for training, and, when the loss function value in the training model no longer decreases, stop training and save the model parameters;
a data processing module configured to: input the data to be segmented into the trained model with the saved parameters to obtain the segmentation result for the cardiac endocardium and epicardium.
2. The system for improving accuracy of segmentation of a cardiac view according to claim 1, wherein the training model includes a segmentation network into which the experimental data set is input for training; the segmentation network comprises a potential expression extraction module, a full convolution connection module and a segmentation sub-network module which are sequentially in communication connection.
3. The system for improving segmentation accuracy of cardiac views according to claim 2, wherein the potential representation extraction module has two input ports, respectively a port for connecting to the data to be segmented and a port for connecting to the experimental data set.
4. The system of claim 2, wherein the full convolution concatenation module is a neural network comprising a plurality of convolution kernels, and wherein the parameters comprise parameter values of the plurality of convolution kernels.
5. The system for improving accuracy of cardiac view segmentation as set forth in claim 1, wherein the pre-processing of the raw image dataset includes:
(1) removing patient, hospital, and scanning information from the original image data set, retaining only the fan-shaped area of the ultrasound image;
(2) resizing the image while maintaining the aspect ratio of the original ultrasound image.
6. The system for improving accuracy of cardiac view segmentation as set forth in claim 1, wherein the forming a raw image dataset includes: and marking the picture data, and combining the picture data of different echocardiograms.
7. The system for improving accuracy of cardiac view segmentation as set forth in claim 6, wherein labeling the image data includes: the ventricular epicardium in the different views in the derived picture is labeled and recorded in the form of "picture name, view type, ventricular contour coordinates".
8. The system for improving accuracy of cardiac view segmentation as set forth in claim 6, wherein labeling the image data includes: the epicardial contours are delineated in the ultrasound images of the different views.
9. The system for improving accuracy of cardiac view segmentation as set forth in claim 1, wherein the image data is apical two chamber, apical three chamber, apical four chamber image data.
10. An apparatus for improving accuracy of left ventricle segmentation for multiple cardiac views based on deep learning, comprising: RetinaNet network and a system for improving heart view segmentation accuracy as claimed in any one of claims 1 to 9, inputting data to be segmented into the RetinaNet network to obtain different heart view recognition results and left ventricle detection results, and inputting the detection results into the segmentation network of the system for improving heart view segmentation accuracy to perform segmentation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010547296.4A CN111739000B (en) | 2020-06-16 | 2020-06-16 | System and device for improving left ventricle segmentation accuracy of multiple cardiac views |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010547296.4A CN111739000B (en) | 2020-06-16 | 2020-06-16 | System and device for improving left ventricle segmentation accuracy of multiple cardiac views |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111739000A true CN111739000A (en) | 2020-10-02 |
CN111739000B CN111739000B (en) | 2022-09-13 |
Family
ID=72649367
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010547296.4A Active CN111739000B (en) | 2020-06-16 | 2020-06-16 | System and device for improving left ventricle segmentation accuracy of multiple cardiac views |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111739000B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112932535A (en) * | 2021-02-01 | 2021-06-11 | 杜国庆 | Medical image segmentation and detection method |
CN113298773A (en) * | 2021-05-20 | 2021-08-24 | 山东大学 | Heart view identification and left ventricle detection device and system based on deep learning |
CN113570569A (en) * | 2021-07-26 | 2021-10-29 | 东北大学 | Ultrasonic image-based automatic detection system for chamber interval jitter |
CN113689441A (en) * | 2021-08-30 | 2021-11-23 | 华东师范大学 | DeepLabV3 network-based left ventricle ultrasonic dynamic segmentation method |
CN113762388A (en) * | 2021-09-08 | 2021-12-07 | 山东大学 | Echocardiogram view identification method and system based on deep learning |
CN114004859A (en) * | 2021-11-26 | 2022-02-01 | 山东大学 | Method and system for segmenting echocardiography left atrium map based on multi-view fusion network |
CN114663410A (en) * | 2022-03-31 | 2022-06-24 | 清华大学 | Heart three-dimensional model generation method, device, equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109584254A (en) * | 2019-01-07 | 2019-04-05 | 浙江大学 | A kind of heart left ventricle's dividing method based on the full convolutional neural networks of deep layer |
CN110197492A (en) * | 2019-05-23 | 2019-09-03 | 山东师范大学 | A kind of cardiac MRI left ventricle dividing method and system |
CN110232695A (en) * | 2019-06-18 | 2019-09-13 | 山东师范大学 | Left ventricle image partition method and system based on hybrid mode image |
CN110475505A (en) * | 2017-01-27 | 2019-11-19 | 阿特瑞斯公司 | Utilize the automatic segmentation of full convolutional network |
CN111012377A (en) * | 2019-12-06 | 2020-04-17 | 北京安德医智科技有限公司 | Echocardiogram heart parameter calculation and myocardial strain measurement method and device |
CN111127504A (en) * | 2019-12-28 | 2020-05-08 | 中国科学院深圳先进技术研究院 | Heart medical image segmentation method and system for atrial septal occlusion patient |
- 2020-06-16: CN202010547296.4A patent/CN111739000B/en (active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110475505A (en) * | 2017-01-27 | 2019-11-19 | 阿特瑞斯公司 | Utilize the automatic segmentation of full convolutional network |
CN109584254A (en) * | 2019-01-07 | 2019-04-05 | 浙江大学 | A kind of heart left ventricle's dividing method based on the full convolutional neural networks of deep layer |
CN110197492A (en) * | 2019-05-23 | 2019-09-03 | 山东师范大学 | A kind of cardiac MRI left ventricle dividing method and system |
CN110232695A (en) * | 2019-06-18 | 2019-09-13 | 山东师范大学 | Left ventricle image partition method and system based on hybrid mode image |
CN111012377A (en) * | 2019-12-06 | 2020-04-17 | 北京安德医智科技有限公司 | Echocardiogram heart parameter calculation and myocardial strain measurement method and device |
CN111127504A (en) * | 2019-12-28 | 2020-05-08 | 中国科学院深圳先进技术研究院 | Heart medical image segmentation method and system for atrial septal occlusion patient |
Non-Patent Citations (4)
Title |
---|
HAO CHEN et al.: "Iterative Multi-domain Regularized Deep Learning for Anatomical Structure Detection and Segmentation from Ultrasound Images", Springer *
YUYIN ZHOU et al.: "Semi-Supervised 3D Abdominal Multi-Organ Segmentation via Deep Multi-Planar Co-Training", IEEE *
HOU Jincheng et al.: "Left ventricle segmentation in cardiac CT images based on fully convolutional neural networks", CNKI *
CHEN Jun: "Research on deep-learning-based cardiac image segmentation methods", Information Science and Technology series *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112932535A (en) * | 2021-02-01 | 2021-06-11 | 杜国庆 | Medical image segmentation and detection method |
CN113298773A (en) * | 2021-05-20 | 2021-08-24 | 山东大学 | Heart view identification and left ventricle detection device and system based on deep learning |
CN113570569A (en) * | 2021-07-26 | 2021-10-29 | 东北大学 | Ultrasonic image-based automatic detection system for chamber interval jitter |
CN113570569B (en) * | 2021-07-26 | 2024-04-16 | 东北大学 | Automatic heart chamber interval jitter detection system based on deep learning |
CN113689441A (en) * | 2021-08-30 | 2021-11-23 | 华东师范大学 | DeepLabV3 network-based left ventricle ultrasonic dynamic segmentation method |
CN113762388A (en) * | 2021-09-08 | 2021-12-07 | 山东大学 | Echocardiogram view identification method and system based on deep learning |
CN114004859A (en) * | 2021-11-26 | 2022-02-01 | 山东大学 | Method and system for segmenting echocardiography left atrium map based on multi-view fusion network |
CN114663410A (en) * | 2022-03-31 | 2022-06-24 | 清华大学 | Heart three-dimensional model generation method, device, equipment and storage medium |
CN114663410B (en) * | 2022-03-31 | 2023-04-07 | 清华大学 | Heart three-dimensional model generation method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111739000B (en) | 2022-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111739000B (en) | System and device for improving left ventricle segmentation accuracy of multiple cardiac views | |
US11101033B2 (en) | Medical image aided diagnosis method and system combining image recognition and report editing | |
JP6993371B2 (en) | Computed tomography lung nodule detection method based on deep learning | |
CN110232383A (en) | A kind of lesion image recognition methods and lesion image identifying system based on deep learning model | |
CN108257135A (en) | The assistant diagnosis system of medical image features is understood based on deep learning method | |
CN110338841A (en) | The display processing method and 3-D supersonic imaging method and system of three-dimensional imaging data | |
CN104200465B (en) | The dividing method and device of cardiac three-dimensional image | |
CN110164550B (en) | Congenital heart disease auxiliary diagnosis method based on multi-view cooperative relationship | |
CN111933279A (en) | Intelligent disease diagnosis and treatment system | |
CN110163877A (en) | A kind of method and system of MRI ventricular structure segmentation | |
CN112365980A (en) | Brain tumor multi-target point auxiliary diagnosis and prospective treatment evolution visualization method and system | |
CN109727197A (en) | A kind of medical image super resolution ratio reconstruction method | |
Yong et al. | Automatic ventricular nuclear magnetic resonance image processing with deep learning | |
Sengan et al. | Echocardiographic image segmentation for diagnosing fetal cardiac rhabdomyoma during pregnancy using deep learning | |
CN112164447B (en) | Image processing method, device, equipment and storage medium | |
CN112767305B (en) | Method and device for identifying echocardiography of congenital heart disease | |
CN109620288A (en) | A kind of department of general surgery's abdominal ultrasonic diagnostic device and its application method | |
CN109637629A (en) | A kind of BI-RADS hierarchy model method for building up | |
Smith et al. | Automated torso contour extraction from clinical cardiac MR slices for 3D torso reconstruction | |
CN112508943A (en) | Breast tumor identification method based on ultrasonic image | |
CN112258476A (en) | Echocardiography myocardial abnormal motion mode analysis method, system and storage medium | |
DE102020211945A1 (en) | Method and arrangement for the automatic localization of organ segments in a three-dimensional image | |
CN113298773A (en) | Heart view identification and left ventricle detection device and system based on deep learning | |
CN112914610B (en) | Contrast-enhanced echocardiography wall thickness automatic analysis system and method based on deep learning | |
CN114419309A (en) | High-dimensional feature automatic extraction method based on brain T1-w magnetic resonance image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |