CN109064443B - Multi-model organ segmentation method based on abdominal ultrasonic image - Google Patents
- Publication number
- CN109064443B (application CN201810641415.5A)
- Authority
- CN
- China
- Prior art keywords
- organ
- segmentation
- model
- image
- abdominal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30056—Liver; Hepatic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30084—Kidney; Renal
Abstract
A multi-model organ segmentation method and system based on abdominal ultrasound images, addressing the insufficient accuracy, poor real-time performance, and poor generality of traditional organ segmentation methods for abdominal ultrasound images. The method comprises three steps. Step one: decode the scanned ultrasound video stream into single-frame images and preprocess them with histogram equalization. Step two: perform rough segmentation of the abdominal organs on each single-frame image with an improved U-Net segmentation model. Step three: correct the rough segmentation result by combining multiple models (the classification result of the GoogleNet abdominal organ classification model for the single frame, prior knowledge of medical organ structure, and inter-frame correlation features of the video), achieving fine segmentation of the abdominal organs. The invention uses a multi-model approach to accomplish fine organ segmentation of abdominal ultrasound images with high segmentation accuracy and good real-time performance and generality, provides an implementation platform for an end-to-end intelligent diagnosis system, and can offer effective diagnostic assistance to medical personnel.
Description
(I) technical field
The invention belongs to the field of computer-aided diagnosis and relates to methods for segmenting abdominal organs in ultrasound images, in particular to a multi-model fine organ segmentation method for abdominal ultrasound images that combines an organ segmentation model, an organ classification model, medical prior knowledge, and correlation between consecutive frames.
(II) background of the invention
The abdomen contains several important organs of the human body, and for diagnosing diseases of the abdominal organs, ultrasound imaging has long been a primary screening tool owing to its advantages of being non-invasive and radiation-free and offering fast imaging. An organ segmentation algorithm based on ultrasound images can segment the images scanned by ultrasound equipment in real time and accurately locate each organ, laying the groundwork for intelligent diagnosis.
Current intelligent diagnosis methods diagnose diseases for a specific organ on the premise that the organ type is already known, omitting an important link: intelligent organ recognition. An abdominal ultrasound image may contain many organs, including the liver, gallbladder, pancreas, spleen, left kidney, and right kidney. For large organs such as the liver and kidneys in particular, a doctor may produce several scan sections, for example right oblique liver, right transverse liver, left longitudinal liver, left transverse liver, longitudinal left kidney, transverse right kidney, and longitudinal right kidney. Different section views of the same organ differ greatly, and the speckle noise of ultrasound images degrades image quality, so identification is difficult for an ultrasonographer with insufficient clinical experience. Intelligent organ recognition in abdominal ultrasound images therefore has important clinical significance: only when organ recognition without the participation of a clinical ultrasonographer is achieved can a specific diagnostic procedure be applied to a specific organ, realizing intelligent diagnosis in the true sense.
The invention provides a multi-model organ segmentation method that uses an improved U-Net deep neural network model and a GoogleNet deep neural network model to segment and classify abdominal organs, and combines knowledge of medical organ structure with inter-frame correlation features of the video to build a multi-model organ segmentation system, achieving fine segmentation of abdominal organs in ultrasound images.
(III) disclosure of the invention
The invention aims to provide a multi-model organ segmentation method based on ultrasound images that achieves fine segmentation of the abdominal organs and overcomes the insufficient accuracy, poor real-time performance, and poor generality of traditional organ segmentation methods for abdominal ultrasound images.
The invention is realized by the following technical scheme: the scanned abdominal ultrasound video is decoded frame by frame into abdominal organ ultrasound images, which are then preprocessed; the trained U-Net deep neural network model performs organ segmentation on each ultrasound image to obtain a rough segmentation of the liver, gallbladder, pancreas, spleen, left kidney, and right kidney; the trained GoogleNet deep neural network model classifies each ultrasound image by organ, and the segmentation result is corrected based on the classification result, medical prior knowledge of organ structure, and inter-frame correlation information, yielding the final fine organ segmentation result.
The process of the invention consists of three steps, as follows:
Step one: video decoding and image preprocessing.
First, the ultrasound video stream obtained by scanning is decoded frame by frame into ultrasound images. Ultrasound images are dark overall, and the mean gray level differs between images scanned by different machines; a histogram equalization algorithm therefore maps the image gray levels from a narrow concentrated interval onto a uniform distribution over the full gray range, which alleviates these defects and strengthens the robustness of the algorithm.
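As a rough sketch of this preprocessing step, histogram equalization can be implemented directly from the gray-level cumulative distribution. This is a minimal NumPy illustration under common assumptions (8-bit grayscale frames), not the patent's actual implementation:

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Map the concentrated gray levels of an 8-bit ultrasound frame
    onto the full 0-255 range via the cumulative distribution."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Ignore empty leading bins so the darkest occupied level maps to 0.
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]  # apply the lookup table per pixel

# A dark, low-contrast frame: pixel values confined to [40, 80].
frame = np.clip(np.arange(64, dtype=np.uint8).reshape(8, 8) + 40, 40, 80)
out = equalize_histogram(frame)
print(out.min(), out.max())  # stretched to the full 0-255 range
```

In practice a library routine (e.g. OpenCV's `equalizeHist`) would typically be used; the sketch above only shows the mapping the step describes.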
Step two: rough segmentation of the abdominal organs is realized based on the improved U-Net segmentation model.
1) Existing abdominal ultrasound images are preprocessed and used as the training set. With the assistance of doctors, the edges of every organ appearing in each image are outlined and converted into a binary image, which serves as the label for the segmentation network. The six organs (liver, gallbladder, pancreas, spleen, left kidney, right kidney) plus the background form seven classes, and the label pixels are marked 0-6 to distinguish them.
2) The original U-Net network [1] is improved: edge completion (padding) is added and the Batch Normalization algorithm is introduced, which increases the symmetry of the network and remedies the slow convergence of the original network. The improved U-Net network is trained on the training set to obtain the organ segmentation model.
3) The trained improved U-Net model is applied to the ultrasound images decoded and preprocessed from the test ultrasound video stream, yielding a rough organ segmentation result.
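To see why the edge completion in 2) matters, compare the feature-map sizes produced by unpadded versus padded 3x3 convolutions. This is a size-arithmetic sketch only; the 572-pixel input is the figure from the original U-Net paper, and padding of 1 per convolution is an assumed reading of "edge completion":

```python
def conv_out(size: int, kernel: int = 3, padding: int = 0, stride: int = 1) -> int:
    """Spatial output size of a 2-D convolution along one axis."""
    return (size + 2 * padding - kernel) // stride + 1

# Original U-Net uses unpadded ('valid') 3x3 convolutions: each conv pair
# shrinks the map by 4 pixels, forcing a center-crop before every skip
# connection is concatenated in the decoder.
valid = conv_out(conv_out(572))                          # 572 -> 570 -> 568
# With edge completion (padding=1) the size is preserved, so encoder and
# decoder feature maps align symmetrically with no cropping.
padded = conv_out(conv_out(572, padding=1), padding=1)   # 572 -> 572
print(valid, padded)
```

Batch Normalization, the other modification, is orthogonal to this arithmetic: it normalizes activations per channel per mini-batch to speed up convergence.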
Step three: fine segmentation of the abdominal organs is realized based on multi-model information.
1) The preprocessed liver ultrasound image set is used as the training set, with the main organ in each image, as given by doctors, as the label; a standard GoogleNet network architecture [2] is trained on this set to obtain the classification model.
2) The trained GoogleNet model is applied to the decoded and preprocessed images of the test ultrasound video stream to obtain an organ classification result. The classification result is used to correct the rough organ segmentation result, after which a second correction is performed based on medical prior knowledge of organ structure; together, the two corrections effectively improve segmentation accuracy.
3) Because inter-frame information in video is correlated, the segmentation results of the preceding frames can be introduced to correct errors when judging the current segmentation result. A first-in first-out queue is adopted, the results of four adjacent frames are weighted, and the fine abdominal organ segmentation result is finally obtained.
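The correction stages of step three can be sketched as follows. The text does not spell out the exact correction rule or fusion weights, so both the rule used here (suppress organ labels that contradict the frame-level classification) and the weighted per-pixel vote over a four-frame FIFO queue are illustrative assumptions:

```python
from collections import deque
import numpy as np

NUM_CLASSES = 7  # background plus six organs, labels 0-6

def correct_with_classification(seg: np.ndarray, predicted_organ: int) -> np.ndarray:
    """Reset any organ pixel whose label contradicts the frame-level
    classifier verdict to background (a simplified correction rule)."""
    out = seg.copy()
    out[(out != 0) & (out != predicted_organ)] = 0
    return out

class FrameFusion:
    """FIFO queue over recent per-pixel label maps; the fused label is a
    per-pixel weighted vote. Weights favoring newer frames are assumed."""
    def __init__(self, depth: int = 4):
        self.queue = deque(maxlen=depth)
        self.weights = [0.1, 0.2, 0.3, 0.4]  # oldest -> newest

    def push(self, label_map: np.ndarray) -> np.ndarray:
        self.queue.append(label_map)
        w = self.weights[-len(self.queue):]
        votes = np.zeros(label_map.shape + (NUM_CLASSES,))
        for weight, frame in zip(w, self.queue):
            votes += weight * np.eye(NUM_CLASSES)[frame]  # one-hot votes
        return votes.argmax(axis=-1)

# U-Net says mostly liver (1) with a stray pancreas (3) blob;
# GoogleNet classifies the frame as liver, so the blob is suppressed.
seg = np.array([[1, 1, 0],
                [3, 0, 1]])
fixed = correct_with_classification(seg, predicted_organ=1)

# Temporal fusion: three consistent frames outvote one outlier frame.
fusion = FrameFusion()
for _ in range(3):
    fused = fusion.push(fixed)
outlier = np.full_like(fixed, 2)  # a whole-frame misprediction
fused = fusion.push(outlier)
print(fixed.tolist(), fused.tolist())
```

The point of the sketch is the interplay: the classifier removes spatially implausible labels within a frame, while the queue smooths out temporally implausible labels across frames.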
On the basis of this algorithm flow, the invention develops and realizes an intelligent multi-model organ segmentation system for abdominal ultrasound images.
The invention has the following beneficial effects. In a real test environment it accurately and efficiently completes organ recognition in abdominal ultrasound images, provides an implementation platform for a subsequent end-to-end intelligent diagnosis system, and can offer effective diagnostic assistance to medical personnel. The invention reflects the trend toward intelligent medicine and actively promotes system-assisted medical care.
Reference documents:
[1] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]//Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015). Springer International Publishing, 2015: 234-241.
[2] Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2015: 1-9.
(IV) detailed description of the preferred embodiment
The following examples illustrate specific embodiments of the invention: the multi-model organ segmentation method and system based on abdominal ultrasound images are applied to abdominal organ recognition on an actual ultrasound scanning video stream.
The 7230 abdominal ultrasound images used to train the models in the experiment were collected from actual cases by ultrasonographers at the Second Affiliated Hospital of Harbin Medical University. Under the guidance of doctors, the scanning area of each image was extracted and annotated with organ labels and organ edges to form the training sets. The U-Net abdominal organ segmentation model and the GoogleNet abdominal organ classification model were obtained by training on the improved U-Net network and the GoogleNet network respectively; in testing, the organ recognition accuracy of the segmentation model is 75% and that of the classification model is 83%. Once the trained models are obtained, abdominal organ recognition can be tested in real time on an actually scanned ultrasound video stream.
Step one is executed: the ultrasound scanning video stream is decoded to obtain every frame as an image, and each image is preprocessed.
Step two is executed: organ segmentation is performed on each preprocessed single-frame image from the video stream using the trained improved U-Net organ segmentation model, yielding a rough organ segmentation result.
Step three is executed: the trained GoogleNet organ classification model classifies each preprocessed single-frame image, producing an organ classification result. The rough segmentation result is corrected using the classification result together with medical prior knowledge of organ structure; the corrected segmentation result is then weighted with adjacent-frame information, finally giving the fine organ segmentation result for the single frame.
For the test video stream, one frame in every ten is taken as the test set, and doctors annotate the organ edges to serve as test-set labels. The test-set labels are compared against the rough and fine organ segmentation results given by the multi-model organ segmentation system; as a control, a generic FCN network [1] is also applied to the test set. The specific comparison results are shown in the table below.
In the table, TPR, SIR, HD, and MD are common evaluation indexes for image segmentation results [2]; TPR and SIR are area criteria (larger is better), while HD and MD are edge criteria (smaller is better). The data show that even the rough segmentation result of the proposed algorithm and system is superior to the compared algorithm, and the fine segmentation result obtained after multi-model correction far exceeds the compared algorithm on every index, enabling effective recognition of abdominal organs in ultrasound images.
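For reference, the area and edge criteria can be computed roughly as below. The patent does not define TPR, SIR, HD, or MD, so these follow common usage from the cited literature: TPR is the fraction of ground-truth organ pixels recovered, and HD is the symmetric Hausdorff distance between the two boundaries (SIR and MD are omitted here because their exact definitions in this context are unclear):

```python
import numpy as np

def tpr(pred: np.ndarray, gt: np.ndarray) -> float:
    """True positive rate: fraction of ground-truth pixels the
    predicted mask recovers (area criterion, larger is better)."""
    gt_area = gt.sum()
    return float(np.logical_and(pred, gt).sum() / gt_area) if gt_area else 0.0

def hausdorff(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two boundary point sets
    of shape (N, 2) and (M, 2) (edge criterion, smaller is better)."""
    d = np.linalg.norm(a_pts[:, None, :].astype(float)
                       - b_pts[None, :, :].astype(float), axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

pred = np.array([[1, 1, 0],
                 [1, 0, 0]])
gt = np.array([[1, 1, 0],
               [1, 1, 0]])
print(round(tpr(pred, gt), 2))  # 3 of 4 ground-truth pixels recovered: 0.75

a = np.array([[0, 0], [0, 1]])
b = np.array([[0, 0], [0, 3]])
print(hausdorff(a, b))          # farthest boundary mismatch: 2.0
```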
Seven medical ultrasound imaging devices of different brands were selected in turn to test the proposed multi-model intelligent abdominal organ recognition system; the organ segmentation and recognition results were approved by doctors and can meet the requirements of clinical application.
Reference documents:
[1] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017, 39(4): 640-651.
[2] Shao H, Zhang Y, Xian M, et al. A saliency model for automated tumor detection in breast ultrasound images[C]//IEEE International Conference on Image Processing (ICIP). IEEE, 2015.
Claims (1)
1. A multi-model organ segmentation method based on an abdominal ultrasonic image, characterized by comprising the following steps:
the method comprises the following steps: video analysis and image preprocessing;
step two: realizing rough segmentation of abdominal organs based on the improved U-Net segmentation model;
the second step is as follows:
1) preprocessing an existing abdomen ultrasonic image to be used as a training set, outlining all organ edges appearing in each image under the assistance of a doctor, processing the organ edges into a binary image and using the binary image as a label of a segmentation network;
2) improving the original U-Net network, adding edge completion, and introducing a Batch Normalization algorithm; training by using a training set on an improved U-Net network to obtain an organ segmentation model;
3) testing the ultrasonic image analyzed and preprocessed in the ultrasonic video stream for testing by applying the trained improved U-Net model to obtain a rough organ segmentation result;
Step three: accurate division of abdominal organs is realized based on multi-model information;
the third step is as follows:
1) taking the preprocessed liver ultrasonic image set as a training set, taking organs in the image given by a doctor as labels, and carrying out classification model training on the training set on a general GoogleNet network architecture;
2) testing the analyzed and preprocessed image in the ultrasonic video stream for testing by using the trained GoogleNet model to obtain an organ classification result; correcting the rough organ segmentation result by using the classification result, and then performing secondary correction based on the prior knowledge of the organ structure in medicine;
3) the video inter-frame information has correlation, and the segmentation results of the previous frames are introduced for error correction when the current segmentation result is judged; a first-in first-out queue is adopted, results of adjacent four frames are weighted, and finally a fine result of abdominal organ segmentation is obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810641415.5A CN109064443B (en) | 2018-06-22 | 2018-06-22 | Multi-model organ segmentation method based on abdominal ultrasonic image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109064443A CN109064443A (en) | 2018-12-21 |
CN109064443B true CN109064443B (en) | 2021-07-16 |
Family
ID=64821287
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810641415.5A Active CN109064443B (en) | 2018-06-22 | 2018-06-22 | Multi-model organ segmentation method based on abdominal ultrasonic image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109064443B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111564206A (en) * | 2019-02-13 | 2020-08-21 | 东软医疗系统股份有限公司 | Diagnosis method, device, equipment and medium |
CN111724893B (en) * | 2019-03-20 | 2024-04-09 | 宏碁股份有限公司 | Medical image identification device and medical image identification method |
CN110084751A (en) * | 2019-04-24 | 2019-08-02 | 复旦大学 | Image re-construction system and method |
CN110288605A (en) * | 2019-06-12 | 2019-09-27 | 三峡大学 | Cell image segmentation method and device |
CN110853049A (en) * | 2019-10-17 | 2020-02-28 | 上海工程技术大学 | Abdominal ultrasonic image segmentation method |
CN111429421B (en) * | 2020-03-19 | 2021-08-27 | 推想医疗科技股份有限公司 | Model generation method, medical image segmentation method, device, equipment and medium |
CN111754530B (en) * | 2020-07-02 | 2023-11-28 | 广东技术师范大学 | Prostate ultrasonic image segmentation classification method |
CN112837275B (en) * | 2021-01-14 | 2023-10-24 | 长春大学 | Capsule endoscope image organ classification method, device, equipment and storage medium |
CN113298813B (en) * | 2021-05-07 | 2022-11-25 | 中山大学 | Brain structure segmentation system based on T1 weighted magnetic resonance image |
CN113658332B (en) * | 2021-08-24 | 2023-04-11 | 电子科技大学 | Ultrasonic image-based intelligent abdominal rectus muscle segmentation and reconstruction method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101366059A (en) * | 2005-12-29 | 2009-02-11 | 卡尔斯特里姆保健公司 | Cad detection system for multiple organ systems |
CN107424152A (en) * | 2017-08-11 | 2017-12-01 | 联想(北京)有限公司 | The detection method and electronic equipment of organ lesion and the method and electronic equipment for training neuroid |
CN108053417A (en) * | 2018-01-30 | 2018-05-18 | 浙江大学 | A kind of lung segmenting device of the 3DU-Net networks based on mixing coarse segmentation feature |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7006677B2 (en) * | 2002-04-15 | 2006-02-28 | General Electric Company | Semi-automatic segmentation algorithm for pet oncology images |
US8837771B2 (en) * | 2012-02-28 | 2014-09-16 | Siemens Aktiengesellschaft | Method and system for joint multi-organ segmentation in medical image data using local and global context |
Non-Patent Citations (2)
Title |
---|
A fast fully automatic segmentation algorithm for ultrasound uterine images; Tang Sheng; Chinese Journal of Biomedical Engineering (中国生物医学工程学报); 2007-10-31; Vol. 26, No. 5; full text *
Segmentation methods for three-dimensional target objects in medical ultrasound images; Ao Guiwen; Frontiers of Medicine (医药前沿); 2012-06-30; Vol. 2, No. 18; full text *
Also Published As
Publication number | Publication date |
---|---|
CN109064443A (en) | 2018-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109064443B (en) | Multi-model organ segmentation method based on abdominal ultrasonic image | |
CN110097131B (en) | Semi-supervised medical image segmentation method based on countermeasure cooperative training | |
JP6947759B2 (en) | Systems and methods for automatically detecting, locating, and semantic segmenting anatomical objects | |
WO2020224123A1 (en) | Deep learning-based seizure focus three-dimensional automatic positioning system | |
CN106097335B (en) | Alimentary canal lesion image identification system and recognition methods | |
CN108010021A (en) | A kind of magic magiscan and method | |
Cao et al. | Region-adaptive deformable registration of CT/MRI pelvic images via learning-based image synthesis | |
CN108765392B (en) | Digestive tract endoscope lesion detection and identification method based on sliding window | |
CN112634283A (en) | Hip joint segmentation model establishment method using small sample image training and application thereof | |
CN112949838B (en) | Convolutional neural network based on four-branch attention mechanism and image segmentation method | |
CN111415359A (en) | Method for automatically segmenting multiple organs of medical image | |
CN113298830B (en) | Acute intracranial ICH region image segmentation method based on self-supervision | |
CN109241963B (en) | Adaboost machine learning-based intelligent identification method for bleeding point in capsule gastroscope image | |
CN110428426A (en) | A kind of MRI image automatic division method based on improvement random forests algorithm | |
CN112820399A (en) | Method and device for automatically diagnosing benign and malignant thyroid nodules | |
CN112102332A (en) | Cancer WSI segmentation method based on local classification neural network | |
US20230419499A1 (en) | Systems and methods for brain identifier localization | |
Jin et al. | Object recognition in medical images via anatomy-guided deep learning | |
CN114581499A (en) | Multi-modal medical image registration method combining intelligent agent and attention mechanism | |
Feng et al. | Learning what and where to segment: A new perspective on medical image few-shot segmentation | |
Schenk et al. | Automatic glottis segmentation from laryngeal high-speed videos using 3D active contours | |
Li et al. | Automatic pulmonary vein and left atrium segmentation for TAPVC preoperative evaluation using V-net with grouped attention | |
CN116258732A (en) | Esophageal cancer tumor target region segmentation method based on cross-modal feature fusion of PET/CT images | |
CN113298754B (en) | Method for detecting control points of outline of prostate tissue | |
Xu et al. | A pilot study to utilize a deep convolutional network to segment lungs with complex opacities |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||