WO2021128230A1 - Deep-learning-based medical image processing method and system, and computer device - Google Patents

Deep-learning-based medical image processing method and system, and computer device

Info

Publication number
WO2021128230A1
WO2021128230A1 (application PCT/CN2019/128924)
Authority
WO
WIPO (PCT)
Prior art keywords
deep learning
data
learning model
interest
region
Prior art date
Application number
PCT/CN2019/128924
Other languages
English (en)
Chinese (zh)
Inventor
刘非
Original Assignee
上海昕健医疗技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海昕健医疗技术有限公司 filed Critical 上海昕健医疗技术有限公司
Priority to PCT/CN2019/128924 priority Critical patent/WO2021128230A1/fr
Publication of WO2021128230A1 publication Critical patent/WO2021128230A1/fr

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/40: Image enhancement or restoration using histogram techniques
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; edge detection
    • G06T 7/11: Region-based segmentation

Definitions

  • The present invention relates to the field of medical image processing, and in particular to a deep-learning-based medical image processing method, system, and computer device.
  • Accurate medical image segmentation is the foundation on which doctors subsequently diagnose and formulate surgical plans. If the segmentation results are biased, the safety and effectiveness of the operation are affected, so efficient, high-precision segmentation methods are essential. However, current image segmentation of complex anatomical parts still fails to produce good results.
  • The ankle comprises 26 bones: the calcaneus, talus, navicular, internal cuneiform, intermediate cuneiform, external cuneiform, cuboid, the five metatarsals, and the phalanges (14 bones). The bones are closely connected, the spaces between bones at the joints are small, and the gray level at a joint is similar to that of the surrounding tissue, all of which makes segmentation of the bones and joints difficult.
  • Traditional methods struggle to segment these 26 bones satisfactorily: they readily produce incomplete segmentation at the joints, leakage across segmentation boundaries, and similar defects that require cumbersome subsequent editing. Manual calibration, for its part, usually demands considerable manpower, material resources, and time, and calibration results differ from person to person, which affects the accuracy of bone and joint segmentation.
  • The purpose of the present invention is to provide a deep-learning-based medical image processing method, system, and computer device that can quickly and accurately segment images of complex anatomical parts.
  • To this end, the present invention provides a deep learning model design method comprising the following steps: S11, collect two-dimensional medical image data containing the part to be segmented; S12, label the image data of the region of interest to obtain an initial labeled data set; S13, divide the initial labeled data set into a training set and a test set; S14, select and design a deep learning model to obtain an initial deep learning model, specifically: select a deep learning model; add an adaptive layer as the first layer of the network to obtain a deep learning design model; and add a custom loss function to the final layer of the network to obtain the initial deep learning model; S15, adjust the hyperparameters of the initial deep learning model and train it on the training set data to obtain a deep learning training model; S16, test the trained model on the test set data and calculate the test coefficient; S17, determine whether the test coefficient reaches the fixed parameter value: if yes, the trained model parameters meet the requirements, training ends, and the final deep learning model is obtained; otherwise, return to step S15.
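The S15-S17 control flow amounts to a train-test-compare loop. A minimal sketch, with hypothetical `train_step` and `test_score` callables standing in for the patent's training and test-coefficient procedures:

```python
# Hedged sketch of steps S15-S17: repeatedly (re)train and stop once the
# test coefficient reaches the fixed parameter value (90% per the first
# embodiment). `train_step` and `test_score` are hypothetical placeholders.
def train_until_threshold(train_step, test_score, target=0.90, max_rounds=100):
    model = None
    for _ in range(max_rounds):
        model = train_step(model)          # S15: adjust hyperparameters, train
        if test_score(model) >= target:    # S16-S17: compare coefficient to 90%
            return model
    return model                           # give up after max_rounds

# Toy stand-ins: each "training round" improves the score by 0.1.
model = train_until_threshold(
    train_step=lambda m: (m or 0) + 1,
    test_score=lambda m: m / 10.0,
)
```

With the toy stand-ins, training stops at the ninth round, the first at which the score reaches 0.9.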
  • The activation function of the adaptive layer is given by a thresholding formula (reproduced in the original as an image and not shown here), where x is the input of the adaptive layer and θ is the adaptive threshold of x. The loss function (likewise shown as an image in the original) is defined in terms of: α, a constant custom hyperparameter; m, the number of segmentation categories; Vol(A_pred_i), the predicted volume of the i-th bone category; Vol(A_true_i), the manually labeled volume of the i-th bone category; Vol(A_pred_true_i), the correctly predicted volume for the i-th category; and Vol(A_pred_i ∪ A_true_i), the volume of the union of the predicted and manually labeled bone for the i-th category.
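The loss formula itself appears in the original only as an image, so the following is a hedged reconstruction from the named volume terms: a Dice-style term built from Vol(A_pred_i), Vol(A_true_i), and Vol(A_pred_true_i), blended with a Jaccard-style term built from the union volume, weighted by the hyperparameter α and averaged over the m categories. The exact combination in the patent may differ.

```python
import numpy as np

def volume_loss(pred, true, m, alpha=0.5, eps=1e-8):
    # Hedged reconstruction of the custom loss from its named volume terms.
    total = 0.0
    for i in range(1, m + 1):
        p, t = (pred == i), (true == i)
        vol_pred = p.sum()                 # Vol(A_pred_i)
        vol_true = t.sum()                 # Vol(A_true_i)
        vol_inter = (p & t).sum()          # Vol(A_pred_true_i)
        vol_union = (p | t).sum()          # Vol(A_pred_i ∪ A_true_i)
        dice = 2.0 * vol_inter / (vol_pred + vol_true + eps)
        jaccard = vol_inter / (vol_union + eps)
        total += alpha * (1.0 - dice) + (1.0 - alpha) * (1.0 - jaccard)
    return total / m
```

A perfect prediction yields a loss near 0; fully disjoint predictions yield a loss near 1 per category.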
  • The method for selecting a deep learning model comprises the following steps: S31, divide the training set data into N pieces, where N−1 pieces serve as training data and 1 piece as validation data, N being an integer greater than 1; S32, select a deep learning model as the preliminarily selected model; S33, train the preliminarily selected model on the training data, validate it on the validation data, and record the error; S34, non-repetitively select another of the N pieces from step S31 as validation data, with the remaining N−1 pieces as training data, and repeat the operation of step S33; S35, after steps S33 and S34 have been repeated N times, average the N recorded errors to obtain the average error of the selected model; S36, select other deep learning models as preliminarily selected models and repeat steps S33 to S35 to obtain their average errors; S37, select the deep learning model with the smallest average error as the final selected model.
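Steps S31-S37 describe standard N-fold cross-validation. A minimal sketch, with hypothetical `fit` and `error` callables in place of network training and validation:

```python
import numpy as np

def average_cv_error(fit, error, data, labels, n_folds):
    # S31: split indices into N pieces; S33-S34: rotate the held-out piece;
    # S35: average the N recorded validation errors.
    idx = np.arange(len(data))
    folds = np.array_split(idx, n_folds)
    errs = []
    for k in range(n_folds):
        val = folds[k]
        trn = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        model = fit(data[trn], labels[trn])
        errs.append(error(model, data[val], labels[val]))
    return float(np.mean(errs))

# Toy stand-ins: a "model" that predicts the mean training label.
fit = lambda X, y: float(np.mean(y))
error = lambda model, X, y: float(np.mean(np.abs(y - model)))
```

Per S36-S37, one would evaluate each candidate architecture this way and keep the one with the smallest average error.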
  • The calculation of the adaptive threshold θ proceeds as follows: S41, traverse the input data of the first layer of the network and calculate its average value μ; S42, using the average value μ as the threshold, divide the input data of the first layer into foreground and background.
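Only S41-S42 are spelled out here; combined with the convergence threshold mentioned for step S45 below (0.1 in the first embodiment), the calculation reads like iterative-mean (ISODATA-style) thresholding. A hedged sketch under that assumption, not the patent's verbatim procedure:

```python
import numpy as np

def adaptive_threshold(x, tol=0.1):
    # Assumed completion of S41-S45: start from the global mean, split into
    # foreground/background, and iterate the mid-mean until the change in
    # theta falls below the set threshold (0.1 in the first embodiment).
    x = np.asarray(x, dtype=float)
    theta = x.mean()                          # S41: average value
    while True:
        fg, bg = x[x >= theta], x[x < theta]  # S42: foreground/background
        if fg.size == 0 or bg.size == 0:
            return float(theta)
        new = 0.5 * (fg.mean() + bg.mean())
        if abs(new - theta) < tol:            # assumed S45 convergence test
            return float(new)
        theta = new
```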
  • A deep-learning-based medical image processing method, using a deep learning model designed by the deep learning model design method of claim 1, comprises the following steps: read two-dimensional medical image data; preprocess the read two-dimensional medical image data to obtain initial data; extract the region-of-interest data from the obtained initial data; scale the region-of-interest data to a fixed size by interpolation to obtain scaled region-of-interest data; standardize the scaled region-of-interest data to obtain the input data of the deep learning model; run the final deep learning model to obtain an initial segmentation result; and post-process the initial segmentation result to obtain the final segmentation result.
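The claimed chain of steps can be sketched end to end. Every helper below is a minimal stand-in (mean filter, bounding-box ROI, nearest-neighbour resize, z-score standardization), not the patented implementation, and `model` is any callable mapping the normalized ROI to labels:

```python
import numpy as np

def preprocess(img):                     # filtering step (3-tap row mean)
    k = np.ones(3) / 3.0
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

def extract_roi(img, thr=0.0):           # crop to the foreground bounding box
    ys, xs = np.nonzero(img > thr)
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def rescale(img, size):                  # nearest-neighbour interpolation
    ry = np.linspace(0, img.shape[0] - 1, size[0]).round().astype(int)
    rx = np.linspace(0, img.shape[1] - 1, size[1]).round().astype(int)
    return img[np.ix_(ry, rx)]

def standardize(img):                    # zero mean, unit variance
    return (img - img.mean()) / (img.std() + 1e-8)

def segment(img, model, size=(512, 512)):
    roi = rescale(extract_roi(preprocess(img)), size)
    return model(standardize(roi))       # post-processing omitted here
```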
  • The preprocessing method is filtering.
  • A deep-learning-based medical image processing system comprises: a reading module for reading two-dimensional medical image data; a preprocessing module for preprocessing the read two-dimensional medical image data to obtain training data; an extraction module for extracting the region-of-interest data from the obtained training data; a scaling module for scaling the region-of-interest data to a fixed size by interpolation to obtain scaled region-of-interest data; a standardization module for standardizing the scaled region-of-interest data to obtain the input data of the deep learning model; a running module for running the final deep learning model to obtain the initial segmentation result; and a post-processing module for post-processing the initial segmentation result to obtain the final segmentation result.
  • A computer device includes a memory and a processor, with a computer program stored in the memory. When the processor executes the computer program, the following steps are implemented: read two-dimensional medical image data; preprocess the read two-dimensional medical image data to obtain training data; extract the region-of-interest data from the obtained training data; scale the region-of-interest data to a fixed size by interpolation to obtain scaled region-of-interest data; standardize the scaled region-of-interest data to obtain the input data of the deep learning model; run the final deep learning model to obtain the initial segmentation result; and post-process the initial segmentation result to obtain the final segmentation result.
  • A computer-readable storage medium has a computer program stored thereon. When the computer program is executed by a processor, the following steps are implemented: read two-dimensional medical image data; preprocess the read two-dimensional medical image data to obtain training data; extract the region-of-interest data from the obtained training data; scale the region-of-interest data to a fixed size by interpolation to obtain scaled region-of-interest data; standardize the scaled region-of-interest data to obtain the input data of the deep learning model; run the final deep learning model to obtain the initial segmentation result; and post-process the initial segmentation result to obtain the final segmentation result.
  • The deep-learning-based medical image processing method, system, and computer device of the present invention adopt a fully automatic segmentation algorithm that, within tens of seconds, automatically and finely separates from the entire ankle the 26 bones including the calcaneus, talus, navicular, internal cuneiform, intermediate cuneiform, external cuneiform, cuboid, first through fifth metatarsals, and phalanges, for the doctor's reference. The doctor then performs a specific analysis of the patient to obtain the diagnosis, thereby reducing the doctor's workload and improving both work efficiency and diagnostic accuracy.
  • Fig. 1 is a schematic diagram of a method for designing a deep learning model for medical image processing according to the present invention.
  • Figure 2 is a schematic diagram of the process of selecting and designing the deep learning model in Figure 1 to obtain the initial deep learning model.
  • Fig. 3 is a schematic diagram of the process of selecting the deep learning model in Fig. 2.
  • Fig. 4 is a schematic flowchart of the calculation method of the adaptive threshold ⁇ in Fig. 2.
  • Fig. 5 is a schematic flowchart of a medical image processing method based on deep learning of the present invention.
  • FIG. 6 is a schematic flowchart of the method for standardizing the scaled region of interest data in FIG. 5.
  • the first embodiment provides a deep learning model design method for medical image processing, including the following steps:
  • Step S17: determine whether the test coefficient reaches the fixed parameter value. If yes, the trained model parameters meet the requirements, training ends, and the final deep learning model is obtained; otherwise, the trained model parameters do not meet the requirements, and the process returns to step S15.
  • The part to be segmented in step S11 is the ankle; the two-dimensional medical image data comprises more than 100 sets, and each set may measure 512×512 pixels × 100 slices.
  • The labeling of the region-of-interest image data in step S12 consists of organizing qualified physicians to manually outline, in the image data, the 26 bones including the calcaneus, talus, navicular, internal cuneiform, intermediate cuneiform, external cuneiform, cuboid, first through fifth metatarsals, and phalanges, thereby obtaining the initial labeled data set.
  • In step S13, the initial labeled data set is divided into a training set and a test set; the division may be performed by random allocation, with an 80%/20% split between the training set and the test set.
  • The test coefficient in step S16 is the DICE coefficient of the test-set segmentation results; the fixed parameter value in step S17 is 90%.
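The DICE coefficient named above is standard: twice the overlap volume divided by the sum of the two volumes, so training stops once it reaches 90% on the test set.

```python
import numpy as np

def dice(pred, true, eps=1e-8):
    # DICE = 2|A ∩ B| / (|A| + |B|) for binary masks.
    pred, true = np.asarray(pred, bool), np.asarray(true, bool)
    inter = np.logical_and(pred, true).sum()
    return 2.0 * inter / (pred.sum() + true.sum() + eps)
```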
  • the method of selecting and designing a deep learning model in step S14 to obtain an initial deep learning model includes the following steps:
  • The deep learning model includes, but is not limited to, the FCN, Res-VNet, U-Net, and V-Net models. The model is selected by evaluating the robustness of each candidate with N-fold cross-validation on the training set data and choosing the deep learning model with the best robustness.
  • an adaptive layer is added to the first layer of the network to obtain a deep learning design model;
  • The activation function of the adaptive layer (given in the original as an image formula) is defined in terms of x, the input of the adaptive layer, and θ, the adaptive threshold of x.
  • the method for selecting a deep learning model in step S21 includes the following steps:
  • Step S34: non-repetitively select another of the N training set pieces from step S31 as validation data, with the remaining N−1 pieces as training data, and repeat the operation of step S33.
  • After steps S33 and S34 have been repeated N times, the average of the N recorded errors is taken to obtain the average error of the selected deep learning model.
  • The value of N ranges from 3 to 10, preferably from 5 to 10; more preferably, N is 10 in the first embodiment.
  • The calculation steps of the adaptive threshold θ in step S22 are as follows:
  • The threshold value set in step S45 ranges from 0.01 to 10; in the first embodiment it is 0.1.
  • the second embodiment provides a medical image processing method based on the deep learning model designed in the first embodiment, including the following steps:
  • S54: scale the region-of-interest data to a fixed size by interpolation to obtain the scaled region-of-interest data; the fixed size may be 512×512 pixels.
  • S55: standardize the scaled region-of-interest data to obtain the input data of the deep learning model.
  • The preprocessing in step S52 is filtering, such as median filtering, mean filtering, Gaussian filtering, bilateral filtering, or another filtering method.
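As one concrete instance of the filtering step, a minimal pure-NumPy median filter; a production pipeline would normally use an optimized library routine instead:

```python
import numpy as np

def median_filter(img, k=3):
    # Replace each pixel with the median of its k×k neighbourhood;
    # borders are handled by edge replication.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(p[y:y + k, x:x + k])
    return out
```

Median filtering removes isolated impulse noise while preserving edges better than a mean filter.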
  • The method of extracting the region-of-interest data in step S53 comprises: automatically searching all connected regions of the image in the obtained initial data according to a set threshold, and growing each connected region into a separate container; then identifying the region of the patient's ankle from the size and orientation of each connected region, and extracting that region as the region-of-interest data.
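The connected-region search can be sketched as 4-connected component labeling; the size- and orientation-based rules that single out the ankle region are patent-specific and omitted here:

```python
import numpy as np
from collections import deque

def connected_regions(mask):
    # Grow each 4-connected foreground region into its own "container"
    # (a list of pixel coordinates), returned largest first.
    mask = np.asarray(mask, bool)
    seen = np.zeros(mask.shape, dtype=bool)
    regions = []
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        seen[y, x] = True
        q, comp = deque([(y, x)]), []
        while q:
            cy, cx = q.popleft()
            comp.append((cy, cx))
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        regions.append(comp)
    return sorted(regions, key=len, reverse=True)
```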
  • The post-processing of the initial segmentation result in step S57 corrects partially over- and under-segmented regions, filters out small mis-segmented impurities, and applies algorithms such as morphological dilation, erosion, and hole filling to obtain the final segmentation result.
  • The method for standardizing the scaled region-of-interest data in step S55 includes the following steps:
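A common convention for this standardization step is z-score normalization; the patent's exact scheme is given in FIG. 6 and may differ:

```python
import numpy as np

def standardize_roi(roi):
    # Zero-mean, unit-variance normalization of the scaled ROI data
    # (assumed convention; epsilon guards against a constant image).
    roi = np.asarray(roi, dtype=float)
    return (roi - roi.mean()) / (roi.std() + 1e-8)
```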
  • The deep-learning-based medical image processing method of the present invention, with its fully automatic segmentation algorithm, can automatically and finely separate from the entire ankle, within tens of seconds, the 26 bones including the calcaneus, talus, navicular, internal cuneiform, intermediate cuneiform, external cuneiform, cuboid, first through fifth metatarsals, and phalanges, for the doctor's reference in analyzing the diagnosis, thereby reducing the doctor's workload and improving both work efficiency and diagnostic accuracy.
  • the third embodiment provides a medical image processing system based on deep learning, and the system includes:
  • a reading module for reading two-dimensional medical image data, the two-dimensional medical image data being CT image data conforming to the DICOM 3.0 standard;
  • the preprocessing module is used to preprocess the read two-dimensional medical image data to obtain training data
  • the extraction module is used to extract the data of the region of interest according to the obtained training data
  • a scaling module for scaling the region-of-interest data to a fixed size by interpolation to obtain the scaled region-of-interest data; the fixed size may be 512×512 pixels;
  • the standardization module is used to standardize the scaled region of interest data to obtain input data of the deep learning model
  • the running module is used to run the final deep learning model obtained in the first embodiment to obtain the initial segmentation result
  • the post-processing module is used to perform post-processing on the initial segmentation result to obtain the final segmentation result.
  • The various modules in the deep-learning-based medical image processing system can be implemented in whole or in part by software, hardware, or a combination thereof. The foregoing modules may be embedded in hardware form in, or independent of, the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
  • the fourth embodiment provides a computer device, the computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the following steps when the processor executes the computer program:
  • the fifth embodiment provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented:
  • S84: scale the region-of-interest data to a fixed size by interpolation to obtain the scaled region-of-interest data; the fixed size may be 512×512 pixels.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Optics & Photonics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a deep-learning-based medical image processing method and system, and a computer device. A deep learning model design method comprises the following steps: collecting image data (S11); annotating the image data (S12); dividing an initial set of annotated data (S13); selecting and designing a deep learning model (S14); adjusting a hyperparameter of an initial deep learning model and training the initial deep learning model (S15); testing a deep-learning-trained model (S16); and determining whether a test coefficient reaches a fixed parameter value or more (S17). With this method, an image of a complex part can be segmented quickly and with high precision, improving doctors' work efficiency and diagnostic accuracy.
PCT/CN2019/128924 2019-12-27 2019-12-27 Deep-learning-based medical image processing method and system, and computer device WO2021128230A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/128924 WO2021128230A1 (fr) 2019-12-27 2019-12-27 Deep-learning-based medical image processing method and system, and computer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/128924 WO2021128230A1 (fr) 2019-12-27 2019-12-27 Deep-learning-based medical image processing method and system, and computer device

Publications (1)

Publication Number Publication Date
WO2021128230A1 true WO2021128230A1 (fr) 2021-07-01

Family

ID=76573811

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/128924 WO2021128230A1 (fr) 2019-12-27 2019-12-27 Deep-learning-based medical image processing method and system, and computer device

Country Status (1)

Country Link
WO (1) WO2021128230A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800095A (zh) * 2012-07-17 2012-11-28 南京特雷多信息科技有限公司 Shot boundary detection method
CN105631479A (zh) * 2015-12-30 2016-06-01 中国科学院自动化研究所 Deep convolutional network image annotation method and apparatus based on imbalanced learning
CN108921227A (zh) * 2018-07-11 2018-11-30 广东技术师范学院 Glaucoma medical image classification method based on capsule theory
CN110097554A (zh) * 2019-04-16 2019-08-06 东南大学 Retinal vessel segmentation method based on dense convolution and depthwise separable convolution
US20190365341A1 * 2018-05-31 2019-12-05 Canon Medical Systems Corporation Apparatus and method for medical image reconstruction using deep learning to improve image quality in positron emission tomography (PET)
CN110599502A (zh) * 2019-09-06 2019-12-20 江南大学 Skin lesion segmentation method based on deep learning


Similar Documents

Publication Publication Date Title
CN107767376B (zh) X-ray bone age prediction method and system based on deep learning
CN110111313B (zh) Medical image detection method based on deep learning, and related device
CN109754387B (zh) Intelligent detection and localization method for radioactive concentration foci in whole-body bone imaging
CN108464840B (zh) Automatic breast mass detection method and system
CN110059697B (zh) Automatic lung nodule segmentation method based on deep learning
Liu et al. A framework of wound segmentation based on deep convolutional networks
CN109671068B (zh) Abdominal muscle labeling method and device based on deep learning
CN108062749B (zh) Method, device, and electronic apparatus for recognizing the levator ani hiatus
CN111179227A (zh) Breast ultrasound image quality evaluation method based on auxiliary diagnosis and subjective aesthetics
US20220335600A1 Method, device, and storage medium for lesion segmentation and recist diameter prediction via click-driven attention and dual-path connection
CN111652838B (zh) Thyroid nodule localization and ultrasound report error correction method based on an object detection network
US20230230241A1 System and method for detecting lung abnormalities
Tang et al. One click lesion RECIST measurement and segmentation on CT scans
CN116386902B (zh) Artificial-intelligence-assisted pathological diagnosis system for colorectal cancer based on deep learning
CN113763340A (zh) Automatic grading method for ankylosing spondylitis based on multi-task deep learning
CN114565572A (zh) Cerebral hemorrhage CT image classification method based on image sequence analysis
Liu et al. Automated classification and measurement of fetal ultrasound images with attention feature pyramid network
Zeng et al. TUSPM-NET: A multi-task model for thyroid ultrasound standard plane recognition and detection of key anatomical structures of the thyroid
Tang et al. Lesion segmentation and RECIST diameter prediction via click-driven attention and dual-path connection
CN111667457B (zh) Method, system, terminal, and storage medium for automatic recognition of vertebral body information from medical images
CN113435469A (zh) Automatic recognition system for renal tumor contrast-enhanced CT images based on deep learning, and training method therefor
CN111476802A (zh) Medical image segmentation and tumor detection method based on a dense convolution model, device, and readable storage medium
WO2021128230A1 (fr) Deep-learning-based medical image processing method and system, and computer device
CN116228660A (zh) Method and device for detecting abnormal regions in frontal chest radiographs
CN115564763A (zh) Thyroid ultrasound image processing method, device, medium, and electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19957775

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19957775

Country of ref document: EP

Kind code of ref document: A1