WO2022037548A1 - MRI spinal image keypoint detection method based on deep learning - Google Patents

MRI spinal image keypoint detection method based on deep learning

Info

Publication number
WO2022037548A1
WO2022037548A1 (PCT/CN2021/112874, CN2021112874W)
Authority
WO
WIPO (PCT)
Prior art keywords
vertebra
edge
information
vertebrae
mri
Prior art date
Application number
PCT/CN2021/112874
Other languages
French (fr)
Chinese (zh)
Inventor
刘刚
郑友怡
方向前
马成龙
赵兴
Original Assignee
浙江大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 filed Critical 浙江大学
Priority to JP2022578644A priority Critical patent/JP7489732B2/en
Publication of WO2022037548A1 publication Critical patent/WO2022037548A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • G06T2207/30012Spine; Backbone

Definitions

  • The invention belongs to the fields of computer vision and artificial intelligence, and particularly relates to a deep-learning-based method for detecting key points in spine MRI images.
  • The present invention targets key point detection in spine MRI images. Previous key point detection work on spine MRI images has mostly relied on manual annotation by experts; manual annotation is inefficient, affected by the annotators' subjectivity, and especially unsuitable for large-scale data processing and analysis.
  • Most existing attempts to apply artificial intelligence rely on low-level image features. For example, the paper (Ebrahimi S, Angelini E, Gajny L, et al. Lumbar spine posterior corner detection in X-rays using Haar-based features[C]//2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE, 2016: 180-183.) uses Haar features to detect the corners of vertebrae. Such methods exploit only low-level image information, are not robust, work only in a few specific scenarios, and are unsuitable for complex and variable medical settings.
  • The present invention builds a high-quality data set and exploits the strong learning ability of deep learning to realize a more robust and accurate key point detection method for spine MRI images.
  • A deep-learning-based method for detecting key points in spine MRI images, comprising the following steps:
  • Step 1: Input the spine MRI image into a trained object detection network to obtain the position of each vertebra and a coarse-grained label indicating whether it is S1.
  • Step 2: Using all vertebrae obtained in Step 1 and the located S1, combined with the physiological structure of the spine, filter out false positive detections and assign a fine-grained label to each vertebra.
  • Step 3: Crop each vertebra detected in Step 2 together with part of its surrounding region, and feed it into a trained key point detection network to detect the positions of the six key points UA, UM, UP, LA, LM and LP on the upper and lower boundaries of each vertebra.
  • Step 4: Segment the vertebrae obtained in Step 2 with a trained segmentation network to obtain edge information, and use this edge information to correct the key point positions obtained in Step 3, yielding the final key point prediction.
  • In Step 1, the coarse-grained vertebra labels are S1 and NS1 (where S1 refers to sacral vertebra 1 and NS1 refers to all other vertebrae), and the object detection network is YOLOv3.
  • The second step is realized by the following sub-steps.
  • The aspect ratio threshold is 1.6.
  • The upper edge height threshold is 5.
  • Step 4 is specifically:
  • (4.1) Construct the vertebral edge segmentation network: it consists of a down-sampling part and an up-sampling part; the down-sampling part is a ResNet50 with the fully connected layer removed, and the up-sampling part consists of four corresponding stages of up-sampling convolution blocks, each structured as upsampling->conv->bn->relu.
  • (4.2) Train the vertebral edge segmentation network: first use the key point annotations to build a coarse-grained segmentation data set and pre-train the network, then construct an accurate fine-grained segmentation data set to train it further.
  • (4.3) After the segmentation result is obtained, refine it using the CRF and the large image gradient at edges to obtain more precise edge segmentation information.
  • The beneficial effect of the present invention is that it uses deep learning to detect key points in spine MRI images, avoiding tedious manual labeling and reducing the burden on doctors.
  • Compared with manual annotation by experts, the present invention avoids the influence of doctors' subjective factors and can process large-scale data in batches, providing a data basis for further spine MRI image analysis; the method can be developed into interactive, visual software for automatically annotating key points in spine MRI images, and the resulting key point detections can be used to compute the intervertebral disc height index and the lumbar lordosis angle.
  • FIG. 1 is a schematic diagram of detecting key points in the present invention.
  • Fig. 2 is the overall flow chart of the present invention.
  • FIG. 3 is a schematic diagram of the key point correction of the present invention.
  • The data sets used to train the networks in the present invention are all self-built; the spine key points are annotated by medical experts, and the detected key points are the six points UA, UM, UP, LA, LM and LP on the upper and lower boundaries of each vertebra.
  • The basic model training process is as follows:
  • The object detection network in Fig. 2 is preferably YOLOv3.
  • During training, the vertebrae are divided into two coarse-grained classes, S1 and NS1 (sacral 1 and non-sacral 1), depending on whether the vertebra is S1.
  • The data set for training the detection network is constructed from the original data set: vertebra bounding boxes and coarse-grained class labels are computed from the key point annotations of each vertebra and used to train the YOLOv3 network.
  • The method uses the structural information of the spine to filter false positive predictions and determine the class of each vertebra.
  • Specifically, the detected S1 is used as the anchor vertebra: the height of each vertebra's center on the image is computed and the detected vertebrae are sorted by centroid height; then, following the physiological structure of the human spine, labels such as S1, L5, L4, L3, L2, L1, T12 and T11 are assigned to the detected vertebrae from bottom to top.
  • A vertebra at the very top of the image may not be fully captured.
  • The method filters such objects by checking whether the vertebra's aspect ratio and center height meet the threshold requirements.
  • The aspect ratio of a vertebra can be computed from the object detection result.
  • The key point detection network in Fig. 2 is preferably a U-shaped network or a Stacked Hourglass Network.
  • The training data are vertebra images cropped from the original images, together with part of their surroundings, and the heat maps corresponding to the key points.
  • Online hard example mining is used when training the key point detection network.
  • The method trains a U-shaped deep convolutional neural network as a vertebral edge segmentation network for segmenting vertebral edges.
  • The vertebral edge segmentation network consists of a down-sampling part and an up-sampling part.
  • The down-sampling part is a ResNet50 with the fully connected layer removed, and the up-sampling part consists of four corresponding stages of up-sampling convolution blocks.
  • Each up-sampling convolution block is structured as upsampling->conv->bn->relu.
  • The method first uses the key point annotations to build a coarse-grained segmentation data set for pre-training the vertebral edge segmentation network, and then constructs an accurate fine-grained segmentation data set to train the network further.
  • The method also uses a conditional random field (CRF) and the fact that the image gradient is large at edges to further refine the segmentation result and obtain more accurate edge segmentation information.
  • The method then uses the edge information to correct the detected key points: extension lines are drawn along the UA-LA, UM-LM and UP-LP connections toward the vertebral edge, and the farthest intersection of each extension line with the vertebral edge is taken as the corrected UA, LA, UM, LM, UP and LP coordinates.
  • The above method has been developed into interactive, visual automatic detection and annotation software.
  • After the networks are trained, the present invention performs the following steps in the spine MRI image key point detection process:
  • Step 1: Input the spine MRI image into the trained YOLOv3 object detection network (Redmon J, Farhadi A. Yolov3: An incremental improvement[J]. arXiv preprint arXiv:1804.02767, 2018.) to obtain the position of each vertebra and a coarse-grained label indicating whether it is S1.
  • Step 2: Using all vertebrae obtained in Step 1 and the located S1, combined with the physiological structure of the spine, filter out false positive detections and assign a fine-grained label to each vertebra.
  • Step 3: Crop each vertebra detected in Step 2 together with part of its surrounding region and feed it into the trained key point detection network to obtain heat maps for the six key points UA, UM, UP, LA, LM and LP on the upper and lower boundaries of each vertebra.
  • The key point coordinates can be extracted from the heat maps using, but not limited to, the method of (Zhang F, Zhu X, Dai H, et al. Distribution-aware coordinate representation for human pose estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 7093-7102.).
  • Step 4: Segment the vertebrae obtained in Step 2 with the trained segmentation network to obtain edge information, and use this edge information to correct the key point positions obtained in Step 3.
  • Extension lines are drawn along the UA-LA, UM-LM and UP-LP connections toward the vertebral edge, and the farthest intersection of each extension line with the vertebral edge is taken as the corrected UA, LA, UM, LM, UP and LP coordinates.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Disclosed is an MRI spinal image keypoint detection method based on deep learning. The method comprises: firstly, detecting and locating the vertebrae in a spine MRI image with a deep object detection network and identifying sacral vertebra 1 (S1) as the anchor vertebra; then, combining structural information of the spine to filter false positive detections and determine the fine-grained label of each vertebra; next, detecting six key points, UA, UM, UP, LA, LM and LP, on the upper and lower boundaries of each vertebra with a key point detection network; afterwards, correcting the key point positions of each vertebra using edge information; and finally, developing interactive, visual software for automatically annotating key points in spine MRI images. By means of the present invention, key points in spine MRI images can be extracted automatically, which has immense application value in medical image analysis, medical treatment assistance and related areas.

Description

A deep-learning-based method for detecting key points in spine MRI images
Technical Field
The invention belongs to the fields of computer vision and artificial intelligence, and particularly relates to a deep-learning-based method for detecting key points in spine MRI images.
Background
Artificial intelligence has been applied extremely widely in the medical field in recent years, and computer vision in particular has great potential in medical image analysis. The present invention targets key point detection in spine MRI images; previous work has mostly relied on manual annotation by experts, which is inefficient, affected by the annotators' subjectivity, and especially unsuitable for large-scale data processing and analysis. Most existing attempts to apply artificial intelligence rely on low-level image features. For example, the paper (Ebrahimi S, Angelini E, Gajny L, et al. Lumbar spine posterior corner detection in X-rays using Haar-based features[C]//2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE, 2016: 180-183.) uses Haar features to detect the corners of vertebrae; such methods exploit only low-level image information, are not robust, work only in a few specific scenarios, and are unsuitable for complex and variable medical settings. The present invention builds a high-quality data set and exploits the strong learning ability of deep learning to realize a more robust and accurate key point detection method for spine MRI images.
Summary of the Invention
The present invention is realized by the following technical solution: a deep-learning-based method for detecting key points in spine MRI images, comprising the following steps:
Step 1: Input the spine MRI image into a trained object detection network to obtain the position of each vertebra and a coarse-grained label indicating whether it is S1.
Step 2: Using all vertebrae obtained in Step 1 and the located S1, combined with the physiological structure of the spine, filter out false positive detections and assign a fine-grained label to each vertebra.
Step 3: Crop each vertebra detected in Step 2 together with part of its surrounding region, and feed it into a trained key point detection network to detect the positions of the six key points UA, UM, UP, LA, LM and LP on the upper and lower boundaries of each vertebra.
Step 4: Segment the vertebrae obtained in Step 2 with a trained segmentation network to obtain edge information, and use this edge information to correct the key point positions obtained in Step 3, yielding the final key point prediction. A high-level sketch of how these four steps fit together is given below.
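The following minimal Python sketch chains the four steps together; every callable it receives (detect_vertebrae, assign_labels_and_filter, crop_with_margin, detect_keypoints, segment_edges, correct_keypoints) is an illustrative placeholder supplied by the user, not an interface defined by this disclosure.

```python
def detect_spine_keypoints(mri_image,
                           detect_vertebrae, assign_labels_and_filter,
                           crop_with_margin, detect_keypoints,
                           segment_edges, correct_keypoints):
    """Chain the four steps of the method. Each callable argument stands for
    one stage described above and is a placeholder; none of these names is
    defined by the disclosure itself."""
    # Step 1: coarse detection -- one box plus an "is S1" flag per vertebra.
    detections = detect_vertebrae(mri_image)

    # Step 2: use the spine's structure to drop false positives and assign
    # fine-grained labels (S1, L5, L4, ... from bottom to top).
    vertebrae = assign_labels_and_filter(detections)

    results = {}
    for vertebra in vertebrae:
        crop = crop_with_margin(mri_image, vertebra["box"])

        # Step 3: six key points (UA, UM, UP, LA, LM, LP) from heat maps.
        keypoints = detect_keypoints(crop)

        # Step 4: segment the vertebra, then snap the key points to its edge.
        mask = segment_edges(crop)
        results[vertebra["label"]] = correct_keypoints(keypoints, mask)
    return results
```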
Further, in Step 1, the coarse-grained vertebra labels are S1 and NS1 (where S1 refers to sacral vertebra 1 and NS1 refers to all other vertebrae except sacral 1), and the object detection network is YOLOv3.
Further, Step 2 is realized by the following sub-steps.
1) Use the detected S1 as the anchor vertebra, compute the height of each vertebra's center on the image, and sort the detected vertebrae by centroid height.
2) Following the physiological structure of the human spine, assign the corresponding fine-grained labels S1, L5, L4, L3, L2, L1, T12, T11, etc. to the detected vertebrae from bottom to top.
3) Filter false positive targets by computing each vertebra's aspect ratio and upper edge height and checking whether they meet the threshold requirements.
Further, the aspect ratio threshold is 1.6 and the upper edge height threshold is 5.
Further, Step 4 is specifically:
(4.1) Construct the vertebral edge segmentation network: the network consists of a down-sampling part and an up-sampling part. The down-sampling part is a ResNet50 with the fully connected layer removed; the up-sampling part consists of four corresponding stages of up-sampling convolution blocks, each structured as upsampling->conv->bn->relu.
(4.2) Train the vertebral edge segmentation network: first use the key point annotations to build a coarse-grained segmentation data set and pre-train the network, then construct an accurate fine-grained segmentation data set to train it further.
(4.3) After the segmentation result is obtained, refine it using a CRF and the large image gradient at edges to obtain more precise edge segmentation information.
Further, step (4.3) is specifically:
(4.3.1) Draw the extension line of the line connecting two key points, obtain the vertebral edge from the vertebral edge segmentation network, and take the farthest intersection of the extension line with the vertebral edge as the corrected key point coordinates.
(4.3.2) Combine this with the labels obtained in Step 2 to produce the final key point prediction.
The beneficial effect of the present invention is that it uses deep learning to detect key points in spine MRI images, avoiding tedious manual labeling and reducing the burden on doctors. Compared with manual annotation by experts, the present invention avoids the influence of doctors' subjective factors and can process large-scale data in batches, providing a data basis for further spine MRI image analysis. The method can be developed into interactive, visual software for automatically annotating key points in spine MRI images, and the resulting key point detections can be used to compute the intervertebral disc height index and the lumbar lordosis angle.
Description of Drawings
Fig. 1 is a schematic diagram of the key points detected by the present invention.
Fig. 2 is the overall flow chart of the present invention.
Fig. 3 is a schematic diagram of the key point correction in the present invention.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, the data sets used to train the networks in the present invention are all self-built; the spine key points are annotated by medical experts, and the detected key points are the six points UA, UM, UP, LA, LM and LP on the upper and lower boundaries of each vertebra. The basic model training process is as follows:
1. Collect spine MRI images and randomly select a subset as the initial data set.
2. Annotate or correct the data set, and train the models on the annotated data.
3. Use the trained models to predict newly acquired spine MRI images and add the results to the data set.
4. Repeat steps 2 and 3 until the model accuracy meets the usage requirements.
The object detection network in Fig. 2 is preferably YOLOv3. During training, the vertebrae are divided into two coarse-grained classes, S1 and NS1 (sacral 1 and non-sacral 1), depending on whether the vertebra is S1. The data set for training the detection network is constructed from the original data set: vertebra bounding boxes and coarse-grained class labels are computed from the key point annotations of each vertebra and used to train the YOLOv3 network, as sketched below.
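As a hedged illustration of how such training boxes might be derived from the six annotated key points, the sketch below simply takes the bounding box of the points and pads it; the 10% margin is an assumption, not a value given in the text.

```python
import numpy as np

def box_from_keypoints(keypoints, margin=0.1):
    """Derive a detection training box for one vertebra from its six annotated
    keypoints (UA, UM, UP, LA, LM, LP), given as (x, y) pairs. The relative
    margin is an illustrative assumption."""
    pts = np.asarray(keypoints, dtype=float)      # shape (6, 2)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    w, h = x_max - x_min, y_max - y_min
    return (x_min - margin * w, y_min - margin * h,
            x_max + margin * w, y_max + margin * h)
```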
After obtaining the positions of the vertebrae in the spine MRI image and the coarse-grained information of whether each is S1, the method uses the structural information of the spine to filter false positive predictions and determine the class of each vertebra. Specifically, the detected S1 is used as the anchor vertebra: the height of each vertebra's center on the image is computed, and the detected vertebrae are sorted by centroid height. Then, following the physiological structure of the human spine, labels such as S1, L5, L4, L3, L2, L1, T12 and T11 are assigned to the detected vertebrae from bottom to top. A vertebra at the very top of the image may not be fully captured; such objects are filtered by checking whether the vertebra's aspect ratio and center height meet the threshold requirements, where the aspect ratio is computed from the object detection result. A sketch of this labeling and filtering logic follows.
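A possible implementation of the labeling and filtering step is sketched below. The detection record format and the exact way the aspect-ratio and upper-edge tests are combined are assumptions; the bottom-up label order and the threshold values (1.6 and 5) follow the text.

```python
def assign_labels_and_filter(detections,
                             aspect_ratio_thresh=1.6, top_edge_thresh=5):
    """Assign fine-grained labels bottom-up starting from the detected S1 and
    drop implausible boxes. `detections` is assumed to be a list of dicts with
    'box' = (x1, y1, x2, y2); the filtering rule below is a simplification of
    the thresholds described in the text."""
    names = ["S1", "L5", "L4", "L3", "L2", "L1", "T12", "T11"]

    kept = []
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        aspect_ratio = (x2 - x1) / max(y2 - y1, 1e-6)
        # Discard a partially visible vertebra at the top of the image:
        # unusually wide-and-flat box whose upper edge hugs the image border.
        if aspect_ratio > aspect_ratio_thresh and y1 < top_edge_thresh:
            continue
        kept.append(det)

    # Sort by centroid height, bottom of the image first (larger y = lower).
    kept.sort(key=lambda d: (d["box"][1] + d["box"][3]) / 2, reverse=True)
    for name, det in zip(names, kept):
        det["label"] = name
    return kept
```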
The key point detection network in Fig. 2 is preferably a U-shaped network or a Stacked Hourglass Network. The training data are vertebra images cropped from the original images, together with part of their surroundings, and the heat maps corresponding to the key points. Online hard example mining is used when training the key point detection network. A sketch of how such heat-map targets can be generated is given below.
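The text states that heat maps are used as training targets but does not give their parameters; one Gaussian peak per key point is a common choice, sketched below with an assumed heat-map size and sigma.

```python
import numpy as np

def keypoint_heatmaps(keypoints, heatmap_size=(64, 64), sigma=2.0):
    """Build one Gaussian heat map per key point (UA, UM, UP, LA, LM, LP).
    Heat-map size and sigma are illustrative assumptions; `keypoints` are
    (x, y) pairs already expressed in heat-map coordinates."""
    h, w = heatmap_size
    ys, xs = np.mgrid[0:h, 0:w]
    maps = np.zeros((len(keypoints), h, w), dtype=np.float32)
    for i, (x, y) in enumerate(keypoints):
        maps[i] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps
```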
There is some error in the original data annotation, which causes the annotated key points to deviate somewhat from the vertebral edge. The method therefore uses the edge information of the vertebrae to further correct the detected key points. First, a U-shaped deep convolutional neural network is trained as a vertebral edge segmentation network for segmenting vertebral edges. The network consists of a down-sampling part and an up-sampling part: the down-sampling part is a ResNet50 with the fully connected layer removed, and the up-sampling part consists of four corresponding stages of up-sampling convolution blocks, each structured as upsampling->conv->bn->relu. To improve the accuracy of the vertebral edge segmentation network, the method first uses the key point annotations to build a coarse-grained segmentation data set for pre-training, and then constructs an accurate fine-grained segmentation data set to train the network further. The method also uses a conditional random field (CRF) and the fact that the image gradient is large at edges to further refine the segmentation result and obtain more accurate edge information. After the vertebral edge information is obtained from the segmentation network, the edge information is used to correct the detected key points: as shown in Fig. 3, extension lines are drawn along the UA-LA, UM-LM and UP-LP connections toward the vertebral edge, and the farthest intersection of each extension line with the vertebral edge is taken as the corrected UA, LA, UM, LM, UP and LP coordinates. To improve the efficiency of data set production and make the method convenient for medical staff, the above method has been developed into interactive, visual automatic detection and annotation software. A sketch of the segmentation network architecture is given below.
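A minimal PyTorch sketch of the described segmentation architecture follows, assuming a ResNet50 encoder with the fully connected layer removed and four upsampling->conv->bn->relu blocks; the channel widths, the single-channel output head, the final interpolation back to input size and the omission of U-Net-style skip connections are simplifying assumptions.

```python
import torch.nn as nn
from torchvision.models import resnet50

class UpBlock(nn.Module):
    # upsampling -> conv -> bn -> relu, as described in the text
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class VertebraEdgeNet(nn.Module):
    """Sketch of the described edge segmentation network: a ResNet50 encoder
    without its fully connected layer, followed by four up-sampling blocks.
    Assumes a 3-channel input (a grayscale MRI slice replicated if needed)."""
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)
        # keep everything up to and including layer4; drop avgpool and fc
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.decoder = nn.Sequential(
            UpBlock(2048, 512),
            UpBlock(512, 256),
            UpBlock(256, 64),
            UpBlock(64, 32),
        )
        self.head = nn.Conv2d(32, 1, kernel_size=1)   # edge/background logits

    def forward(self, x):
        feats = self.encoder(x)      # 1/32 of the input resolution
        up = self.decoder(feats)     # four x2 stages -> 1/2 resolution
        logits = self.head(up)
        # bring the mask back to the input size
        return nn.functional.interpolate(
            logits, size=x.shape[-2:], mode="bilinear", align_corners=False)
```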
Once the networks are trained, they can be applied to the entire spine MRI image key point detection pipeline. According to the technical solution set forth above, the present invention performs the following steps in the spine MRI image key point detection process:
Step 1: Input the spine MRI image into the trained YOLOv3 object detection network (Redmon J, Farhadi A. Yolov3: An incremental improvement[J]. arXiv preprint arXiv:1804.02767, 2018.) to obtain the position of each vertebra and a coarse-grained label indicating whether it is S1.
Step 2: Using all vertebrae obtained in Step 1 and the located S1, combined with the physiological structure of the spine, filter out false positive detections and assign a fine-grained label to each vertebra.
1) Use the detected S1 as the anchor vertebra, compute the height of each vertebra's center on the image, and sort the detected vertebrae by centroid height.
2) Following the physiological structure of the human spine, assign the corresponding fine-grained labels S1, L5, L4, L3, L2, L1, T12, T11, etc. to the detected vertebrae from bottom to top.
3) Filter false positive targets by computing each vertebra's aspect ratio and upper edge height and checking whether they meet the threshold requirements. The aspect ratio threshold is 1.6 and the upper edge height threshold is 5.
Step 3: Crop each vertebra detected in Step 2 together with part of its surrounding region and feed it into the trained key point detection network to obtain heat maps for the six key points UA, UM, UP, LA, LM and LP on the upper and lower boundaries of each vertebra. The key point coordinates can be extracted from the heat maps using, but not limited to, the method of (Zhang F, Zhu X, Dai H, et al. Distribution-aware coordinate representation for human pose estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 7093-7102.); a simpler extraction sketch is shown below.
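As a simpler stand-in for the cited distribution-aware coordinate representation, the sketch below takes each heat-map peak and applies the common quarter-pixel shift toward the neighbouring higher response.

```python
import numpy as np

def heatmap_to_coords(heatmaps):
    """Extract one (x, y) coordinate per key point heat map: arg-max peak plus
    a quarter-pixel refinement -- a common, simpler alternative to the cited
    distribution-aware method."""
    coords = []
    for hm in heatmaps:                         # hm: 2-D array
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        px, py = float(x), float(y)
        # quarter-offset refinement along each axis, guarded at the borders
        if 0 < x < hm.shape[1] - 1:
            px += 0.25 * np.sign(hm[y, x + 1] - hm[y, x - 1])
        if 0 < y < hm.shape[0] - 1:
            py += 0.25 * np.sign(hm[y + 1, x] - hm[y - 1, x])
        coords.append((px, py))
    return coords
```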
Step 4: Segment the vertebrae obtained in Step 2 with the trained segmentation network to obtain edge information, and use this edge information to correct the key point positions obtained in Step 3.
1) Use the vertebral edge segmentation network to obtain the vertebral edge information.
2) Draw extension lines along the UA-LA, UM-LM and UP-LP connections toward the vertebral edge, and take the farthest intersection of each extension line with the vertebral edge as the corrected UA, LA, UM, LM, UP and LP coordinates (see the sketch after these sub-steps).
3) Combine this with the labels obtained in Step 2 to output the final key point prediction.
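A sketch of the extension-line correction in sub-step 2) is given below; the half-pixel marching step and the search range are illustrative assumptions, and `vertebra_mask` is assumed to be the binary mask produced by the edge segmentation network.

```python
import numpy as np

def correct_keypoint(p_far, p_near, vertebra_mask, extra=40.0):
    """Take the line through p_near and p_far (e.g. LA -> UA), extend it past
    p_far toward the vertebral edge, and return the farthest point on that ray
    that still lies inside the segmented vertebra."""
    p_far = np.asarray(p_far, dtype=float)
    p_near = np.asarray(p_near, dtype=float)
    direction = p_far - p_near
    length = np.linalg.norm(direction)
    direction = direction / (length + 1e-8)

    h, w = vertebra_mask.shape
    best = p_far
    # march along the extension line in half-pixel steps
    for t in np.arange(0.0, length + extra, 0.5):
        q = p_near + t * direction
        x, y = int(round(q[0])), int(round(q[1]))
        if not (0 <= x < w and 0 <= y < h):
            break
        if vertebra_mask[y, x] > 0:
            best = q                  # farthest on-vertebra point so far
    return float(best[0]), float(best[1])

# Example: snap UA outward along the LA -> UA extension line.
# ua_corrected = correct_keypoint(ua, la, mask)
```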
Based on the above pipeline, interactive, visual software for automatically annotating key points in spine MRI images is developed, and the intervertebral disc height index and lumbar lordosis angle are computed from the key point detection results, for example as sketched below.
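The disclosure does not give formulas for these two measurements, so the sketch below uses commonly assumed definitions purely for illustration: a disc height index normalised by vertebral depth, and a Cobb-style angle between the upper endplates of L1 and S1.

```python
import numpy as np

def disc_height_index(lower_of_upper, upper_of_lower):
    """One common definition (an assumption here, not taken from the patent):
    mean of the anterior, middle and posterior disc heights, normalised by the
    depth of the vertebral body above. Each argument maps 'A', 'M', 'P' to the
    (x, y) key points bounding the disc (LA/LM/LP of the vertebra above and
    UA/UM/UP of the vertebra below)."""
    heights = [np.linalg.norm(np.subtract(lower_of_upper[k], upper_of_lower[k]))
               for k in ("A", "M", "P")]
    depth = np.linalg.norm(np.subtract(lower_of_upper["A"], lower_of_upper["P"]))
    return float(np.mean(heights) / depth)

def lumbar_lordosis_angle(l1_ua, l1_up, s1_ua, s1_up):
    """Angle between the upper endplate of L1 (UA-UP) and the upper endplate of
    S1 (UA-UP); a Cobb-style definition assumed here for illustration."""
    v1 = np.subtract(l1_up, l1_ua)
    v2 = np.subtract(s1_up, s1_ua)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```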
The above is the main content of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (4)

  1. A deep-learning-based method for detecting key points in spine MRI images, characterized in that it comprises the following steps:
    Step 1: Input the spine MRI image into a trained object detection network to obtain the position of each vertebra and a coarse-grained label indicating whether it is S1;
    Step 2: Using all vertebrae obtained in Step 1 and the located S1, combined with the physiological structure of the spine, filter out false positive detections and assign a fine-grained label to each vertebra;
    Step 3: Crop each vertebra detected in Step 2 together with part of its surrounding region, and feed it into a trained key point detection network to detect the positions of the six key points UA, UM, UP, LA, LM and LP on the upper and lower boundaries of each vertebra;
    Step 4: Segment the vertebrae obtained in Step 2 with a trained segmentation network to obtain edge information, and use this edge information to correct the key point positions obtained in Step 3 to obtain the final key point prediction, specifically:
    (4.1) Construct the vertebral edge segmentation network: the network consists of a down-sampling part and an up-sampling part; the down-sampling part is a ResNet50 with the fully connected layer removed, and the up-sampling part consists of four corresponding stages of up-sampling convolution blocks, each structured as upsampling->conv->bn->relu;
    (4.2) Train the vertebral edge segmentation network: first use the key point annotations to build a coarse-grained segmentation data set and pre-train the network, then construct an accurate fine-grained segmentation data set to train it further;
    (4.3) After the segmentation result is obtained, refine it using a conditional random field and the large image gradient at edges to obtain more precise edge segmentation information, specifically:
    (4.3.1) Draw the extension line of the line connecting two key points, obtain the vertebral edge from the vertebral edge segmentation network, and take the farthest intersection of the extension line with the vertebral edge as the corrected key point coordinates;
    (4.3.2) Combine this with the labels obtained in Step 2 to output the final key point prediction.
  2. The deep-learning-based method for detecting key points in spine MRI images according to claim 1, characterized in that, in Step 1, the coarse-grained vertebra labels are S1 and NS1, where S1 refers to sacral vertebra 1 and NS1 refers to all other vertebrae except sacral 1, and the object detection network is YOLOv3.
  3. The deep-learning-based method for detecting key points in spine MRI images according to claim 1, characterized in that Step 2 is realized by the following sub-steps:
    (2.1) Use the detected S1 as the anchor vertebra, compute the height of each vertebra's center on the image, and sort the detected vertebrae by centroid height;
    (2.2) Following the physiological structure of the human spine, assign the corresponding fine-grained labels S1, L5, L4, L3, L2, L1, T12, T11 to the detected vertebrae from bottom to top;
    (2.3) Filter false positive targets by computing each vertebra's aspect ratio and upper edge height and checking whether they meet the threshold requirements.
  4. The deep-learning-based method for detecting key points in spine MRI images according to claim 3, characterized in that the aspect ratio threshold is 1.6 and the upper edge height threshold is 5.
PCT/CN2021/112874 2020-08-17 2021-08-16 Mri spinal image keypoint detection method based on deep learning WO2022037548A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022578644A JP7489732B2 (en) 2020-08-17 2021-08-16 Method for detecting key points in spinal MRI images based on deep learning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010824727.7A CN112184617B (en) 2020-08-17 2020-08-17 Spine MRI image key point detection method based on deep learning
CN202010824727.7 2020-08-17

Publications (1)

Publication Number Publication Date
WO2022037548A1 true WO2022037548A1 (en) 2022-02-24

Family

ID=73919631

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/112874 WO2022037548A1 (en) 2020-08-17 2021-08-16 Mri spinal image keypoint detection method based on deep learning

Country Status (3)

Country Link
JP (1) JP7489732B2 (en)
CN (1) CN112184617B (en)
WO (1) WO2022037548A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881930A (en) * 2022-04-07 2022-08-09 重庆大学 3D target detection method, device, equipment and storage medium based on dimension reduction positioning
US20230169644A1 (en) * 2021-11-30 2023-06-01 Pong Yuen Holdings Limited Computer vision system and method for assessing orthopedic spine condition
CN116309591A (en) * 2023-05-19 2023-06-23 杭州健培科技有限公司 Medical image 3D key point detection method, model training method and device
CN117474906A (en) * 2023-12-26 2024-01-30 合肥吉麦智能装备有限公司 Spine X-ray image matching method and intraoperative X-ray machine resetting method

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184617B (en) * 2020-08-17 2022-09-16 浙江大学 Spine MRI image key point detection method based on deep learning
CN112700448B (en) * 2021-03-24 2021-06-08 成都成电金盘健康数据技术有限公司 Spine image segmentation and identification method
CN113392872A (en) * 2021-04-30 2021-09-14 上海市第六人民医院 Vertebral fracture radiograph reading method and system based on artificial intelligence assistance
CN114494192B (en) * 2022-01-26 2023-04-25 西南交通大学 Thoracolumbar fracture identification segmentation and detection positioning method based on deep learning
CN114581395A (en) * 2022-02-28 2022-06-03 四川大学 Method for detecting key points of spine medical image based on deep learning
CN116797597B (en) * 2023-08-21 2023-11-17 邦世科技(南京)有限公司 Three-stage full-network-based full detection method and system for degenerative spinal diseases

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780520A (en) * 2015-11-18 2017-05-31 周兴祥 The extraction method of vertebra in a kind of MRI lumbar vertebraes image
CN110415291A (en) * 2019-08-07 2019-11-05 清华大学 Image processing method and relevant device
US20190370957A1 (en) * 2018-05-31 2019-12-05 General Electric Company Methods and systems for labeling whole spine image using deep neural network
CN111402269A (en) * 2020-03-17 2020-07-10 东北大学 Vertebral canal segmentation method based on improved FC-DenseNuts
CN112184617A (en) * 2020-08-17 2021-01-05 浙江大学 Spine MRI image key point detection method based on deep learning

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5620668B2 (en) 2009-10-26 2014-11-05 学校法人北里研究所 Intervertebral disc degeneration evaluation apparatus and program
US9763636B2 (en) * 2013-09-17 2017-09-19 Koninklijke Philips N.V. Method and system for spine position detection
JP7191020B2 (en) * 2016-12-08 2022-12-16 コーニンクレッカ フィリップス エヌ ヴェ Simplified Navigation of Spine Medical Imaging Data
JP7120560B2 (en) 2017-07-03 2022-08-17 株式会社リコー Diagnosis support system, diagnosis support method and diagnosis support program
JP7135473B2 (en) 2018-01-31 2022-09-13 株式会社リコー MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING METHOD, PROGRAM AND MEDICAL IMAGE PROCESSING SYSTEM
CN109523523B (en) * 2018-11-01 2020-05-05 郑宇铄 Vertebral body positioning, identifying and segmenting method based on FCN neural network and counterstudy
CN109919903B (en) * 2018-12-28 2020-08-07 上海联影智能医疗科技有限公司 Spine detection positioning marking method and system and electronic equipment
CN110599508B (en) * 2019-08-01 2023-10-27 平安科技(深圳)有限公司 Artificial intelligence-based spine image processing method and related equipment
CN110866921A (en) * 2019-10-17 2020-03-06 上海交通大学 Weakly supervised vertebral body segmentation method and system based on self-training and slice propagation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780520A (en) * 2015-11-18 2017-05-31 周兴祥 The extraction method of vertebra in a kind of MRI lumbar vertebraes image
US20190370957A1 (en) * 2018-05-31 2019-12-05 General Electric Company Methods and systems for labeling whole spine image using deep neural network
CN110415291A (en) * 2019-08-07 2019-11-05 清华大学 Image processing method and relevant device
CN111402269A (en) * 2020-03-17 2020-07-10 东北大学 Vertebral canal segmentation method based on improved FC-DenseNuts
CN112184617A (en) * 2020-08-17 2021-01-05 浙江大学 Spine MRI image key point detection method based on deep learning

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230169644A1 (en) * 2021-11-30 2023-06-01 Pong Yuen Holdings Limited Computer vision system and method for assessing orthopedic spine condition
CN114881930A (en) * 2022-04-07 2022-08-09 重庆大学 3D target detection method, device, equipment and storage medium based on dimension reduction positioning
CN114881930B (en) * 2022-04-07 2023-08-18 重庆大学 3D target detection method, device, equipment and storage medium based on dimension reduction positioning
CN116309591A (en) * 2023-05-19 2023-06-23 杭州健培科技有限公司 Medical image 3D key point detection method, model training method and device
CN116309591B (en) * 2023-05-19 2023-08-25 杭州健培科技有限公司 Medical image 3D key point detection method, model training method and device
CN117474906A (en) * 2023-12-26 2024-01-30 合肥吉麦智能装备有限公司 Spine X-ray image matching method and intraoperative X-ray machine resetting method
CN117474906B (en) * 2023-12-26 2024-03-26 合肥吉麦智能装备有限公司 Intraoperative X-ray machine resetting method based on spine X-ray image matching

Also Published As

Publication number Publication date
JP2023530023A (en) 2023-07-12
CN112184617A (en) 2021-01-05
JP7489732B2 (en) 2024-05-24
CN112184617B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
WO2022037548A1 (en) Mri spinal image keypoint detection method based on deep learning
CN110021025B (en) Region-of-interest matching and displaying method, device, equipment and storage medium
CN111047572B (en) Automatic spine positioning method in medical image based on Mask RCNN
Zhang et al. Automatic bone age assessment for young children from newborn to 7-year-old using carpal bones
Hogeweg et al. Clavicle segmentation in chest radiographs
CN111882560B (en) Lung parenchyma CT image segmentation method based on weighted full convolution neural network
US20220198214A1 (en) Image recognition method and device based on deep convolutional neural network
CN113420826B (en) Liver focus image processing system and image processing method
CN113284090B (en) Scoliosis detection method and medical platform
CN112365438B (en) Pelvis parameter automatic measurement method based on target detection neural network
CN112802019B (en) Leke typing method based on spine AIS image
WO2020215485A1 (en) Fetal growth parameter measurement method, system, and ultrasound device
Li et al. Developing an image-based deep learning framework for automatic scoring of the pentagon drawing test
Lin et al. Multitask deep learning for segmentation and lumbosacral spine inspection
CN114757873A (en) Rib fracture detection method and device, terminal equipment and readable storage medium
CN112927213B (en) Medical image segmentation method, medium and electronic device
Qin et al. Residual block-based multi-label classification and localization network with integral regression for vertebrae labeling
CN113326745A (en) Application system for judging and identifying stoma situation through image identification technology
Wang et al. Automatic and accurate segmentation of peripherally inserted central catheter (PICC) from chest X-rays using multi-stage attention-guided learning
CN116228660A (en) Method and device for detecting abnormal parts of chest film
Zhang et al. Automatic Lenke classification of adolescent idiopathic scoliosis with deep learning
Cui et al. Cobb Angle Measurement Method of Scoliosis Based on U-net Network
CN113362282B (en) Hip joint key point position detection method and system based on multi-task learning
NL2028748B1 (en) Automatic segmentation and identification method of spinal vertebrae based on X-ray film
CN115797307B (en) Skeleton coronary balance parameter detecting system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21857642

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022578644

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21857642

Country of ref document: EP

Kind code of ref document: A1