WO2022188547A1 - Key point detection method for X-ray cephalometric images - Google Patents

Key point detection method for X-ray cephalometric images

Info

Publication number
WO2022188547A1
WO2022188547A1 · PCT/CN2022/072240 · CN2022072240W
Authority
WO
WIPO (PCT)
Prior art keywords
key points
target
computer-implemented method
image
Prior art date
Application number
PCT/CN2022/072240
Other languages
English (en)
French (fr)
Inventor
马成龙
Original Assignee
杭州朝厚信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州朝厚信息科技有限公司
Publication of WO2022188547A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/08 - Learning methods
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10116 - X-ray image
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing

Definitions

  • the present application generally relates to a method for detecting key points of cephalometric images, in particular to a method for detecting key points of cephalometric images using artificial neural networks.
  • Cephalometric X-ray is an important means of projection measurement of the head for examining craniofacial growth and deformities.
  • at present, X-ray cephalometric measurement is generally performed by professionals who manually mark key points (for example, the nose root point, the ear point, etc.) on the cephalometric image and then measure and calculate relevant indicators (for example, the upper and lower central incisor angle, facial height, etc.) with the aid of tools such as rulers and protractors.
  • existing cephalogram key point detection methods mostly rely on low-level image features; they are less robust, are only suitable for certain specific scenarios, and can detect only a small number of key points, so they are not suitable for complex and variable medical scenarios.
  • one aspect of the present application provides a computer-implemented method for detecting key points in a cephalometric image, which includes: acquiring a cephalometric X-ray image; dividing the cephalometric image into regions using a trained object detection artificial neural network to obtain N target region images; and performing key point detection on the N target region images respectively using N trained key point detection artificial neural networks, where N is a natural number greater than or equal to 2.
  • the N target regions include five target regions: the skull core region, the lips, the teeth, the mandible, and the cervical spine.
  • the N target regions comprise a measuring ruler target region.
  • for the target region images other than the measuring ruler target region image among the N target region images, the corresponding key point detection artificial neural network generates one heat map for each key point; for the measuring ruler target region image, the corresponding key point detection artificial neural network generates only a single heat map containing a plurality of key points, wherein the key points of the measuring ruler target region image correspond to the salient graduations of the measuring ruler.
  • the computer-implemented method for detecting key points in a cephalometric image further comprises: determining a scale based on the key points of the measuring ruler target region image and the size of one of the other target regions.
  • the N target regions further include a skull core region, and the scale is determined based on the key points of the measuring ruler target region image and the width of the skull core region image.
  • the computer-implemented method for detecting key points in a cephalometric image further includes: calculating medical measurement indicators based on the key points of the target regions other than the measuring ruler target region and the scale.
  • the computer-implemented method for detecting key points in cephalometric images further includes: for a same key point detected in at least two of the N target region images, calculating its final coordinates based on the coordinates obtained from the two detections.
  • the computer-implemented method for detecting key points in cephalometric images further includes: for a same key point detected in at least two of the N target region images, averaging the coordinates obtained from the two detections and using the average as its final coordinates.
  • the object detection artificial neural network is a YOLOv3 network.
  • the N key point detection artificial neural networks are HRNet networks.
  • FIG. 1 is a schematic flowchart of a method for detecting key points in an X-ray cephalogram image according to an embodiment of the present application
  • Figure 2 shows an example of X-ray cephalometric images using measuring rulers of different sizes
  • Figure 3 shows the target area division of the cephalogram image in an example
  • Figure 4 shows an example of the skull core region image and its key point heatmap
  • Figure 5 shows an example of the measurement ruler target area image and its key point heatmap.
  • One aspect of the present application provides a computer-implemented method for keypoint detection of cephalometric images.
  • FIG. 1 is a schematic flowchart of a method 100 for detecting key points in a cephalogram image according to an embodiment of the present application.
  • the X-ray cephalogram image includes two parts, one part is the image of the head, and the other part is the image of the measuring ruler.
  • the image of the measuring ruler is located at the upper left or upper right of the head image.
  • FIG. 2 shows the cephalometric images of two different measuring rulers.
  • the cephalogram image is divided into regions using the trained target detection artificial neural network.
  • the skull region and the measuring ruler region of the cephalogram image may be segmented for subsequent processing respectively.
  • the skull region can be further divided into five regions (the skull core region, lips, teeth, mandible, and cervical spine), each processed separately, to further improve processing accuracy. That is to say, in this embodiment, the X-ray cephalogram image can be input into the trained object detection artificial neural network to detect and locate six target regions, namely the skull core region, lips, teeth, mandible, cervical spine, and measuring ruler, output the position information of the six target regions, and crop out each target region according to that position information as the input of subsequent operations.
  • FIG. 3 shows an example of an X-ray cephalogram image divided by the object detection artificial neural network of the present application into six target regions: the skull core region, lips, teeth, mandible, cervical spine, and measuring ruler.
  • the target detection artificial neural network can use the YOLOv3 network, which has the advantages of fast calculation speed and high precision.
  • the target detection artificial neural network can also adopt any other applicable network, for example, the SSD (Single Shot Detection) network and the Faster R-CNN network.
  • key points are detected based on each target area image using the corresponding key point detection artificial neural network.
  • a key point detection artificial neural network may be configured for each of the six target regions of the skull core region, lips, teeth, mandible, cervical spine, and measuring ruler, for a total of six key point detection artificial neural networks.
  • in a preferred embodiment, HRNet (High-Resolution Net) may be used as the key point detection artificial neural network.
  • the artificial neural network for key point detection can also adopt any other suitable network, for example, Hourglass Net, etc.
  • Table 1 lists the key points of the five target areas of the skull core area, lips, teeth, mandible and cervical spine.
  • for the measuring ruler target region, the key points to be detected are the salient graduations of the measuring ruler (usually the 5 mm and 10 mm graduation lines). Since different X-ray cephalogram images may use measuring rulers of different specifications, the number of key points in the measuring ruler target region is uncertain. Therefore, key point detection for the five target regions of the skull core region, lips, teeth, mandible, and cervical spine differs slightly from key point detection for the measuring ruler target region.
  • Each key point detection artificial neural network is used to detect the key points of each target area image, and output its coordinate information.
  • taking the skull core region as an example, the corresponding key point detection artificial neural network needs to detect 72 key points: it receives the skull core region image and, based on it, generates and outputs 72 heat maps, each corresponding to one key point.
  • Figure 4 shows the image of the skull core area and its corresponding heat map in an example, wherein the coordinates of the maximum value on the first heat map are the coordinates of the first key point G.
  • since the five target regions overlap, some key points are detected by more than one key point detection artificial neural network. Referring to Table 1 and taking the C2p key point as an example, this key point lies in both the skull core region and the cervical spine target regions, so each of the two corresponding key point detection artificial neural networks detects one coordinate for it.
  • the average of multiple coordinates of the same key point can be taken as the final coordinate of the key point.
  • alternatively, the final coordinates of the key point can be computed as a weighted combination of the multiple coordinates, or the coordinates output by one of the key point detection artificial neural networks can be used directly.
  • since the number of salient graduations on the measuring ruler is not fixed, the key point detection artificial neural network corresponding to the measuring ruler outputs only a single heat map.
  • FIG. 5 shows a measuring ruler region image and its corresponding heat map in one example, wherein each salient local maximum on the heat map corresponds to one detected graduation key point.
  • a scale is calculated based on the key points of the measuring scale.
  • if the confidence of the measuring ruler target region detected by the object detection artificial neural network exceeds a predetermined threshold, the measuring ruler region image can be input into the corresponding key point detection artificial neural network for key point detection, and then a scale (i.e., the ratio of image size to real size, or in other words the real-world size of an image pixel) can be calculated based on the detected key points.
  • the scale can be calculated according to the following method.
  • first, on the measuring ruler heat map, three successively adjacent key points (i.e., salient graduations) r1, r2, and r3 are selected from bottom to top, and the distance dist12 between r1 and r2 and the distance dist13 between r1 and r3 are calculated. The scale can then be calculated based on the width of the skull core region. Since a measuring ruler key point may be a 5 mm graduation or a 10 mm graduation, there are two possibilities: first, dist12 corresponds to 10 mm and dist13 to 20 mm (i.e., the key points are all 10 mm graduations); second, dist12 corresponds to 5 mm and dist13 to 10 mm (i.e., the key points include both 5 mm and 10 mm graduations). Because the width of the skull core region usually lies in the range of 100 mm to 150 mm, the range dist12*10 to dist12*15 and the range dist13*10 to dist13*15 can each be compared with the width of the skull core region. Whichever range the width of the skull core region falls into, the unit distance corresponding to that range (i.e., dist12 or dist13) is 10 mm. Based on this, the scale can be calculated.
  • in addition to the width of the skull core region, the scale can also be calculated based on other dimensions, for example, the width or height of other target regions.
  • if the object detection artificial neural network does not detect the measuring ruler, or the confidence of the detected measuring ruler target region is below the predetermined threshold, the scale can be calculated based on the width of the skull core region. Likewise, it will be appreciated that, in addition to the width of the skull core region, the scale may be calculated based on other dimensions, e.g., the width or height of other target regions.
  • a measurement index is calculated based on the key point coordinate information and the scale.
  • various medical measurement indicators including angles and line distances, such as upper and lower central incisor angles, facial height, etc., can be calculated based on this.
  • through experimental comparison, the inventor of the present application found that dividing the cephalometric image into multiple target regions and then using the corresponding key point detection artificial neural networks to detect key points in each target region image yields higher detection accuracy than performing key point detection on the whole cephalometric image.
  • the method for detecting key points in cephalometric images of the present application is automatically executed by a computer, which greatly improves the efficiency compared with the scheme of manually marking key points.
  • the various diagrams may illustrate exemplary architectures or other configurations of the disclosed methods and systems, which may be helpful in understanding the features and functionality that may be included in them. What is claimed is not limited to the exemplary architectures or configurations shown, and the desired features may be implemented in various alternative architectures and configurations. Additionally, with respect to the flowcharts, functional descriptions, and method claims, the order of blocks presented herein should not be construed as limiting the various embodiments to implementations that perform the functions in the same order, unless the context clearly dictates otherwise.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

One aspect of the present application provides a computer-implemented key point detection method for X-ray cephalometric images, comprising: acquiring an X-ray cephalometric image; dividing the X-ray cephalometric image into regions using a trained object detection artificial neural network to obtain N target region images; and performing key point detection on the N target region images respectively using N trained key point detection artificial neural networks, wherein N is a natural number greater than or equal to 2.

Description

Key point detection method for X-ray cephalometric images
Technical Field
The present application relates generally to key point detection methods for X-ray cephalometric images, and in particular to key point detection methods for X-ray cephalometric images using artificial neural networks.
Background Art
X-ray cephalometry is a projection measurement of the head and an important means of examining craniofacial growth and deformities.
At present, X-ray cephalometric measurement is generally performed by professionals who manually annotate key points (for example, the nose root point, the ear point, etc.) on the cephalometric image and then, with the aid of tools such as rulers and protractors, measure and calculate relevant indicators (for example, the upper and lower central incisor angle, facial height, etc.). However, manual annotation is inefficient; moreover, because the key points in cephalometry are numerous and densely distributed, annotation is considerably more difficult; furthermore, annotators must receive professional training to be qualified for the work, which further increases labor costs.
Although some have begun to attempt automatic detection of cephalometric key points using methods such as random forests, the inventor of the present application found that such key point detection methods mostly rely on low-level image features, are less robust, are only applicable to certain specific scenarios, and can detect only a small number of key points; they are therefore unsuitable for complex and variable medical scenarios.
Because of the nature of X-ray imaging, there is a certain proportional relationship between image size and real size, and the calculation of some measurement indicators (for example, facial height) requires this relationship. During X-ray cephalometric imaging, a graduated measuring ruler is placed on the left or right side of the patient's head. The inventor of the present application realized that current automatic X-ray cephalometric measurement schemes lack the calculation of this scale and still require doctors to calculate or estimate it manually, which is inefficient.
In view of the above, it is necessary to provide a new X-ray cephalometric measurement method.
Summary of the Invention
One aspect of the present application provides a computer-implemented key point detection method for X-ray cephalometric images, comprising: acquiring an X-ray cephalometric image; dividing the X-ray cephalometric image into regions using a trained object detection artificial neural network to obtain N target region images; and performing key point detection on the N target region images respectively using N trained key point detection artificial neural networks, wherein N is a natural number greater than or equal to 2.
In some embodiments, the N target regions include five target regions: the skull core region, the lips, the teeth, the mandible, and the cervical spine.
In some embodiments, the N target regions include a measuring ruler target region.
In some embodiments, for the target region images other than the measuring ruler target region image among the N target region images, the corresponding key point detection artificial neural network generates one heat map for each key point; for the measuring ruler target region image, the corresponding key point detection artificial neural network generates only a single heat map containing a plurality of key points, wherein the key points of the measuring ruler target region image correspond to the salient graduations of the measuring ruler.
In some embodiments, the computer-implemented key point detection method for X-ray cephalometric images further comprises: determining a scale based on the key points of the measuring ruler target region image and the size of one of the other target regions.
In some embodiments, the N target regions further include a skull core region, and the scale is determined based on the key points of the measuring ruler target region image and the width of the skull core region image.
In some embodiments, the computer-implemented key point detection method for X-ray cephalometric images further comprises: calculating medical measurement indicators based on the key points of the target regions other than the measuring ruler target region and the scale.
In some embodiments, the computer-implemented key point detection method for X-ray cephalometric images further comprises: for a same key point detected in at least two of the N target region images, calculating its final coordinates based on the coordinates obtained from the two detections.
In some embodiments, the computer-implemented key point detection method for X-ray cephalometric images further comprises: for a same key point detected in at least two of the N target region images, averaging the coordinates obtained from the two detections as its final coordinates.
In some embodiments, the object detection artificial neural network is a YOLOv3 network.
In some embodiments, the N key point detection artificial neural networks are HRNet networks.
Brief Description of the Drawings
The above and other features of the present application are further described below with reference to the accompanying drawings and their detailed description. It should be understood that these drawings show only several exemplary embodiments of the present application and should therefore not be regarded as limiting the scope of protection of the present application. Unless otherwise indicated, the drawings are not necessarily drawn to scale, and like reference numerals denote like components.
FIG. 1 is a schematic flowchart of a key point detection method for X-ray cephalometric images in one embodiment of the present application;
FIG. 2 shows X-ray cephalometric images using measuring rulers of different specifications in one example;
FIG. 3 shows the target region division of an X-ray cephalometric image in one example;
FIG. 4 shows a skull core region image and its key point heat maps in one example; and
FIG. 5 shows a measuring ruler target region image and its key point heat map in one example.
Detailed Description
The following detailed description refers to the accompanying drawings, which form a part of this specification. The exemplary embodiments mentioned in the specification and drawings are for illustrative purposes only and are not intended to limit the scope of protection of the present application. In light of the present application, those skilled in the art will understand that many other embodiments may be adopted and that various changes may be made to the described embodiments without departing from the spirit and scope of the present application. It should be understood that the various aspects of the present application described and illustrated herein can be arranged, substituted, combined, separated, and designed in many different configurations, all of which fall within the scope of protection of the present application.
One aspect of the present application provides a computer-implemented key point detection method for X-ray cephalometric images.
Referring to FIG. 1, it is a schematic flowchart of a key point detection method 100 for X-ray cephalometric images in one embodiment of the present application.
In 101, an X-ray cephalometric image is acquired.
The capture of X-ray cephalometric images is a well-known technique and is therefore not described in detail here.
In one embodiment, the X-ray cephalometric image includes two parts: one is the image of the skull, and the other is the image of the measuring ruler. Typically, the measuring ruler image is located at the upper left or upper right of the skull image.
In some cases, different institutions may use measuring rulers of different specifications when capturing X-ray cephalometric images. Referring to FIG. 2, it shows X-ray cephalometric images that use two different specifications of measuring rulers.
In 103, the X-ray cephalometric image is divided into regions using the trained object detection artificial neural network.
In one embodiment, the skull region and the measuring ruler region of the X-ray cephalometric image may be segmented apart and subjected to subsequent processing separately.
In a preferred embodiment, the skull region may be further divided into five regions (the skull core region, the lips, the teeth, the mandible, and the cervical spine), each of which is processed separately to further improve processing accuracy. That is, in this embodiment, the X-ray cephalometric image may be input into the trained object detection artificial neural network to detect and locate six target regions in total, namely the skull core region, lips, teeth, mandible, cervical spine, and measuring ruler, output the position information of these six target regions, and crop out each target region according to that position information as the input of subsequent operations.
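As a purely illustrative aid (not part of the original disclosure), the following minimal Python sketch shows how detected boxes might be turned into per-region crops; the detection tuple format, the region label names, and the confidence threshold are assumptions introduced for this example rather than details given in the application.

```python
import numpy as np

# Assumed label set for the six target regions described above.
REGION_LABELS = ["skull_core", "lips", "teeth", "mandible", "cervical_spine", "ruler"]

def crop_regions(image: np.ndarray, detections, conf_threshold: float = 0.5):
    """Crop one sub-image per detected target region.

    `detections` is assumed to be an iterable of (label, confidence, (x1, y1, x2, y2))
    tuples produced by an object detector such as YOLOv3; the output format of a
    real detector will differ.
    """
    crops = {}
    h, w = image.shape[:2]
    for label, conf, (x1, y1, x2, y2) in detections:
        if label not in REGION_LABELS or conf < conf_threshold:
            continue
        # Clamp the box to the image bounds before slicing.
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        crops[label] = {
            "image": image[y1:y2, x1:x2],
            "offset": (x1, y1),   # kept so detected key points can be mapped back
            "confidence": conf,
        }
    return crops
```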
In light of the present application, it will be understood that the division into target regions is not limited to the above example and may be made according to specific needs.
Referring to FIG. 3, it shows an example of an X-ray cephalometric image divided by the object detection artificial neural network of the present application into six target regions: the skull core region, lips, teeth, mandible, cervical spine, and measuring ruler.
In a preferred embodiment, the object detection artificial neural network may be a YOLOv3 network, which has the advantages of fast computation and high accuracy. In light of the present application, it will be understood that the object detection artificial neural network may also be any other applicable network, for example, an SSD (Single Shot Detection) network or a Faster R-CNN network.
In 105, key points are detected from each target region image using the corresponding key point detection artificial neural network.
In one embodiment, one key point detection artificial neural network may be configured for each of the six target regions (the skull core region, lips, teeth, mandible, cervical spine, and measuring ruler), for a total of six key point detection artificial neural networks.
In a preferred embodiment, HRNet (High-Resolution Net) may be used as the key point detection artificial neural network. In light of the present application, it will be understood that the key point detection artificial neural network may also be any other applicable network, for example, Hourglass Net.
Table 1 below lists the key points of the five target regions: the skull core region, lips, teeth, mandible, and cervical spine.
[Table 1, listing the key points of the five target regions, is reproduced in the original publication as two images, PCTCN2022072240-appb-000001 and PCTCN2022072240-appb-000002.]
Table 1
For the five target regions (the skull core region, lips, teeth, mandible, and cervical spine), the types and number of key points are fixed. For the measuring ruler target region, the key points to be detected are the salient graduations of the measuring ruler (generally the 5 mm and 10 mm graduation lines); since different X-ray cephalometric images may use measuring rulers of different specifications, the number of key points in the measuring ruler target region is not fixed. Therefore, key point detection for the five target regions differs slightly from key point detection for the measuring ruler target region.
Each key point detection artificial neural network is used to detect the key points of the corresponding target region image and output their coordinate information.
Taking the skull core region as an example, the key point detection artificial neural network corresponding to this target region needs to detect 72 key points: it receives the skull core region image and, based on it, generates and outputs 72 heat maps, each corresponding to one key point. Referring to FIG. 4, it shows a skull core region image and its corresponding heat maps in one example, where the coordinates of the maximum value on the first heat map are the coordinates of the first key point G.
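The step of reading a key point coordinate off a heat map can be sketched as follows; this is one straightforward illustrative implementation (the argmax of the heat map mapped back through the crop offset), and the `offset`/`crop_shape` bookkeeping carries over from the cropping sketch above rather than from the original text.

```python
import numpy as np

def heatmap_to_keypoint(heatmap: np.ndarray, offset=(0, 0), crop_shape=None):
    """Return the (x, y) of the heat map's maximum in original-image coordinates.

    `offset` is the (x, y) of the crop's top-left corner in the full image;
    `crop_shape` is the (height, width) of the crop, used when the heat map
    resolution differs from the crop resolution.
    """
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    if crop_shape is not None:
        # Rescale from heat-map resolution to crop resolution if they differ.
        x = x * crop_shape[1] / heatmap.shape[1]
        y = y * crop_shape[0] / heatmap.shape[0]
    return offset[0] + x, offset[1] + y

# Example: 72 heat maps predicted for the skull core region crop.
# keypoints = [heatmap_to_keypoint(h,
#                                  offset=crops["skull_core"]["offset"],
#                                  crop_shape=crops["skull_core"]["image"].shape[:2])
#              for h in heatmaps]
```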
It should be noted that the five target regions (the skull core region, lips, teeth, mandible, and cervical spine) overlap, so some key points are detected by multiple key point detection artificial neural networks. Referring to Table 1 and taking the C2p key point as an example, this key point lies in both the skull core region and the cervical spine target regions, so each of the two corresponding key point detection artificial neural networks detects one coordinate for it. In one embodiment, the multiple coordinates of the same key point may be averaged and the mean taken as the final coordinates of that key point. In light of the present application, it will be understood that, when the same key point is detected by multiple key point detection artificial neural networks, besides taking the average as its final coordinates, the final coordinates may also be computed as a weighted combination of the multiple coordinates, or the coordinates output by one of the key point detection artificial neural networks may be used directly.
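A minimal sketch of the averaging strategy described above might look like the following; the dictionary layout and the use of plain (unweighted) averaging are assumptions chosen for illustration.

```python
from collections import defaultdict
import numpy as np

def merge_keypoints(per_region_keypoints):
    """Average duplicate detections of the same key point across regions.

    `per_region_keypoints` maps a region name to a {keypoint_name: (x, y)} dict
    of coordinates already expressed in the full-image frame.
    """
    collected = defaultdict(list)
    for region, points in per_region_keypoints.items():
        for name, xy in points.items():
            collected[name].append(xy)
    # Mean over all regions that detected the point (e.g. C2p appears twice).
    return {name: tuple(np.mean(coords, axis=0)) for name, coords in collected.items()}
```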
Since the number of salient graduations on the measuring ruler is not fixed, the key point detection artificial neural network corresponding to the measuring ruler outputs only a single heat map. Referring to FIG. 5, it shows a measuring ruler region image and its corresponding heat map in one example, where each salient local maximum on the heat map corresponds to one detected graduation key point.
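The text does not specify how the salient local maxima are extracted from the single ruler heat map; one common approach, shown here purely as an assumption, is a maximum filter combined with a threshold.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def ruler_keypoints(heatmap: np.ndarray, threshold: float = 0.3, window: int = 5):
    """Extract the salient local maxima of a single ruler heat map.

    The threshold and window size are illustrative assumptions; only the idea
    that each salient local maximum is a graduation key point comes from the text.
    """
    local_max = (heatmap == maximum_filter(heatmap, size=window)) & (heatmap > threshold)
    ys, xs = np.nonzero(local_max)
    # Return points sorted from bottom to top (larger y first), as used later
    # when selecting r1, r2, r3 for scale estimation.
    order = np.argsort(-ys)
    return [(int(xs[i]), int(ys[i])) for i in order]
```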
In 107, a scale is calculated based on the key points of the measuring ruler.
In one embodiment, if the confidence of the measuring ruler target region detected by the object detection artificial neural network in the X-ray cephalometric image is greater than a predetermined threshold, the measuring ruler region image may be input into the corresponding key point detection artificial neural network for key point detection, and a scale (i.e., the ratio between image size and real size, or in other words the real-world size of an image pixel) may then be calculated based on the detected key points.
In one embodiment, the scale may be calculated as follows.
First, on the measuring ruler heat map, three successively adjacent key points (i.e., salient graduations) r1, r2, and r3 are selected from bottom to top, and the distance dist12 between r1 and r2 and the distance dist13 between r1 and r3 are calculated.
Next, the scale can be calculated based on the width of the skull core region. Since a measuring ruler key point may be a 5 mm graduation or a 10 mm graduation, there are two possibilities: first, dist12 corresponds to 10 mm and dist13 to 20 mm (i.e., the key points are all 10 mm graduations); second, dist12 corresponds to 5 mm and dist13 to 10 mm (i.e., the key points include both 5 mm and 10 mm graduations). Because the width of the skull core region usually lies in the range of 100 mm to 150 mm, the range dist12*10 to dist12*15 and the range dist13*10 to dist13*15 can each be compared with the width of the skull core region. Whichever range the width of the skull core region falls into, the unit distance corresponding to that range (i.e., dist12 or dist13) is taken to be 10 mm. Based on this, the scale can be calculated.
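The decision rule just described can be written down directly. The following sketch follows the 10 mm/5 mm reasoning and the 100 mm to 150 mm width prior from the text; the function name, the fallback branch, and the return convention (millimetres per pixel) are assumptions added for illustration.

```python
import math

def estimate_scale(ruler_points, skull_core_width_px, expected_width_mm=(100.0, 150.0)):
    """Estimate the scale (millimetres per pixel) from ruler graduation key points.

    `ruler_points` are graduation key points sorted bottom-to-top in pixel
    coordinates; `skull_core_width_px` is the pixel width of the skull core
    region box; `expected_width_mm` is the 100-150 mm prior from the text.
    """
    (x1, y1), (x2, y2), (x3, y3) = ruler_points[:3]   # r1, r2, r3, bottom to top
    dist12 = math.hypot(x2 - x1, y2 - y1)             # pixel distance r1-r2
    dist13 = math.hypot(x3 - x1, y3 - y1)             # pixel distance r1-r3

    lo_mm, hi_mm = expected_width_mm
    # If the skull core width in pixels falls within dist*10 .. dist*15
    # (i.e. 100-150 mm at 10 mm per `dist`), that unit distance is 10 mm.
    if dist12 * lo_mm / 10.0 <= skull_core_width_px <= dist12 * hi_mm / 10.0:
        return 10.0 / dist12
    if dist13 * lo_mm / 10.0 <= skull_core_width_px <= dist13 * hi_mm / 10.0:
        return 10.0 / dist13
    # Fallback when the ruler cannot be used: assume the skull core region is
    # roughly the midpoint of the expected width range (an added assumption).
    return ((lo_mm + hi_mm) / 2.0) / skull_core_width_px
```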
In light of the present application, it will be understood that, in addition to the width of the skull core region, the scale may also be calculated based on other dimensions, for example, the width or height of other target regions.
If the object detection artificial neural network does not detect the measuring ruler, or the confidence of the detected measuring ruler target region is below the predetermined threshold, the scale may be calculated based on the width of the skull core region. Likewise, it will be understood that, in addition to the width of the skull core region, the scale may also be calculated based on other dimensions, for example, the width or height of other target regions.
In 109, measurement indicators are calculated based on the key point coordinate information and the scale.
After the coordinate information of the key points and the scale have been obtained, various medical measurement indicators, including angles and linear distances (for example, the upper and lower central incisor angle, facial height, etc.), can be calculated on that basis.
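As an illustration of this final step, an angle and a line distance might be computed from the merged key points and the estimated scale as in the sketch below; the key point names in the usage comments are hypothetical and do not come from Table 1.

```python
import numpy as np

def angle_between(p1, p2, q1, q2):
    """Angle in degrees between line p1-p2 and line q1-q2
    (e.g. the long axes of the upper and lower central incisors)."""
    v1 = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    v2 = np.asarray(q2, dtype=float) - np.asarray(q1, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def distance_mm(p1, p2, scale_mm_per_px):
    """Linear distance between two key points converted to millimetres."""
    diff = np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)
    return float(np.linalg.norm(diff) * scale_mm_per_px)

# Hypothetical usage with merged key points `kp` and the estimated `scale`:
# incisor_angle = angle_between(kp["U1_tip"], kp["U1_root"], kp["L1_tip"], kp["L1_root"])
# facial_height = distance_mm(kp["N"], kp["Me"], scale)
```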
Through experimental comparison, the inventor of the present application found that dividing the X-ray cephalometric image into multiple target regions and then using the corresponding key point detection artificial neural networks to detect key points in each target region image yields higher detection accuracy than performing key point detection on the whole X-ray cephalometric image.
The key point detection method for X-ray cephalometric images of the present application is executed automatically by a computer, which greatly improves efficiency compared with schemes that annotate key points manually.
Although multiple aspects and embodiments of the present application are disclosed herein, other aspects and embodiments will be apparent to those skilled in the art in light of the present application. The aspects and embodiments disclosed herein are for illustrative purposes only and are not intended to be limiting. The scope and spirit of the present application are determined solely by the appended claims.
Likewise, the various diagrams may illustrate exemplary architectures or other configurations of the disclosed methods and systems, which help in understanding the features and functionality that may be included in them. What is claimed is not limited to the illustrated exemplary architectures or configurations; the desired features may be implemented with various alternative architectures and configurations. In addition, with respect to the flowcharts, functional descriptions, and method claims, the order of blocks presented herein should not be construed as limiting the various embodiments to implementations that perform the described functions in the same order, unless the context clearly dictates otherwise.
Unless expressly stated otherwise, the terms and phrases used herein, and variations thereof, are to be construed as open-ended rather than restrictive. In some instances, the presence of expansive words and phrases such as "one or more", "at least", "but not limited to", or other similar expressions should not be read to mean that the narrower case is intended or required where such expansive wording may be absent.

Claims (11)

  1. A computer-implemented key point detection method for X-ray cephalometric images, comprising:
    acquiring an X-ray cephalometric image;
    dividing the X-ray cephalometric image into regions using a trained object detection artificial neural network to obtain N target region images; and performing key point detection on the N target region images respectively using N trained key point detection artificial neural networks, wherein N is a natural number greater than or equal to 2.
  2. The computer-implemented key point detection method for X-ray cephalometric images of claim 1, wherein the N target regions include five target regions: the skull core region, the lips, the teeth, the mandible, and the cervical spine.
  3. The computer-implemented key point detection method for X-ray cephalometric images of claim 1, wherein the N target regions include a measuring ruler target region.
  4. The computer-implemented key point detection method for X-ray cephalometric images of claim 3, wherein, for the target region images other than the measuring ruler target region image among the N target region images, the corresponding key point detection artificial neural network generates one heat map for each key point, and, for the measuring ruler target region image, the corresponding key point detection artificial neural network generates only a single heat map containing a plurality of key points, wherein the key points of the measuring ruler target region image correspond to the salient graduations of the measuring ruler.
  5. The computer-implemented key point detection method for X-ray cephalometric images of claim 3, further comprising: determining a scale based on the key points of the measuring ruler target region image and the size of one of the other target regions.
  6. The computer-implemented key point detection method for X-ray cephalometric images of claim 5, wherein the N target regions further include a skull core region, and the scale is determined based on the key points of the measuring ruler target region image and the width of the skull core region image.
  7. The computer-implemented key point detection method for X-ray cephalometric images of claim 5, further comprising: calculating medical measurement indicators based on the key points of the target regions other than the measuring ruler target region and the scale.
  8. The computer-implemented key point detection method for X-ray cephalometric images of claim 1, further comprising: for a same key point detected in at least two of the N target region images, calculating its final coordinates based on the coordinates obtained from the two detections.
  9. The computer-implemented key point detection method for X-ray cephalometric images of claim 8, further comprising: for a same key point detected in at least two of the N target region images, averaging the coordinates obtained from the two detections as its final coordinates.
  10. The computer-implemented key point detection method for X-ray cephalometric images of claim 1, wherein the object detection artificial neural network is a YOLOv3 network.
  11. The computer-implemented key point detection method for X-ray cephalometric images of claim 1, wherein the N key point detection artificial neural networks are HRNet networks.
PCT/CN2022/072240 2021-03-09 2022-01-17 Key point detection method for X-ray cephalometric images WO2022188547A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110256482.7A CN115049580A (zh) 2021-03-09 2021-03-09 Key point detection method for X-ray cephalometric images
CN202110256482.7 2021-03-09

Publications (1)

Publication Number Publication Date
WO2022188547A1 true WO2022188547A1 (zh) 2022-09-15

Family

ID=83156628

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072240 WO2022188547A1 (zh) 2021-03-09 2022-01-17 X线头影图像的关键点检测方法

Country Status (2)

Country Link
CN (1) CN115049580A (zh)
WO (1) WO2022188547A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268591A (zh) * 2014-09-19 2015-01-07 海信集团有限公司 Facial key point detection method and device
CN106971147A (zh) * 2017-03-06 2017-07-21 武汉嫦娥医学抗衰机器人股份有限公司 Traditional Chinese medicine facial diagnosis system and facial diagnosis method based on face region segmentation
CN108229293A (zh) * 2017-08-09 2018-06-29 北京市商汤科技开发有限公司 Face image processing method and apparatus, and electronic device
CN111160367A (zh) * 2019-12-23 2020-05-15 上海联影智能医疗科技有限公司 Image classification method and apparatus, computer device, and readable storage medium
CN111862047A (zh) * 2020-07-22 2020-10-30 杭州健培科技有限公司 Cascaded medical image key point detection method and apparatus

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797730A (zh) * 2023-01-29 2023-03-14 有方(合肥)医疗科技有限公司 Model training method and apparatus, and cephalometric key point positioning method and apparatus
CN115937319A (zh) * 2023-02-16 2023-04-07 天河超级计算淮海分中心 X-ray image based key point annotation method, electronic device, and storage medium
CN115937319B (zh) * 2023-02-16 2023-05-12 天河超级计算淮海分中心 X-ray image based key point annotation method, electronic device, and storage medium
CN117372425A (zh) * 2023-12-05 2024-01-09 山东省工业技术研究院 Key point detection method for lateral cephalograms
CN117372425B (zh) * 2023-12-05 2024-03-19 山东省工业技术研究院 Key point detection method for lateral cephalograms

Also Published As

Publication number Publication date
CN115049580A (zh) 2022-09-13

Similar Documents

Publication Publication Date Title
WO2022188547A1 (zh) Key point detection method for X-ray cephalometric images
KR101952887B1 (ko) Method for predicting anatomical landmarks and device using the same
Power et al. Dolphin Imaging Software: an analysis of the accuracy of cephalometric digitization and orthognathic prediction
Ghoddousi et al. Comparison of three methods of facial measurement
EP2617012B1 (en) Method and system for analyzing images
Al-Khatib et al. Validity and reliability of tooth size and dental arch measurements: a stereo photogrammetric study
WO2020151119A1 (zh) Augmented reality method and apparatus for dental surgery
CN107481276A (zh) Automatic identification method for marker point sequences in three-dimensional medical images
Miloro et al. Is there consistency in cephalometric landmark identification amongst oral and maxillofacial surgeons?
CN113065552A (zh) Method for automatically locating cephalometric landmarks
Lee et al. Variation within physical and digital craniometrics
Ramos et al. A new method to geometrically represent bite marks in human skin for comparison with the suspected dentition
Almukhtar et al. "Direct DICOM Slice Landmarking": A Novel Research Technique to Quantify Skeletal Changes in Orthognathic Surgery
CN112545537A (zh) Method and system for generating cephalometric tracings
Guedes et al. A comparative study of manual vs. computerized cephalometric analysis
CN113017868B (zh) Method and device for registering lateral cephalograms before and after orthodontic treatment
CN212037803U (zh) Automated cephalometric measurement system
Cui et al. Cobb Angle Measurement Method of Scoliosis Based on U-net Network
CN113270172A (zh) Method and system for constructing contour lines in lateral cephalograms
Fan et al. Nasal characteristics in patients with asymmetric mandibular prognathism
WO2023045734A1 (zh) Method for determining a developmental stage based on X-ray cephalometric images
Liu et al. The reliability of the 'Ortho Grid' in cephalometric assessment
CN112700487A (zh) Method and system for acquiring measuring ruler graduations from lateral cephalograms
JP2002279404A (ja) Image measurement method and apparatus
WO2021073120A1 (zh) Method, apparatus, server, and storage medium for marking lung region shadows in medical images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22766074

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22766074

Country of ref document: EP

Kind code of ref document: A1