CN116862869A - Automatic detection method for mandibular fracture based on landmark point detection


Info

Publication number
CN116862869A
Authority
CN
China
Prior art keywords
fracture
mandibular
mandible
detection
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310830048.4A
Other languages
Chinese (zh)
Other versions
CN116862869B (en)
Inventor
俞启明
史力伏
王洋
代茵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Ranhui Technology Co ltd
Original Assignee
Liaoning Ranhui Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Ranhui Technology Co ltd filed Critical Liaoning Ranhui Technology Co ltd
Priority to CN202310830048.4A priority Critical patent/CN116862869B/en
Publication of CN116862869A publication Critical patent/CN116862869A/en
Application granted granted Critical
Publication of CN116862869B publication Critical patent/CN116862869B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30008 Bone
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G06V 2201/033 Recognition of patterns in medical or anatomical images of skeletal patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of automatic detection methods, and specifically relates to an automatic detection method for mandibular fracture based on landmark point detection. The method reduces the difficulty of data annotation, realizes automatic detection and localization of mandibular fractures through landmark detection applied to mandibular medical images, and improves detection efficiency. It comprises the following steps: step 1, learning a mandibular landmark detection model; step 2, learning a mandibular fracture detection model; and step 3, automatically detecting mandibular fractures.

Description

Automatic detection method for mandibular fracture based on landmark point detection
Technical Field
The invention belongs to the technical field of automatic detection methods, and specifically relates to an automatic detection method for mandibular fracture based on landmark point detection.
Background
The mandible is the only movable bone of the head and lies at the lowest part of the maxillofacial region, which makes it prone to fracture. Fracture detection relies on medical imaging; however, the volume of medical image data is huge, mandibular fractures are often multiple, and fine fracture lines are easily overlooked, all of which place a heavy burden on manual detection of mandibular fractures. If a computer could automatically detect mandibular fractures in medical images, locate the affected partition, and judge the fracture type, detection efficiency would be improved.
Automatic fracture detection based on medical images generally requires finding a region of interest in the image. Most existing methods rely on manual cropping and slice screening, which defeats the original purpose of automatic detection. Because bone differs markedly from other tissues in pixel value, thresholding is also often used to determine the region of interest; however, thresholding cannot take the shape and position of the bone into account, usually has to be combined with other image processing techniques, and introduces many parameters that cannot be learned, all of which create new difficulties for the automatic detection task. Automatic localization algorithms such as registration and segmentation have also been applied to determining the region of interest, but they require comparatively more manual labeling.
Furthermore, most current mandibular fracture detection methods can neither automatically determine the region of interest nor locate the region where the fracture occurs, which limits the application of fracture detection algorithms.
In the prior art, one technical scheme synthesizes maxillofacial CT images into a two-dimensional panoramic image, partitions the mandible in that image, and feeds the partitioned two-dimensional image blocks into a fracture discrimination model to determine the partition in which a fracture occurs. The disadvantages of this method are:
1. Because making the partition labels is time-consuming, it is difficult to obtain a large amount of labeled data.
2. Information contained in the three-dimensional data is lost.
Patent publications CN111967540A and CN111967539A address maxillofacial CT (computed tomography) and CBCT (cone-beam computed tomography) data, respectively; the data are first divided into multiple image blocks according to different anatomical regions, and each image block is then fed into a fracture discrimination model to determine the region in which a fracture occurs. The disadvantages of this method are:
1. Fractures at the junctions between regions are easily missed.
2. Detection of fractures that span several regions is impaired.
Disclosure of Invention
In view of the defects in the prior art, the invention provides an automatic detection method for mandibular fracture based on landmark point detection.
To achieve the above purpose, the present invention adopts the following technical scheme:
Step 1, learning a mandibular landmark detection model.
Step 2, learning a mandibular fracture detection model.
Step 3, automatically detecting mandibular fractures.
Further, learning the mandibular landmark detection model includes:
Step 1.1, determining the mandibular landmark points.
Step 1.2, establishing a mandibular landmark detection dataset.
Step 1.3, selecting and learning an automatic landmark detection algorithm.
Still further, in step 1.1, determining the mandibular landmark points includes determining twelve mandibular landmark points: the left condylar process, right condylar process, left coronoid process, right coronoid process, left mandibular notch, right mandibular notch, left mandibular angle, right mandibular angle, the posterior side of the left third molar root, the posterior side of the right third molar root, the medial side of the left canine root, and the medial side of the right canine root.
Still further, in step 1.2, establishing the mandibular landmark detection dataset includes: selecting medical image data containing the mandible, the data comprising both subjects with a healthy mandible and patients with mandibular fracture, the two groups being matched in number and population distribution characteristics (multi-source, artifact-free, bone-window reconstruction data are used).
The selected data are divided into three parts: a training set, a validation set, and a test set, each containing the same proportion (i.e., fifty percent) of fracture data. The training set and validation set are used to train the algorithm, and the test set is used to test the final algorithm.
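As an illustration only, a split that keeps the fracture proportion equal in every part could be produced with a stratified split such as scikit-learn's train_test_split; the 60/20/20 ratio, the variable names, and the random seed below are assumptions rather than values given by the invention.

```python
# Hypothetical sketch: stratified train/validation/test split so every part keeps
# the same fracture/normal ratio. The 60/20/20 ratios and seed are assumptions.
from sklearn.model_selection import train_test_split

def split_dataset(case_ids, is_fracture, seed=42):
    """case_ids: list of study identifiers; is_fracture: parallel list of 0/1 labels."""
    train_ids, rest_ids, _, rest_y = train_test_split(
        case_ids, is_fracture, test_size=0.4, stratify=is_fracture, random_state=seed)
    val_ids, test_ids = train_test_split(
        rest_ids, test_size=0.5, stratify=rest_y, random_state=seed)
    return train_ids, val_ids, test_ids
```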
Landmark detection labels are made; each label comprises the delineation of a fixed-size region centered on the landmark point, together with an indication of whether the data is normal or fractured.
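To illustrate what such a label could look like for volumetric data, the sketch below turns landmark coordinates into a multi-channel binary mask of fixed-size cubes; the 16-voxel half-size and the one-channel-per-landmark layout are assumptions, not requirements of the invention.

```python
# Hypothetical sketch: channel k of the mask is a fixed-size cube centered on
# landmark k. The half-size of 16 voxels is an assumption.
import numpy as np

def landmarks_to_mask(volume_shape, landmarks_vox, half_size=16):
    """volume_shape: (D, H, W); landmarks_vox: array of shape (12, 3), voxel coordinates."""
    mask = np.zeros((len(landmarks_vox),) + tuple(volume_shape), dtype=np.uint8)
    for k, (z, y, x) in enumerate(np.round(np.asarray(landmarks_vox)).astype(int)):
        z0, z1 = max(z - half_size, 0), min(z + half_size, volume_shape[0])
        y0, y1 = max(y - half_size, 0), min(y + half_size, volume_shape[1])
        x0, x1 = max(x - half_size, 0), min(x + half_size, volume_shape[2])
        mask[k, z0:z1, y0:y1, x0:x1] = 1
    return mask
```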
Further, in step 1.3, selecting and learning the automatic landmark detection algorithm includes:
A deep learning U-Net algorithm is selected, with the fixed-size region centered on each landmark point as the segmentation target.
The mandibular landmark detection dataset is preprocessed, including normalization of resolution and gray values, augmentation of the data by random rotation, and unification of size by random cropping.
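One possible reading of this preprocessing step is sketched below with NumPy/SciPy: resample to a common voxel spacing, window and rescale the gray values, then augment with a random in-plane rotation and a random crop. The target spacing, intensity window, rotation range, and crop size are all assumptions.

```python
# Hypothetical preprocessing/augmentation sketch. Target spacing, intensity window,
# rotation range and crop size are assumptions, not values fixed by the invention.
import numpy as np
from scipy import ndimage

def normalize(volume, spacing, target_spacing=(1.0, 1.0, 1.0), window=(-1000.0, 2000.0)):
    """Resample to the target voxel spacing and rescale gray values to [0, 1]."""
    zoom = [s / t for s, t in zip(spacing, target_spacing)]
    vol = ndimage.zoom(volume.astype(np.float32), zoom, order=1)
    vol = np.clip(vol, *window)
    return (vol - window[0]) / (window[1] - window[0])

def augment(volume, label, crop=(96, 96, 96), max_angle=15.0, rng=None):
    """Random in-plane rotation followed by a random crop; label has the same spatial shape."""
    rng = rng or np.random.default_rng()
    angle = rng.uniform(-max_angle, max_angle)
    vol = ndimage.rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
    lab = ndimage.rotate(label, angle, axes=(1, 2), reshape=False, order=0)
    starts = [rng.integers(0, max(d - c, 0) + 1) for d, c in zip(vol.shape, crop)]
    region = tuple(slice(s, s + c) for s, c in zip(starts, crop))
    return vol[region], lab[region]
```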
The Adam optimization algorithm and the Dice loss function are selected; the mandibular landmark detection model is learned using the training and validation sets of the mandibular landmark detection dataset, and the performance of the learned model is evaluated on the test set.
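A minimal PyTorch-style training sketch with a soft Dice loss and the Adam optimizer is given below; the model, data loader, learning rate, and number of epochs are placeholders or assumptions, since the invention does not specify them.

```python
# Hypothetical training sketch: Adam + soft Dice loss for a U-Net that segments the
# fixed-size landmark regions. Model, loader, learning rate and epoch count are
# placeholders/assumptions.
import torch

def dice_loss(prob, target, eps=1e-6):
    """prob: sigmoid probabilities, target: binary masks, both shaped (N, C, D, H, W)."""
    dims = (2, 3, 4)
    intersection = (prob * target).sum(dims)
    denominator = prob.sum(dims) + target.sum(dims)
    return 1.0 - ((2.0 * intersection + eps) / (denominator + eps)).mean()

def train(model, train_loader, epochs=50, device="cuda"):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    model.to(device).train()
    for _ in range(epochs):
        for volume, mask in train_loader:
            volume, mask = volume.to(device), mask.to(device)
            optimizer.zero_grad()
            loss = dice_loss(torch.sigmoid(model(volume)), mask.float())
            loss.backward()
            optimizer.step()
    return model
```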
Further, in step 2, learning the mandibular fracture detection model includes: Step 2.1, determining the fracture classification criteria, comprising: mandibular fractures are divided into class A: no displacement, class B: displacement, and class C: multi-branch injury.
Step 2.2, establishing a mandibular fracture detection dataset, comprising:
Selecting medical image data containing the mandible, the data comprising both subjects with a healthy mandible and patients with mandibular fracture, the two groups being matched in number and population distribution characteristics, and the fracture data covering all fracture types determined in step 2.1. The data are reconstructed using multi-source, artifact-free, bone-window settings.
The selected data are divided into three parts: a training set, a validation set, and a test set, each containing the same proportion (i.e., fifty percent) of fracture data. The training set and validation set are used to train the algorithm, and the test set is used to test the final algorithm.
Fracture detection labels are made; each label comprises a region of interest covering the mandible, the fracture position, and the fracture type.
Step 2.3, selecting and learning an automatic fracture detection algorithm, comprising:
A deep learning Faster R-CNN automatic detection algorithm is selected, and a threshold is set so that the detection results are divided into three classes, corresponding to classes A, B, and C of mandibular fracture respectively.
The mandibular fracture detection dataset is preprocessed, including extraction of the region of interest and, within it, normalization of resolution and gray values, augmentation of the data by random rotation, and unification of size by random cropping.
The mandibular fracture detection model is learned using the training and validation sets of the mandibular fracture detection dataset, and the performance of the learned model is evaluated on the test set.
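As one possible reading of this step, the sketch below builds a torchvision Faster R-CNN with three fracture classes plus background and applies a confidence threshold to its detections; the class layout, the 0.5 threshold, and the use of 2-D inputs are assumptions, not details fixed by the invention.

```python
# Hypothetical sketch: Faster R-CNN detector with classes A/B/C plus background and a
# confidence threshold on its detections (torchvision >= 0.13 API). The 0.5 threshold
# and the label mapping are assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

FRACTURE_TYPES = {1: "A (no displacement)", 2: "B (displacement)", 3: "C (multi-branch injury)"}

def build_detector():
    # num_classes = 4: background + fracture classes A, B, C
    return fasterrcnn_resnet50_fpn(weights=None, num_classes=4)

@torch.no_grad()
def detect_fractures(model, image, score_threshold=0.5):
    """image: float tensor (3, H, W) scaled to [0, 1]; returns boxes, types and scores."""
    model.eval()
    output = model([image])[0]
    keep = output["scores"] >= score_threshold
    return [
        {"box": box.tolist(), "type": FRACTURE_TYPES[int(label)], "score": float(score)}
        for box, label, score in zip(output["boxes"][keep], output["labels"][keep], output["scores"][keep])
    ]
```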
Further, in step 3, the automatic detection of mandibular fracture includes:
Step 3.1, obtaining the landmark points of the maxillofacial medical image with the landmark detection model.
Step 3.2, determining the mandibular region of interest from the landmark detection result.
Step 3.3, inputting the region of interest into the fracture detection model to obtain the fracture detection result; the fracture detection result comprises a rectangular box and a probability.
Step 3.4, determining nine mandibular partitions from the twelve mandibular landmark points: the mandibular symphysis, left mandibular body, right mandibular body, left mandibular angle and ascending ramus, right mandibular angle and ascending ramus, left coronoid process, right coronoid process, left condylar process, and right condylar process.
According to the size of the overlap between the rectangular box of the fracture detection result and each partition, a threshold is selected to determine the partition, or the spanned partitions, to which the detected fracture belongs.
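One way this overlap test could be implemented is sketched below: approximate each partition by a box derived from the landmarks, compute the fraction of the detection rectangle that falls inside each partition box, and keep every partition above a threshold. The 0.2 threshold and the box representation of partitions are assumptions.

```python
# Hypothetical sketch: assign a detected fracture box to the partition(s) it overlaps.
# Partitions are approximated as rectangles derived from the landmarks; the 0.2
# overlap threshold is an assumption.

def box_area(box):
    """Box given as (x1, y1, x2, y2)."""
    return max(box[2] - box[0], 0.0) * max(box[3] - box[1], 0.0)

def overlap_area(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def assign_partitions(fracture_box, partition_boxes, threshold=0.2):
    """partition_boxes: dict mapping partition name -> box. A result with more than
    one name means the fracture spans several partitions."""
    area = max(box_area(fracture_box), 1e-6)
    return [name for name, pbox in partition_boxes.items()
            if overlap_area(fracture_box, pbox) / area >= threshold]
```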
Step 3.5, automatically generating a report of the algorithm results.
Compared with the prior art, the invention has the following beneficial effects:
The automatic detection method for mandibular fracture based on landmark point detection reduces the difficulty of data annotation, realizes automatic detection and localization of mandibular fractures through landmark detection and fracture detection algorithms applied to mandibular medical images, and improves detection efficiency.
Drawings
The invention is further described below with reference to the drawings and the detailed description. The scope of the present invention is not limited to the following description.
Fig. 1 shows the detection result for the mandibular landmark points.
Fig. 2 is a horizontal (axial) section showing the mandibular fracture detection result.
Fig. 3 is the overall flowchart of the mandibular fracture automatic detection method based on landmark detection.
Detailed Description
As shown in Fig. 3, the automatic detection method for mandibular fracture based on landmark point detection reduces the difficulty of data annotation, realizes automatic detection and localization of mandibular fractures through landmark detection and fracture detection algorithms applied to mandibular medical images, and improves detection efficiency. The following scheme is adopted.
1. Learning the mandibular landmark detection model.
1.1 Determining the mandibular landmark points.
Twelve mandibular landmark points are determined: the left/right condylar process, left/right coronoid process, left/right mandibular notch, left/right mandibular angle, the posterior side of the left/right third molar root, and the medial side of the left/right canine root.
1.2 Establishing the mandibular landmark detection dataset.
CBCT data containing the mandible are selected, including data from both subjects with a healthy mandible and patients with mandibular fracture, the two groups being matched in number and population distribution characteristics. The data are reconstructed using multi-source, artifact-free, bone-window settings.
The selected data are divided into three parts: a training set, a validation set, and a test set, each containing the same proportion (i.e., fifty percent) of fracture data. The training set and validation set are used to train the algorithm, and the test set is used to test the final algorithm.
Landmark detection labels are made; each label comprises the delineation of a fixed-size region centered on the landmark point, together with an indication of whether the data is normal or fractured.
1.3 Selecting and learning the automatic landmark detection algorithm.
A deep learning U-Net algorithm is selected, with the fixed-size region centered on each landmark point as the segmentation target. The mandibular landmark detection dataset is preprocessed, including normalization of resolution and gray values, augmentation of the data by random rotation, and unification of size by random cropping. The Adam optimization algorithm and the Dice loss function are selected; the mandibular landmark detection model is learned using the training and validation sets of the mandibular landmark detection dataset, and the performance of the learned model is evaluated on the test set.
2. Learning the mandibular fracture detection model.
2.1 Determining the fracture classification criteria.
Mandibular fractures are divided into three classes: A, no displacement (non-displaced); B, displacement (displaced); and C, multi-branch injury.
2.2 Establishing the mandibular fracture detection dataset.
CBCT data containing the mandible are selected, including data from both subjects with a healthy mandible and patients with mandibular fracture, the two groups being matched in number and population distribution characteristics, and the fracture data covering all fracture types determined in step 2.1. The data are reconstructed using multi-source, artifact-free, bone-window settings.
The selected data are divided into three parts: a training set, a validation set, and a test set, each containing the same proportion (i.e., fifty percent) of fracture data. The training set and validation set are used to train the algorithm, and the test set is used to test the final algorithm.
Fracture detection labels are made; each label comprises a region of interest covering the mandible, the fracture position, and the fracture type.
2.3 Selecting and learning the automatic fracture detection algorithm.
A deep learning Faster R-CNN automatic detection algorithm is selected, and a threshold is selected so that the detection results are classified into three categories corresponding to fracture classes A, B, and C respectively. The mandibular fracture detection dataset is preprocessed, including extraction of the region of interest and, within it, normalization of resolution and gray values, augmentation of the data by random rotation, and unification of size by random cropping. The mandibular fracture detection model is learned using the training and validation sets of the mandibular fracture detection dataset, and the performance of the learned model is evaluated on the test set.
3. Automatic detection of mandibular fracture.
3.1 The landmark points of the maxillofacial CBCT image are obtained with the learned landmark detection model.
3.2 The mandibular region of interest is determined from the landmark detection result.
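One plausible way to go from the segmentation output of step 3.1 to landmark coordinates, and then to the region of interest of step 3.2, is sketched below: take the center of mass of each predicted landmark channel and bound all twelve points with a margin. The probability threshold, the per-channel layout, and the 10-voxel margin are assumptions.

```python
# Hypothetical sketch: landmark coordinates as per-channel centers of mass of the
# U-Net output, and the mandibular ROI as their bounding box plus a margin.
# Threshold and margin values are assumptions.
import numpy as np
from scipy import ndimage

def landmarks_from_prediction(prob, threshold=0.5):
    """prob: (12, D, H, W) probabilities; returns (12, 3) voxel coordinates (NaN if a
    landmark channel has no voxel above the threshold)."""
    coords = [ndimage.center_of_mass(prob[k] >= threshold) for k in range(prob.shape[0])]
    return np.array(coords, dtype=float)

def roi_from_landmarks(coords, volume_shape, margin=10):
    """Bounding box of the detected landmarks, padded by `margin` voxels."""
    valid = coords[~np.isnan(coords).any(axis=1)]
    lower = np.maximum(np.floor(valid.min(axis=0)).astype(int) - margin, 0)
    upper = np.minimum(np.ceil(valid.max(axis=0)).astype(int) + margin, np.array(volume_shape))
    return tuple(slice(int(a), int(b)) for a, b in zip(lower, upper))
```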
3.3 The region of interest is input into the learned fracture detection model to obtain the fracture detection result: a rectangular box and a probability.
3.4 Nine mandibular partitions can be determined from the twelve detected landmark points: the mandibular symphysis, left/right mandibular body, left/right mandibular angle and ascending ramus, left/right coronoid process, and left/right condylar process. According to the size of the overlap between the fracture detection rectangle and each partition, a threshold is selected to determine the partition, or the spanned partitions, to which the detected fracture belongs.
3.5 A report of the algorithm results is generated automatically.
Specifically, the two models of the invention are as follows.
The mandibular landmark detection model: the detection result is shown in Fig. 1, where 1/2 denote the left/right condylar process, 3/4 the left/right coronoid process, 5/6 the left/right mandibular notch, 7/8 the left/right mandibular angle, 9/10 the posterior side of the left/right third molar root, and 11/12 the medial side of the left/right canine root.
The mandibular fracture detection model: a horizontal section of the detection result is shown in Fig. 2.
From the results of the two models, the fracture is automatically located and an algorithm report is generated, for example: type B fracture of the mandibular symphysis (probability 0.99) and type B fracture of the right mandibular body (probability 0.95).
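Purely as an illustration, the sketch below shows how such a one-line report could be assembled from the detections and their partition assignments; the field names and wording are assumptions and not part of the invention.

```python
# Hypothetical sketch: assemble the report line from (partitions, type, probability)
# findings. Field names and wording are illustrative only.

def format_report(findings):
    """findings: list of dicts such as
    {"partitions": ["right mandibular body"], "type": "B", "score": 0.95}."""
    if not findings:
        return "No mandibular fracture detected."
    phrases = []
    for f in findings:
        where = " / ".join(f["partitions"])  # several names = fracture spanning partitions
        phrases.append(f"type {f['type']} fracture of the {where} (probability {f['score']:.2f})")
    return "Mandibular fracture detected: " + "; ".join(phrases) + "."
```

Called with the two findings of the example above, this would produce a sentence analogous to the reported one.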
It should be understood that the foregoing detailed description of the present invention is provided for illustration only, and the invention is not limited to the technical solutions described in the embodiments. Those skilled in the art will understand that modifications or equivalent substitutions that achieve the same technical effect, provided the use requirements are met, fall within the protection scope of the invention.

Claims (7)

1. An automatic detection method for mandibular fracture based on landmark point detection, characterized by comprising the following steps:
step 1, learning a mandibular landmark detection model;
step 2, learning a mandibular fracture detection model;
and step 3, automatically detecting the mandibular fracture.
2. The automatic detection method for mandibular fracture based on landmark point detection according to claim 1, characterized in that the learning of the mandibular landmark detection model comprises:
step 1.1, determining mandibular landmark points;
step 1.2, establishing a mandibular landmark detection dataset;
and step 1.3, selecting and learning an automatic landmark detection algorithm.
3. The automatic detection method for mandibular fracture based on landmark point detection according to claim 2, characterized in that, in step 1.1, determining the mandibular landmark points comprises determining twelve mandibular landmark points: the left condylar process, right condylar process, left coronoid process, right coronoid process, left mandibular notch, right mandibular notch, left mandibular angle, right mandibular angle, the posterior side of the left third molar root, the posterior side of the right third molar root, the medial side of the left canine root, and the medial side of the right canine root.
4. The automatic detection method for mandibular fracture based on landmark point detection according to claim 2, characterized in that, in step 1.2, establishing the mandibular landmark detection dataset comprises: selecting medical image data containing the mandible, the data comprising both subjects with a healthy mandible and patients with mandibular fracture, the two groups being matched in number and population distribution characteristics;
dividing the selected data into a training set, a validation set, and a test set, each part containing the same proportion of fracture data;
and making landmark detection labels, each label comprising the delineation of a fixed-size region centered on the landmark point together with an indication of whether the data is normal or fractured.
5. The automatic detection method for mandibular fracture based on landmark point detection according to claim 2, characterized in that, in step 1.3, selecting and learning the automatic landmark detection algorithm comprises:
selecting a deep learning U-Net algorithm, with the fixed-size region centered on each landmark point as the segmentation target;
preprocessing the mandibular landmark detection dataset, including normalizing resolution and gray values, augmenting the data by random rotation, and unifying size by random cropping;
and selecting the Adam optimization algorithm and the Dice loss function, learning the mandibular landmark detection model using the training and validation sets of the mandibular landmark detection dataset, and evaluating the performance of the learned model on the test set.
6. The automatic detection method for mandibular fracture based on landmark point detection according to claim 1, characterized in that,
in step 2, learning the mandibular fracture detection model comprises:
step 2.1, determining fracture classification criteria, comprising:
dividing mandibular fractures into class A: no displacement, class B: displacement, and class C: multi-branch injury;
step 2.2, establishing a mandibular fracture detection dataset, comprising:
selecting medical image data containing the mandible, the data comprising both subjects with a healthy mandible and patients with mandibular fracture, the two groups being matched in number and population distribution characteristics, and the fracture data covering all fracture types determined in step 2.1;
dividing the selected data into a training set, a validation set, and a test set, each part containing the same proportion of fracture data;
making fracture detection labels, each label comprising a region of interest covering the mandible, the fracture position, and the fracture type;
step 2.3, selecting and learning an automatic fracture detection algorithm, comprising:
selecting a deep learning Faster R-CNN automatic detection algorithm, and setting a threshold so that the detection results are divided into three classes corresponding to classes A, B, and C of mandibular fracture respectively;
preprocessing the mandibular fracture detection dataset, including extracting the region of interest and, within it, normalizing resolution and gray values, augmenting the data by random rotation, and unifying size by random cropping;
and learning the mandibular fracture detection model using the training and validation sets of the mandibular fracture detection dataset, and evaluating the performance of the learned model on the test set.
7. The automatic detection method for mandibular fracture based on landmark point detection according to claim 1, characterized in that, in step 3, the automatic detection of the mandibular fracture comprises:
step 3.1, obtaining the landmark points of the maxillofacial medical image with the landmark detection model;
step 3.2, determining the mandibular region of interest from the landmark detection result;
step 3.3, inputting the region of interest into the fracture detection model to obtain a fracture detection result, the fracture detection result comprising a rectangular box and a probability;
step 3.4, determining nine mandibular partitions from the twelve mandibular landmark points: the mandibular symphysis, left mandibular body, right mandibular body, left mandibular angle and ascending ramus, right mandibular angle and ascending ramus, left coronoid process, right coronoid process, left condylar process, and right condylar process;
and, according to the size of the overlap between the rectangular box of the fracture detection result and each partition, selecting a threshold to determine the partition, or the spanned partitions, to which the detected fracture belongs.
CN202310830048.4A 2023-07-07 2023-07-07 Automatic detection method for mandible fracture based on mark point detection Active CN116862869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310830048.4A CN116862869B (en) 2023-07-07 2023-07-07 Automatic detection method for mandible fracture based on mark point detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310830048.4A CN116862869B (en) 2023-07-07 2023-07-07 Automatic detection method for mandible fracture based on mark point detection

Publications (2)

Publication Number Publication Date
CN116862869A true CN116862869A (en) 2023-10-10
CN116862869B CN116862869B (en) 2024-04-19

Family

ID=88233610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310830048.4A Active CN116862869B (en) 2023-07-07 2023-07-07 Automatic detection method for mandible fracture based on mark point detection

Country Status (1)

Country Link
CN (1) CN116862869B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609683A (en) * 2012-01-13 2012-07-25 北京邮电大学 Automatic labeling method for human joint based on monocular video
CN109255786A (en) * 2018-09-30 2019-01-22 杭州依图医疗技术有限公司 A kind of method and device detecting the stone age
CN111967539A (en) * 2020-09-29 2020-11-20 北京大学口腔医学院 Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment
WO2022103877A1 (en) * 2020-11-13 2022-05-19 Innopeak Technology, Inc. Realistic audio driven 3d avatar generation
CN113705613A (en) * 2021-07-27 2021-11-26 浙江工业大学 X-ray sheet distal radius fracture classification method based on spatial position guidance
CN113344950A (en) * 2021-07-28 2021-09-03 北京朗视仪器股份有限公司 CBCT image tooth segmentation method combining deep learning with point cloud semantics
CN113782184A (en) * 2021-08-11 2021-12-10 杭州电子科技大学 Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning
CN113989290A (en) * 2021-10-19 2022-01-28 杭州颜云科技有限公司 Wrinkle segmentation method based on U-Net
CN114972881A (en) * 2022-06-16 2022-08-30 上海微创医疗机器人(集团)股份有限公司 Image segmentation data labeling method and device
CN115641324A (en) * 2022-11-04 2023-01-24 上海电气集团股份有限公司 Cervical vertebra key point prediction method and system, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xiang Lin et al.: "Micro–Computed Tomography–Guided Artificial Intelligence for Pulp Cavity and Tooth Segmentation on Cone-beam Computed Tomography", Journal of Endodontics *
熊峰 (Xiong Feng): "Research on Deep Learning-Based Diagnosis of Periodontitis in Panoramic Dental Radiographs", China Master's Theses Full-text Database, Medicine and Health Sciences *

Also Published As

Publication number Publication date
CN116862869B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
US11464467B2 (en) Automated tooth localization, enumeration, and diagnostic system and method
CN110599508B (en) Artificial intelligence-based spine image processing method and related equipment
Silva et al. Automatic segmenting teeth in X-ray images: Trends, a novel data set, benchmarking and future perspectives
CN108765417B (en) Femur X-ray film generating system and method based on deep learning and digital reconstruction radiographic image
US11734825B2 (en) Segmentation device and method of generating learning model
US11443423B2 (en) System and method for constructing elements of interest (EoI)-focused panoramas of an oral complex
ES2914387T3 (en) immediate study
CN109767841B (en) Similar model retrieval method and device based on craniomaxillofacial three-dimensional morphological database
US20200134815A1 (en) System and Method for an Automated Parsing Pipeline for Anatomical Localization and Condition Classification
JP2020513869A (en) How to restore a skull
US20220084267A1 (en) Systems and Methods for Generating Quick-Glance Interactive Diagnostic Reports
CN115205469A (en) Tooth and alveolar bone reconstruction method, equipment and medium based on CBCT
CN110236673B (en) Database-based preoperative design method and device for reconstruction of bilateral jaw defects
US20210217170A1 (en) System and Method for Classifying a Tooth Condition Based on Landmarked Anthropomorphic Measurements.
US20220361992A1 (en) System and Method for Predicting a Crown and Implant Feature for Dental Implant Planning
US20080273775A1 (en) Cartesian human morpho-informatic system
CN116862869B (en) Automatic detection method for mandible fracture based on mark point detection
US20220358740A1 (en) System and Method for Alignment of Volumetric and Surface Scan Images
Zhang et al. Jaw Segmentation from CBCT Images
Varghese et al. Segmentation and three dimensional visualization of mandible using active contour and visualization toolkit in craniofacial computed tomography images
CN116883428B (en) Mandible spiral CT image partition segmentation method
CN107408301B (en) Segmentation of objects in image data using channel detection
Pavaloiu et al. Teeth labeling from CBCT data using the Circular Hough Transform
KR20230030682A (en) Apparatus and Method for Automatically Detecting 3D Cephalometric Landmarks using Dental Computerized Tomography
CN111127636B (en) Intelligent complex intra-articular fracture desktop-level three-dimensional diagnosis system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant