CN115517792A - Dentition matching and jaw segmentation method, system and medium based on artificial intelligence - Google Patents


Info

Publication number
CN115517792A
CN115517792A (Application No. CN202211166949.XA)
Authority
CN
China
Prior art keywords
dentition
model
matching
segmentation
jaw
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211166949.XA
Other languages
Chinese (zh)
Inventor
罗恩
朱照琨
邰岳
刘志凯
黄立维
唐丽
刘瑶
刘航航
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Application filed by Sichuan University
Priority to CN202211166949.XA
Publication of CN115517792A
Current status: Pending


Classifications

    • A61C 7/002 (Orthodontic computer-assisted systems)
    • A61C 2007/004 (Automatic construction of a set of axes for a tooth or a plurality of teeth)
    • A61C 19/04 (Measuring instruments specially adapted for dentistry)
    • A61B 6/032 (Transmission computed tomography [CT])
    • A61B 6/51 (Radiation diagnosis apparatus specially adapted for dentistry)
    • A61B 6/5205 (Processing of raw data to produce diagnostic data)
    • G06T 7/0012 (Biomedical image inspection)
    • G06T 7/11 (Region-based segmentation)
    • G06T 2207/10081 (Computed x-ray tomography [CT])
    • G06T 2207/20081 (Training; Learning)
    • G06T 2207/30036 (Dental; Teeth)


Abstract

The invention relates to an artificial-intelligence-based dentition matching and jaw segmentation method, system and medium. The dentition matching method comprises the following steps: S1, constructing a dentition matching algorithm under constrained motion to obtain an objective function for matching a digitized dentition model to the dentition of a skull model; and S2, minimizing the objective function to obtain the automatic matching of the upper and lower dentitions to the skull-model dentition, together with the translation and rotation of the upper and lower dentitions. The jaw segmentation method comprises the following steps: replacing the tooth portions of the three-dimensional skull model with the maxillary and mandibular dentitions of the digitized dentition model using the dentition matching method; then automatically segmenting the condyles of the three-dimensional skull model, thereby completing automatic jaw segmentation. The application constructs an intelligent dentition matching method and jaw segmentation method for orthognathic surgery, so that jaw segmentation and dentition matching, important steps in digital orthognathic surgical planning, can be completed automatically by a computer, reducing clinician workload and improving working efficiency.

Description

Dentition matching and jaw segmentation method, system and medium based on artificial intelligence
Technical Field
The invention relates to the technical field of orthodontics, and in particular to a dentition matching and jaw segmentation method, system and medium based on artificial intelligence.
Background
Orthognathic surgery is the most common treatment for dento-maxillofacial deformities, and its precise execution depends on accurate diagnosis and surgical planning. Traditional orthognathic surgical planning relies mainly on detailed clinical examination, facial photographs, two-dimensional plain films and model analysis. With recent advances in imaging technology, digital techniques such as computer-assisted surgery (CAS) have been adopted in orthognathic surgery, and personalized, minimally invasive and precise CAD/CAM-based digital surgical planning has become the mainstream. However, intelligent surgical-planning technology for dento-maxillofacial deformities is still lacking at home and abroad, and intelligent, automated diagnosis and treatment planning for these deformities remains difficult to realize.
As shown in FIG. 1, digital orthognathic surgical planning mainly comprises the following steps: three-dimensional reconstruction of the skull model, jaw segmentation, dentition matching, surgical design, and 3D printing of the corresponding surgical splint or guide plate. Jaw segmentation is needed because, after three-dimensional reconstruction, the condyle and the glenoid fossa are fused and must be separated manually. Dentition matching is needed because the tooth structure is complex, the radiopacity of teeth is higher than that of bone tissue, and the tooth morphology reconstructed from CT data contains large errors, so the dentition portion of the jaw must be replaced with a digitized dentition model. At present, both jaw segmentation and dentition matching are performed manually by clinicians, a process that is tedious, highly repetitive, inefficient and time-consuming.
In recent years, as the combination of artificial intelligence and medicine has grown ever closer, digital orthognathic surgery has gradually moved toward artificial-intelligence orthognathic surgery. If the upper and lower jaws could be segmented and the dentition matched automatically, clinician workload would be effectively reduced and working time saved.
Disclosure of Invention
The present application provides a method and system for jaw bone segmentation and dentition matching based on artificial intelligence to solve the above technical problems.
The application is realized by the following technical scheme:
the application provides a dentition matching method based on artificial intelligence, which comprises the following steps:
s1, constructing a dentition matching algorithm under constrained motion to obtain a target function E (R, t, lambda) for matching a digital dentition model and a skull model dentition;
Figure BDA0003862087920000021
in the above formula, u i Upper dentition node, l, being a digitized dentition model j A lower dentition node of the digital dentition model, a is a relative sliding axis between the upper dentition and the lower dentition, lambda is a relative sliding amount between the upper dentition and the lower dentition, q i And q is j Are all nodes on the skull, and R and t are integral rigid body rotation and translation;
and S2, carrying out minimum solution on the target function E (R, t, lambda), wherein the obtained optimal solution is an automatic matching mode of the upper dentition, the lower dentition and the skull model dentition, and a translation and rotation mode of the upper dentition and the lower dentition.
Before step S1, the method further comprises a preparation operation, which includes:
collecting spiral CT data of the maxillofacial region;
collecting the plaster occlusion model corresponding to the patient's maxillofacial spiral CT scan, and recording the patient's occlusal relationship;
obtaining a digitized dentition model, comprising:
step 1, scanning the maxillary and mandibular dentitions separately to obtain the complete maxillary dentition and the complete mandibular dentition;
step 2, positioning the upper and lower dentition models in optimal occlusion with the bite-registration wax, and scanning them together to obtain the relative relationship of the upper and lower dentitions;
and step 3, matching the dentitions obtained in step 1 to the relative relationship obtained in step 2 in the scanner software, and exporting the upper and lower dentition models with the recorded relative positions after scanning.
The application also provides a jaw segmentation method based on artificial intelligence, comprising the following steps:
reconstructing a three-dimensional skull model from the maxillofacial spiral CT data;
replacing the tooth portions of the three-dimensional skull model with the maxillary and mandibular dentitions of the digitized dentition model using the dentition matching method described above;
segmenting the condyles of the three-dimensional skull model, comprising:
precisely locating the condyles, i.e. automatically finding the condylar positions;
and cropping the condyles: based on the distinctive surface-normal features of the condyle and the glenoid fossa, classifying the surface points into two point sets, and cropping away unnecessary points according to the two sets to achieve segmentation of the condyles.
In particular, the condyles are precisely located as follows:
the condyles obtained by manually segmenting the upper and lower jaws of a skull model serve as the reference model, and the condyles of the skull model to be segmented automatically serve as the target model. Let the target-model nodes be p_i and the reference-model nodes be u_i; the matching objective function is denoted E(R, t, s):

E(R,t,s)=\sum_{i}\left\|sRp_{i}+t-u_{i}\right\|^{2}

in the above formula, R and t are the overall rotation and translation, and s is the change in model scale; the objective function E(R, t, s) is minimized, and the resulting optimal solution gives the nodes that locate the condyles.
In particular, the condyles are cropped as follows:
denote the set of condylar surface points by F:

F=\{p_{i}\in N:\; n(p_{i})^{T}(p_{i}-c)<0,\; n(p_{i})^{T}z<0\}

in the above formula, n(p_i) is the normal vector of node p_i, N is the local region around the condyle, c is the coordinate of the center point of region N, and z is the unit vector in the vertical direction;
denote the set of glenoid-fossa surface points by S:

S=\{p_{i}\in N:\; n(p_{i})^{T}(p_{i}-c)>0,\; n(p_{i})^{T}z>0\}

unnecessary points are then cropped away automatically according to F and S to achieve segmentation of the condyles.
It is worth noting that, because the bone threshold of the condylar portion is close to the standard bone threshold while that of the glenoid fossa is close to the cortical bone threshold, threshold reconstruction must be performed first when segmenting the condyles: the approximate extents of the condyle and the glenoid fossa are identified through threshold reconstruction to achieve preliminary condylar positioning, after which the subsequent precise positioning of the condyles is carried out.
The application provides a dentition matching and jaw segmentation system based on artificial intelligence, comprising:
a data import module for importing a three-dimensional skull model and a digitized dentition model;
and a dentition matching and skull segmentation module, which uses the jaw segmentation method described above to replace the tooth portions of the three-dimensional skull model with the maxillary and mandibular dentitions of the digitized dentition model and to segment the condylar portions of the three-dimensional skull model.
The application further provides an apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the artificial-intelligence-based jaw segmentation method described above when executing the computer program.
The present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the artificial intelligence based jaw bone segmentation method described above.
Compared with the prior art, the method has the following beneficial effects:
based on the idea of artificial intelligence, the intelligent dentition matching method and the jaw bone segmentation method for the orthognathic surgery are constructed, and the important link of jaw bone segmentation and dentition matching in the digital orthognathic surgery design can be automatically completed by a computer, so that the workload of a clinician is reduced, and the working efficiency is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention.
FIG. 1 is a flow chart of digital orthognathic surgical planning;
fig. 2 (a) is a schematic view of nodes of maxillary and mandibular dentitions, and (b) is a schematic view of nodes of digitized maxillary and mandibular dentitions;
FIG. 3 is a normal view of the pericondylar region and surface of an embodiment;
FIG. 4 is a schematic diagram of the construction procedure in the embodiment; wherein (a) shows the collected spiral CT data and plaster dentition models used as the training set, (b) shows the nodes of the skull's maxillary and mandibular dentitions and of the digitized upper and lower dentitions before matching, (c) shows the same nodes after matching, (d) shows the matched upper and lower dentitions in the system, (e) shows the preliminary positioning of the condyles, (f) shows the fine positioning of the condyles, and (g) shows the condyles after automatic segmentation;
FIG. 5 is a schematic illustration of automatic dentition matching in an embodiment;
fig. 6 is a schematic diagram of automatic jaw bone segmentation in an embodiment;
FIG. 7 is a diagram of the overall maxillary deviation between automatic and manual segmentation in an embodiment;
FIG. 8 is a chart of the statistical analysis of overall maxillary deviation between automatic and manual segmentation;
FIG. 9 is a diagram of the overall mandibular deviation between automatic and manual segmentation;
FIG. 10 is a chart of the statistical analysis of overall mandibular deviation between automatic and manual segmentation;
FIG. 11 is a comparison of the working time of automatic and manual segmentation for clinical skeletal asymmetric malocclusion;
FIG. 12 is a comparison of the working time of automatic and manual segmentation for clinical skeletal class II malocclusion;
FIG. 13 is a comparison of the working time of automatic and manual segmentation for clinical skeletal class III malocclusion.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments. It is to be understood that the described embodiments are only a few embodiments of the present invention, and not all embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be obtained by a person skilled in the art without inventive efforts based on the embodiments of the present invention, are within the scope of protection of the present invention.
It should be noted that the embodiments and features of the embodiments of the present invention may be combined with each other without conflict. It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The dentition matching and jaw bone segmentation method based on artificial intelligence disclosed by the embodiment comprises the following steps of:
1, preliminary preparation work
First, all maxillofacial spiral CT data are collected. In this step, a PNMS MX16EVO scanner (Philips; pitch: 7.25 mm, slice thickness: 1 mm, slice spacing: 0.5 mm, scan field: 768 mm × 768 mm, scan time: 26.2 s, tube voltage: 120 kV, tube current: 282 mA, voxel size: 0.33 mm³) can be used to obtain data in the DICOM (Digital Imaging and Communications in Medicine) format.
Then, all plaster dentition models are obtained: the plaster occlusion model, i.e. the dentition model, corresponding to the patient's maxillofacial spiral CT scan is collected. To make the occlusion model, upper and lower dentition impressions of the patient are taken with a silicone-rubber impression material, the impressions are poured with die stone to obtain the patient's plaster dentition models, and the patient's occlusal relationship is recorded with bite-registration wax.
Acquiring the digitized dentition model: the plaster dentition model is converted by scanning into data that a computer can recognize, specifically: (1) scanning the maxillary dentition to obtain the complete maxillary dentition; (2) scanning the mandibular dentition to obtain the complete mandibular dentition; (3) positioning the upper and lower dentition models in optimal occlusion with the bite-registration wax, fixing them with a scanning jig, and scanning them together to obtain the relative relationship of the upper and lower dentitions; (4) matching the dentitions from steps (1) and (2) to the relative relationship from step (3) in the scanner software. After the scan is completed, the upper and lower dentition models with the recorded relative positions are exported in STL format. A box-type optical three-dimensional scanner can be used for this step.
It is worth noting that the maxillofacial spiral CT data and plaster dentition models are collected as the training set. By performing commonality analysis on a certain number of CT models and digitized dentition models, the dentition matching and segmentation algorithms can be obtained.
The craniomaxillofacial bone tissue in the CT data is reconstructed in three dimensions, the digitized dentition models are obtained by scanning, and a data set is constructed. The data set can be used for accuracy tests, including comparisons of cusp landmarks, the whole dentition, and clinical working time.
In some embodiments, the training set includes maxillofacial spiral CTs and corresponding plaster dentition models for a plurality of patients with bony class ii, class iii, and asymmetric deformities.
2, constructing an automatic dentition matching algorithm
The positional constraint between the upper and lower dentitions can be simplified to relative sliding along a known axis, while the upper and lower dentitions as a whole are matched to the skull dentition through a rigid-body transformation. Let the imported upper-dentition nodes be u_i, the imported lower-dentition nodes be l_j, the relative sliding axis between the upper and lower dentitions be a, and the relative sliding amount be λ. The matching of the imported dentition to the skull dentition can then be expressed as an objective function E(R, t, λ), as shown in equation (1):

E(R,t,\lambda)=\sum_{i}\left\|Ru_{i}+t-q_{i}\right\|^{2}+\sum_{j}\left\|R(l_{j}+\lambda a)+t-q_{j}\right\|^{2}  (1)

In equation (1), q_i and q_j are nodes on the skull, and R and t are the overall rigid-body rotation and translation. Minimizing the objective function E(R, t, λ) reduces to an unconstrained quadratic optimization problem with a standard solution procedure; the resulting optimal solution gives the automatic matching of the upper and lower dentitions to the skull model, together with the translation and rotation of the upper and lower dentitions.
By constructing this dentition matching algorithm under constrained motion, the upper and lower dentitions can be matched simultaneously and their preliminary positional relationship obtained, as shown in fig. 2.
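As a sketch of how equation (1) can be minimized, the fragment below alternates between a Kabsch (Procrustes) solve for the rigid part (R, t) and a closed-form update of the sliding amount λ. Point correspondences are assumed already known, and the function names and data layout are illustrative rather than taken from the patent.

```python
import numpy as np

def kabsch(X, Q):
    """Best rigid (R, t) minimizing sum ||R x_k + t - q_k||^2 for paired points."""
    xm, qm = X.mean(axis=0), Q.mean(axis=0)
    H = (X - xm).T @ (Q - qm)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qm - R @ xm

def match_dentition(u, l, q_u, q_l, a, iters=200):
    """Alternately minimize E(R, t, lam) of equation (1):
       sum ||R u_i + t - q_i||^2 + sum ||R (l_j + lam*a) + t - q_j||^2,
       where a is a unit sliding axis between the upper and lower dentitions."""
    lam = 0.0
    for _ in range(iters):
        # (R, t) step: rigid fit of both dentitions at the current sliding amount
        R, t = kabsch(np.vstack([u, l + lam * a]), np.vstack([q_u, q_l]))
        # lam step: closed form, project lower-dentition residuals onto R a
        lam = float(np.mean((q_l - l @ R.T - t) @ (R @ a)))
    return R, t, lam
```

In a real pipeline the correspondences between dentition nodes and skull nodes would themselves have to be estimated (for example with ICP-style nearest-neighbor search), which the patent does not detail.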
3, constructing automatic jaw bone segmentation algorithm
This embodiment establishes automatic segmentation of the maxilla and mandible, which requires two parts to be handled: segmentation of the dentition portion and segmentation of the joint region.
3.1 Segmentation of the dentition portion
After the dentition matching of the previous stage, because the upper and lower dentitions were scanned separately, the tooth portions of the skull model have been replaced by the upper and lower dentitions, which also completes the segmentation of the dentition portion. To further divide the skull model into maxilla and mandible, the condylar portion still needs to be segmented.
3.2 Segmentation of the condylar portion
After the dentition segmentation, the condyles need to be segmented. Because the bone threshold of the condylar portion is close to the standard bone threshold while that of the glenoid fossa is close to the cortical bone threshold, threshold reconstruction must be performed first: the approximate extents of the condyle and the glenoid fossa are identified through threshold reconstruction to achieve preliminary condylar positioning, after which the precise positioning of the condyles follows. Segmentation of the condylar portion specifically comprises the following steps:
s2.1, initial positioning: establishing different threshold models: the method comprises the steps of after the required CT data and the corresponding STL dentition are obtained, reconstructing the CT according to the following three thresholds, namely a first standard bone threshold (standard bone model); second, cortical bone threshold (cortical bone model); third, the dental threshold (dental model).
S2.2, precise positioning of the condyles: automatically finding the condylar position is a precondition for segmenting the condyles. First, the condyles obtained by manually segmenting the upper and lower jaws of a skull model serve as the reference model, and the condyles of the skull model to be segmented automatically serve as the target model. Let the target-model nodes be p_i and the reference-model nodes be u_i; the matching objective function is denoted E(R, t, s), as shown in equation (2):

E(R,t,s)=\sum_{i}\left\|sRp_{i}+t-u_{i}\right\|^{2}  (2)

In equation (2), R and t are the overall rotation and translation, and s is the change in model scale. Minimizing the objective function E(R, t, s) reduces to an unconstrained quadratic optimization problem with a standard solution procedure; the resulting optimal solution gives the nodes that locate the condyles.
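With known point correspondences, equation (2) has a closed-form solution via Umeyama's similarity-transform method, one candidate for the "standard solving process" the text mentions. The sketch below assumes exact point pairs; the function name and data are illustrative, not from the patent.

```python
import numpy as np

def similarity_fit(P, U):
    """Closed-form (R, t, s) minimizing sum ||s R p_i + t - u_i||^2 (Umeyama)."""
    n = len(P)
    pm, um = P.mean(axis=0), U.mean(axis=0)
    Pc, Uc = P - pm, U - um
    Sigma = Uc.T @ Pc / n                      # cross-covariance, target x source
    A, D, Bt = np.linalg.svd(Sigma)
    d = np.sign(np.linalg.det(A) * np.linalg.det(Bt))
    S = np.array([1.0, 1.0, d])                # reflection guard
    R = A @ np.diag(S) @ Bt
    s = (D * S).sum() / ((Pc ** 2).sum() / n)  # scale from singular values
    t = um - s * R @ pm
    return R, t, s
```

In practice the reference-model and target-model condyle nodes are not in one-to-one correspondence, so this closed form would be embedded in an iterative scheme (e.g. scaled ICP) that re-estimates correspondences between solves.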
S2.3, cutting condyles:
the condylar cropping algorithm is one of the keys to achieving condylar segmentation. After locating the condyles, the processing of the condylar-cropping algorithm is limited to a local area around the condyles where the surface normal information is distinctive, as shown in fig. 3.
In fig. 3, n_1 denotes the normal vector at a point on the condylar surface, and n_2 denotes the normal vector at a point on the glenoid-fossa surface. Characteristically, the normal vectors of the condylar surface point largely toward the outside of the local region, while the normal vectors of the glenoid-fossa surface point toward the inside of the region and are biased downward. Using this feature, the condyle and the glenoid fossa can be classified. Let N denote the local region around the condyle and F the set of condylar surface points within it; F can be defined as:

F=\{p_{i}\in N:\; n(p_{i})^{T}(p_{i}-c)<0,\; n(p_{i})^{T}z<0\}  (3)

where n(p_i) is the normal vector of node p_i, c is the coordinate of the center point of region N, and z is the unit vector in the vertical direction. The set F classifies the condylar surface points; the glenoid-fossa surface points can be classified similarly. Let S be the set of glenoid-fossa surface points, defined as:

S=\{p_{i}\in N:\; n(p_{i})^{T}(p_{i}-c)>0,\; n(p_{i})^{T}z>0\}  (4)

Unnecessary points are cropped according to these two point sets, and segmentation of the condyles is achieved.
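The sign tests of equations (3) and (4) can be sketched directly as boolean masks over the points of the local region N. Sign conventions follow the equations as given in the text; the function name is illustrative.

```python
import numpy as np

def classify_condyle_points(points, normals, c, z=(0.0, 0.0, 1.0)):
    """Masks for F (condylar surface) and S (glenoid-fossa surface), Eqs. (3)-(4)."""
    z = np.asarray(z)
    radial = np.einsum("ij,ij->i", normals, points - c)  # n(p)^T (p - c)
    vert = normals @ z                                   # n(p)^T z
    in_F = (radial < 0) & (vert < 0)
    in_S = (radial > 0) & (vert > 0)
    return in_F, in_S
```

Points satisfying neither test fall outside both sets and are candidates for the "unnecessary points" that the cropping step removes.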
Based on the above method, the application discloses an artificial-intelligence-based dentition matching and jaw segmentation system that can be used to implement the above dentition matching and jaw segmentation method, specifically comprising:
a data import module for respectively importing a three-dimensional skull model and a digitized dentition model;
and a dentition matching and skull segmentation module, which uses the jaw segmentation method described above to replace the tooth portions of the three-dimensional skull model with the maxillary and mandibular dentitions of the digitized dentition model and to segment the condylar portions of the three-dimensional skull model.
According to the above method, this embodiment constructs an intelligent orthognathic surgery design program (hereinafter abbreviated as IDOS), as shown in fig. 4. The program interface is developed in C/C++ based on the C++11 standard; the Qt graphical-interface library provides the C/C++ programming framework and OpenGL provides three-dimensional rendering. The program runs cross-platform and supports Windows 7 and Windows 10. The three-dimensional skull model and the digitized upper and lower dentition models can be imported into the program separately, after which dentition matching and segmentation of the upper and lower jaw bones (condyles) are completed automatically by the computer, as shown in figs. 5 and 6.
In this embodiment, 320 cases of maxillofacial spiral CT and the corresponding plaster dentition models of patients with skeletal Class II, skeletal Class III and asymmetric deformities are selected as the training set for constructing the algorithm. After construction, 140 cases are selected for testing to verify the effectiveness of the algorithm, as follows: in the experiment testing the accuracy of automatic dentition matching in IDOS, maxillofacial spiral CT data of 140 patients with maxillofacial deformity who underwent orthognathic surgery, and the digital dentition models acquired in the same period, are selected. Jaw bone segmentation and dentition matching are completed automatically by IDOS and, separately, manually by a clinician, and the difference of each cusp landmark and of the whole maxillary and mandibular dentitions between the two approaches is analyzed. The results are shown in Table 1 and figs. 7-10; in figs. 8 and 10 the quantities are, from left to right: average deviation distance, average positive deviation, average negative deviation and standard deviation.
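The four deviation statistics reported in figs. 8 and 10 (average deviation distance, average positive deviation, average negative deviation, standard deviation) can be computed from paired landmark sets along the following lines. This is a sketch only; the convention used to sign each deviation (here, the component of the difference along a reference axis) is an assumption, since the publication does not specify it.

```python
import numpy as np

def deviation_stats(auto_pts, manual_pts, axis=np.array([0.0, 0.0, 1.0])):
    """Compare automatically and manually placed landmark sets.

    auto_pts, manual_pts : (n, 3) corresponding landmark coordinates
    axis : reference direction used to sign each deviation (assumed convention)
    """
    d = auto_pts - manual_pts
    dist = np.linalg.norm(d, axis=1)   # unsigned deviation distance per landmark
    sign = np.sign(d @ axis)           # assumed sign convention
    signed = sign * dist
    pos = signed[signed > 0]
    neg = signed[signed < 0]
    return {
        "mean_distance": dist.mean(),
        "mean_positive": pos.mean() if pos.size else 0.0,
        "mean_negative": neg.mean() if neg.size else 0.0,
        "std": signed.std(ddof=1),
    }

# Tiny worked example with three landmark pairs.
stats = deviation_stats(
    np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -0.5], [1.0, 0.0, 0.0]]),
    np.zeros((3, 3)),
)
```

In the worked example the mean deviation distance is (1 + 0.5 + 1)/3 ≈ 0.83, the only positive signed deviation is 1.0 and the only negative one is −0.5.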
Table 1 accuracy testing experiment: difference of mark points of cusp
(The numerical values of Table 1 appear as images in the original publication.)
Description of the landmarks: A is the upper alveolar base point, U1(R) is the mesial incisal angle point of the maxillary right central incisor, U3(R/L) is the maxillary right/left canine cusp point, U6(R/L) is the mesiobuccal cusp point of the maxillary right/left first molar, B is the lower alveolar base point, L1(R) is the mesial incisal angle point of the mandibular right central incisor, L3(R/L) is the mandibular right/left canine cusp point, and L6(R/L) is the mesiobuccal cusp point of the mandibular right/left first molar.
The three-dimensional positions of the 12 cusp landmarks are then compared again after decomposition into the X, Y and Z directions. Apart from the three landmarks U6(R/L) and L6(R), the remaining landmarks show no statistically significant difference, and the color maps of the whole maxillary and mandibular dentitions show that the overall deviation is within 2 mm, indicating that automatic dentition matching with IDOS is accurate.
The clinical working time spent on manual jaw segmentation and dentition matching and that spent on automatic segmentation and matching are recorded separately. The results show that intelligent jaw bone segmentation and dentition matching shortens the clinical working time significantly, from an average of 40 min to an average of 2 min, greatly reducing the workload of the clinician, as shown in figs. 11-13.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The above embodiments are provided to explain the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. The dentition matching method based on artificial intelligence is characterized by comprising the following steps:
S1, constructing a dentition matching algorithm under constrained motion to obtain an objective function E for matching the digitized dentition model and the skull model dentition (the formula is reproduced as an image in the original publication), in which the variables are: the upper dentition nodes of the digitized dentition model, the lower dentition nodes of the digitized dentition model, the relative sliding axis between the upper and lower dentitions, the relative sliding amount between the upper and lower dentitions, nodes on the skull bone, and the rigid rotation and translation of the whole body;
S2, minimizing the objective function E to obtain the automatic matching of the upper and lower dentitions with the skull model dentition, together with the translation and rotation of the upper and lower dentitions.
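The minimization in step S2 is, in essence, a least-squares rigid alignment with one extra sliding degree of freedom. The sketch below is an illustration only, not the patented algorithm: assuming known point correspondences, the rotation R and translation t have a closed-form Kabsch/SVD solution at each fixed sliding amount s, and s can be found by an outer search. The function names and the grid search over `sliders` are assumptions.

```python
import numpy as np

def rigid_fit(src, dst):
    """Closed-form least-squares rotation R and translation t minimizing
    sum_i ||R src_i + t - dst_i||^2 (Kabsch algorithm via SVD)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the optimal orthogonal matrix.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def match_with_sliding(upper, lower, skull_u, skull_l, axis, sliders):
    """Outer grid search over the relative sliding amount s along `axis`,
    inner closed-form rigid fit of the combined dentition (assumed scheme)."""
    best = None
    for s in sliders:
        src = np.vstack([upper, lower + s * axis])  # slide the lower dentition
        dst = np.vstack([skull_u, skull_l])
        R, t = rigid_fit(src, dst)
        err = np.linalg.norm(src @ R.T + t - dst)
        if best is None or err < best[0]:
            best = (err, s, R, t)
    return best

# Sanity check: recover a known rotation and translation.
rng = np.random.default_rng(1)
src = rng.normal(size=(10, 3))
th = 0.3
R0 = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
t0 = np.array([1.0, 2.0, 3.0])
R, t = rigid_fit(src, src @ R0.T + t0)
```

In practice the correspondences themselves are unknown, so a scheme like this would sit inside an ICP-style loop that re-estimates nearest-point correspondences at each iteration.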
2. The artificial intelligence based dentition matching method of claim 1 wherein: before the step S1, a preparation operation is further included, and the preparation operation includes:
collecting spiral CT data of the maxillofacial region;
collecting a corresponding plaster occlusion model when the patient shoots the maxillofacial spiral CT, and recording the occlusion relation of the patient;
obtaining a digitized dentition model comprising:
step 1, respectively scanning maxillary dentition and mandibular dentition to obtain complete maxillary dentition and complete mandibular dentition;
step 2, positioning the upper dentition model and the lower dentition model on the optimal occlusion by using occlusion recording wax, and scanning the upper dentition model and the lower dentition model simultaneously to obtain the relative relation of the upper dentition and the lower dentition;
and 3, matching the complete dentitions obtained in step 1 to the relative relation obtained in step 2 in the software of the scanning equipment, and exporting the upper and lower dentition models with their relative position relation recorded.
3. The jaw bone segmentation method based on artificial intelligence is characterized by comprising the following steps: the method comprises the following steps:
reconstructing a three-dimensional skull model based on the maxillofacial spiral CT data;
replacing the tooth parts on the three-dimensional skull model with the maxillary dentition and the mandibular dentition of the digitized dentition model using the dentition matching method as claimed in claim 1 or 2;
automatically segmenting the condylar portion of the three-dimensional skull model, comprising:
accurately positioning the condyles, and automatically finding the positions of the condyles;
and cutting the condyles: classifying the surface point sets of the condyle and the condylar fossa based on their surface normal features, cutting out unnecessary points according to the two point sets, and realizing automatic segmentation of the condyles.
4. The jawbone segmentation method according to claim 3, characterized in that: the precise positioning method of the condyle comprises the following steps:
taking the condyle obtained after manually segmenting the upper and lower jaws of a skull model as the reference model, and taking the condyle of another skull model that requires automatic segmentation as the target model; denoting the nodes of the target model and the nodes of the reference model, the matching objective function (reproduced as an image in the original publication) comprises the rigid rotation and translation of the whole body and a change in model scale; the objective function is minimized to obtain the nodes that position the condyle.
5. The jaw bone segmentation method according to claim 3 or 4, characterized in that: the method for cutting the condyle comprises the following steps:
representing the set of condylar surface points by F (the defining formula is reproduced as an image in the original publication), wherein n(p_i) is the normal vector of node p_i, N represents the local region around the condyle, c is the coordinate of the center point of the region N, and z is the unit vector in the vertical direction;
representing the set of condylar fossa surface points by S, defined as in formula (4) of the description: S = {p_i ∈ N : n(p_i)^T (p_i − c) > 0, n(p_i)^T z > 0};
and automatically cutting out unnecessary points according to F and S to realize the segmentation of the condyle.
6. The jaw bone segmentation method according to claim 3 or 4, characterized in that: the method also comprises the step of carrying out initial positioning on the condyle before the precise positioning on the condyle, wherein the initial positioning on the condyle comprises the following steps: and establishing different threshold models, and roughly identifying the range of the condyles and the condylar fossa through threshold reconstruction.
7. The jaw bone segmentation method of claim 6, wherein: the threshold reconstruction specifically comprises: after the required CT data and the corresponding STL dentition are obtained, the CT data are reconstructed according to the following three thresholds:
first, standard bone threshold;
second, cortical bone threshold;
third, dentin threshold.
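The three-threshold reconstruction can be illustrated with simple intensity masks over a CT volume in Hounsfield units. This is a hedged sketch only: the actual threshold values are not disclosed in the publication, so the HU cut-offs below (300 for standard bone, 600 for cortical bone, 1500 for dentin) are placeholders.

```python
import numpy as np

# Placeholder Hounsfield-unit thresholds; the patent does not disclose
# the actual values used for the three reconstructions.
THRESHOLDS = {"standard_bone": 300, "cortical_bone": 600, "dentin": 1500}

def threshold_masks(ct_volume):
    """Return one binary mask per reconstruction threshold.

    ct_volume : 3-D array of CT intensities in Hounsfield units
    """
    return {name: ct_volume >= hu for name, hu in THRESHOLDS.items()}

vol = np.array([[[100, 400], [800, 2000]]])  # tiny 1x2x2 toy volume
masks = threshold_masks(vol)
```

Each mask would then feed a surface reconstruction (e.g. marching cubes) to produce the standard-bone, cortical-bone and dentin models used for the rough localization of the condyle and dentition.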
8. Dentition matching and jaw segmentation system based on artificial intelligence, its characterized in that: the method comprises the following steps:
the data import module is used for importing a three-dimensional skull model and a digital dentition model;
the dentition matching and skull segmentation module is used for replacing the tooth parts on the three-dimensional skull model with the maxillary and mandibular dentitions of the digitized dentition model and for segmenting the condylar part of the three-dimensional skull model by the jaw bone segmentation method according to any one of claims 3-7.
9. An apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein: the processor, when executing the computer program, implements the method of any one of claims 3-7.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the method of any one of claims 3-7.
CN202211166949.XA 2022-09-23 2022-09-23 Dentition matching and jaw segmentation method, system and medium based on artificial intelligence Pending CN115517792A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211166949.XA CN115517792A (en) 2022-09-23 2022-09-23 Dentition matching and jaw segmentation method, system and medium based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211166949.XA CN115517792A (en) 2022-09-23 2022-09-23 Dentition matching and jaw segmentation method, system and medium based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN115517792A true CN115517792A (en) 2022-12-27

Family

ID=84699183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211166949.XA Pending CN115517792A (en) 2022-09-23 2022-09-23 Dentition matching and jaw segmentation method, system and medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN115517792A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117796936A * 2024-03-01 2024-04-02 Sichuan University Manufacturing method of serial repositioning biting plates guided by full-digital technology
CN117796936B * 2024-03-01 2024-04-30 Sichuan University Manufacturing method of serial repositioning biting plates guided by full-digital technology

Similar Documents

Publication Publication Date Title
Elnagar et al. Digital Workflow for Combined Orthodontics and Orthognathic Surgery.
US7758345B1 (en) Systems and methods for design and manufacture of a modified bone model including an accurate soft tissue model
US8199988B2 (en) Method and apparatus for combining 3D dental scans with other 3D data sets
KR101590330B1 (en) Method for deriving shape information
EP1486900A1 (en) Method and system for manufacturing a surgical guide
CA3072415A1 (en) Systems and methods for computer-aided orthognathic surgical planning
CN111388125B (en) Method and device for calculating tooth movement amount before and after orthodontic treatment
CN110236673B (en) Database-based preoperative design method and device for reconstruction of bilateral jaw defects
Ye et al. Integration accuracy of laser-scanned dental models into maxillofacial cone beam computed tomography images of different voxel sizes with different segmentation threshold settings
CN115517792A (en) Dentition matching and jaw segmentation method, system and medium based on artificial intelligence
KR102215068B1 (en) Apparatus and Method for Registrating Implant Diagnosis Image
Dings et al. Reliability and accuracy of skin-supported surgical templates for computer-planned craniofacial implant placement, a comparison between surgical templates: With and without bony fixation
Jang et al. Fully automatic integration of dental CBCT images and full-arch intraoral impressions with stitching error correction via individual tooth segmentation and identification
Thawri et al. 3D technology used for precision in orthodontics
Abramson et al. CT-based modeling of the dentition for craniomaxillofacial surgical planning
Champleboux et al. A fast, accurate and easy method to position oral implant using computed tomography: Clinical validations
Jamali et al. Clinical applications of digital dental technology in oral and maxillofacial surgery
Patel et al. 3D Virtual Surgical Planning: From Blueprint to Construction
Edwards et al. Applications of Cone Beam Computed Tomography to Orthognathic Surgery Treatment Planning
EP4385455A1 (en) A method of generating manufacturing parameters during a dental procedure for a dental prosthesis
Ntovas et al. Accuracy of artificial intelligence‐based segmentation of the mandibular canal in CBCT
Mahmood et al. A Practical Guide to Virtual Planning of Orthognathic Surgery and Splint Design Using Virtual Dentoskeletal Model
Molina Moguel et al. Digital Flow to obtain Surgical Guides and Customized Plates in Minimally Invasive (MIS) procedures for Facial Orthognathic Surgery
Hettiarachchi et al. Linear and Volumetric Analysis of Maxillary Sinus Pneumatization in a Sri Lankan Population Using Cone Beam Computer Tomography
Paoli et al. A CAD-based methodology for dental implant surgery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination