CN113052902B - Tooth treatment monitoring method - Google Patents


Publication number
CN113052902B
Authority
CN
China
Prior art keywords
tooth
dimensional
image
teeth
dimensional model
Prior art date
Legal status
Active
Application number
CN202011588692.8A
Other languages
Chinese (zh)
Other versions
CN113052902A (en
Inventor
苏剑
Current Assignee
Shanghai Yinma Technology Co., Ltd.
Original Assignee
Shanghai Yinma Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Yinma Technology Co., Ltd.
Priority to CN202011588692.8A
Publication of CN113052902A
Application granted
Publication of CN113052902B

Classifications

    • G06T17/205 Three-dimensional [3D] modelling: re-meshing
    • G06F18/214 Pattern recognition: generating training patterns, e.g. bagging or boosting
    • G06F18/22 Pattern recognition: matching criteria, e.g. proximity measures
    • G06T7/13 Image analysis: edge detection
    • G06T7/136 Image analysis: segmentation involving thresholding
    • G06T7/75 Image analysis: determining position or orientation of objects or cameras using feature-based methods involving models
    • G06V10/25 Image or video recognition: determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The invention discloses a tooth treatment monitoring method comprising the following steps: 1. scan the inside of the oral cavity and construct an initial three-dimensional model of it; 2. establish a three-dimensional tooth parameterized model database based on solid three-dimensional models; 3. establish an intraoral-image deep learning network based on two-dimensional images; 4. scan the inside of the oral cavity again, and apply normalization and feature extraction to the resulting oral images; 5. construct a reconstructed three-dimensional model of the inside of the oral cavity from the intraoral images and the three-dimensional tooth parameterized model database; 6. register and measure the initial and reconstructed three-dimensional models to obtain tooth movement data. The invention digitizes the morphology of the teeth and soft tissues inside the oral cavity and measures the treatment process accurately, thereby reducing doctors' workload, lowering the risk of oral restoration treatment, and improving treatment efficiency and the patient's treatment experience.

Description

Tooth treatment monitoring method
Technical Field
The invention belongs to the technical field of tooth monitoring, and particularly relates to a tooth treatment monitoring method.
Background
In modern medicine, when performing oral restoration treatment, and orthodontic treatment in particular, taking an oral impression is the first step in acquiring three-dimensional morphological information about the teeth and soft tissues inside the oral cavity. The quality of the impression therefore directly determines the quality of the practitioner's final restoration work. An existing dental impression apparatus includes: a hand-held wand having at least one sensor and a plurality of light sources, the light sources configured to emit light in a first and a second spectral range (the second being near-infrared) for imaging the tooth surface; and one or more processors operatively connected to the wand and configured to generate a three-dimensional surface model of at least part of the subject's teeth from light in the two spectral ranges. Such apparatus has the following problems when taking a dental impression: 1. it is invasive and unpleasant for the patient; 2. capture is a cumbersome, expensive and time-consuming process requiring expert knowledge, and the reconstruction process is limited to a single tooth at a time; 3. the equipment is inconvenient to carry, and its heavy computational load requires a high-performance laptop or host machine; 4. it is expensive, limited to doctors and clinics, and applicable in few scenarios; 5. it lacks data computation, whereas doctors need accurate data on the actual distance teeth have moved during clinical treatment in order to judge the treatment plan reasonably.
Therefore, how to develop, on the basis of existing schemes, a novel tooth treatment monitoring method that overcomes the above problems is a direction that those skilled in the art need to study.
Disclosure of Invention
The invention aims to provide a tooth treatment monitoring method based on intraoral images, which digitizes the morphology of the teeth and soft tissues inside the oral cavity, thereby reducing doctors' workload, lowering the risk of oral restoration treatment, and improving treatment efficiency and the patient's treatment experience.
A method of monitoring dental treatment comprising the steps of:
Step 1, scanning the inside of an oral cavity and constructing an initial three-dimensional model of the inside of the oral cavity;
step 2, establishing a three-dimensional tooth parameterized model database based on solid three-dimensional models;
Step 3, establishing an intra-oral image deep learning network based on the two-dimensional image;
Step 4, scanning the inside of the oral cavity again, and carrying out normalization processing and feature extraction on the oral cavity image obtained by the scanning;
step 5, constructing a reconstructed three-dimensional model in the oral cavity based on the oral cavity image obtained in the step 4 and the three-dimensional tooth parameterized model database obtained in the step 2;
And step 6, registering and measuring the initial three-dimensional model obtained in step 1 and the reconstructed three-dimensional model obtained in step 5 to obtain tooth movement data.
Preferably, in the above method for monitoring dental treatment, the step 1 includes:
Step 11: scanning the inside of the oral cavity to obtain a plurality of two-dimensional images of the inside of the oral cavity;
Step 12: constructing an initial three-dimensional model of the inside of the oral cavity by any one of: an alginate impression material, an intraoral laser scanner, or multi-angle shooting with a TOF camera.
More preferably, in the above method for monitoring dental treatment, the step 2 includes:
Step 21: preparing solid three-dimensional models of a plurality of dentitions;
Step 22: sorting the solid three-dimensional models obtained in step 21, taking each solid three-dimensional model in turn as the current solid three-dimensional model according to the sorting, and executing steps 23 to 25;
Step 23: laser-scanning the current solid three-dimensional model, and marking characteristic information on each crown surface of the model; the characteristic information comprises feature points and feature lines;
Step 24: separating a plurality of crown surface models from the solid three-dimensional model of step 23, and marking edge features on each of the crown surface models;
Step 25: spatially dividing the solid three-dimensional model based on the characteristic information and the edge features;
Step 26: training on the characteristic information and the edge features based on PointNet classification models, to obtain template networks for incisors, cuspids, bicuspids and molars respectively;
Step 27: performing feature and reference recognition on the initial three-dimensional model based on the template networks, and acquiring the characteristic information of each crown surface model;
Step 28: based on the characteristic information of each crown surface model obtained in step 27, obtaining the tooth average shape S_TA, tooth average pose P_TA, tooth space division B_ST, tooth subspace pose P_ST^i, tooth coordinate transformation T_T, dentition rigid transformation matrix T_J and per-axis dentition scaling S_J that match the characteristic information of each crown surface model;
Step 29: establishing a three-dimensional tooth parameterized model database from the parametric formula defined over the above quantities, wherein T denotes the tooth model corresponding to the crown model.
More preferably, in the above method for monitoring dental treatment, the step 25 includes the steps of:
Step 251: detecting the curvature of the crown surface and acquiring the ridge lines of the crown surface;
Step 252: discretizing the ridge lines obtained in step 251 together with the feature lines, and establishing a one-to-one linking relation between the ridge lines and the feature lines;
Step 253: linking all feature lines and ridge lines through spline fitting to form arithmetic lines;
Step 254: forming the tooth long axis from the top center point of the crown surface and the centroid of the parting line;
Step 255: setting the crown-to-root length ratio to 1:2, and extending the tooth root from the edge of the crown surface along the tooth long-axis direction;
Step 256: forming an arithmetic line in the tooth-root direction based on the tangent line;
Step 257: triangulating the arithmetic lines obtained in step 253, thereby forming the data information of each tooth.
More preferably, in the above method for monitoring dental treatment, the step 3 includes:
Step 31: obtaining a plurality of sets of two-dimensional tooth image samples, comprising: deciduous tooth image samples, permanent tooth image samples, supernumerary (multi-tooth) image samples and impacted (blocked) tooth image samples;
Step 32: marking an ROI region in the two-dimensional image samples;
Step 33: contour-marking the feature objects in the two-dimensional image samples;
Step 34: contour-marking the key teeth in the two-dimensional image samples;
Step 35: training on the key teeth, non-key teeth and feature objects in the two-dimensional image samples based on any one of the BlendMask, Mask R-CNN, PolarMask and SOLO algorithms, and establishing image network models for the key teeth, the non-key teeth and the feature objects respectively;
Step 36: naming each tooth in the two-dimensional image samples according to the FDI tooth-numbering scheme, and naming the feature objects of the two-dimensional image samples according to medical requirements.
More preferably, in the above method for monitoring dental treatment, the step 4 includes:
Step 41: scanning the inside of the oral cavity to acquire a plurality of images of the inside of the oral cavity;
Step 42: normalizing each image obtained in step 41, unifying the tooth size and tooth axial direction across the images;
Step 43: based on the image network models of the key teeth, non-key teeth and feature objects obtained in step 35, identifying and detecting the key teeth and feature objects in the images processed in step 42, filtering out the identified feature objects, registering the key teeth and non-key teeth, and obtaining the contours and numbers of the key teeth and non-key teeth respectively;
Step 44: filtering each image processed in step 43 based on any one of the GVF-Snake, Gabor and DeepSnake algorithms, generating an update stage image for each.
More preferably, in the above method for monitoring dental treatment, the step 5 includes:
Step 51: sorting the two-dimensional images obtained in step 11, and, taking each in turn as the initial two-dimensional image according to the sorting, executing steps 52 to 510;
Step 52: randomly selecting one tooth on the initial two-dimensional image as the target tooth;
Step 53: based on the initial three-dimensional model, the initial two-dimensional image selected in step 51 and the target tooth obtained in step 52, obtaining an initial camera position through a maximum likelihood function or an ICP contour matching algorithm; sorting the update stage images obtained in step 44, and, selecting each update stage image in turn according to the sorting, executing steps 54 to 59;
Step 54: selecting the tooth corresponding to the target tooth on the update stage image;
Step 55: taking the initial camera position obtained in step 53 as the reference camera position, and obtaining a reference three-dimensional model from the reference camera position and the update stage image selected in step 54, based on a maximum likelihood function or an ICP contour matching algorithm;
Step 56: taking a screenshot of the reference three-dimensional model obtained in step 55 to obtain a reference two-dimensional image;
Step 57: extracting features from the reference two-dimensional image obtained in step 56 to obtain the target tooth reference contour features;
Step 58: matching the target tooth obtained in step 52 against the target tooth reference contour obtained in step 57, and obtaining the best-matching transformation matrix based on a maximum likelihood function or an ICP contour matching algorithm;
Step 59: acquiring, based on the best-matching transformation matrix obtained in step 58, the matching degree between each tooth other than the target tooth on the reference two-dimensional image and the corresponding tooth on the initial three-dimensional model;
if the average matching degree of the three best-matching teeth reaches the matching threshold, replacing the current matching threshold with that average, adjusting the reference camera position, and jumping back to step 54;
if the average matching degree of the three best-matching teeth is below the matching threshold, proceeding to step 510;
Step 510: obtaining the matching degree of each tooth on each update stage image with the corresponding tooth on the initial three-dimensional model under the different reference camera positions; determining the stationary teeth and the moved teeth in the initial two-dimensional image, and obtaining model transformation data for the moved teeth;
Step 511: importing the matching degrees of each tooth on each initial image with the corresponding tooth on the initial three-dimensional model under the different reference camera positions into a simulated annealing algorithm, and taking the three-dimensional model corresponding to the scheme with the highest mean matching degree over all teeth as the reconstructed three-dimensional model.
More preferably, in the above method for monitoring dental treatment, the step 6 includes:
Step 61: taking the teeth that coincide completely between the initial three-dimensional model and the reconstructed three-dimensional model as stationary teeth, and using the stationary teeth as the reference coordinates for a rigid transformation of the initial and reconstructed three-dimensional models;
Step 62: registering the moved teeth based on ICP registration to obtain the tooth movement data.
Compared with the prior art, the invention has the following advantages:
1) The time required for the model parameterization process is greatly shortened.
2) The equipment is small; its use is no longer limited to the clinic, and it can be applied in homes, offices, physical examination institutions and other places, giving a wider audience and market space.
3) Doctors and patients can communicate more promptly: any problem arising during treatment can be raised with the attending doctor and resolved in time, keeping the patient's treatment on schedule.
4) The precision required of the equipment is lower, so the equipment is more cost-effective.
The technical scheme of the invention is further described in detail through examples.
Drawings
FIG. 1 is a workflow diagram of one embodiment of the present invention;
FIG. 2 is a flowchart of the operation of step 5 of FIG. 1;
Detailed Description
As shown in FIGS. 1-2:
A method of monitoring dental treatment comprising the steps of:
Step 1, scanning the inside of an oral cavity and constructing an initial three-dimensional model of the inside of the oral cavity;
Specifically, this step comprises the following steps:
Step 11: scanning the inside of the oral cavity to obtain a plurality of two-dimensional images of the inside of the oral cavity;
Step 12: constructing an initial three-dimensional model of the inside of the oral cavity by any one of: an alginate impression material, an intraoral laser scanner, or multi-angle shooting with a TOF camera.
In step 1, the initial state of the patient's oral cavity is obtained, comprising both two-dimensional images and a three-dimensional model; this provides the reference basis for the doctor's screening and diagnosis, and prepares for later prediction and review work. The alginate impression material, intraoral laser scanner and TOF-camera multi-angle shooting used in this step are all mature, existing techniques in the field.
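Among the capture modes just mentioned, TOF-camera shooting yields depth images. As an illustrative sketch (the camera intrinsics and the tiny depth map below are invented values, not from the patent), a depth image can be back-projected into a three-dimensional point cloud with the pinhole model:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a TOF depth image (rows of depths in metres) into camera-space 3D points."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid TOF returns
                continue
            # Pinhole model: pixel (u, v) at depth z maps to (x, y, z).
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Toy 2x2 depth map with one invalid pixel; fx, fy, cx, cy are made-up intrinsics.
depth = [[0.05, 0.05],
         [0.00, 0.06]]
pts = depth_to_points(depth, fx=600.0, fy=600.0, cx=1.0, cy=1.0)  # 3 valid points
```

Point clouds captured this way from multiple angles would then be fused into the initial model; the patent treats that fusion as an existing mature technique.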
Step 2, establishing a three-dimensional tooth parameterized model database based on solid three-dimensional models;
Specifically, the present step 2 includes:
Step 21: preparing solid three-dimensional models of a plurality of dentitions;
Step 22: sorting the solid three-dimensional models obtained in step 21, taking each solid three-dimensional model in turn as the current solid three-dimensional model according to the sorting, and executing steps 23 to 25;
Step 23: laser-scanning the current solid three-dimensional model, and marking characteristic information on each crown surface of the model; the characteristic information comprises feature points and feature lines;
the feature points include: the center point of the crown surface, and the distal and proximal highest and lowest points of the crown surface in the X-axis direction;
the feature lines include: the parting line between the crown surface and the gums, and the identification line perpendicular to the crown surface.
Step 24: separating a plurality of crown surface models from the solid three-dimensional model of step 23, and marking edge features on each of the crown surface models;
the edge features include:
Step 25: spatially dividing the solid three-dimensional model based on the characteristic information and the edge features;
Step 26: training on the characteristic information and the edge features based on PointNet classification models, to obtain template networks for incisors, cuspids, bicuspids and molars respectively;
Step 27: performing feature and reference recognition on the initial three-dimensional model based on the template networks, and acquiring the characteristic information of each crown surface model;
Step 28: based on the characteristic information of each crown surface model obtained in step 27, obtaining the tooth average shape S_TA, tooth average pose P_TA, tooth space division B_ST, tooth subspace pose P_ST^i, tooth coordinate transformation T_T, dentition rigid transformation matrix T_J and per-axis dentition scaling S_J that match the characteristic information of each crown surface model;
Step 29: establishing a three-dimensional tooth parameterized model database from the parametric formula defined over the above quantities, wherein T denotes the tooth model corresponding to the crown model.
In step 2, by constructing a parameterized three-dimensional tooth model, triangular meshes of the same topology but different orientations are adjusted into triangular or quadrilateral meshes with a relatively uniform data structure, which improves the efficiency and accuracy of defining tooth morphology and placing tooth positions. At the same time, deep learning on the three-dimensional model improves the efficiency of model reconstruction and reduces the number of scans a patient needs at the clinic.
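One plausible reading of the X-axis feature points marked in step 23 picks the distal and proximal extremes and the surface centre directly from the crown-surface vertices; the vertex list below is invented for illustration:

```python
def crown_feature_points(vertices):
    """Pick simple crown-surface feature points: the mean centre point and the
    distal/proximal extremes along the X axis (here taken as max and min x)."""
    distal = max(vertices, key=lambda p: p[0])
    proximal = min(vertices, key=lambda p: p[0])
    n = len(vertices)
    center = tuple(sum(p[i] for p in vertices) / n for i in range(3))
    return {"center": center, "distal": distal, "proximal": proximal}

# Hypothetical crown-surface vertices (x, y, z).
crown = [(-2.0, 0.0, 1.0), (2.0, 0.5, 1.2), (0.0, 1.0, 2.0)]
feats = crown_feature_points(crown)
```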
In this embodiment, step 25 specifically includes:
Step 251: detecting the curvature of the crown surface and acquiring the ridge lines of the crown surface;
Step 252: discretizing the ridge lines obtained in step 251 together with the feature lines, and establishing a one-to-one linking relation between the ridge lines and the feature lines;
Step 253: linking all feature lines and ridge lines through spline fitting to form arithmetic lines;
Step 254: forming the tooth long axis from the top center point of the crown surface and the centroid of the parting line;
Step 255: setting the crown-to-root length ratio to 1:2, and extending the tooth root from the edge of the crown surface along the tooth long-axis direction;
Step 256: forming an arithmetic line in the tooth-root direction based on the tangent line;
Step 257: triangulating the arithmetic lines obtained in step 253, thereby forming the data information of each tooth.
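Steps 254 and 255 above reduce to plain vector arithmetic. A sketch (the crown-top centre, parting-line sample points and crown height are invented values):

```python
def unit(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def tooth_long_axis(crown_top_center, parting_line_points):
    """Step 254: the long axis runs from the parting-line centroid to the crown-top centre."""
    n = len(parting_line_points)
    centroid = tuple(sum(p[i] for p in parting_line_points) / n for i in range(3))
    return centroid, unit(tuple(t - c for t, c in zip(crown_top_center, centroid)))

def extrude_root_point(edge_point, axis_dir, crown_height):
    """Step 255: extend the root from a crown-edge point along the long axis,
    with the crown:root length ratio of 1:2 stated in the patent."""
    root_len = 2.0 * crown_height
    return tuple(p - root_len * d for p, d in zip(edge_point, axis_dir))

top = (0.0, 0.0, 8.0)                          # hypothetical crown-top centre
parting = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0),  # hypothetical parting-line samples
           (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
centroid, axis = tooth_long_axis(top, parting)
root_tip = extrude_root_point((1.0, 0.0, 0.0), axis, crown_height=8.0)
```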
Step 3, establishing an intra-oral image deep learning network based on the two-dimensional image;
Specifically, the present step 3 includes:
Step 31: obtaining a plurality of sets of two-dimensional tooth image samples, comprising: deciduous tooth image samples, permanent tooth image samples, supernumerary (multi-tooth) image samples and impacted (blocked) tooth image samples;
Step 32: marking an ROI region in the two-dimensional image samples;
Step 33: contour-marking the feature objects in the two-dimensional image samples, where the feature objects include the gums, lips, tongue and instruments;
Step 34: contour-marking the key teeth in the two-dimensional image samples, the key teeth being incisors, cuspids, bicuspids, molars, deciduous teeth, permanent teeth, supernumerary teeth and impacted teeth;
Step 35: training on the key teeth, non-key teeth and feature objects in the two-dimensional image samples based on any one of the BlendMask, Mask R-CNN, PolarMask and SOLO algorithms, and establishing image network models for the key teeth, the non-key teeth and the feature objects respectively; the non-key teeth are all teeth on the dentition other than the key teeth.
Step 36: naming each tooth in the two-dimensional image samples according to the FDI tooth-numbering scheme, and naming the feature objects of the two-dimensional image samples according to medical requirements.
In step 3: deep learning changes the computation pipeline of the image-processing algorithm and greatly improves the efficiency and precision of image detection and segmentation; since teeth are objects with very distinctive contour features, tooth-position naming can be achieved rapidly through the deep learning network.
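The FDI scheme used for tooth naming gives each tooth a two-digit code: the first digit is the quadrant (1-4 for the permanent dentition, 5-8 for the deciduous dentition, clockwise from the patient's upper right), the second is the position counted from the midline. A small helper (the function name is our own, not from the patent):

```python
def fdi_code(quadrant, position, deciduous=False):
    """Return the FDI two-digit tooth designation, e.g. quadrant 1, position 3 -> '13'
    (the permanent upper-right canine)."""
    if not 1 <= quadrant <= 4:
        raise ValueError("quadrant must be 1..4")
    limit = 5 if deciduous else 8  # a deciduous quadrant has only 5 teeth
    if not 1 <= position <= limit:
        raise ValueError("position out of range for this dentition")
    first = quadrant + 4 if deciduous else quadrant  # deciduous quadrants are numbered 5..8
    return f"{first}{position}"
```

For example, `fdi_code(3, 6)` gives "36", the permanent lower-left first molar.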
Step 4, scanning the inside of the oral cavity again, and carrying out normalization processing and feature extraction on the oral cavity image obtained by the scanning;
Specifically, this step 4 includes:
Step 41: scanning the inside of the oral cavity to acquire a plurality of images of the inside of the oral cavity;
Step 42: normalizing each image obtained in step 41, unifying the tooth size and tooth axial direction across the images;
Step 43: based on the image network models of the key teeth, non-key teeth and feature objects obtained in step 35, identifying and detecting the key teeth and feature objects in the images processed in step 42, filtering out the identified feature objects, registering the key teeth and non-key teeth, and obtaining the contours and numbers of the key teeth and non-key teeth respectively;
Step 44: filtering each image processed in step 43 based on any one of the GVF-Snake, Gabor and DeepSnake algorithms, generating an update stage image for each.
In step 4, interference information that reduces detection, recognition and segmentation accuracy, such as treatment consumables and oral-hygiene conditions that may be present in the patient's mouth in practice, is handled by detecting the contour edges of the teeth in segments, for example by connecting the lip edge boundary, the tooth boundaries, the gum boundary and the maxillofacial boundary. At the same time, to make the edges smoother and sharper, the original input image is convolved with a Gabor filter bank to obtain boundary maps in different directions together with a weight for each boundary direction, and the optimal value is computed by integrating the combined weights of the boundaries and directions.
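The Gabor filter bank mentioned above is a set of oriented kernels convolved with the image to score boundary strength per direction. A minimal kernel-bank construction (the kernel size and parameter values are illustrative only, not taken from the patent):

```python
import math

def gabor_kernel(size, theta, wavelength=4.0, sigma=2.0, gamma=0.5):
    """Real-valued Gabor kernel oriented at angle theta (radians); size must be odd."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr) / (2.0 * sigma * sigma))
            row.append(envelope * math.cos(2.0 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

# Four orientations, one kernel each, as a bank for scoring boundary directions.
bank = [gabor_kernel(7, k * math.pi / 4.0) for k in range(4)]
```

Convolving the image with each kernel yields one boundary map per direction; the per-direction responses can then be weighted and combined, as the paragraph above describes.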
Step 5, constructing a reconstructed three-dimensional model in the oral cavity based on the oral cavity image obtained in the step 4 and the three-dimensional tooth parameterized model database obtained in the step 2;
Specifically, step 5 includes:
Step 51: sorting the two-dimensional images obtained in step 11, and, taking each in turn as the initial two-dimensional image according to the sorting, executing steps 52 to 510;
Step 52: randomly selecting one tooth on the initial two-dimensional image as the target tooth;
Step 53: based on the initial three-dimensional model, the initial two-dimensional image selected in step 51 and the target tooth obtained in step 52, obtaining an initial camera position through a maximum likelihood function or an ICP contour matching algorithm; sorting the update stage images obtained in step 44, and, selecting each update stage image in turn according to the sorting, executing steps 54 to 59;
Step 54: selecting the tooth corresponding to the target tooth on the update stage image;
Step 55: taking the initial camera position obtained in step 53 as the reference camera position, and obtaining a reference three-dimensional model from the reference camera position and the update stage image selected in step 54, based on a maximum likelihood function or an ICP contour matching algorithm;
Step 56: taking a screenshot of the reference three-dimensional model obtained in step 55 to obtain a reference two-dimensional image;
Step 57: extracting features from the reference two-dimensional image obtained in step 56 to obtain the target tooth reference contour features;
Step 58: matching the target tooth obtained in step 52 against the target tooth reference contour obtained in step 57, and obtaining the best-matching transformation matrix based on a maximum likelihood function or an ICP contour matching algorithm;
Step 59: acquiring, based on the best-matching transformation matrix obtained in step 58, the matching degree between each tooth other than the target tooth on the reference two-dimensional image and the corresponding tooth on the initial three-dimensional model;
if the average matching degree of the three best-matching teeth reaches the matching threshold, replacing the current matching threshold with that average, adjusting the reference camera position, and jumping back to step 54;
if the average matching degree of the three best-matching teeth is below the matching threshold, proceeding to step 510;
Step 510: obtaining the matching degree of each tooth on each update stage image with the corresponding tooth on the initial three-dimensional model under the different reference camera positions; determining the stationary teeth and the moved teeth in the initial two-dimensional image, and obtaining model transformation data for the moved teeth;
Step 511: importing the matching degrees of each tooth on each initial image with the corresponding tooth on the initial three-dimensional model under the different reference camera positions into a simulated annealing algorithm, and taking the three-dimensional model corresponding to the scheme with the highest mean matching degree over all teeth as the reconstructed three-dimensional model.
In step 5: the tooth image obtained by the first scan is taken as the initial image, a reference camera position is obtained through a maximum likelihood function or an ICP contour matching algorithm, a reference two-dimensional image is then derived backward from that reference camera position in the same way, and the matching degree of each corresponding tooth on the initial three-dimensional model and the reference two-dimensional image is calculated based on the best-match transformation matrix. If the average matching degree of the three teeth reaches the matching threshold, a reference coordinate system is established from three teeth of different jaws, the initial camera position is switched, and the above steps are repeated; if it does not reach the threshold, the method switches to the next image. By continuously adjusting the viewing angle, a double iteration over the different stages is obtained, each iteration covering both the viewing angle and the tooth positions; the tooth matching degrees of all initial images under all initial camera angles are traversed, and the optimal movement amount of each tooth is calculated, thereby determining the optimal reconstructed three-dimensional model.
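As a rough illustration of this double iteration (a sketch only; the function names, the candidate representation, and the cooling schedule are assumptions, not taken from the patent), the threshold update of step 59 and the simulated-annealing selection of step 511 could look like:

```python
import math
import random

def mean_top3(scores):
    """Mean matching degree of the three best-matching teeth (step 59)."""
    return sum(sorted(scores, reverse=True)[:3]) / 3.0

def select_reconstruction(candidates, initial_threshold=0.5, seed=0):
    """candidates: list of (camera_position, per_tooth_scores, model) tuples,
    one per (reference camera position, tooth-position scheme) pair.
    Returns the model whose mean per-tooth matching degree is highest,
    mirroring the selection of step 511."""
    random.seed(seed)
    threshold = initial_threshold
    current_mean = -math.inf
    best_model, best_mean = None, -math.inf
    temperature = 1.0
    for _camera, scores, model in candidates:
        # step 59: if the three best teeth match well enough, raise the bar
        top3 = mean_top3(scores)
        if top3 >= threshold:
            threshold = top3
        mean_score = sum(scores) / len(scores)
        # annealing acceptance: always take improvements, and sometimes take
        # a worse scheme so the search can escape local optima
        if mean_score > current_mean or \
           random.random() < math.exp((mean_score - current_mean) / temperature):
            current_mean = mean_score
        # track the best scheme seen so far (step 511 picks the highest mean)
        if mean_score > best_mean:
            best_model, best_mean = model, mean_score
        temperature *= 0.9  # cooling schedule (assumed)
    return best_model, best_mean
```

In this sketch the "candidates" stand in for the traversal of all initial images under all initial camera angles described above; the patent does not specify the annealing parameters.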
Step 6: registering and measuring the initial three-dimensional model obtained in step 1 against the reconstructed three-dimensional model obtained in step 5 to obtain tooth movement data.
Specifically, step 6 includes:
step 61: taking the teeth that fully coincide between the initial three-dimensional model and the reconstructed three-dimensional model as stationary teeth, and using the stationary teeth as the reference coordinates for a rigid transformation of the initial three-dimensional model and the reconstructed three-dimensional model;
step 62: registering the moved teeth based on ICP registration to obtain the tooth movement data.
In this step: by comparing the initial three-dimensional model with the reconstructed three-dimensional model, movement data of the moving teeth are obtained, helping the doctor accurately grasp the movement distance and movement direction of each tooth and formulate the corresponding treatment measures. It should be noted that step 6 relies on techniques that are well established in the art.
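A minimal numerical sketch of step 6, assuming each tooth is available as a set of surface points (the Kabsch solution for the rigid transform and the centroid-displacement measure are illustrative choices; the patent itself only specifies a rigid transformation followed by ICP registration):

```python
import numpy as np

def kabsch(P, Q):
    """Best rigid transform (R, t) mapping point set P onto Q (both N x 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    t = cQ - R @ cP
    return R, t

def tooth_movement(initial, reconstructed, stationary_ids):
    """initial/reconstructed: dict tooth_id -> (N x 3) surface points.
    Registers the models on the stationary teeth (step 61), then reports the
    centroid displacement of every tooth in the aligned frame (step 62)."""
    P = np.vstack([initial[i] for i in stationary_ids])
    Q = np.vstack([reconstructed[i] for i in stationary_ids])
    R, t = kabsch(P, Q)
    movement = {}
    for tooth, pts in initial.items():
        aligned = pts @ R.T + t
        movement[tooth] = float(np.linalg.norm(
            reconstructed[tooth].mean(axis=0) - aligned.mean(axis=0)))
    return movement
```

In practice step 62 would refine each moved tooth with a full ICP loop; the centroid displacement above only gives the coarse movement distance.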
The foregoing describes specific embodiments of the present application. It should be understood that the application is not limited to the particular embodiments described above; those skilled in the art may make various changes or modifications within the scope of the claims without affecting the essence of the application. The embodiments of the application, and the features within the embodiments, may be combined with one another arbitrarily provided there is no conflict.

Claims (2)

1. A method of monitoring dental treatment comprising the steps of:
Step 1, scanning the inside of an oral cavity and constructing an initial three-dimensional model of the inside of the oral cavity;
The step 1 comprises the following steps:
Step 11, scanning the inside of an oral cavity to obtain a plurality of two-dimensional images of the inside of the oral cavity;
step 12, constructing the initial three-dimensional model of the inside of the oral cavity based on any one of: an alginate impression material, an intraoral laser scanner, or multi-angle shooting with a TOF camera;
step 2, establishing a three-dimensional tooth parameterized model database based on solid three-dimensional models;
The step 2 comprises the following steps:
step 21: preparing a solid three-dimensional model of a plurality of dentitions;
step 22: sorting the solid three-dimensional models obtained in step 21, taking each solid three-dimensional model in turn as the current solid three-dimensional model according to the sorting, and executing steps 23 to 25;
Step 23: laser-scanning the current solid three-dimensional model, and marking characteristic information on each crown surface of the current solid three-dimensional model; the characteristic information comprises characteristic points and characteristic lines;
step 24: separating a plurality of crown surface models from the solid three-dimensional model of step 23, and marking edge features on each of the crown surface models;
step 25: spatially dividing the solid three-dimensional model based on the characteristic information and the edge features;
step 26: training on the characteristic information and the edge features based on a PointNet classification model to obtain template networks for incisors, cuspids, bicuspids, and molars respectively;
Step 27: performing feature and reference recognition on the initial three-dimensional model based on the template networks, and acquiring the feature information of each crown surface model;
Step 28: based on the feature information of each crown surface model obtained in step 27, obtaining the matching tooth average shape S_TA, tooth average posture P_TA, tooth space division B_ST, tooth subspace posture P_sta, tooth coordinate transformation T_T, rigid transformation matrix T_J of the dentition, and axial scalings S_J of the dentition;
Step 29: establishing the three-dimensional tooth parameterized model database based on a formula in which T represents the tooth model corresponding to the crown model;
Step 3, establishing an intra-oral image deep learning network based on the two-dimensional images; the step 3 comprises the following steps:
Step 31: obtaining a plurality of sets of two-dimensional tooth image samples, the two-dimensional tooth image samples comprising: deciduous tooth image samples, permanent tooth image samples, multi-tooth image samples, and impacted tooth image samples;
step 32: marking an ROI region in the two-dimensional tooth image sample;
Step 33: contour-marking the characteristic objects in the two-dimensional tooth image sample, the characteristic objects comprising gums, lips, tongue, and instruments;
Step 34: contour-marking the key teeth in the two-dimensional tooth image sample;
Step 35: training the key teeth, non-key teeth, and characteristic objects in the two-dimensional tooth image sample based on any one of the BlendMask, Mask RCNN, PolarMask, and SOLO algorithms, and establishing image network models for the key teeth, the non-key teeth, and the characteristic objects respectively;
step 36: naming each tooth in the two-dimensional tooth image sample based on the FDI numbering system, and naming the characteristic objects in the two-dimensional tooth image sample based on medical requirements;
Step 4, scanning the inside of the oral cavity again, and performing normalization processing and feature extraction on the oral cavity images obtained by the scanning; the step 4 comprises the following steps:
step 41: scanning the inside of the oral cavity to acquire a plurality of images of the inside of the oral cavity;
step 42: normalizing each image obtained in step 41, unifying the tooth size and tooth axial direction across the images;
Step 43: based on the image network models of the key teeth, non-key teeth, and characteristic objects obtained in step 35, identifying and detecting the key teeth and the characteristic objects in the images processed in step 42, filtering the identified characteristic objects out of the images, registering the key teeth and the non-key teeth, and obtaining the contours and numbers of the key teeth and non-key teeth respectively;
step 44: filtering each image processed in step 43 based on any one of the GVF-Snake, Gabor, and DeepSnake algorithms to generate the respective update-stage images;
Step 5, constructing a reconstructed three-dimensional model of the oral cavity based on the oral cavity images obtained in step 4 and the three-dimensional tooth parameterized model database obtained in step 2; the step 5 comprises the following steps:
Step 51: sorting the two-dimensional images obtained in step 11, taking each in turn as the initial two-dimensional image according to the sorting, and executing steps 52 to 510;
Step 52: randomly selecting one tooth on the initial two-dimensional image as the target tooth;
Step 53: obtaining an initial camera position through a maximum likelihood function or an ICP contour matching algorithm based on the initial three-dimensional model, the initial two-dimensional image obtained in step 51, and the target tooth obtained in step 52; sorting the update-stage images obtained in step 44, selecting each update-stage image in turn according to the sorting, and executing steps 54 to 59;
Step 54: selecting the tooth corresponding to the target tooth on the update-stage image;
Step 55: taking the initial camera position obtained in step 53 as the reference camera position, and obtaining a reference three-dimensional model based on the reference camera position and the update-stage image selected in step 54, through a maximum likelihood function or an ICP contour matching algorithm;
Step 56: capturing a screenshot of the reference three-dimensional model obtained in step 55 to obtain a reference two-dimensional image;
Step 57: extracting features from the reference two-dimensional image obtained in step 56 to obtain the reference contour features of the target tooth;
Step 58: matching the target tooth obtained in step 52 against the target tooth reference contour obtained in step 57, and obtaining a best-match transformation matrix based on a maximum likelihood function or an ICP contour matching algorithm;
Step 59: acquiring, based on the best-match transformation matrix obtained in step 58, the matching degree between every tooth other than the target tooth on the reference two-dimensional image and the corresponding tooth on the initial three-dimensional model;
if the average matching degree of the three best-matching teeth reaches the matching threshold, replacing the current matching threshold with that average, adjusting the reference camera position, and jumping to step 54;
if the average matching degree of the three best-matching teeth is below the matching threshold, jumping to step 510;
Step 510: obtaining, for each reference camera position, the matching degree between each tooth on each update-stage image and the corresponding tooth on the initial three-dimensional model; determining the stationary teeth and the moving teeth in the initial two-dimensional image, and obtaining model transformation data of the moving teeth;
Step 511: feeding the matching degrees of each tooth on each update-stage image with the corresponding tooth on the initial three-dimensional model, under the different reference camera positions, into a simulated annealing algorithm, and taking the three-dimensional model corresponding to the scheme with the highest mean matching degree as the reconstructed three-dimensional model;
Step 6, registering and measuring the initial three-dimensional model obtained in step 1 against the reconstructed three-dimensional model obtained in step 5 to obtain tooth movement data; the step 6 comprises the following steps:
Step 61: taking the teeth that fully coincide between the initial three-dimensional model and the reconstructed three-dimensional model as stationary teeth, and using the stationary teeth as the reference coordinates for a rigid transformation of the initial three-dimensional model and the reconstructed three-dimensional model;
step 62: registering the moved teeth based on ICP registration to obtain the tooth movement data.
2. The method of dental treatment monitoring as in claim 1, wherein said step 25 comprises the steps of:
step 251: detecting the curvature of the crown surface and acquiring the ridge lines of the crown surface;
Step 252: discretizing the ridge lines obtained in step 251 and the characteristic lines, and setting a one-to-one link relation between the ridge lines and the characteristic lines;
step 253: linking all characteristic lines and ridge lines through spline fitting to form arithmetic lines;
step 254: forming the tooth long axis based on the top center point of the crown surface and the centroid of the parting line;
step 255: setting a crown-to-root ratio of 1:2, and extending the root from the edge of the crown surface along the direction of the tooth long axis;
step 256: forming arithmetic lines in the tooth root direction based on the tangent lines;
Step 257: triangulating the arithmetic lines obtained in step 253 to form the data information of each tooth.
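The geometric construction in steps 254 and 255 of claim 2 (the tooth long axis runs from the top center of the crown surface through the centroid of the parting line, and the root extends along it at a 1:2 crown-to-root ratio) can be sketched as follows; the function and argument names are illustrative, not from the patent:

```python
import numpy as np

def extend_root(crown_top, parting_line, crown_edge, root_ratio=2.0):
    """crown_top: (3,) top center point of the crown surface (step 254).
    parting_line: (N x 3) points of the parting line; its centroid fixes
    the tooth long axis together with crown_top.
    crown_edge: (M x 3) crown-surface edge points to extrude (step 255).
    Returns the edge points shifted root-ward by root_ratio x crown height,
    giving a crown:root ratio of 1:root_ratio."""
    centroid = parting_line.mean(axis=0)
    axis = centroid - crown_top          # long-axis direction, crown to root
    crown_height = np.linalg.norm(axis)
    axis_unit = axis / crown_height
    return crown_edge + axis_unit * (root_ratio * crown_height)
```

The extruded points would then feed the triangulation of step 257 to close the per-tooth mesh.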
CN202011588692.8A 2020-12-29 2020-12-29 Tooth treatment monitoring method Active CN113052902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011588692.8A CN113052902B (en) 2020-12-29 2020-12-29 Tooth treatment monitoring method

Publications (2)

Publication Number Publication Date
CN113052902A CN113052902A (en) 2021-06-29
CN113052902B (en) 2024-05-14

Family

ID=76508269

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463407B (en) * 2022-01-19 2023-02-17 西安交通大学口腔医院 System for realizing oral cavity shaping simulation display by combining 3D image with feature fusion technology
CN114429463B (en) * 2022-01-25 2023-12-22 首都医科大学附属北京同仁医院 Periodontal soft tissue treatment effect evaluation method and device
WO2024082284A1 (en) * 2022-10-21 2024-04-25 深圳先进技术研究院 Orthodontic automatic tooth arrangement method and system based on mesh feature deep learning
CN115619773B (en) * 2022-11-21 2023-03-21 山东大学 Three-dimensional tooth multi-mode data registration method and system
CN117058309B (en) * 2023-07-12 2024-04-16 广州医科大学附属肿瘤医院 Image generation method and system based on oral imaging

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2016003257A2 (en) * 2014-07-04 2016-01-07 주식회사 인스바이오 Tooth model generation method for dental procedure simulation
CN110811653A (en) * 2019-10-21 2020-02-21 东南大学 Method for measuring tooth three-dimensional movement in orthodontic process
CN111931843A (en) * 2020-08-10 2020-11-13 深圳爱舒笑科技有限公司 Method for monitoring tooth position based on image processing

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
FR3027504B1 (en) * 2014-10-27 2022-04-01 H 43 METHOD FOR CONTROLLING THE POSITIONING OF TEETH
US10504386B2 (en) * 2015-01-27 2019-12-10 Align Technology, Inc. Training method and system for oral-cavity-imaging-and-modeling equipment
US11013578B2 (en) * 2017-10-11 2021-05-25 Dental Monitoring Method to correct a dental digital model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant