CN117137660A - Method for determining occlusion relation between upper dentition and lower dentition of digitized intraoral scanning model


Info

Publication number
CN117137660A
CN117137660A (application CN202311087242.4A)
Authority
CN
China
Prior art keywords
tooth, dentition, contour, intraoral, determining
Prior art date
Legal status (assumed, not a legal conclusion)
Pending
Application number
CN202311087242.4A
Other languages
Chinese (zh)
Inventor
陈晓军
陈怡洲
叶傲冬
Current Assignee
Shanghai Jiaotong University
Shanghai Zhengya Dental Technology Co Ltd
Original Assignee
Shanghai Jiaotong University
Shanghai Zhengya Dental Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University, Shanghai Zhengya Dental Technology Co Ltd filed Critical Shanghai Jiaotong University


Classifications

    • A61C 9/0046: Means or methods for taking digitized impressions; data acquisition means or methods (A HUMAN NECESSITIES; A61 MEDICAL OR VETERINARY SCIENCE; A61C DENTISTRY)
    • G06N 3/045: Combinations of networks (G PHYSICS; G06 COMPUTING; G06N computing arrangements based on specific computational models; G06N 3/02 neural networks)
    • G06N 3/0455: Auto-encoder networks; encoder-decoder networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/096: Transfer learning
    • G06V 10/26: Segmentation of patterns in the image field; detection of occlusion (G06V image or video recognition or understanding; G06V 10/20 image preprocessing)
    • G06V 10/34: Smoothing or thinning of the pattern; morphological operations; skeletonisation
    • G06V 10/82: Recognition using pattern recognition or machine learning with neural networks


Abstract

The invention relates to a method for determining the occlusion relationship between the upper and lower dentitions of a digitized intraoral scanning model, comprising the following steps: obtain intraoral flaring (lip-retracted) photographs from different angles and extract tooth contour segmentation maps with a tooth semantic segmentation model; acquire the upper- and lower-dentition triangular patch files of the digitized intraoral scanning model; project the digitized intraoral scanning model according to the parameters to be optimized and the intraoral flaring photographs at the different angles, and extract tooth contour projection maps; match corresponding points between the tooth contour segmentation maps and the tooth contour projection maps, and iteratively optimize the camera parameters and the relative position and orientation parameters of the upper and lower dentitions against a loss function until convergence; and determine the occlusion relationship of the upper and lower dentitions from the optimal parameters, transforming the upper- and lower-dentition triangular patches, originally in different coordinate systems, into the same coordinate system. Compared with the prior art, the method can infer the relative positional relationship between the upper and lower dentitions of a patient's digitized intraoral scanning model from intraoral flaring photographs taken with an uncalibrated camera.

Description

Method for determining occlusion relation between upper dentition and lower dentition of digitized intraoral scanning model
Technical Field
The invention relates to the technical field of digital medicine, and in particular to a method for determining the occlusion relationship between the upper and lower dentitions of a digitized intraoral scanning model.
Background
Orthodontics focuses on correcting abnormalities of the teeth and the maxillofacial region, covering the diagnosis, prevention, and treatment of various dental and maxillofacial structural anomalies in order to improve a patient's occlusal function, dental aesthetics, and oral health; it mainly applies to cases with abnormal teeth but a normal jawbone. Common occlusion relationships between the upper and lower dentitions include normal occlusion, deep overjet, open bite, crossbite, and so on. During orthodontic treatment, dentists usually assess the occlusion relationship between a patient's upper and lower dentitions through clinical examination, dental casts or oral impressions, X-ray examination, and similar means, and formulate a treatment plan according to the specific situation. An intraoral scanner is an advanced digital technology for acquiring three-dimensional model data of a patient's oral cavity. It provides detailed tooth and maxillofacial structure information, but the scanner's model data alone often cannot directly establish the complete occlusion relationship. Intraoral flaring photographs show the detailed structure of the teeth and jaw, including crowding, malposition, missing teeth, and so on. With these photographs a dentist can evaluate the patient's oral health, tooth positions, and occlusion relationship, and decide whether orthodontic treatment is required and which treatment method is appropriate. It is therefore highly desirable to provide a method that combines the dental occlusion information in intraoral flaring photographs with the digitized upper- and lower-dentition models obtained by an intraoral scanner to determine the occlusion relationship of the digitized models in three-dimensional space, facilitating diagnosis, analysis, and subsequent treatment by dentists.
Disclosure of Invention
The object of the invention is to provide a method for determining the occlusion relationship between the upper and lower dentitions of a digitized intraoral scanning model, which determines the occlusion relationship of the digitized upper- and lower-dentition three-dimensional models obtained by an intraoral scanner from the tooth occlusion information in three intraoral flaring photographs taken at different angles, assisting the orthodontist in judging the occlusal condition of a patient's teeth.
The aim of the invention can be achieved by the following technical scheme:
a method for determining the bite relationship of upper and lower dentitions of a digitized intraoral scan model comprising the steps of:
step 1) obtain three intraoral flaring photographs at different angles: frontal, left, and right;
step 2) carrying out tooth semantic segmentation on intraoral flaring photos of different angles by adopting a tooth semantic segmentation model based on deep learning, and extracting a tooth contour segmentation map with tooth numbering information;
step 3) acquiring upper and lower dentition triangular patch files of a digital intraoral scanning model with tooth number information under two different coordinate systems;
step 4) initializing camera parameters, upper and lower dentition relative positions and orientation parameters;
step 5) based on the current camera parameters, the relative position and orientation parameters of the upper and lower dentitions, and the standard pinhole camera model, project the upper- and lower-dentition models obtained by digitized intraoral scanning according to the intraoral flaring photographs at the different angles, and extract visible tooth contour projection maps with tooth number information;
step 6), carrying out corresponding point relation matching according to the tooth profile segmentation map and the tooth profile projection map, defining a loss function, iteratively optimizing camera parameters, upper and lower dentition relative positions and orientation parameters by calculating a loss function value of a matching result, and repeating the steps 5) -6) until convergence to obtain an optimal solution;
and step 7) determine the occlusion relationship between the upper and lower dentitions according to the computed optimal relative position and orientation parameters, transform the upper- and lower-dentition triangular patches of the digitized intraoral scanning model from their different coordinate systems into the same coordinate system according to those parameters, and generate the corresponding file.
Said step 2) comprises the steps of:
step 2-1) construct a tooth semantic segmentation model based on the U-Net3+ encoder-decoder structure, a multi-scale atrous spatial pyramid pooling module, and a dual-branch multi-task learning structure; with an intraoral flaring photograph as input, it outputs a tooth semantic segmentation map;
step 2-2) using a post-processing algorithm to adjust the output tooth semantic segmentation map and numbering the teeth;
step 2-3) determining the extraction sequence of the visible tooth contours based on the relative area relation of the upper and lower dentition tooth areas, classifying the extracted tooth contours according to the tooth numbers of the extracted tooth contours, and obtaining a tooth contour segmentation map with tooth number information.
The network structure of the tooth semantic segmentation model is as follows: the output of a standard U-Net3+ image encoder is fed simultaneously into a standard U-Net3+ tooth semantic segmentation decoder and a standard U-Net3+ binary tooth contour segmentation decoder; the outputs of the two decoders are stacked and then fed into a region-contour fusion module based on a multi-scale atrous spatial pyramid pooling structure, where the fusion module consists of three convolution layers in series, followed by a multi-scale atrous convolution module, followed by another three convolution layers in series.
The post-processing algorithm performs the following operations on the tooth semantic segmentation map: determine the connected regions of the map, unify the tooth numbers within each connected region, extract the largest connected region for each tooth number, adjust the tooth numbers of the connected regions in a specific order to guarantee that each tooth number corresponds to a unique connected region, and smooth the result with morphological operations.
The extraction order of the visible tooth contours is determined from the relative areas of the upper- and lower-dentition tooth regions, specifically:
if the segmented area of the upper-dentition tooth region is larger than that of the lower dentition, the visible contours of the upper-dentition teeth are extracted first; otherwise the lower-dentition tooth contours are extracted first. Contours are extracted in order from the middle outward to the left and right sides, and occluded tooth contours are ignored.
The camera parameters include intrinsic parameters (the focal length of the camera, the principal point coordinates, and the physical size of a pixel along the horizontal and vertical axes) and extrinsic parameters (the position and orientation of the camera in the world coordinate system).
The corresponding-point matching between the tooth contour segmentation map and the tooth contour projection map is computed by a formula of the form:

    j*(i) = argmax_j exp( -||c_i^τ - ĉ_j^τ||₂² / σ² ) · ⟨n_i^τ, n̂_j^τ⟩

where c_i^τ denotes the pixel coordinates of the i-th point of the contour of tooth position τ extracted from the photograph in step 2); ĉ_j^τ denotes the pixel coordinates of the j-th point of the corresponding visible contour of tooth position τ projected in step 5); n_i^τ and n̂_j^τ denote the in-plane normal vectors of these points in the pixel coordinate system; ||·||₂² denotes the squared 2-norm of a vector; and σ is an adjustable hyperparameter.
The loss function comprises a contour corresponding-point distance matching loss and a contour corresponding-point normal-vector matching loss. The total loss is the sum of the loss functions of the photographs at the different angles; for each photograph, the loss function L to be optimized is expressed as:

    L = L_p + λ_n · L_n

where L_p is the contour corresponding-point distance matching loss, L_n is the contour corresponding-point normal-vector matching loss, and λ_n is a weighting hyperparameter.
the distance matching loss function L of the corresponding points of the outline p Expressed as:
wherein N is the total number of tooth contour points obtained by segmentation in the photo, T is the number of tooth categories obtained by segmentation in the photo, tau is the tooth number, and N τ Representing the number of contour points of the tau-th tooth, c i τ Representing step 2) the ith of the tooth profile of the extracted tooth position tau from the photographCoordinates of the individual points in the pixel coordinate system, representing the coordinates of the ith point of the visible contour corresponding to the tooth position tau projected in step 5) in the pixel coordinate system, < >>
The contour corresponding-point normal-vector matching loss L_n is expressed as:

    L_n = (1/N) Σ_{τ=1}^{T} Σ_{i=1}^{N_τ} ( 1 - ⟨n_i^τ, n̂_i^τ⟩ )

where N is the total number of tooth contour points obtained by segmentation in the photograph, T is the number of tooth categories obtained by segmentation, τ is the tooth number, N_τ is the number of contour points of the τ-th tooth, n_i^τ denotes the in-plane normal vector of the i-th point of the contour of tooth position τ extracted in step 2), n̂_i^τ denotes the normal vector of the corresponding i-th point of the visible contour of tooth position τ projected in step 5), and ⟨·,·⟩ denotes the vector inner product.
Compared with the prior art, the invention has the following beneficial effects:
(1) By applying a deep-learning-based tooth semantic segmentation model to intraoral flaring photographs at different angles, the invention accurately extracts tooth contour segmentation maps with tooth number information.
(2) The invention projects the upper and lower dentitions of the digitized intraoral scanning model according to the intraoral flaring photographs at different angles, extracts visible tooth contour projection maps with tooth number information, and iteratively optimizes the camera parameters and the relative position parameters of the upper and lower dentitions against a loss function defined between the contours until convergence to an optimal solution. Using the optimal relative position parameters, the upper- and lower-dentition triangular patches, originally in different coordinate systems of the digitized model, are transformed into the same coordinate system, so that the tooth occlusion relationship can be determined intuitively from the digitized model, assisting the dentist's decision making.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
fig. 2 is a schematic diagram of a network structure of the tooth semantic segmentation model according to the present invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific embodiments. The embodiment is implemented on the premise of the technical scheme of the invention, and a detailed implementation and a specific operation process are given, but the scope of protection of the invention is not limited to the following examples.
This embodiment provides a method for determining the occlusion relationship between the upper and lower dentitions of a digitized intraoral scanning model. Its inputs are the upper- and lower-dentition triangular patch files of the patient's digitized intraoral scanning model with tooth number information, obtained by an intraoral scanner in two different coordinate systems, together with three intraoral flaring photographs taken at different angles (frontal, left, and right). Its outputs are the upper- and lower-dentition triangular patch files of the digitized intraoral scanning model in the same coordinate system and the relative positional relationship between the upper and lower dentitions, i.e., the tooth occlusion relationship. Specifically, as shown in fig. 1, the method comprises the following steps:
step 1) obtaining three photos of the intraoral flaring at different angles on the front side, the left side and the right side.
And 2) carrying out tooth semantic segmentation on the three input intraoral flaring photos with different angles by adopting a tooth semantic segmentation model based on deep learning, and extracting a tooth contour segmentation map with tooth number information.
Step 2-1) construct a tooth semantic segmentation model based on the U-Net3+ encoder-decoder structure, a multi-scale atrous spatial pyramid pooling module, and a dual-branch multi-task learning structure; taking the three intraoral flaring photographs at different angles as input, it outputs the corresponding three tooth semantic segmentation maps;
as shown in fig. 2, the network structure of the tooth semantic segmentation model adopted in this embodiment is that the output of a standard U-net3+ image encoder is simultaneously input into a standard U-net3+ tooth semantic segmentation decoder and a standard U-net3+ tooth binary contour segmentation decoder, the outputs of the two decoders are stacked and then input into a region-contour fusion module based on a multi-scale cavity space convolution pooling pyramid, wherein the region-contour fusion module is formed by connecting three convolution layers in series with a multi-scale cavity convolution module and then connecting three convolution layers in series, and the output is a tooth semantic segmentation graph.
In this embodiment, the tooth semantic segmentation model is trained on a training set of about 15,000 images, and the model that performs best on the test set is kept; its structure and parameters are saved to local files for direct reuse. The input image size of the model is (256, 256, 3) and the output tooth semantic segmentation map has size (256, 256, 33); the segmentation map uses one-hot encoding over 33 categories, comprising 1 background category and 32 tooth categories. The convolution kernels in the model are 3x3, a ReLU function is used as the activation of each layer, and the training and evaluation process is implemented with the PyTorch framework.
Step 2-2) respectively adjusting the three output tooth semantic segmentation graphs by using a post-processing algorithm and numbering the teeth;
specifically, the post-processing algorithm performs the following operations on the tooth semantic segmentation map: determining a communication area of the tooth semantic segmentation graph, unifying all tooth numbers in the communication area according to the tooth number with the largest number of pixels in the communication area, extracting the largest communication area of the tooth numbers of different teeth, adjusting the tooth numbers of the communication area according to a specific sequence, guaranteeing the uniqueness of the communication area under the same tooth number, modifying the tooth numbers of the redundant communication area, and smoothing the result by using morphological algorithms such as erosion, expansion and the like. In this embodiment, the post-processing algorithm is implemented based on an OpenCV algorithm library.
Step 2-3) determine the extraction order of the visible tooth contours from the relative areas of the upper- and lower-dentition tooth regions: if the segmented area of the upper-dentition tooth region is larger than that of the lower dentition, the visible contours of the upper-dentition teeth are extracted first; otherwise the lower-dentition tooth contours are extracted first. Contours are extracted in order from the middle outward to the left and right sides, occluded tooth contours are ignored, and the extracted contours are classified by tooth number to obtain tooth contour segmentation maps with tooth number information.
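The ordering rule can be sketched as follows, assuming a universal 1-32 tooth numbering (1-16 upper, 17-32 lower) that may differ from the numbering actually used, with "middle outward" approximated by distance from the arch midline:

```python
def contour_extraction_order(areas_by_tooth):
    """areas_by_tooth: dict tooth_number -> segmented pixel area.
    Returns tooth numbers with the larger-area dentition first,
    each dentition ordered from the midline outwards."""
    upper = {t: a for t, a in areas_by_tooth.items() if t <= 16}
    lower = {t: a for t, a in areas_by_tooth.items() if t > 16}

    def middle_out(teeth, midline):
        # stable sort by distance from the (hypothetical) arch midline
        return sorted(teeth, key=lambda t: abs(t - midline))

    upper_order = middle_out(upper, 8.5)    # teeth 8/9 = upper central incisors
    lower_order = middle_out(lower, 24.5)   # teeth 24/25 = lower central incisors
    if sum(upper.values()) >= sum(lower.values()):
        return upper_order + lower_order
    return lower_order + upper_order
```

Extracting the larger (i.e., less occluded) dentition first means its contours are taken as fully visible, while the opposing dentition's contours are clipped wherever the first dentition covers them.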
Step 3) obtaining upper and lower dentition triangular surface patch files of the digital intraoral scanning model with tooth number information under two different coordinate systems.
Step 4) initializing camera parameters, upper and lower dentition relative positions and orientation parameters.
For the intraoral flaring photographs at the different angles, the intrinsic and extrinsic parameters of the camera are initialized with empirical values: the intrinsic parameters comprise the focal length of the camera, the principal point coordinates, and the physical size of a pixel along the horizontal and vertical axes; the extrinsic parameters comprise the position and orientation of the camera in the world coordinate system. The relative position and orientation of the upper and lower dentitions are initialized so that the three-dimensional model presents a standard normal occlusion relationship: specifically, the initial relative rotation vector of the upper and lower dentitions is [0, 0, 0], and the initial relative position places the upper dentition 7 mm above and 2 mm anterior to the lower dentition, with zero lateral offset.
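A sketch of this initialization, using the pose values stated in the text; the intrinsics are placeholders (the actual empirical values are not given, so the focal length, principal point, camera distance, and axis convention below are assumptions):

```python
import numpy as np

def init_parameters(image_w=256, image_h=256):
    """Empirical initial values: zero relative rotation; upper dentition
    7 mm above and 2 mm anterior to the lower dentition, zero lateral offset.
    Intrinsics are rough placeholders (principal point at the image center)."""
    return {
        # intrinsics (focal length in pixels; cx, cy = principal point)
        "fx": 1000.0, "fy": 1000.0,
        "cx": image_w / 2.0, "cy": image_h / 2.0,
        # per-view extrinsics: axis-angle rotation vector + translation
        "cam_rvec": np.zeros(3),
        "cam_tvec": np.array([0.0, 0.0, 200.0]),
        # relative pose of the upper dentition w.r.t. the lower dentition
        "rel_rvec": np.zeros(3),
        # (lateral, vertical, anterior) offsets in mm; axis order is assumed
        "rel_t": np.array([0.0, 7.0, 2.0]),
    }
```

One such parameter set would be created per photograph for the extrinsics, while the relative dentition pose is shared across all three views.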
Step 5) based on the current camera parameters, the relative position and orientation parameters of the upper and lower dentitions, and the standard pinhole camera model, project the vertices of the preprocessed digitized upper- and lower-dentition models with tooth number information according to the intraoral flaring photographs at the different angles, and extract the visible edge contour points of each tooth to obtain visible tooth contour projection maps with tooth number information.
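The pinhole projection of the dentition vertices can be sketched as below (a generic Rodrigues rotation plus the standard intrinsic mapping, not the authors' exact pipeline; visibility testing and contour tracing are omitted):

```python
import numpy as np

def project_points(vertices, rvec, t, fx, fy, cx, cy):
    """Project 3-D vertices (N, 3) to pixel coordinates with a standard
    pinhole model. rvec is an axis-angle rotation vector (Rodrigues formula),
    t a translation into camera coordinates."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    cam = vertices @ R.T + t                 # world -> camera coordinates
    u = fx * cam[:, 0] / cam[:, 2] + cx      # perspective divide + intrinsics
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```

In the full method, the lower-dentition vertices would first be transformed by the current relative pose before both dentitions are projected through the same camera.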
And 6) carrying out corresponding point relation matching according to the tooth profile segmentation map and the tooth profile projection map, defining a loss function, iteratively optimizing camera parameters, upper and lower dentition relative positions and orientation parameters by calculating a loss function value of a matching result, and repeating the steps 5) -6) until convergence to obtain an optimal solution.
Corresponding-point matching is performed between the contour points of the tooth contour segmentation map and those of the tooth contour projection map, with the matching computed by a formula of the form:

    j*(i) = argmax_j exp( -||c_i^τ - ĉ_j^τ||₂² / σ² ) · ⟨n_i^τ, n̂_j^τ⟩

where c_i^τ denotes the pixel coordinates of the i-th point of the contour of tooth position τ extracted from the photograph in step 2); ĉ_j^τ denotes the pixel coordinates of the j-th point of the corresponding visible contour of tooth position τ projected in step 5); n_i^τ and n̂_j^τ denote the in-plane normal vectors of these points in the pixel coordinate system; ||·||₂² denotes the squared 2-norm of a vector; and σ is an adjustable hyperparameter.
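Since the matching formula is not fully specified here, the following is only one plausible soft-matching rule built from the quantities the text does define (squared point distance, point normals, and the hyperparameter σ); the Gaussian weighting and the clamped normal gate are assumptions:

```python
import numpy as np

def match_contours(seg_pts, seg_normals, proj_pts, proj_normals, sigma=5.0):
    """For each segmented contour point, pick the projected contour point of
    the same tooth that maximizes a Gaussian distance score gated by normal
    agreement. All arrays are (N, 2); normals are unit vectors."""
    matches = []
    for c, n in zip(seg_pts, seg_normals):
        d2 = np.sum((proj_pts - c) ** 2, axis=1)   # squared 2-norm distances
        # distance kernel times (non-negative) normal alignment
        score = np.exp(-d2 / sigma ** 2) * np.maximum(proj_normals @ n, 0.0)
        matches.append(int(np.argmax(score)))
    return matches
```

The normal gate prevents a nearby point on the opposite side of a thin contour (whose normal points the other way) from being chosen over a slightly farther but correctly oriented point.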
In this embodiment, the loss function comprises a contour corresponding-point distance matching loss and a contour corresponding-point normal-vector matching loss. The total loss is the sum of the loss functions of the photographs at the different angles; for each photograph, the loss function L to be optimized is expressed as:

    L = L_p + λ_n · L_n

where L_p is the contour corresponding-point distance matching loss, L_n is the contour corresponding-point normal-vector matching loss, and λ_n is a weighting hyperparameter. In this embodiment, λ_n is set to 0.05.
Specifically, the contour corresponding-point distance matching loss L_p is expressed as:

    L_p = (1/N) Σ_{τ=1}^{T} Σ_{i=1}^{N_τ} ||c_i^τ - ĉ_i^τ||₂²

where N is the total number of tooth contour points obtained by segmentation in the photograph, T is the number of tooth categories obtained by segmentation, τ is the tooth number, N_τ is the number of contour points of the τ-th tooth, c_i^τ denotes the pixel coordinates of the i-th point of the contour of tooth position τ extracted from the photograph in step 2), and ĉ_i^τ denotes the pixel coordinates of the corresponding i-th point of the visible contour of tooth position τ projected in step 5).
The contour corresponding-point normal-vector matching loss L_n is expressed as:

    L_n = (1/N) Σ_{τ=1}^{T} Σ_{i=1}^{N_τ} ( 1 - ⟨n_i^τ, n̂_i^τ⟩ )

where N is the total number of tooth contour points obtained by segmentation in the photograph, T is the number of tooth categories obtained by segmentation, τ is the tooth number, N_τ is the number of contour points of the τ-th tooth, n_i^τ denotes the in-plane normal vector of the i-th point of the contour of tooth position τ extracted in step 2), n̂_i^τ denotes the normal vector of the corresponding i-th point of the visible contour of tooth position τ projected in step 5), and ⟨·,·⟩ denotes the vector inner product.
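With matched point pairs in hand, the two losses and their weighted sum can be sketched as follows (the 1 - ⟨n, n̂⟩ form of the normal term is an assumption consistent with the inner-product notation; λ_n = 0.05 is the value from the text):

```python
import numpy as np

def contour_losses(seg_pts, seg_normals, proj_pts, proj_normals, lambda_n=0.05):
    """seg_pts/proj_pts: (N, 2) matched contour point pairs in pixels;
    seg_normals/proj_normals: (N, 2) matched unit normals.
    L_p: mean squared pixel distance between matched points.
    L_n: mean normal mismatch, 1 - <n, n-hat>.
    Returns (total, L_p, L_n) with total = L_p + lambda_n * L_n."""
    N = len(seg_pts)
    L_p = np.sum((seg_pts - proj_pts) ** 2) / N
    L_n = np.sum(1.0 - np.einsum("ij,ij->i", seg_normals, proj_normals)) / N
    return L_p + lambda_n * L_n, L_p, L_n
```

Summing this per-photograph loss over the three views gives the total objective that the camera and dentition-pose parameters are optimized against.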
Steps 5) and 6) are iterated until the loss function in step 6) converges, i.e., its value becomes stable, yielding the optimized camera parameters and the optimized relative position and orientation parameters of the upper and lower dentitions. Specifically, the camera parameters and the relative pose of the upper and lower dentitions are iterated for 15 rounds; the gradient of the objective function is derived explicitly, and sequential least squares programming (SLSQP) is adopted as the optimization algorithm.
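The iterate-until-convergence loop can be illustrated with a toy explicit-gradient descent on a 2-D translation (the embodiment uses SLSQP over the full camera and pose parameter vector; this stand-in only shows the hand-derived-gradient iteration pattern):

```python
import numpy as np

def optimize_translation(source, target, iters=15, lr=0.4):
    """Minimize L(x) = sum_i ||source_i + x - target_i||^2 over a 2-D shift x
    by gradient descent with an explicitly derived gradient:
    dL/dx = sum_i 2 * (source_i + x - target_i)."""
    x = np.zeros(2)
    for _ in range(iters):
        grad = 2.0 * np.sum(source + x - target, axis=0)  # derived by hand
        x -= lr * grad / len(source)
    return x
```

For this quadratic objective the residual shrinks geometrically, so 15 rounds (the count used in the embodiment) already reach the optimum to machine precision; the real objective is non-convex in the pose parameters, hence the choice of a constrained solver there.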
Step 7) determine the occlusion relationship between the upper and lower dentitions according to the computed optimal relative position and orientation parameters, transform the upper- and lower-dentition triangular patches of the digitized intraoral scanning model from their different coordinate systems into the same coordinate system according to those parameters, and generate the corresponding file.
The specific operation is to transform the lower-dentition triangular patch model into the coordinate system of the upper dentition according to the relative position and orientation obtained in step 6), then merge the triangular patches of the two models into one file and generate an obj-format file as the final output.
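A minimal sketch of this final merge step (generic OBJ writing with 1-based face indices; a real implementation would also carry the tooth-number metadata and normals):

```python
import numpy as np

def merge_to_obj(upper_v, upper_f, lower_v, lower_f, R, t):
    """Transform the lower-dentition vertices (M, 3) by the optimized rotation
    R (3, 3) and translation t (3,), then merge both triangle meshes into one
    OBJ-format string. OBJ face indices are 1-based, so the lower-dentition
    faces are offset by the upper-dentition vertex count."""
    lower_v = lower_v @ R.T + t
    lines = []
    for v in np.vstack([upper_v, lower_v]):
        lines.append(f"v {v[0]} {v[1]} {v[2]}")
    offset = len(upper_v)
    for f in upper_f:
        lines.append(f"f {f[0] + 1} {f[1] + 1} {f[2] + 1}")
    for f in lower_f:
        lines.append(f"f {f[0] + 1 + offset} {f[1] + 1 + offset} {f[2] + 1 + offset}")
    return "\n".join(lines)
```

Writing both dentitions into a single OBJ in the upper dentition's frame is what lets a viewer display the recovered occlusion directly.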
The foregoing describes preferred embodiments of the present invention in detail. It should be understood that a person of ordinary skill in the art can make numerous modifications and variations according to the concept of the invention without inventive effort. Therefore, all technical solutions obtainable by a person skilled in the art through logical analysis, reasoning, or limited experimentation on the basis of the prior art and the inventive concept shall fall within the scope of protection defined by the claims.

Claims (10)

1. A method for determining the occlusion relationship between the upper and lower dentitions of a digitized intraoral scan model, comprising the steps of:
step 1) obtaining three intraoral flaring photos taken from different angles: the front, the left side, and the right side;
step 2) carrying out tooth semantic segmentation on intraoral flaring photos of different angles by adopting a tooth semantic segmentation model based on deep learning, and extracting a tooth contour segmentation map with tooth numbering information;
step 3) acquiring upper and lower dentition triangular patch files of a digital intraoral scanning model with tooth number information under two different coordinate systems;
step 4) initializing camera parameters, upper and lower dentition relative positions and orientation parameters;
step 5) projecting the upper and lower dentition models obtained by digitized intraoral scanning, according to the intraoral flaring photos of different angles, based on the current camera parameters, the relative position and orientation parameters of the upper and lower dentitions, and the standard pinhole camera model, and extracting visible tooth contour projection maps with tooth number information;
step 6) matching corresponding point relationships between the tooth contour segmentation map and the tooth contour projection map, defining a loss function, iteratively optimizing the camera parameters and the relative position and orientation parameters of the upper and lower dentitions by computing the loss value of the matching result, and repeating steps 5)-6) until convergence to obtain the optimal solution;
and 7) determining the occlusion relation between the upper and lower dentitions according to the computed optimal relative position and orientation parameters, transforming the upper and lower dentition triangular patches of the digitized intraoral scan model, which lie in different coordinate systems, into the same coordinate system according to these parameters, and generating a corresponding file.
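Step 5) of the claim relies on the standard pinhole camera model. A minimal sketch of that projection follows; the intrinsic matrix values (focal lengths fx, fy in pixels and principal point cx, cy) and the test points are made-up examples, not parameters from the patent.

```python
import numpy as np

# Minimal pinhole projection as used conceptually in step 5).
# fx = fy = 800, (cx, cy) = (320, 240) are example intrinsics.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_world, R, t):
    """Project Nx3 world points to pixel coordinates with extrinsics (R, t)."""
    cam = points_world @ R.T + t        # world frame -> camera frame
    uvw = cam @ K.T                     # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

pts = np.array([[0.0, 0.0, 5.0], [0.1, -0.1, 5.0]])
pix = project(pts, np.eye(3), np.zeros(3))
print(pix)  # a point on the optical axis lands at the principal point
```

Projecting every visible vertex of the dentition meshes this way, then tracing the silhouette, yields the tooth contour projection map that step 6) matches against the photo segmentation.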
2. A method for determining the bite relationship between the upper and lower dentitions of a digitized intraoral scan model as claimed in claim 1 wherein said step 2) comprises the steps of:
step 2-1) constructing a tooth semantic segmentation model based on a deep-learning U-Net3+ encoder-decoder structure, a multi-scale atrous spatial pyramid pooling module, and a dual-branch multi-task learning structure, which takes an intraoral flaring photo as input and outputs a tooth semantic segmentation map;
step 2-2) using a post-processing algorithm to adjust the output tooth semantic segmentation map and numbering the teeth;
step 2-3) determining the extraction sequence of the visible tooth contours based on the relative area relation of the upper and lower dentition tooth areas, classifying the extracted tooth contours according to the tooth numbers of the extracted tooth contours, and obtaining a tooth contour segmentation map with tooth number information.
3. The method for determining the occlusion relation between upper and lower dentitions of a digitized intraoral scan model according to claim 2, wherein the network structure of the tooth semantic segmentation model is as follows: the output of a standard U-Net3+ image encoder is fed simultaneously into a standard U-Net3+ tooth semantic segmentation decoder and a standard U-Net3+ tooth binary contour segmentation decoder; the outputs of the two decoders are stacked and then fed into a region-contour fusion module based on multi-scale atrous spatial pyramid pooling, which consists of three convolution layers in series, followed by a multi-scale atrous convolution module, followed by another three convolution layers in series.
4. A method for determining the occlusion relationship between the upper and lower dentitions of a digitized intraoral scan model according to claim 2, wherein said post-processing algorithm performs the following operations on the tooth semantic segmentation map: determining the connected regions of the map, unifying the tooth numbers within each connected region, extracting the largest connected region for each tooth number, adjusting the tooth numbers of the connected regions in a specific order to ensure that each tooth number corresponds to a unique connected region, and smoothing the result with a morphological algorithm.
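The core of this post-processing, keeping only the largest connected region per tooth number so each label is unique, can be sketched with SciPy's connected-component labelling. This is an illustrative simplification on a tiny made-up label map, not the patented algorithm (the ordering and morphological smoothing steps are omitted).

```python
import numpy as np
from scipy import ndimage

# Toy tooth semantic segmentation map: labels 1 and 2 are tooth numbers,
# 0 is background; one spurious small region also carries label 1.
seg = np.array([[1, 1, 0, 2],
                [1, 0, 0, 2],
                [0, 0, 1, 0],
                [0, 0, 0, 0]])

cleaned = np.zeros_like(seg)
for tooth in np.unique(seg):
    if tooth == 0:                      # skip background
        continue
    labels, n = ndimage.label(seg == tooth)   # connected regions of this tooth
    sizes = ndimage.sum(seg == tooth, labels, index=range(1, n + 1))
    largest = 1 + int(np.argmax(sizes))
    cleaned[labels == largest] = tooth  # one unique region per tooth number

print(cleaned)
```

After this pass the stray single-pixel region labelled 1 is removed, leaving exactly one connected region per tooth number, which is the uniqueness property claim 4 requires.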
5. The method for determining the occlusion relationship between upper and lower dentitions of a digitized intraoral scan model of claim 2, wherein said determining the extraction sequence of visible tooth contours based on the relative area relationship between upper and lower dentition tooth regions is specifically:
if the segmented area of the upper dentition tooth regions is larger than that of the lower dentition, the visible contours of the upper dentition teeth are extracted first; otherwise, the lower dentition tooth contours are extracted first. During extraction, contours are taken in order from the middle toward the left and right sides, and occluded tooth contours are ignored.
6. A method for determining the occlusion relationship between the upper and lower dentitions of a digitized intraoral scan model according to claim 1, wherein said camera parameters comprise internal parameters, including the focal length of the camera, the principal point coordinates, and the physical dimensions of a pixel along the horizontal and vertical axes, and external parameters, including the position and orientation of the camera in the world coordinate system.
7. The method for determining the occlusion relationship between the upper and lower dentitions of the digitized intraoral scan model according to claim 1, wherein the matching relationship between corresponding points of the tooth contour segmentation map and the tooth contour projection map is calculated by the following formula:
wherein c_i^τ represents the coordinates, in the pixel coordinate system, of the i-th point of the tooth contour of tooth position τ extracted from the photograph in step 2), ĉ_i^τ represents the coordinates, in the pixel coordinate system, of the i-th point of the visible contour of tooth position τ projected in step 5), n_i^τ represents the in-plane normal vector, in the pixel coordinate system, of the i-th point of the tooth contour of tooth position τ extracted from the photograph in step 2), n̂_i^τ represents the normal vector, in the pixel coordinate system, of the i-th point of the visible contour of tooth position τ projected in step 5), ‖·‖₂² represents the squared two-norm of a vector, and σ is an adjustable hyperparameter.
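The exact matching formula of claim 7 is given as an image in the original publication and is not reproduced here. A common form consistent with the listed symbols (a squared two-norm over point distance and an adjustable σ) is a Gaussian similarity weight between photo and projected contour points; the sketch below is that assumed form, and the point sets are made-up examples.

```python
import numpy as np

# Hypothetical Gaussian matching weight between photo contour points c and
# projected contour points c_hat, consistent with the symbols of claim 7
# (squared two-norm, adjustable sigma). The patented formula may differ.
def match_weights(c, c_hat, sigma=10.0):
    """c: Nx2 photo points, c_hat: Mx2 projected points -> NxM weight matrix."""
    d2 = np.sum((c[:, None, :] - c_hat[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / sigma ** 2)

c = np.array([[0.0, 0.0], [10.0, 0.0]])
c_hat = np.array([[0.0, 0.0], [10.0, 1.0]])
w = match_weights(c, c_hat)
# Each photo point pairs with the projected point of highest weight.
corr = w.argmax(axis=1)
print(corr)  # [0 1]
```

Such a soft weight decays smoothly with distance, so correspondences remain stable as the projection moves during the iterative optimization of step 6).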
8. The method for determining the occlusion relationship between upper and lower dentitions of a digitized intraoral scan model of claim 1, wherein said loss function includes a contour corresponding-point distance matching loss and a contour corresponding-point normal-vector matching loss; the total loss function is the sum of the loss functions of the photographs of different angles, with the loss function to be optimized for each photograph expressed as:
wherein L_p is the contour corresponding-point distance matching loss function, L_n is the contour corresponding-point normal-vector matching loss function, and λ_n is a hyperparameter.
9. The method for determining the occlusion relationship between the upper and lower dentitions of a digitized intraoral scan model of claim 8, wherein said contour corresponding-point distance matching loss function L_p is expressed as:
wherein N is the total number of tooth contour points obtained by segmentation in the photo, T is the number of tooth categories obtained by segmentation in the photo, τ is the tooth number, n_τ represents the number of contour points of the τ-th tooth, c_i^τ represents the coordinates, in the pixel coordinate system, of the i-th point of the tooth contour of tooth position τ extracted from the photograph in step 2), and ĉ_i^τ represents the coordinates, in the pixel coordinate system, of the i-th point of the visible contour of tooth position τ projected in step 5).
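The L_p formula itself is an image in the original publication and is not reproduced here. Given the legend (a per-tooth sum over matched point pairs, normalized by the total point count N), a mean squared point-to-point distance is a natural reading; the sketch below implements that assumed form on made-up point pairs, and the patented expression may differ.

```python
import numpy as np

# Hedged sketch of a contour corresponding-point distance loss consistent
# with the legend of claim 9: squared two-norms of matched pairs, summed
# per tooth tau and normalized by the total point count N.
def contour_distance_loss(pairs_per_tooth):
    """pairs_per_tooth: list of (c, c_hat) arrays, one entry per tooth tau."""
    total = 0.0
    n_points = 0
    for c, c_hat in pairs_per_tooth:          # c, c_hat: n_tau x 2 arrays
        total += np.sum(np.sum((c - c_hat) ** 2, axis=1))
        n_points += len(c)
    return total / n_points                   # normalize by N

pairs = [
    (np.array([[0.0, 0.0]]), np.array([[3.0, 4.0]])),  # tooth 1: dist^2 = 25
    (np.array([[1.0, 1.0]]), np.array([[1.0, 2.0]])),  # tooth 2: dist^2 = 1
]
print(contour_distance_loss(pairs))  # (25 + 1) / 2 = 13.0
```

Driving this quantity to zero pulls every projected contour point onto its matched photo contour point, which is what aligns the dentition pose with the photographs.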
10. The method for determining the occlusion relationship between the upper and lower dentitions of a digitized intraoral scan model of claim 8, wherein said contour corresponding-point normal-vector matching loss function L_n is expressed as:
wherein N is the total number of tooth contour points obtained by segmentation in the photo, T is the number of tooth categories obtained by segmentation in the photo, τ represents the tooth number, n_τ represents the number of contour points of the τ-th tooth, c_i^τ represents the coordinates, in the pixel coordinate system, of the i-th point of the tooth contour of tooth position τ extracted from the photograph in step 2), ĉ_i^τ represents the coordinates, in the pixel coordinate system, of the i-th point of the visible contour of tooth position τ projected in step 5), n̂_i^τ represents the normal vector, in the pixel coordinate system, of the i-th point of the visible contour of tooth position τ projected in step 5), and ⟨·,·⟩ represents the vector inner product operation.
CN202311087242.4A 2023-08-28 2023-08-28 Method for determining occlusion relation between upper dentition and lower dentition of digitized intraoral scanning model Pending CN117137660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311087242.4A CN117137660A (en) 2023-08-28 2023-08-28 Method for determining occlusion relation between upper dentition and lower dentition of digitized intraoral scanning model

Publications (1)

Publication Number Publication Date
CN117137660A true CN117137660A (en) 2023-12-01

Family

ID=88911265



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination