CN115223205A - Three-dimensional oral-scanning tooth separation model tooth position identification method, medium and device based on deep learning

Info

Publication number: CN115223205A
Application number: CN202210898084.XA
Authority: CN (China)
Prior art keywords: tooth, model, classification, neural network, convolutional neural
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 王登海
Assignee (current and original): Chengdu Renheng Meiguang Technology Co ltd
Filing date / priority date: 2022-07-28
Publication date: 2022-10-21

Classifications

    • G06V40/10: Recognition of human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G06T7/0012: Biomedical image inspection
    • G06V10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G06T2207/20081: Training; learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30036: Dental; teeth

Abstract

The invention provides a tooth position identification method, medium and device for a three-dimensional oral scanning tooth separation model based on deep learning. The method comprises the following steps: acquiring a training set and a test set of three-dimensional oral scanning tooth separation models; carrying out data preprocessing on the training set and the test set; constructing a convolutional neural network model for the three-dimensional oral scanning tooth separation model; training the constructed convolutional neural network model by utilizing the preprocessed training set; testing the trained convolutional neural network model by utilizing the preprocessed test set; inputting the three-dimensional oral scanning tooth separation model to be identified into the tested convolutional neural network model to obtain tooth position classification information and tooth classification information; and carrying out auxiliary correction on the tooth position classification information by utilizing the tooth classification information. According to the invention, once the tooth classification fully-connected layer is introduced, obvious tooth position classification errors can be corrected by combining the tooth classification, without repeated manual labeling of tooth classification data.

Description

Three-dimensional oral-scanning tooth separation model tooth position identification method, medium and device based on deep learning
Technical Field
The invention relates to the technical field of tooth position identification, and in particular to a tooth position identification method, medium and device for a three-dimensional oral scanning tooth separation model based on deep learning.
Background
The three-dimensional oral scanning tooth separation model is a regularized model in which the irregular triangular patches of an original three-dimensional oral scanning model have been separated into teeth, automatically or manually, and then stored in sequence; by reading the triangular patches in order and performing closure detection, the tooth models and the alveolar bone model can be extracted from the three-dimensional oral scanning tooth separation model.
After the original three-dimensional oral scanning model has been separated into teeth by tooth separation software or manually in three-dimensional software, auxiliary diagnosis is carried out, for which tooth position identification is the foundation and is particularly important. At present there is no complete tooth position identification scheme for the three-dimensional oral scanning tooth separation model. The prior-art tooth position identification method for tooth separation models is as follows: a large amount of data is labeled to train a deep classification model, and the model is then used for tooth position identification. The problems with this approach are: a large amount of data must be labeled, and the deep classification model cannot enforce inter-class constraints well, so that with a certain probability tooth positions are duplicated, or tooth positions are skipped and missed. In addition, data preprocessing generally adopts a farthest-point sampling algorithm, which increases both the preprocessing time and the instability of tooth position prediction.
Disclosure of Invention
The invention aims to provide a tooth position identification method, medium and device for a three-dimensional oral scanning tooth separation model based on deep learning, so as to solve the above problems of the prior-art tooth position identification methods for tooth separation models.
The invention provides a tooth position identification method of a three-dimensional oral scanning tooth separation model based on deep learning, which comprises the following steps:
acquiring a training set and a test set of a three-dimensional oral scanning tooth separation model;
carrying out data preprocessing on the training set and the test set;
constructing a convolutional neural network model for the three-dimensional oral scanning tooth separation model; the convolutional neural network model comprises a feature extraction network, a tooth position classification fully-connected layer and a tooth classification fully-connected layer, the tooth position classification fully-connected layer and the tooth classification fully-connected layer both being connected to the feature extraction network;
training the constructed convolutional neural network model by utilizing the preprocessed training set;
testing the trained convolutional neural network model by utilizing the preprocessed test set;
inputting the three-dimensional oral scanning tooth separation model to be identified into the tested convolutional neural network model to obtain tooth position classification information and tooth classification information;
and carrying out auxiliary correction on the tooth position classification information by utilizing the tooth classification information.
Further, the method for acquiring the training set and the test set of the three-dimensional oral scanning tooth separation model comprises the following steps:
acquiring a three-dimensional oral-scan tooth separation model;
carrying out data annotation on the three-dimensional oral scanning tooth separation model to obtain annotation data;
and dividing the labeled data into a training set and a test set according to a proportion.
Further, the method for performing data annotation on the three-dimensional oral scanning tooth separation model comprises the following steps:
dividing the three-dimensional oral scanning tooth separation model into an upper tooth model and a lower tooth model; the upper tooth model and the lower tooth model are each a binary stl format file; the first 80 bytes of the stl format file are a file header, followed by a 4-byte integer giving the number of triangular patches of the upper or lower tooth model, and then the geometric information of each triangular patch one by one; the geometric information of a triangular patch consists of its three vertices in three-dimensional space, each vertex determined by its coordinates (x, y, z);
reading the triangular patches in sequence, taking the product of x, y and z in the coordinates (x, y, z) of each vertex as the unique identifier of that vertex, and taking the product of the unique identifiers of two adjacent vertices as the unique identifier of the edge between them; creating a key-value pair object keyed by the edge identifiers, with value 1: if an edge identifier is not yet present in the key-value pair object it is inserted, and if the same edge identifier is already present it is deleted from the key-value pair object;
because the triangular patches of each tooth or of the alveolar bone in the three-dimensional oral scanning tooth separation model are stored consecutively and form a closed surface, the key-value pair object becoming empty during the sequential traversal indicates that the patches traversed so far form one tooth or the alveolar bone; data annotation is completed by recording the correspondence between each tooth position (or the alveolar bone) and the start and stop subscripts of its triangular patches in the stl format file; the upper tooth model and the lower tooth model are labeled in pairs and placed in the same json format file as the annotation data.
Further, the method for preprocessing the data of the training set and the test set comprises the following steps:
(1) For the labeled data in the training set and the test set, the following data preprocessing is carried out on the lower tooth model in the labeled data:
arranging the triangular patches corresponding to each tooth in the lower tooth model in vertex order to form a matrix (m, 3), where m is the number of vertices of the tooth and 3 corresponds to the three dimensions x, y and z; because vertices are repeated, filtering out the repeated vertices to form a vertex matrix, and randomly taking 1024 vertices from the vertex matrix to form a (1024, 3) matrix representing one tooth or the alveolar bone; randomly dropping one tooth of the lower tooth model, ordering the teeth and the alveolar bone counterclockwise by their center points, and concatenating the matrices with the alveolar bone placed first, giving a spliced matrix of dimension ((number of teeth - 1 (one dropped) + 1 (alveolar bone)) × 1024, 3) for a single lower tooth model; finding the vertex of the lower tooth model farthest from the coordinate origin, taking its distance from the origin as the radius, and dividing the spliced matrix by this radius to achieve equal scaling, normalizing x, y and z into (0, 1); while splicing the matrix, also splicing the annotated tooth position classification data as the training and testing target, thereby obtaining the lower tooth model preprocessed data; the tooth classification target is mapped from the tooth position classification;
(2) For the labeled data in the training set and the test set, the upper tooth model in the labeled data is subjected to the following data preprocessing:
arranging the triangular patches corresponding to each tooth in the upper tooth model in vertex order to form a matrix (m, 3), where m is the number of vertices of the tooth and 3 corresponds to the three dimensions x, y and z; because vertices are repeated, filtering out the repeated vertices to form a vertex matrix, and randomly taking 1024 vertices from the vertex matrix to form a (1024, 3) matrix representing one tooth or the alveolar bone; randomly dropping one tooth of the upper tooth model, ordering the teeth and the alveolar bone counterclockwise by their center points, and concatenating the matrices with the alveolar bone placed first, giving a spliced matrix of dimension ((number of teeth - 1 (one dropped) + 1 (alveolar bone)) × 1024, 3) for a single upper tooth model; finding the vertex of the upper tooth model farthest from the coordinate origin, taking its distance from the origin as the radius, and dividing the spliced matrix by this radius to achieve equal scaling, normalizing x, y and z into (0, 1); while splicing the matrix, also splicing the annotated tooth position classification data as the training and testing target, thereby obtaining the upper tooth model preprocessed data; the tooth classification target is mapped from the tooth position classification.
Further, the method for constructing the convolutional neural network model for the three-dimensional oral scanning tooth separation model comprises the following steps:
constructing a feature extraction network; the feature extraction network comprises, connected in sequence, the STN3d module of the PointNet network model, a first reshape module, a coordinate-axis transposition module, three one-dimensional convolution layers, a Max layer and a second reshape module;
constructing a tooth position classification fully-connected layer; the tooth position classification fully-connected layer adopts three fully-connected layers and is connected to the output of the second reshape module;
constructing a tooth classification fully-connected layer; the tooth classification fully-connected layer adopts three fully-connected layers and is connected to the output of the second reshape module.
Further, the method for training the constructed convolutional neural network model by using the preprocessed training set includes:
inputting the preprocessed training set into the constructed convolutional neural network model, and outputting tooth position classification information and tooth classification information;
computing the cross-entropy loss between the tooth position classification information output by the convolutional neural network model and the tooth position classification target in the preprocessed training set, and the cross-entropy loss between the output tooth classification information and the tooth classification target in the preprocessed training set; taking the sum of the two cross-entropy losses as the total loss function; then performing backpropagation optimization of the total loss function with an Adam optimizer; and stopping training when the maximum number of iterations is reached or the total loss has stabilized.
Further, the method for testing the trained convolutional neural network model by using the preprocessed test set includes:
inputting the preprocessed test set into a trained convolutional neural network model, and outputting tooth position classification information and tooth classification information;
and respectively comparing the tooth position classification information and the tooth classification information output by the convolutional neural network model with the tooth position classification target and the tooth classification target in the preprocessed test set to evaluate the test effect:
if the test effect is not ideal, adjusting the structure or the hyperparameters of the convolutional neural network model and repeating the training process;
and if the test effect meets the requirement, carrying out tooth position identification by using the tested convolutional neural network model.
Further, the method for auxiliary correction of the tooth position classification information by using the tooth classification information comprises the following steps:
determining incisors according to the tooth classification information;
correcting the tooth position from the incisors to the left side;
the tooth position is corrected from the incisors to the right.
The invention also provides a computer terminal storage medium storing computer-terminal-executable instructions, wherein the computer-terminal-executable instructions are used for executing the above tooth position identification method for the three-dimensional oral scanning tooth separation model based on deep learning.
The present invention also provides a computing device comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the deep-learning-based three-dimensional oral scanning tooth separation model tooth position identification method described above.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
according to the invention, under the condition that repeated manual labeling of tooth classification data is not needed, after the tooth classification full-connection layer is introduced, some obvious tooth classification errors can be corrected by combining tooth classification.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flowchart of a tooth position recognition method of a three-dimensional oral scanning tooth separation model based on deep learning according to an embodiment of the present invention.
Fig. 2 is a schematic view of an upper tooth model and a lower tooth model in a three-dimensional oral-scan tooth separation model according to an embodiment of the present invention.
FIG. 3 is a schematic view of the tooth position marking of the lower tooth model according to the embodiment of the present invention.
Fig. 4 is a schematic diagram of annotation data of a json-format file according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a mosaic matrix in an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a convolutional neural network model constructed in an embodiment of the present invention.
FIG. 7 is a flowchart of training and testing a convolutional neural network model according to an embodiment of the present invention.
Fig. 8 is a schematic diagram illustrating auxiliary correction of tooth position classification information by using tooth classification information according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Examples
As shown in fig. 1, the present embodiment provides a tooth position recognition method of a three-dimensional oral scan tooth separation model based on deep learning, which includes the following steps:
S100, acquiring a training set and a test set of a three-dimensional oral scanning tooth separation model;
S200, carrying out data preprocessing on the training set and the test set;
S300, constructing a convolutional neural network model for the three-dimensional oral scanning tooth separation model;
S400, training the constructed convolutional neural network model by utilizing the preprocessed training set;
S500, testing the trained convolutional neural network model by utilizing the preprocessed test set;
S600, inputting the three-dimensional oral scanning tooth separation model to be identified into the tested convolutional neural network model to obtain tooth position classification information and tooth classification information;
S700, carrying out auxiliary correction on the tooth position classification information by utilizing the tooth classification information.
Specifically, the method comprises the following steps:
s100, acquiring a training set and a testing set of the three-dimensional oral scanning tooth separation model:
S110, acquiring a three-dimensional oral scanning tooth separation model;
S120, carrying out data annotation on the three-dimensional oral scanning tooth separation model to obtain annotation data:
S121, dividing the three-dimensional oral scanning tooth separation model into an upper tooth model and a lower tooth model, as shown in figure 2; the upper tooth model and the lower tooth model are each a binary stl format file; the first 80 bytes of the stl format file are a file header, followed by a 4-byte integer giving the number of triangular patches of the upper or lower tooth model, and then the geometric information of each triangular patch one by one; the geometric information of a triangular patch consists of its three vertices in three-dimensional space, each vertex determined by its coordinates (x, y, z);
S122, reading the triangular patches in sequence, taking the product of x, y and z in the coordinates (x, y, z) of each vertex as the unique identifier of that vertex, and taking the product of the unique identifiers of two adjacent vertices as the unique identifier of the edge between them; creating a key-value pair object keyed by the edge identifiers, with value 1: if an edge identifier is not yet present in the key-value pair object it is inserted, and if the same edge identifier is already present it is deleted from the key-value pair object;
S123, because the triangular patches of each tooth or of the alveolar bone in the three-dimensional oral scanning tooth separation model are stored consecutively and form a closed surface, the key-value pair object becoming empty during the sequential traversal indicates that the patches traversed so far form one tooth or the alveolar bone, and data annotation is completed by recording the correspondence between each tooth position (or the alveolar bone) and the start and stop subscripts of its triangular patches in the stl format file. Taking the lower tooth model as an example, data is labeled from left to right: the alveolar bone is 0: [632520, 714492], the wisdom tooth is 1: [489654, 572760], the second posterior molar is 2: [572760, 632520], and so on, as shown in fig. 3, where 0, 1, 2, ..., 16 denote the tooth position classes and "[ ]" holds the start and stop subscripts; the upper tooth model is likewise labeled from left to right; the upper tooth model and the lower tooth model are thus labeled in pairs and placed in the same json format file as the annotation data, as shown in fig. 4.
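By way of illustration, the parsing of steps S121 to S123 can be sketched in Python as follows; the function names are assumptions made for illustration, and the sketch follows the patent's coordinate-product identifier scheme exactly as described (for arbitrary real coordinates such products could in principle collide):

import struct

def read_binary_stl(path):
    # Binary stl layout: 80-byte header, 4-byte patch count, then 50 bytes
    # per triangular patch (normal, three vertices, 2-byte attribute).
    with open(path, "rb") as f:
        f.read(80)                                    # file header
        (n_patches,) = struct.unpack("<I", f.read(4))
        patches = []
        for _ in range(n_patches):
            data = struct.unpack("<12fH", f.read(50))
            patches.append([data[3:6], data[6:9], data[9:12]])  # skip normal
    return patches

def split_closed_surfaces(patches):
    # Each vertex is identified by the product x*y*z and each edge by the
    # product of its two vertex identifiers; an identifier seen twice marks
    # a shared edge, so the pending set empties exactly when the patches
    # read so far close one tooth or the alveolar bone.
    pending = {}                   # the key-value pair object: edge id -> 1
    segments, start = [], 0
    for i, verts in enumerate(patches):
        ids = [x * y * z for (x, y, z) in verts]
        for a, b in ((0, 1), (1, 2), (2, 0)):
            edge = ids[a] * ids[b]
            if edge in pending:
                del pending[edge]  # second occurrence: edge is interior
            else:
                pending[edge] = 1  # first occurrence
        if not pending:            # all edges paired: surface closed
            segments.append([start, i + 1])
            start = i + 1
    return segments                # start/stop patch subscripts per tooth/bone

Each [start, stop) pair in segments then maps to one tooth position label in the json annotation file.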
And S130, dividing the labeled data into a training set and a test set in proportion; in this embodiment, 90% of the labeled data is used as the training set and 10% as the test set.
S200, carrying out data preprocessing on the training set and the test set:
S210, for the labeled data in the training set and the test set, performing the following data preprocessing on the lower tooth model in the labeled data:
arranging the triangular patches corresponding to each tooth in the lower tooth model in vertex order to form a matrix (m, 3), where m is the number of vertices of the tooth and 3 corresponds to the three dimensions x, y and z; because vertices are repeated, filtering out the repeated vertices to form a vertex matrix, and randomly taking 1024 vertices from the vertex matrix to form a (1024, 3) matrix representing one tooth or the alveolar bone; randomly dropping one tooth of the lower tooth model, ordering the teeth and the alveolar bone counterclockwise by their center points, and concatenating the matrices with the alveolar bone placed first, giving a spliced matrix of dimension ((number of teeth - 1 (one dropped) + 1 (alveolar bone)) × 1024, 3) for a single lower tooth model, as shown in fig. 5; finding the vertex of the lower tooth model farthest from the coordinate origin, taking its distance from the origin as the radius, and dividing the spliced matrix by this radius to achieve equal scaling, normalizing x, y and z into (0, 1); while splicing the matrix, also splicing the annotated tooth position classification data as the training and testing target, thereby obtaining the lower tooth model preprocessed data; the tooth classification target is mapped from the tooth position classification, e.g. 0 for the alveolar bone, 1 for incisors, 2 for cuspids, 3 for premolars and 4 for posterior molars;
S220, for the labeled data in the training set and the test set, performing the following data preprocessing on the upper tooth model in the labeled data:
arranging the triangular patches corresponding to each tooth in the upper tooth model in vertex order to form a matrix (m, 3), where m is the number of vertices of the tooth and 3 corresponds to the three dimensions x, y and z; because vertices are repeated, filtering out the repeated vertices to form a vertex matrix, and randomly taking 1024 vertices from the vertex matrix to form a (1024, 3) matrix representing one tooth or the alveolar bone; randomly dropping one tooth of the upper tooth model, ordering the teeth and the alveolar bone counterclockwise by their center points, and concatenating the matrices with the alveolar bone placed first, giving a spliced matrix of dimension ((number of teeth - 1 (one dropped) + 1 (alveolar bone)) × 1024, 3) for a single upper tooth model; finding the vertex of the upper tooth model farthest from the coordinate origin, taking its distance from the origin as the radius, and dividing the spliced matrix by this radius to achieve equal scaling, normalizing x, y and z into (0, 1); while splicing the matrix, also splicing the annotated tooth position classification data as the training and testing target, thereby obtaining the upper tooth model preprocessed data; the tooth classification target is mapped from the tooth position classification.
It can be seen that the data preprocessing of the lower and upper tooth models is the same.
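A hedged sketch of this preprocessing in Python with NumPy follows; the counterclockwise ordering by centroid angle (assuming the jaw is roughly centred on the origin), the alveolar bone kept as the first segment, and the function names are illustrative assumptions, and the random dropping of one tooth used for training augmentation is omitted:

import math
import numpy as np

def preprocess_jaw(patches, segments, position_labels):
    # patches: triangular patches of one jaw (e.g. from read_binary_stl);
    # segments: [start, stop) patch subscripts per tooth/alveolar bone,
    # bone first; position_labels: annotated tooth position classes.
    def vertices(seg):
        pts = np.array([v for p in patches[seg[0]:seg[1]] for v in p])
        return np.unique(pts, axis=0)               # filter repeated vertices

    def centroid_angle(seg):
        c = vertices(seg).mean(axis=0)
        return math.atan2(c[1], c[0])               # counterclockwise sort key

    items = list(zip(segments, position_labels))
    items = [items[0]] + sorted(items[1:], key=lambda t: centroid_angle(t[0]))
    blocks, targets = [], []
    for seg, label in items:
        pts = vertices(seg)
        idx = np.random.choice(len(pts), 1024, replace=len(pts) < 1024)
        blocks.append(pts[idx])                     # (1024, 3) per tooth/bone
        targets.append(label)
    mat = np.concatenate(blocks, axis=0)            # (n * 1024, 3)
    radius = np.linalg.norm(mat, axis=1).max()      # farthest vertex radius
    return mat / radius, np.array(targets)          # equal scaling by radius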
S300, constructing a convolutional neural network model for the three-dimensional oral scanning tooth separation model, as shown in FIG. 6:
S310, constructing a feature extraction network; the feature extraction network comprises, connected in sequence, the STN3d module of the PointNet network model, a first reshape module, a coordinate-axis transposition module, three one-dimensional convolution layers, a Max layer and a second reshape module. First, the STN3d module of the PointNet network model transforms the spliced matrix (1, 3, n × 1024); the first reshape module then reshapes it to (3, n, 1024); the coordinate-axis transposition module permutes the axes to (n, 3, 1024); three one-dimensional convolutions lift the 3 channels to 1024 channels; one Max layer reduces the 1024 vertices to 1 feature; and finally the second reshape module reshapes the result to (1, n, 1024).
S320, constructing a tooth position classification fully-connected layer; the tooth position classification fully-connected layer adopts three fully-connected layers and is connected to the output of the second reshape module; it converts the 1024 features into 16 + 1 = 17 classes (where 16 denotes the 16 tooth positions and 1 denotes the alveolar bone).
S330, constructing a tooth classification fully-connected layer; the tooth classification fully-connected layer adopts three fully-connected layers and is connected to the output of the second reshape module; it converts the 1024 features into 4 + 1 = 5 classes (where 4 denotes incisor, cuspid, premolar and posterior molar, and 1 denotes the alveolar bone).
In this embodiment, the fully-connected layers are added after the feature extraction network of the convolutional neural network model, and the number of input rows (the number of teeth) equals the number of output rows. Thus the order of the teeth corresponds one-to-one to the results output by the convolutional neural network model.
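As a rough PyTorch sketch of this two-headed architecture (the choice of framework, the elision of the STN3d input transformer, and the widths of the hidden fully-connected layers are assumptions made for illustration):

import torch
import torch.nn as nn

class ToothPositionNet(nn.Module):
    def __init__(self, n_pts=1024):
        super().__init__()
        self.n_pts = n_pts
        # The STN3d module from PointNet would transform the input here; elided.
        self.conv = nn.Sequential(              # lift 3 channels to 1024
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.pos_head = nn.Sequential(          # tooth position head
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 17),                 # 16 positions + alveolar bone
        )
        self.type_head = nn.Sequential(         # tooth type head
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 5),                  # 4 tooth types + alveolar bone
        )

    def forward(self, x):                       # x: (1, 3, n * 1024)
        n = x.shape[2] // self.n_pts
        x = x.reshape(3, n, self.n_pts)         # first reshape
        x = x.permute(1, 0, 2)                  # axis transposition: (n, 3, 1024)
        x = self.conv(x)                        # (n, 1024, 1024)
        x = x.max(dim=2).values                 # Max over the 1024 vertices
        return self.pos_head(x), self.type_head(x)   # (n, 17), (n, 5)

Because every tooth occupies one row of the feature matrix, the i-th output row corresponds to the i-th tooth of the spliced input, which is what makes the one-to-one correspondence above hold.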
S400, training the constructed convolutional neural network model by using the preprocessed training set, as shown in fig. 7:
S410, inputting the preprocessed training set into the constructed convolutional neural network model, and outputting tooth position classification information and tooth classification information; the batch size is 1, because the number of teeth differs between models and batch training is therefore not possible;
S420, computing the cross-entropy loss between the tooth position classification information output by the convolutional neural network model and the tooth position classification target in the preprocessed training set, and the cross-entropy loss between the output tooth classification information and the tooth classification target in the preprocessed training set, and taking the sum of the two cross-entropy losses as the total loss function. The total loss function is:

L = loss_position + loss_tooth

where L denotes the total loss function, loss_position denotes the tooth position classification cross-entropy loss and loss_tooth denotes the tooth classification cross-entropy loss. Both are cross-entropy losses:

loss_position = -Σ ŷ_position log(y_position)

loss_tooth = -Σ ŷ_tooth log(y_tooth)

where y_position and y_tooth denote the tooth position classification information and tooth classification information output by the network, and ŷ_position and ŷ_tooth denote the tooth position classification target and the tooth classification target in the annotation data.
Finally, backpropagation optimization of the total loss function is performed using an Adam optimizer; the learning rate is set as required and is set to 0.0001 in this embodiment; training stops when the maximum number of iterations (epochs, set to 500 in this embodiment) is reached or when the total loss has stabilized.
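A minimal PyTorch training step consistent with this description (assuming the ToothPositionNet sketch above; data loading and the stopping criterion are omitted):

import torch
import torch.nn.functional as F

model = ToothPositionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # lr = 0.0001

def train_step(points, pos_target, type_target):
    # points: (1, 3, n * 1024); targets: (n,) class indices; batch size is 1
    # because the number of teeth differs between models.
    pos_logits, type_logits = model(points)
    loss = (F.cross_entropy(pos_logits, pos_target)       # tooth position loss
            + F.cross_entropy(type_logits, type_target))  # tooth type loss
    optimizer.zero_grad()
    loss.backward()                                       # backpropagation
    optimizer.step()
    return loss.item()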
S500, testing the trained convolutional neural network model by using the preprocessed test set:
S510, inputting the preprocessed test set into the trained convolutional neural network model, and outputting tooth position classification information and tooth classification information;
S520, comparing the tooth position classification information and the tooth classification information output by the convolutional neural network model with the tooth position classification target and the tooth classification target in the preprocessed test set, respectively, to judge the test effect:
if the test effect is not ideal, adjusting the structure or the hyperparameters of the convolutional neural network model and repeating the training process;
and if the test effect meets the requirement, carrying out tooth position identification by using the tested convolutional neural network model.
When training and testing the convolutional neural network model, 1024 vertices are randomly sampled from each separated tooth or alveolar bone; in production, 1024 vertices are sampled from each separated tooth or alveolar bone at equal intervals by subscript, and no farthest-point sampling algorithm is used, which greatly reduces the sampling time.
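Equal-interval sampling by subscript needs no pairwise distance computation at all; a sketch (function name assumed):

import numpy as np

def sample_equal_interval(vertices, k=1024):
    # Take k vertices at equal index intervals; indices repeat when a
    # tooth has fewer than k vertices.
    idx = np.linspace(0, len(vertices) - 1, k).astype(int)
    return vertices[idx]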
S600, inputting the three-dimensional oral scanning tooth separation model to be identified into the tested convolutional neural network model to obtain tooth position classification information and tooth classification information;
S700, carrying out auxiliary correction on the tooth position classification information by utilizing the tooth classification information:
S710, determining the incisors according to the tooth classification information:
Normally the tooth classification yields four incisors, and their order can be obtained directly. If five teeth are classified as incisors, one of them is actually a cuspid, and whether it is the leftmost or the rightmost of the five is judged from the cuspid classifications on the left and right sides (a low-probability case). If fewer than four incisors are found, the tooth position classification result is adopted without any adjustment (in practice, duplicate tooth position labels occasionally appear at the molars, while the results on the incisors are good and the probability of misjudgment is low).
S720, correcting the tooth position from the incisor to the left:
S721, determining the cuspid: search leftwards from the incisors for a cuspid; if one exists, its tooth position is determined. If not, check the distance between the current tooth and the leftmost incisor: if the distance is less than 3 mm (a set value) and the teeth to its left are not two premolars, the current tooth is judged to be the cuspid, as shown in fig. 8. If the tooth classification gives two cuspids on the left side, the one closer to the incisors is the cuspid, the other is a premolar, and its classification is modified.
S722, determining the premolars: if there are two premolars, their order is determined; if there are three, the leftmost one is a posterior molar and its classification is modified; if there is only one, it is the first premolar when its distance from the previous tooth is less than 3 mm, and otherwise the second premolar.
S723, determining the posterior molars: there are at most three posterior molars on the left side; if the distance of the first posterior molar from the previous tooth (typically the second premolar) is less than 3 mm, the first posterior molar is located; the remaining teeth are then judged in sequence by whether each is less than 3 mm from the previous tooth position.
S724, if none of the above conditions is met, the original tooth position classification result is used.
S730, correcting the tooth positions from the incisors to the right side; this is similar to the left-side correction of step S720 and is not repeated here.
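The correction rules above leave some freedom to the implementer; the following Python sketch covers only the count-based reclassification and the 3 mm adjacency test for the left side, and all names, the input ordering (from the incisors leftwards) and the simplifications are assumptions made for illustration:

import numpy as np

def correct_left_side(types, centers, threshold_mm=3.0):
    # types: predicted tooth types ordered from the incisors leftwards;
    # centers: matching tooth centroids in mm.
    corrected = list(types)
    # S721: two predicted cuspids -> the one nearer the incisors stays a
    # cuspid, the other is reclassified as a premolar.
    cuspids = [i for i, t in enumerate(corrected) if t == "cuspid"]
    if len(cuspids) == 2:
        corrected[max(cuspids)] = "premolar"
    # S722: three predicted premolars -> the leftmost is a posterior molar.
    premolars = [i for i, t in enumerate(corrected) if t == "premolar"]
    if len(premolars) == 3:
        corrected[max(premolars)] = "posterior molar"
    # S722/S723: the < 3 mm adjacency test separates "first" from "second"
    # within a group.
    def near_previous(i):
        d = np.linalg.norm(np.asarray(centers[i]) - np.asarray(centers[i - 1]))
        return d < threshold_mm
    premolars = [i for i, t in enumerate(corrected) if t == "premolar"]
    if len(premolars) == 1:
        corrected[premolars[0]] = ("first premolar" if near_previous(premolars[0])
                                   else "second premolar")
    return corrected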
Furthermore, in some embodiments, a computer terminal storage medium is provided, which stores computer terminal executable instructions for performing the tooth position identification method of three-dimensional oral scan tooth separation model based on deep learning as described in the previous embodiments. Examples of the computer storage medium include a magnetic storage medium (e.g., a floppy disk, a hard disk, etc.), an optical recording medium (e.g., a CD-ROM, a DVD, etc.), or a memory such as a memory card, a ROM, a RAM, or the like. The computer storage media may also be distributed over a network-connected computer system, such as an application store.
Furthermore, in some embodiments, a computing device is presented, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method for tooth position identification based on a deep learning three-dimensional mouth scan tooth separation model as described in the previous embodiments. Examples of computing devices include PCs, tablets, smart phones or PDAs, and the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A tooth position recognition method of a three-dimensional oral scanning tooth separation model based on deep learning is characterized by comprising the following steps:
acquiring a training set and a test set of a three-dimensional oral scanning tooth separation model;
carrying out data preprocessing on the training set and the test set;
constructing a convolution neural network model for the three-dimensional oral scanning tooth separation model; the convolutional neural network model comprises a feature extraction network, a tooth position classification full-connection layer and a tooth classification full-connection layer, wherein the tooth position classification full-connection layer and the tooth classification full-connection layer are connected with the feature extraction network;
training the constructed convolutional neural network model by utilizing the preprocessed training set;
testing the trained convolutional neural network model by utilizing the preprocessed test set;
inputting the three-dimensional oral scanning tooth separation model to be identified into the tested convolutional neural network model to obtain tooth position classification information and tooth classification information;
and carrying out auxiliary correction on the tooth position classification information by utilizing the tooth classification information.
2. The method for recognizing the tooth positions of the three-dimensional oral scanning tooth separation model based on deep learning according to claim 1, wherein the method for acquiring the training set and the test set of the three-dimensional oral scanning tooth separation model comprises the following steps:
acquiring a three-dimensional oral-scan tooth separation model;
carrying out data annotation on the three-dimensional oral scanning tooth separation model to obtain annotation data;
and dividing the labeled data into a training set and a test set according to a proportion.
3. The method for recognizing the tooth position of the three-dimensional oral scanning tooth separation model based on deep learning according to claim 2, wherein the method for performing data annotation on the three-dimensional oral scanning tooth separation model comprises the following steps:
dividing the three-dimensional oral scanning tooth separation model into an upper tooth model and a lower tooth model; the upper tooth model and the lower tooth model are each a binary stl format file; the first 80 bytes of the stl format file are a file header, followed by a 4-byte integer giving the number of triangular patches of the upper or lower tooth model, and then the geometric information of each triangular patch one by one; the geometric information of a triangular patch consists of its three vertices in three-dimensional space, each vertex determined by its coordinates (x, y, z);
reading the triangular patches in sequence, taking the product of x, y and z in the coordinates (x, y, z) of each vertex as the unique identifier of that vertex, and taking the product of the unique identifiers of two adjacent vertices as the unique identifier of the edge between them; creating a key-value pair object keyed by the edge identifiers, with value 1: if an edge identifier is not yet present in the key-value pair object it is inserted, and if the same edge identifier is already present it is deleted from the key-value pair object;
because the triangular patches of each tooth or of the alveolar bone in the three-dimensional oral scanning tooth separation model are stored consecutively and form a closed surface, the key-value pair object becoming empty during the sequential traversal indicates that the patches traversed so far form one tooth or the alveolar bone; data annotation is completed by recording the correspondence between each tooth position (or the alveolar bone) and the start and stop subscripts of its triangular patches in the stl format file; the upper tooth model and the lower tooth model are labeled in pairs and placed in the same json format file as the annotation data.
4. The method for recognizing the tooth position of the three-dimensional oral scanning tooth separation model based on deep learning according to claim 3, wherein the method for preprocessing the data of the training set and the test set comprises the following steps:
(1) For the labeled data in the training set and the test set, the following data preprocessing is carried out on the lower tooth model in the labeled data:
arranging the triangular patches corresponding to each tooth in the lower tooth model in vertex order to form a matrix (m, 3), where m is the number of vertices of the tooth and 3 corresponds to the three dimensions x, y and z; because vertices are repeated, filtering out the repeated vertices to form a vertex matrix, and randomly taking 1024 vertices from the vertex matrix to form a (1024, 3) matrix representing one tooth or the alveolar bone; randomly dropping one tooth of the lower tooth model, ordering the teeth and the alveolar bone counterclockwise by their center points, and concatenating the matrices with the alveolar bone placed first, giving a spliced matrix of dimension ((number of teeth - 1 (one dropped) + 1 (alveolar bone)) × 1024, 3) for a single lower tooth model; finding the vertex of the lower tooth model farthest from the coordinate origin, taking its distance from the origin as the radius, and dividing the spliced matrix by this radius to achieve equal scaling, normalizing x, y and z into (0, 1); while splicing the matrix, also splicing the annotated tooth position classification data as the training and testing target, thereby obtaining the lower tooth model preprocessed data; the tooth classification target is mapped from the tooth position classification;
(2) For the labeled data in the training set and the test set, the upper tooth model in the labeled data is subjected to the following data preprocessing:
arranging the triangular patches corresponding to each tooth in the upper tooth model in vertex order to form a matrix (m, 3), where m is the number of vertices of the tooth and 3 corresponds to the three dimensions x, y and z; because vertices are repeated, filtering out the repeated vertices to form a vertex matrix, and randomly taking 1024 vertices from the vertex matrix to form a (1024, 3) matrix representing one tooth or the alveolar bone; randomly dropping one tooth of the upper tooth model, ordering the teeth and the alveolar bone counterclockwise by their center points, and concatenating the matrices with the alveolar bone placed first, giving a spliced matrix of dimension ((number of teeth - 1 (one dropped) + 1 (alveolar bone)) × 1024, 3) for a single upper tooth model; finding the vertex of the upper tooth model farthest from the coordinate origin, taking its distance from the origin as the radius, and dividing the spliced matrix by this radius to achieve equal scaling, normalizing x, y and z into (0, 1); while splicing the matrix, also splicing the annotated tooth position classification data as the training and testing target, thereby obtaining the upper tooth model preprocessed data; the tooth classification target is mapped from the tooth position classification.
5. The method for recognizing the tooth positions of the three-dimensional oral scanning tooth separation model based on deep learning according to claim 4, wherein the method for constructing the convolutional neural network model for the three-dimensional oral scanning tooth separation model comprises the following steps:
constructing a feature extraction network; the feature extraction network comprises, connected in sequence, the STN3d module of the PointNet network model, a first reshape module, a coordinate-axis transposition module, three one-dimensional convolution layers, a Max layer and a second reshape module;
constructing a tooth position classification fully-connected layer; the tooth position classification fully-connected layer adopts three fully-connected layers and is connected to the output of the second reshape module;
constructing a tooth classification fully-connected layer; the tooth classification fully-connected layer adopts three fully-connected layers and is connected to the output of the second reshape module.
6. The method for recognizing the tooth positions of the three-dimensional oral scanning tooth separation model based on deep learning according to claim 5, wherein the method for training the constructed convolutional neural network model by using the preprocessed training set comprises the following steps:
inputting the preprocessed training set into the constructed convolutional neural network model, and outputting tooth position classification information and tooth classification information;
computing the cross-entropy loss between the tooth position classification information output by the convolutional neural network model and the tooth position classification target in the preprocessed training set, and the cross-entropy loss between the tooth classification information output by the convolutional neural network model and the tooth classification target in the preprocessed training set; taking the sum of the two cross-entropy losses as the total loss function; then performing backpropagation optimization of the total loss function with an Adam optimizer; and stopping training when the maximum number of iterations is reached or the total loss has stabilized.
7. The method for recognizing the tooth position of the three-dimensional oral scanning tooth separation model based on deep learning according to claim 6, wherein the method for testing the trained convolutional neural network model by using the preprocessed test set comprises the following steps:
inputting the preprocessed test set into a trained convolutional neural network model, and outputting tooth position classification information and tooth classification information;
and respectively comparing the tooth position classification information and the tooth classification information output by the convolutional neural network model with the tooth position classification target and the tooth classification target in the preprocessed test set to evaluate the test effect:
if the test effect is not ideal, adjusting the structure or the hyperparameters of the convolutional neural network model and repeating the training process;
and if the test effect meets the requirement, carrying out tooth position identification by using the tested convolutional neural network model.
8. The method for recognizing the tooth position of the three-dimensional oral scanning tooth separation model based on deep learning according to claim 7, wherein the method for performing auxiliary correction on the tooth position classification information by using the tooth classification information comprises the following steps:
determining incisors according to the tooth classification information;
correcting the tooth position from the incisors to the left side;
the tooth position is corrected from the incisors to the right.
9. A computer terminal storage medium storing computer terminal executable instructions for performing the deep learning based three-dimensional oral scan tooth separation model tooth position identification method according to any one of claims 1 to 8.
10. A computing device, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the deep-learning-based three-dimensional oral scanning tooth separation model tooth position identification method according to any one of claims 1-8.
Application CN202210898084.XA, filed 2022-07-28, priority date 2022-07-28: Three-dimensional oral-scanning tooth separation model tooth position identification method, medium and device based on deep learning. Status: Pending.

Priority Applications (1)

Application Number: CN202210898084.XA; Priority Date: 2022-07-28; Filing Date: 2022-07-28; Title: Three-dimensional oral-scanning tooth separation model tooth position identification method, medium and device based on deep learning

Publications (1)

Publication Number: CN115223205A; Publication Date: 2022-10-21

Family ID: 83614113

Family Applications (1): CN202210898084.XA (pending), filed 2022-07-28, priority date 2022-07-28

Country Status (1): CN

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385474A (en) * 2023-02-27 2023-07-04 雅客智慧(北京)科技有限公司 Tooth scanning model segmentation method and device based on deep learning and electronic equipment
CN116492082A (en) * 2023-06-21 2023-07-28 先临三维科技股份有限公司 Data processing method, device, equipment and medium based on three-dimensional model
CN116492082B (en) * 2023-06-21 2023-09-26 先临三维科技股份有限公司 Data processing method, device, equipment and medium based on three-dimensional model
CN117079768A (en) * 2023-10-17 2023-11-17 深圳卡尔文科技有限公司 Precision evaluation method, system and storage medium
CN117079768B (en) * 2023-10-17 2023-12-19 深圳卡尔文科技有限公司 Precision evaluation method, system and storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination