CN109712703B - Orthodontic prediction method and device based on machine learning - Google Patents


Info

Publication number
CN109712703B
CN109712703B (application CN201811516625.8A)
Authority
CN
China
Prior art keywords
image data
orthodontic
generator
oral cavity
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811516625.8A
Other languages
Chinese (zh)
Other versions
CN109712703A (en)
Inventor
田烨
李鹏
周迪曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ace Dental Ltd
Original Assignee
Ace Dental Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ace Dental Ltd filed Critical Ace Dental Ltd
Priority to CN201811516625.8A
Publication of CN109712703A
Application granted
Publication of CN109712703B
Legal status: Active

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention provides an orthodontic prediction method and device based on machine learning. The method comprises: acquiring original oral CT image data; inputting the original oral CT image data into a first, pre-trained generator to obtain labeled oral CT image data, in which the tooth region on each image frame is circled in labeled form, each circled region is labeled with the corresponding tooth number, and non-tooth regions are set to 0; inputting the labeled oral CT image data into a second generator to obtain an orthodontic plan represented in encoded form; and inputting the three-dimensional digital tooth model and the encoded orthodontic plan into a third generator to obtain a prediction of the plan's result. The method yields a quick, convenient and vivid preview of an orthodontic plan's effect, reduces the working difficulty of practitioners, noticeably lightens the burden on doctors, and helps patients intuitively understand and accept the proposed plan.

Description

Orthodontic prediction method and device based on machine learning
Technical Field
The invention relates to the technical field of tooth correction, in particular to an orthodontic prediction method and device based on machine learning.
Background
Oral diseases are common and frequently occurring disorders. The World Health Organization counts malocclusion as one of the three major oral diseases, alongside caries and periodontal disease. Malpositioned teeth strongly affect oral health, oral function, and the development and appearance of the maxillofacial skeleton, so orthodontics has long been regarded as an essential part of oral care. Orthodontic treatment applies three-dimensional corrective forces and moments to teeth, using either fixed appliances composed of archwires, brackets and the like, or removable invisible appliances such as aligner trays, to treat misaligned dentition or malocclusion; over a period of treatment it rebalances and coordinates the facial bones, teeth and maxillofacial musculature, improving facial appearance, dentition alignment and chewing efficiency. Traditional orthodontic treatment relies mainly on the doctor's experience to formulate a treatment plan.
A manual tooth-arrangement (setup) experiment can help the orthodontist predict the course of treatment and show the patient, before the plan is finalized, the tooth movements involved and the expected final result. Its main drawbacks are that every tooth must be manipulated individually, the degree of automation is low, tooth arrangement is slow, large amounts of material are consumed, and the result is hard to observe, so patients struggle to form a clear picture of the orthodontic outcome.
With the development of computer imaging and machine learning, automated orthodontic treatment is advancing rapidly. To acquire the three-dimensional tooth model data that orthodontic treatment requires, the prior art generally depends on professional 3D scanning equipment. Such equipment is expensive, so image acquisition is costly and inevitably burdens medical institutions and users. Widely available, relatively low-cost CT images, on the other hand, offer limited accuracy: it is difficult to obtain precise three-dimensional model data from them, and manual intervention is required.
Further, manually formulated orthodontic plans are limited by the individual doctor's expertise, so their quality is hard to guarantee, and they cannot be presented to users as dynamic images, which makes the orthodontic process harder for users to understand.
Disclosure of Invention
The invention provides an orthodontic prediction method and device based on machine learning, which specifically comprise the following contents:
an orthodontic prediction method based on machine learning, comprising:
acquiring original oral CT image data;
inputting the original oral CT image data into a first, pre-trained generator to obtain labeled oral CT image data, wherein the tooth region on each image frame is circled in labeled form, each circled region is labeled with the corresponding tooth number, and non-tooth regions are set to 0;
inputting the labeled oral CT image data into a second generator to obtain an orthodontic plan represented in encoded form;
inputting the three-dimensional digital tooth model and the orthodontic plan represented in encoded form into a third generator to obtain a prediction of the orthodontic plan's result.
Preferably, the training method of the first generator includes:
acquiring training data, wherein the training data comprises original oral cavity CT image data of a pre-stored patient and labeled oral cavity CT image data corresponding to the original oral cavity CT image data;
inputting the training data into a GAN network to train a generator network and a discriminator network in the GAN network until the discriminator cannot distinguish the marked oral CT image data acquired by the generator network from the marked oral CT image data marked by a professional doctor or based on a semi-automatic method;
the trained generator network is used as a first generator.
Preferably, the labeling oral CT image data in the training data is obtained by a semi-automatic labeling method, and the semi-automatic labeling method includes:
obtaining a plurality of slices based on the original oral CT image data, and further obtaining voxel data of a three-dimensional coordinate system;
Acquiring a classification threshold value, wherein the classification threshold value is used for classifying voxel data;
grouping the voxel data based on a positional relationship and the classification threshold; grouping the voxel data based on the positional relationship and the classification threshold includes grouping adjacent and similar voxel data into groups based on an adjacent relationship;
analyzing independent tooth areas from the grouping result according to a preset rule;
the individual tooth areas are marked.
Preferably, the preset rule includes:
a complete tooth must lie within a single group of data;
crowded adjacent teeth may fall into the same group;
where a tooth contacts the alveolar bone, the tooth and the alveolar bone may fall into the same group.
Preferably, a preset machine learning model is trained to obtain the second generator; the preset model is a neural network with two convolutional layers, two pooling layers, two fully connected layers and one output layer.
The convolutional layers convolve the orthodontic input training data to extract features; the pooling layers downsample the previous layer's output, returning the maximum value within each sampling window as the downsampled output; the fully connected layers link the nodes of adjacent layers, establishing connections between the node data of the upper and lower layers, and pass the output values to the classifier.
Preferably, the output layer produces the orthodontic output training data through a softmax function; this softmax acts as the nonlinear classifier, trained on the orthodontic input training data to determine the probability that a given input matches a given orthodontic output.
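As a minimal sketch of the softmax output layer described above (not the patent's actual implementation), the function can be written as follows; the example logits are illustrative:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

# Hypothetical logits for four candidate orthodontic operation codes
logits = np.array([2.0, 1.0, 0.1, -1.0])
probs = softmax(logits)
# The probabilities sum to 1 and the largest logit receives the highest probability
```

The subtraction of the maximum logit changes nothing mathematically but avoids overflow in the exponential, a standard softmax implementation detail.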
Preferably, inputting the three-dimensional digital tooth model and the orthodontic plan represented in encoded form into the third generator to obtain a prediction of the orthodontic plan's result comprises:
drawing an animation from the movement data of each stage of the orthodontic plan and the time each stage requires.
An orthodontic prediction device based on machine learning, comprising:
the original oral cavity CT image data acquisition module is used for acquiring original oral cavity CT image data;
the labeled oral CT image data acquisition module, used to input the original oral CT image data into a first, pre-trained generator to obtain labeled oral CT image data, wherein the tooth region on each image frame is circled in labeled form, each circled region is labeled with the corresponding tooth number, and non-tooth regions are set to 0;
the orthodontic plan acquisition module, used to input the labeled oral CT image data into a second generator to obtain an orthodontic plan represented in encoded form;
and the prediction module, used to input the three-dimensional digital tooth model and the encoded orthodontic plan into a third generator to obtain a prediction of the orthodontic plan's result.
Preferably, the method further comprises:
a first generator training module for training a first generator, comprising:
the training data acquisition unit is used for acquiring training data, wherein the training data comprise original oral cavity CT image data of a pre-stored patient and marked oral cavity CT image data corresponding to the original oral cavity CT image data;
the training unit is used for inputting the training data into the GAN network to train the generator network and the discriminator network in the GAN network until the discriminator cannot distinguish the marked oral cavity CT image data acquired by the generator network from the marked oral cavity CT image data marked by a professional doctor or based on a semi-automatic method.
Preferably, the method further comprises:
the marked oral cavity CT image data acquisition module is used for acquiring marked oral cavity CT image data and comprises:
the voxel data acquisition unit of the three-dimensional coordinate system is used for obtaining a plurality of slices based on the original oral CT image data so as to obtain voxel data of the three-dimensional coordinate system;
The classification unit is used for acquiring a classification threshold value, and the classification threshold value is used for classifying the voxel data;
a grouping unit for grouping the voxel data based on a positional relationship and the classification threshold; grouping the voxel data based on the positional relationship and the classification threshold includes grouping adjacent and similar voxel data into groups based on an adjacent relationship;
the analysis unit is used for analyzing the independent tooth areas from the grouping result according to a preset rule;
and the marking unit is used for marking the independent tooth areas.
The orthodontic prediction method and device based on machine learning provided by the invention use several machine learning models to automatically acquire a three-dimensional digital tooth model and automatically generate orthodontic plans. They yield a quick, convenient and vivid preview of a plan's effect, reduce the working difficulty of practitioners, noticeably lighten the burden on doctors, and help patients intuitively understand and accept the proposed plan.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for machine learning based tooth three-dimensional digital acquisition provided in an embodiment of the present disclosure;
FIG. 2 is a flowchart of a training method for a first generator provided in an embodiment of the present disclosure;
FIG. 3 is a flow chart of a semi-automated labeling method provided by embodiments of the present description;
fig. 4 is a flowchart of an automatic orthodontic scheme planning method based on artificial intelligence according to an embodiment of the present disclosure;
fig. 5 (a) is a schematic diagram showing an original state before upper teeth orthodontic according to the embodiment of the present disclosure;
fig. 5 (b) is a schematic operation view of the upper orthodontic alignment along the arch direction provided in the embodiment of the present specification;
fig. 5 (c) is a schematic view illustrating the operation of the upper teeth orthodontic second stage pushing teeth in the backward direction provided in the embodiment of the present specification;
FIG. 5 (d) is a schematic diagram of the operation of the upper teeth integrated trimming alignment (multiple operations occurring simultaneously) provided in the embodiments of the present disclosure;
FIG. 5 (e) is a schematic view of an upper anterior adduction operation provided by an embodiment of the present disclosure;
FIG. 5 (f) is a schematic diagram of the overall fine adjustment alignment operation of the upper teeth provided in the embodiments of the present disclosure;
fig. 6 (a) is a schematic diagram showing an original state before lower teeth orthodontic according to the embodiment of the present disclosure;
FIG. 6 (b) is a schematic view of the operation of the lower anterior internal alignment provided by the embodiments of the present disclosure;
FIG. 6 (c) is a schematic view of the operation of the lower back intertooth alignment provided by the embodiments of the present disclosure;
FIG. 6 (d) is a schematic view of the lower pushing molar provided by the embodiments of the present disclosure;
FIG. 6 (e) is a schematic illustration of the alignment of the lower teeth along the dental arch provided by the embodiments of the present disclosure;
FIG. 6 (f) is a schematic diagram of the overall fine alignment operation of the lower teeth provided in the embodiments of the present disclosure;
fig. 7 is a schematic structural diagram of a two-layer neural network according to an embodiment of the present disclosure;
FIG. 8 is a flowchart of a method of making the bracketless appliance required for invisible correction provided in an embodiment of the present disclosure;
fig. 9 is a block diagram of an orthodontic prediction device based on machine learning according to an embodiment of the present disclosure.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "comprises" and "comprising," along with any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
Oral CT image data is cheap and easy to acquire, but compared with professional oral 3D scanning equipment its accuracy is limited, and a three-dimensional digital tooth model is difficult to obtain from it directly; this is why oral CT data has been hard to use at scale. To obtain a high-accuracy three-dimensional digital tooth model from oral CT image data, an embodiment of the invention provides a machine-learning-based method for three-dimensional tooth digitization, which, as shown in fig. 1, comprises the following steps:
s101, acquiring original oral cavity CT image data.
The raw oral CT image data contains complete tooth information (including the entire root and crown).
S103, inputting the original oral CT image data into the pre-trained first generator to obtain labeled oral CT image data.
Specifically, the labeled oral CT image data circles the tooth region on each image frame in labeled form. Each circled region is labeled with the corresponding tooth number, and non-tooth regions are set to 0.
The first generator is a neural network with an image identification function, which is obtained through machine learning, and can take an original oral CT image as input and output a recognition result of a tooth area in the original oral CT image, and the recognition result is output in the form of labeling oral CT image data. The training process of the first generator will be described in detail below.
In a preferred embodiment, the scaling process and the normalization process may be further performed on the two-dimensional image corresponding to the original oral CT image data before the original oral CT image data is input into the first generator. The scaling process refers to scaling the image to a predetermined size (e.g., 512 x 512). Normalization refers to normalizing pixel values to a standard data range (e.g., a data range of 0-1) by linear transformation; correspondingly, scaling and normalization processing are also carried out on the identified area in the marked oral CT image data.
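The scaling and normalization steps can be sketched roughly as follows. The 512 x 512 target size and the 0-1 range come from the examples above; the nearest-neighbour resampling and the fake 12-bit CT slice are illustrative assumptions, since the patent does not specify a resampling method:

```python
import numpy as np

def preprocess_slice(img, size=512):
    """Nearest-neighbour rescale to size x size, then min-max normalise
    pixel values into the 0-1 range via a linear transformation."""
    h, w = img.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    scaled = img[rows][:, cols].astype(np.float64)
    lo, hi = scaled.min(), scaled.max()
    return (scaled - lo) / (hi - lo) if hi > lo else np.zeros_like(scaled)

rng = np.random.default_rng(0)
ct = rng.integers(0, 4096, (256, 300))   # fake 12-bit CT slice
out = preprocess_slice(ct)
# out has shape (512, 512) with all values in [0, 1]
```

A production pipeline would use a proper image library with interpolation; the point here is only the shape of the scaling-then-normalizing transformation.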
S105, generating a tooth three-dimensional digital model according to the labeling oral cavity CT image data.
The labeled oral CT image data records the labeled tooth information, from which three-dimensional voxel data of the teeth is obtained. A common surface reconstruction algorithm (such as marching cubes) can then be applied to this data to produce three-dimensional surface data, yielding the three-dimensional digital tooth model.
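As a minimal illustration of the voxel data that feeds surface reconstruction, the hypothetical helper below extracts one tooth's binary voxel mask from a labeled volume; an actual pipeline would pass such a mask to a marching-cubes implementation (e.g. `skimage.measure.marching_cubes`):

```python
import numpy as np

def tooth_voxel_mask(labeled_volume, tooth_number):
    """Binary voxel mask of a single tooth from the labeled CT volume,
    where non-tooth voxels hold 0 and tooth voxels hold the tooth number."""
    return (labeled_volume == tooth_number).astype(np.uint8)

vol = np.zeros((4, 4, 4), dtype=np.int32)
vol[1:3, 1:3, 1:3] = 11          # a toy 2 x 2 x 2 region labeled as tooth 11
mask = tooth_voxel_mask(vol, 11)
# mask.sum() == 8: exactly the eight voxels belonging to tooth 11
```

The function name and the use of tooth 11 are illustrative; the only assumption carried over from the text is the 0-for-non-tooth labeling convention.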
The embodiment of the invention thus provides a method for intelligently acquiring a complete three-dimensional digital tooth model from original oral CT image data using a neural network. Further, in a preferred embodiment, other feature points or feature lines, such as ear point positions and facial contour lines, may additionally be marked in the resulting model.
The embodiment of the invention uses a GAN network specifically to acquire the labeled oral CT image data. Before detailing the training method of the first generator, the GAN is introduced. Its main idea is that a generator network produces a two-dimensional image corresponding to the labeled oral CT image data, while a discriminator network judges whether that image is real or generated. Training continues until the discriminator can no longer tell: for any input derived from original oral CT image data it assigns a realness probability of 0.5. At that point the discriminator can be discarded, and the trained generator network is the first generator required by the embodiment; that is, the embodiment uses the trained first generator to acquire labeled oral CT image data.
In practice, however, once a GAN's layer count passes a certain point the network's performance saturates, and adding further layers degrades it, reducing both training and test accuracy. To preserve training accuracy while keeping time and computational complexity from rising sharply as depth grows, thereby ensuring rapid convergence and avoiding vanishing-gradient problems, the embodiment of the invention builds the generator on a residual network.
The residual network uses skip connections as its basic structure. The skip connection changes the optimization target from H(x) to the residual F(x) = H(x) - x, the block's output being H(x) = F(x) + x. On top of a shallow network, the added layers of a deep network then only need to learn an identity mapping to match the shallow network's performance, which markedly reduces training difficulty.
Specifically, the residual network in the embodiment of the invention is built from multiple residual blocks, each containing a convolutional layer (Conv) and a batch-normalization layer (BatchNorm). The number of residual blocks can be adjusted before training according to task complexity: the more complex the task, the more residual blocks may be used.
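The skip structure H(x) = F(x) + x can be shown with a toy block. Dense layers stand in for the convolution and BatchNorm layers purely for illustration; the point is that when the residual layers contribute nothing (here, zero weights), the block reduces exactly to an identity mapping, which is what makes deep residual networks easy to train:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w1, w2):
    """Toy residual block: output H(x) = F(x) + x, so the layers only
    have to learn the residual F(x) = H(x) - x."""
    f = np.maximum(0, x @ w1) @ w2   # F(x): linear -> ReLU -> linear
    return f + x                     # skip connection

x = rng.standard_normal(8)
w1 = np.zeros((8, 8))
w2 = np.zeros((8, 8))
y = residual_block(x, w1, w2)
# With zero weights F(x) = 0, so the block is exactly the identity: y == x
```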
The generator network, comprising convolutional layers (Conv), batch-normalization layers (BatchNorm), activation layers (PReLU) and a stack of residual blocks (N x residual), takes original oral CT image data as input and outputs labeled oral CT image data.
Specifically, an embodiment of the present invention provides a training method of a first generator, as shown in fig. 2, including:
s10, training data are acquired, wherein the training data comprise original oral cavity CT image data of a pre-stored patient and marked oral cavity CT image data corresponding to the original oral cavity CT image data.
In one possible embodiment, the noted oral CT image data is obtained by a professional manually noted the original oral CT image data.
In this embodiment the professional marks each tooth according to a standard dental tooth-position notation, a scheme that numbers every human tooth. Cross symbols divide the upper and lower dentition into four quadrants: the upper right (also called region A), the upper left (region B), the lower right (region C) and the lower left (region D). The most common scheme is the FDI notation (a two-digit numeric marking method), in which each tooth is recorded with two Arabic numerals. The first digit gives the quadrant: the patient's upper right, upper left, lower left and lower right are 1, 2, 3 and 4 for permanent teeth and 5, 6, 7 and 8 for deciduous teeth. The second digit gives the position within the quadrant: 1 through 8 from the central incisor to the third molar. Table 1 is laid out from the dentist's viewpoint (its left side corresponds to the patient's right side), so left and right are mirrored relative to the patient's actual teeth.
Table 1 (FDI notation, permanent teeth, dentist's view):
Upper right: 18 17 16 15 14 13 12 11 | Upper left: 21 22 23 24 25 26 27 28
Lower right: 48 47 46 45 44 43 42 41 | Lower left: 31 32 33 34 35 36 37 38
it should be noted that a standard set of raw dentition models is a model of the 16 teeth of the upper jaw and the 16 teeth of the lower jaw. Positions not identified as crowns (tooth gaps) are assigned a value of 0; the positions identified as crowns are marked as corresponding numbers according to the dental position representation method; and at the same time, the corresponding dentition shape information is further identified and matched.
In another possible embodiment, the labeling of the oral CT image data may also be obtained using a semi-automated method, as illustrated in fig. 3, comprising:
s01, obtaining a plurality of slices based on original oral cavity CT image data, and further obtaining voxel data of a three-dimensional coordinate system.
The pixel data for each point on the original oral CT image data is referred to as voxel data in three-dimensional space.
S02, acquiring a classification threshold value, wherein the classification threshold value is used for classifying voxel data.
S03, grouping the voxel data based on the position relation and the classification threshold.
Specifically, adjacent and homogeneous voxel data are grouped into several groups based on the adjacency relation.
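The adjacency-based grouping of step S03 can be sketched as a 6-neighbour connected-component pass over the thresholded volume. This is an illustrative implementation, not the patent's own; real CT volumes would warrant a vectorized labeling routine:

```python
import numpy as np
from collections import deque

def group_voxels(volume, threshold):
    """Group voxels at or above the classification threshold into
    connected components using 6-neighbour (face) adjacency."""
    mask = volume >= threshold
    labels = np.zeros(volume.shape, dtype=np.int32)
    count = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # already assigned to a group
        count += 1
        labels[seed] = count
        queue = deque([seed])
        while queue:                      # breadth-first flood fill
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                               (0,-1,0), (0,0,1), (0,0,-1)):
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = count
                    queue.append(n)
    return labels, count

vol = np.zeros((5, 5, 5))
vol[0, 0, 0] = vol[4, 4, 4] = 100   # two isolated high-density voxels
labels, n = group_voxels(vol, 50)
# n == 2: the voxels are not adjacent, so they form two separate groups
```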
S04, analyzing the independent tooth areas from the grouping result according to a preset rule.
Specifically, the preset rule includes:
a. By the anatomy of the tooth, the pixel values of an entire tooth (excluding non-osseous tissue such as the pulp) lie within a small threshold interval and can therefore be classified into one group; a complete tooth must lie within a single group of data.
b. Adjacent teeth may be grouped together in the same group due to overcrowding.
c. Teeth and alveolar bone tissue may be grouped together due to close density and contact.
Further, for the cases of b, c described above, separate dental regions need to be segmented and acquired. The specific method comprises the following steps:
1) Three-dimensional surface data of the above data is obtained by a surface reconstruction algorithm (such as marching cubes), and tooth numbers are assigned to it.
2) Matching (such as ICP matching algorithm or manual matching adjustment) is performed by using standard tooth three-dimensional surface data with corresponding numbers, and the standard model is transformed to the position of the reconstructed model through three-dimensional space transformation (including translation transformation, rotation transformation, affine transformation and the like) to obtain the expected model form.
3) And deleting pixels which are not in the matched model range, and acquiring independent tooth areas.
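The three-dimensional spatial transformation in step 2) can be sketched as follows. Estimating the rotation and translation is the job of a matching algorithm such as ICP, so fixed values are used here purely for illustration:

```python
import numpy as np

def rigid_transform(points, rotation, translation):
    """Apply a rigid (rotation + translation) transform to an N x 3
    point set, as when moving the standard tooth model onto the
    reconstructed model's position."""
    return points @ rotation.T + translation

pts = np.array([[1.0, 0.0, 0.0]])
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
moved = rigid_transform(pts, Rz, np.array([0.0, 0.0, 1.0]))
# A 90-degree rotation about z maps (1,0,0) to (0,1,0); translation lifts z to 1
```

The patent also mentions affine transformations; those would simply replace the orthonormal rotation matrix with a general linear map.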
S05, marking the independent tooth areas.
Compared with having doctors produce the labeled oral CT image data as in the previous embodiment, this method does not require labeling hundreds of images one by one. With an appropriate threshold, the automatic analysis can acquire most tooth regions automatically, and the few unrecognized teeth can be completed with a small amount of manual work.
S20, inputting the training data into a GAN network to train a generator network and a discriminator network in the GAN network until the discriminator cannot distinguish the marked oral cavity CT image data acquired by the generator network from the marked oral cavity CT image data marked by a professional doctor or based on a semi-automatic method.
In the trained GAN network, after original oral CT image data is fed to the generator network to produce labeled oral CT image data, the discriminator network receives that output and judges its authenticity (labeled data generated by the generator network counts as fake; labeled data produced by a professional doctor or by the semi-automatic method counts as real). Once the discriminator network can no longer tell the two apart, the generator network at that point can serve as the first generator. In the embodiment of the invention the discriminator network and the generator network have similar structures.
S30, taking the trained generator network as a first generator.
The advantage of using the first generator to acquire labeled oral CT image data is clear. One set of original oral CT image data contains many slices, and each slice contains images of multiple teeth. Manually marking several hundred images is tedious, time-consuming work that demands a doctor with professional experience and anatomical knowledge. Even the semi-automatic method requires choosing suitable threshold data and some additional interactive processing to finish the labeling. Adopting the artificial-intelligence model, i.e. the first generator, completes accurate labeling intelligently and reduces the dependence on specialist doctors.
The embodiment of the invention thus provides a machine-learning-based method for three-dimensional tooth digitization that obtains labeled oral image data automatically through the first generator, costs less than professional oral 3D scanning equipment, and improves both the accuracy and the degree of automation of extracting tooth-model voxel data from CT images.
On the basis of obtaining the three-dimensional digital model of the teeth, the tooth orthodontic scheme can be obtained. Orthodontic or corrective treatment refers to the gradual adjustment of the relative position of teeth, alignment, and adjustment of bite states by a series of medical means to achieve the effects of alignment, improvement of functions, and beauty treatment. Orthodontic treatment often requires multiple stages, each of which solves one or more problems, gradually completing the therapeutic effect. The multiple phases of orthodontic treatment combine to form a complete treatment regimen. The embodiment of the invention provides an artificial intelligence-based orthodontic scheme automatic planning method, as shown in fig. 4, which comprises the following steps:
S201, acquiring marked oral cavity CT image data.
Specifically, the marked oral cavity CT image data can be obtained as in step S103, or can be produced by a specialist doctor or by a semi-automatic labeling method.
S202, inputting the marked oral cavity CT image data into a second generator to obtain an orthodontic scheme characterized in a coding form.
Specifically, the orthodontic scheme includes the following:
(1) The number of stages into which the orthodontic procedure is divided;
(2) The numbers of the teeth that move orthodontically at each stage (some teeth move, others do not);
(3) The orthodontic treatment measures performed at each stage (arch expansion, adduction, tooth extraction, midline alignment, fine-adjustment alignment, etc.); in particular, a stage may involve multiple measures.
In fact, an orthodontic scheme may be divided into multiple stages; each stage may perform multiple operations, and each operation may involve one or more teeth. The embodiment of the invention represents the operation applied to each tooth at each stage in coded form, so the whole orthodontic treatment scheme can be represented by one coding sequence. Typically, a single jaw has 16 teeth, which can be represented by 16 digits. For ease of description, the teeth are numbered 1 to 16 from left to right below.
The following is described by way of example.
Orthodontic treatment can be performed on the upper and lower teeth simultaneously, or on one arch only. A tooth that does not move is coded 0; a tooth that moves is coded with the operation code corresponding to that movement.
An example for the upper teeth follows:
as shown in fig. 5 (a), the original state before orthodontics. As shown in fig. 5 (b), the operation of aligning teeth along the arch: the operation code for aligning the anterior teeth to the midline is 0000001111100000. With 16 teeth, a digit of 0 means no operation on that tooth and 1 means alignment along the arch (midline alignment), so this code indicates that teeth 7, 8, 9, 10 and 11 are aligned simultaneously. As shown in fig. 5 (c), the second stage pushes the molars backward; the corresponding operation code is 0660000000000660, where the digit 6 denotes molar distalization (pushing molars back), applied to teeth 2, 3, 14 and 15. As shown in fig. 5 (d), a combined adjustment-and-alignment stage in which multiple operations occur simultaneously; its operation code is 0062220000222770. Because the arch shape changes after the molars are pushed back, adjacent teeth 4, 5, 6, 11, 12 and 13 must be aligned by moving toward the molars (operation 2); the molars that have not yet retracted into place need continued distalization (operation 6); and the teeth deviating toward the cheek side need inward alignment of the posterior teeth (operation 7). As shown in fig. 5 (e), an anterior adduction operation; its code is 0000088888800000: after the previous stage's adjustment, the anterior teeth need adduction alignment (operation 8). As shown in fig. 5 (f), an overall fine-alignment operation; its code is 0009999999999990. The results of the previous stages have approached the target effect.
Fine tuning is performed at this stage to achieve the target tooth arrangement effect. The trimming operation is denoted by 9.
An example for the lower teeth follows:
as shown in fig. 6 (a), the original state before orthodontics. As shown in fig. 6 (b), an anterior-tooth adduction alignment operation, corresponding to code 0000088888800000; as before, adduction alignment of the anterior teeth corresponds to operation 8. As shown in fig. 6 (c), a posterior-tooth adduction alignment operation, corresponding to code 0077700000077700; as before, adduction alignment of the posterior teeth corresponds to operation 7. As shown in fig. 6 (d), a push-molar-back-and-align operation, corresponding to code 0630000000000000: tooth 3 is blocked by tooth 2, so molar 2 must be pushed backward while tooth 3 is separately twisted into alignment (operation 3). As shown in fig. 6 (e), an alignment operation along the dental arch, corresponding to code 0000200000000000. As shown in fig. 6 (f), overall fine-tuning alignment, corresponding to code 0009999999999000. With the teeth in position, this final step fine-tunes the alignment to achieve the desired goal.
Finally, the output coding sequence is as follows (left column: upper teeth; right column: lower teeth):
0000001111100000 0000088888800000
0660000000000660 0077700000077700
0062220000222770 0630000000000000
0000088888800000 0000200000000000
0009999999999990 0009999999999000
The upper-tooth and lower-tooth codes combine to form the overall scheme. When one arch does not move at a given stage, its code may be set to 0000000000000000.
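As a concrete illustration, the 16-digit stage code described above can be built from a mapping of tooth numbers to operation codes. This is a minimal Python sketch; the operation codes come from the examples above, while the function name `encode_stage` and its interface are illustrative assumptions, not the patent's implementation:

```python
def encode_stage(operations, n_teeth=16):
    """Build a 16-digit stage code from a mapping of tooth number (1-16)
    to operation code (1-9); unlisted teeth stay 0 (no movement)."""
    digits = ["0"] * n_teeth
    for tooth, op in operations.items():
        if not 1 <= tooth <= n_teeth:
            raise ValueError(f"tooth {tooth} out of range")
        digits[tooth - 1] = str(op)
    return "".join(digits)

# Stage 1 of the upper-teeth example: align teeth 7-11 along the arch (operation 1)
stage1 = encode_stage({t: 1 for t in range(7, 12)})
print(stage1)  # -> 0000001111100000

# Stage 2: push molars 2, 3, 14 and 15 backward (operation 6)
stage2 = encode_stage({2: 6, 3: 6, 14: 6, 15: 6})
print(stage2)  # -> 0660000000000660
```

A whole treatment scheme is then simply a list of such stage codes, one pair (upper, lower) per stage.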
In the embodiment of the invention, three-dimensional tooth digital information is used as input, so that an orthodontic scheme represented by a coding sequence can be automatically obtained.
Specifically, in the embodiment of the invention, a preset machine learning model is trained to obtain the second generator. The training set of the second generator comprises two parts: the first part is pre-orthodontic marked oral cavity CT image data, and the second part is the corresponding orthodontic scheme expressed in coded form. During training, the model parameters of the preset machine learning model are adjusted until a reasonable coded orthodontic scheme can be output for any pre-orthodontic marked oral CT image data.
The learning model may generally be configured to include:
an input layer, x;
any number of hidden layers, where each hidden layer has its own model parameters (possibly several), each parameter applies a linear or nonlinear transformation to the input data to produce a result, and each hidden layer receives the result of the previous hidden layer and, through its own operation, passes its result on to the next;
an output layer;
and a set of weights and biases (W and b) between each pair of adjacent layers;
as shown in the neural network structure of fig. 7. The weights W and biases b determine the output; the process of fine-tuning the weights and biases based on input data is called the training process of the neural network, and it yields the optimal weights and biases for the network.
The training process of the neural network model in this embodiment may be implemented with existing machine learning algorithms, including but not limited to convolutional neural networks, recurrent neural networks, and logistic regression networks.
Specifically, the preset machine learning model in the embodiment of the present invention may be a neural network machine learning model comprising two convolutional layers, two pooling layers, two fully-connected layers, and one output layer.
Specifically, the convolution layer can carry out convolution processing on the input orthodontic input training data to realize feature extraction.
Specifically, the pooling layer may downsample the output of the previous layer, i.e., return the maximum value within each sampling window as the downsampled output. This both reduces computational complexity and compresses the features, extracting the main ones.
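The window-maximum downsampling described above can be sketched in a few lines of NumPy. The 2×2 window size is an assumption for illustration; this is a generic max-pooling sketch, not the patent's implementation:

```python
import numpy as np

def max_pool2d(x, k=2):
    """Downsample a 2-D feature map by taking the maximum in each k x k window."""
    h, w = x.shape
    h, w = h - h % k, w - w % k          # drop ragged edges that don't fill a window
    x = x[:h, :w].reshape(h // k, k, w // k, k)
    return x.max(axis=(1, 3))            # max over each window's rows and columns

fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 5],
                 [0, 1, 9, 2],
                 [3, 2, 4, 6]], dtype=float)
print(max_pool2d(fmap))
# -> [[4. 5.]
#     [3. 9.]]
```

Each output value is the maximum of one 2×2 window, so the feature map shrinks by a factor of k in each dimension while the strongest responses survive.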
Specifically, the fully-connected layer serves as a connection layer between the nodes of adjacent layers, establishing connections between the node data of the upper and lower layers, and sends the output value to a classifier (such as a softmax classifier).
In the preset machine learning model, each layer's output is a linear function of the previous layer's input; since in practice data is not always linearly separable, a nonlinear factor can be introduced by adding an activation function, i.e., adding a linear correction (rectification) layer.
Specifically, the output layer may use a Softmax function to output orthodontic output training data; the Softmax function acts as a nonlinear classifier, and classifier training is performed on the orthodontic input training data. Specifically, a probability value that the orthodontic input training data matches the orthodontic output training data may be determined.
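The Softmax mapping from raw output-layer scores to matching probabilities can be sketched as follows. The logit values are hypothetical; this is an illustrative sketch of the function, not the trained model:

```python
import numpy as np

def softmax(logits):
    """Convert raw output-layer scores into a probability distribution."""
    z = logits - np.max(logits)   # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical scores for three candidate operation codes
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs)  # probabilities sum to 1; the largest logit gets the largest probability
```

The max-shift leaves the result unchanged mathematically but prevents overflow in `np.exp` for large logits.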
In addition, it should be noted that the machine learning model in the embodiment of the present invention is not limited to the neural network machine learning model, and in practical application, other machine learning models, such as decision tree machine learning models, may also be included, and the embodiment of the present invention is not limited to the above.
In a specific embodiment, the preset machine learning model may be configured to include:
a first convolutional layer; a first pooling layer connected to the first convolutional layer; a second convolutional layer connected to the first pooling layer; a second pooling layer connected to the second convolutional layer; a first fully-connected layer connected to the second pooling layer; a second fully-connected layer connected to the first fully-connected layer; a linear correction layer connected to the first fully-connected layer; and an output layer connected to the second fully-connected layer.
In the preset machine learning model, each layer of output is a linear function of the input of the previous layer, and considering that data is not always linearly separable in practical application, a nonlinear factor can be introduced by adding an activation function.
In addition, it should be noted that the foregoing is merely an example of the preset machine learning model used for training according to the present invention; in practical application, more or fewer layers may be included depending on the application requirements.
In a preferred embodiment, the first generator and the second generator may be used in combination, and the combination result as a whole may be used to implement a technical scheme that takes the original oral CT image data as input and takes the encoded orthodontic scheme as output.
The automatic orthodontic scheme planning method based on the artificial intelligence provided by the embodiment of the invention can quickly and automatically obtain an orthodontic scheme by means of the second generator, and the orthodontic scheme is objective and standard and is not influenced by subjective factors and external factors.
In implementing an orthodontic scheme, the primary goal of treatment is the intended movement of the teeth. The movement of a tooth can be represented by its spatial transformation relative to the previous stage (mathematically, a three-dimensional spatial transformation matrix). Each stage accomplishes one or more therapeutic purposes, such as closing gaps between teeth, expanding the arch to obtain space, or pushing teeth backward to create space.
From the description of the coded orthodontic scheme above, it can be seen that the orthodontic treatment process is long and can be decomposed into multiple stages, each of which applies standard operations to several teeth; specific force-application measures move the teeth to achieve the corrective effect of that stage. For more accurate treatment planning, representing the corrective target of each stage quantitatively, or in a three-dimensional visual form, helps the doctor judge whether the orthodontic scheme is reasonable and helps the patient form a clear picture of the expected changes to the teeth. Based on this, the embodiment of the invention further provides a method for predicting the therapeutic effect of an orthodontic scheme, which comprises the following steps:
S301, obtaining a tooth three-dimensional digital model.
S303, obtaining the orthodontics scheme represented in a coding form.
S305, inputting the tooth three-dimensional digital model and the orthodontic scheme represented in the coding form into a third generator so as to obtain a predicted result of the orthodontic scheme, wherein the predicted result is displayed in an animation form.
The traditional method acquires a digital model of the teeth by three-dimensional scanning, obtains individual crown models by digital segmentation, displays the three-dimensional arrangement of the crowns (the dental model) with a three-dimensional visualization method, and interactively moves the positions of target crowns (including translation and torsion) to obtain a subjectively predicted (or expected) target arrangement. The CT-based three-dimensional tooth model (including crown and root) in the embodiment of the invention, combined with an intelligent algorithm, can achieve the same effect.
Specifically, an embodiment of the present invention provides a training method of a third generator, where the method includes:
S100, obtaining training data, wherein the training data comprise a tooth three-dimensional digital model, the corresponding coded orthodontic scheme, the movement data of each stage of the scheme, and the time required for each stage to complete.
Specifically, the tooth three-dimensional digital model can be obtained based on the method in steps S101-S105 in the embodiment of the present invention, and also can be a crown model in a traditional three-dimensional scanning manner.
After the crown three-dimensional model is obtained, tooth arrangement is performed using the traditional method to obtain the movement data of each tooth at each stage; the time required for each stage to complete is obtained from empirical methods.
S200, training a preset neural network according to the training data to obtain a third generator, wherein the third generator takes a tooth three-dimensional digital model and a corresponding orthodontic scheme in a coding form as input, and takes movement data of each stage of the orthodontic scheme and time required by completion of each stage as output.
Preferably, the third generator may further draw an animation based on movement data of each stage of the orthodontic scheme and a time required for completion of each stage.
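As a rough sketch of how such an animation could be derived from per-stage movement data and durations: the simplest scheme linearly interpolates each tooth's translation over the stage duration. A full implementation would also interpolate rotations (e.g. via quaternions); all names and numeric values below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def stage_frames(start_pos, stage_translation, stage_weeks, frames_per_week=2):
    """Yield intermediate tooth positions for one orthodontic stage.

    start_pos:         (3,) starting position of a tooth
    stage_translation: (3,) total translation prescribed for this stage
    stage_weeks:       time the stage is expected to take
    """
    n = int(stage_weeks * frames_per_week)
    for i in range(1, n + 1):
        t = i / n                          # fraction of the stage completed
        yield start_pos + t * stage_translation

start = np.zeros(3)
move = np.array([1.5, 0.0, -0.5])          # mm, hypothetical stage target
frames = list(stage_frames(start, move, stage_weeks=4))
print(len(frames))                         # 8 frames; the last frame reaches the stage target
```

Rendering one such frame sequence per tooth, stage after stage, gives the predicted-treatment animation.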
In a preferred embodiment, the first generator, the second generator and the third generator may be used in combination, and the combination result as a whole may be used to implement a technical scheme that the original oral CT image data is taken as input, and the prediction result is taken as output. The second generator and the third generator may be trained using the same or different neural networks.
Compared with the prior art, the treatment-effect prediction method for an orthodontic scheme provided by the embodiment of the invention can obtain the predicted effect of an orthodontic scheme quickly, conveniently and vividly, reducing the working difficulty of practitioners, significantly lightening the burden on doctors, and giving patients an intuitive visual understanding of the proposed orthodontic scheme, making it easier for them to accept it.
In order to implement an orthodontic scheme, a corresponding appliance must be manufactured according to it. Specifically, the embodiment of the invention can be used to manufacture the bracket-free appliance required for invisible correction. The manufacturing method, shown in fig. 8, comprises the following steps:
S401, acquiring marked oral cavity CT image data and obtaining a tooth three-dimensional digital model based on the marked oral cavity CT image data.
S403, acquiring an orthodontic scheme according to the marked oral cavity CT image data.
S405, according to the orthodontic scheme, combining the tooth three-dimensional digital model obtained by labeling the oral cavity CT image data to obtain a printable dentition model.
Specifically, based on the manufacturing principle of the bracket-free appliance, the animation data corresponding to the orthodontic scheme can be integrated and output as a model, forming a dentition model suitable for 3D printing. The bracket-free appliance is then manufactured by hot-press film forming of a polymer material.
Specifically, the labeling oral cavity CT image data, the tooth three-dimensional digital model obtained based on the labeling oral cavity CT image data, the orthodontic scheme and the animation data of the orthodontic scheme can be obtained by using the method provided by the embodiment of the present invention.
S407, manufacturing an appliance based on the dentition model.
The die-making and appliance-making processes can be combined into one step: the appliance shape is printed directly, eliminating the film-pressing step and further improving the manufacturing efficiency of the appliance.
In a feasible implementation of the invention, if the marked oral cavity CT image data is sufficiently clear and the error between the obtained tooth three-dimensional digital model and the real crown surface meets requirements, the model can be used directly for printable dentition modeling. The acceptable error may depend on the specific orthodontic scheme: different schemes impose different requirements on the error between the model crown surface and the real crown surface.
The marked oral CT image data may contain noise such as soft tissue and may not be sufficiently clear. The resulting printable dentition model can therefore also be evaluated by a specialist to determine whether it can be used to manufacture an appliance.
In another possible embodiment of the present invention, if a more accurate representation of the crown outer-surface morphology is needed, a crown outer-surface model can be obtained by three-dimensional scanning. Fabrication of a bracket-free appliance places high demands on the conformity of the crown outer surface, and post-scan model processing is typically required. The crown outer-surface model is obtained by intraoral scanning or by rescanning an extraoral plaster model of the teeth. The scanned crown model differs only minimally in morphology from the crown part of the tooth three-dimensional digital model obtained in the embodiment of the invention; in some cases the model's crown may be slightly smaller than the real crown due to threshold selection, or the scan may differ locally from the digital model due to scanning errors.
Further, in the embodiment of the invention, the CT-based tooth three-dimensional digital model and the model obtained by three-dimensional scanning are two representations in three-dimensional space with different coordinate systems, so they do not initially coincide. A common three-dimensional spatial data matching algorithm (such as ICP) can register the scanned crown model to the crown position in the tooth three-dimensional digital model so that they coincide.
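With known point correspondences, a single ICP-style alignment step reduces to the rigid Kabsch/Procrustes problem solvable in closed form via SVD; a full ICP would re-estimate correspondences and iterate. A minimal NumPy sketch, where the point sets are synthetic stand-ins for CT and scanned crown points:

```python
import numpy as np

def rigid_align(src, dst):
    """Find rotation R and translation t minimizing ||R src_i + t - dst_i||
    for corresponding point sets src, dst of shape (n, 3) (Kabsch algorithm)."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Recover a known rotation about z and a translation
np.random.seed(0)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([2.0, -1.0, 0.5])
crown_ct = np.random.rand(20, 3)                # stand-in for CT crown points
crown_scan = crown_ct @ R_true.T + t_true       # same points in the scanner frame
R, t = rigid_align(crown_ct, crown_scan)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # -> True True
```

In practice ICP alternates this closed-form step with nearest-neighbor correspondence search until the alignment converges.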
Furthermore, the methods of the invention can be freely combined to achieve automatic orthodontics. Prior-art workflows from medical imaging to treatment planning generally depend heavily on doctors, and their degree of automation is difficult to increase. The embodiment of the invention uses machine learning to obtain, through big-data training, an intelligent computational model for orthodontic treatment, thereby replacing or partially replacing the doctor's judgment and decision-making process. Compared with the prior art, the invention is less susceptible to a doctor's personal subjectivity and can also improve diagnosis and treatment efficiency.
According to the invention, the whole diagnosis-and-treatment process is decomposed into several independent computational steps; the first, second and third generators of the intelligent agent can each be trained by machine learning based on deep learning or neural networks, decoupling the medical links required for diagnosis and treatment, reducing dependence on raw data, and improving the accuracy of each step.
The embodiment of the invention also discloses an orthodontic prediction device based on machine learning, which is shown in fig. 9 and comprises:
the original oral CT image data acquiring module 501 is configured to acquire original oral CT image data;
The marked oral CT image data obtaining module 502 is configured to input the original oral CT image data into a pre-trained first generator to obtain marked oral CT image data, in which tooth areas are circled in marked form on each frame of image, each circled area is marked with the corresponding tooth number, and non-tooth areas are set to 0;
an orthodontic scheme obtaining module 503, configured to input the labeled oral CT image data into a second generator to obtain an orthodontic scheme characterized in a coding form;
a prediction module 504, configured to input the tooth three-dimensional digital model and the orthodontic scheme represented in the encoded form into a third generator so as to obtain a predicted result of the orthodontic scheme.
Specifically, the method further comprises the following steps:
a first generator training module for training a first generator, comprising:
the training data acquisition unit is used for acquiring training data, wherein the training data comprise original oral cavity CT image data of a pre-stored patient and marked oral cavity CT image data corresponding to the original oral cavity CT image data;
the training unit is used for inputting the training data into the GAN network to train the generator network and the discriminator network in the GAN network until the discriminator cannot distinguish the marked oral cavity CT image data acquired by the generator network from the marked oral cavity CT image data marked by a professional doctor or based on a semi-automatic method.
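The adversarial criterion the training unit optimizes can be sketched through the two binary cross-entropy losses minimized by the discriminator and the generator. The score values below are hypothetical discriminator outputs, not from a trained network; this illustrates only the objective, not the full GAN training loop:

```python
import numpy as np

def bce(scores, labels):
    """Binary cross-entropy between discriminator scores in (0,1) and 0/1 labels."""
    scores = np.clip(scores, 1e-7, 1 - 1e-7)   # avoid log(0)
    return -np.mean(labels * np.log(scores) + (1 - labels) * np.log(1 - scores))

d_real = np.array([0.9, 0.8])   # discriminator scores on doctor-labeled data
d_fake = np.array([0.2, 0.1])   # discriminator scores on generator-produced labelings

# The discriminator tries to output 1 on real labelings and 0 on generated ones
d_loss = bce(d_real, np.ones(2)) + bce(d_fake, np.zeros(2))
# The generator tries to make the discriminator output 1 on its labelings
g_loss = bce(d_fake, np.ones(2))
print(d_loss, g_loss)
# Training stops when the discriminator can no longer tell the two apart,
# i.e. its scores on real and generated labelings converge toward 0.5
```

Each training step would backpropagate `d_loss` into the discriminator and `g_loss` into the generator, alternating until the stopping condition above is met.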
Further, the method further comprises the following steps:
the marked oral cavity CT image data acquisition module is used for acquiring marked oral cavity CT image data and comprises:
the voxel data acquisition unit of the three-dimensional coordinate system is used for obtaining a plurality of slices based on the original oral CT image data so as to obtain voxel data of the three-dimensional coordinate system;
the classification unit is used for acquiring a classification threshold value, and the classification threshold value is used for classifying the voxel data;
a grouping unit for grouping the voxel data based on a positional relationship and the classification threshold; grouping the voxel data based on the positional relationship and the classification threshold includes grouping adjacent and similar voxel data into groups based on an adjacent relationship;
the analysis unit is used for analyzing the independent tooth areas from the grouping result according to a preset rule;
and the marking unit is used for marking the independent tooth areas.
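The threshold-and-group step performed by the classification and grouping units can be sketched as 6-connected component labeling over binarized voxels. This is a pure-Python flood fill on a toy volume; the threshold value and array layout are assumptions for illustration:

```python
from collections import deque

def group_voxels(volume, threshold):
    """Group voxels whose value is >= threshold into 6-connected components.
    `volume` is a nested list [z][y][x]; returns a list of voxel-coordinate groups."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    seen, groups = set(), []
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if (z, y, x) in seen or volume[z][y][x] < threshold:
                    continue
                comp, queue = [], deque([(z, y, x)])
                seen.add((z, y, x))
                while queue:                       # breadth-first flood fill
                    cz, cy, cx = queue.popleft()
                    comp.append((cz, cy, cx))
                    for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                                       (0,-1,0), (0,0,1), (0,0,-1)):
                        n = (cz + dz, cy + dy, cx + dx)
                        if (0 <= n[0] < nz and 0 <= n[1] < ny and 0 <= n[2] < nx
                                and n not in seen
                                and volume[n[0]][n[1]][n[2]] >= threshold):
                            seen.add(n)
                            queue.append(n)
                groups.append(comp)
    return groups

# Two separated high-density blobs in a tiny 1 x 3 x 5 volume
vol = [[[900, 900, 0, 800, 850],
        [0,   0,   0, 0,   0  ],
        [0,   0,   0, 0,   0  ]]]
print([len(g) for g in group_voxels(vol, threshold=500)])  # -> [2, 2]
```

The analysis unit would then apply the preset rules (size, shape, position) to decide which of these groups are individual teeth.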
The apparatus embodiment has the same inventive concept as the method embodiment.
The division of the modules/units in the present invention is merely a logic function division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Some or all of the modules/units may be selected according to actual needs to achieve the purpose of implementing the solution of the present invention.
In addition, each module/unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present invention; it should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are also within the scope of the present invention.

Claims (5)

1. An orthodontic prediction method based on machine learning, comprising:
acquiring original oral CT image data;
inputting the original oral cavity CT image data into a pre-trained first generator to obtain marked oral cavity CT image data, wherein tooth areas are circled in marked form on each frame of image in the marked oral cavity CT image data, each circled area is marked with its corresponding tooth number, and non-tooth areas are set to 0;
inputting the marked oral cavity CT image data into a second generator to obtain an orthodontic scheme characterized in a coding form;
Inputting the tooth three-dimensional digital model and the orthodontic scheme characterized in a coding form into a third generator so as to obtain a predicted result of the orthodontic scheme;
the training method of the first generator comprises the following steps:
acquiring training data, wherein the training data comprises original oral cavity CT image data of a pre-stored patient and labeled oral cavity CT image data corresponding to the original oral cavity CT image data;
inputting the training data into a GAN network to train a generator network and a discriminator network in the GAN network until the discriminator cannot distinguish the marked oral CT image data acquired by the generator network from the marked oral CT image data marked by a professional doctor or based on a semi-automatic method;
taking the trained generator network as a first generator;
the labeling oral cavity CT image data in the training data is obtained through a semi-automatic labeling method, and the semi-automatic labeling method comprises the following steps: obtaining a plurality of slices based on the original oral CT image data, and further obtaining voxel data of a three-dimensional coordinate system;
acquiring a classification threshold value, wherein the classification threshold value is used for classifying voxel data;
grouping the voxel data based on a positional relationship and the classification threshold; grouping the voxel data based on the positional relationship and the classification threshold includes grouping adjacent and similar voxel data into groups based on an adjacent relationship;
Analyzing independent tooth areas from the grouping result according to a preset rule;
the individual tooth areas are marked.
2. The method according to claim 1, characterized in that:
training using a preset machine learning model to obtain a second generator; the method comprises the following steps that a machine learning model is preset, wherein the machine learning model comprises a neural network machine learning model with two convolution layers, two pooling layers, two full-connection layers and one output layer;
the convolution layer carries out convolution processing on the input orthodontic input training data to realize feature extraction; the pooling layer performs downsampling operation on the output of the upper layer, namely, returns the maximum value in the sampling window to serve as downsampled output; the full-connection layer is used as a connection layer between the nodes of the upper layer and the lower layer, the connection relation is established between the node data obtained by the upper layer and the lower layer, and the output value is sent to the classifier.
3. The method according to claim 2, characterized in that:
the output layer adopts a softmax function to output orthodontic output training data, a nonlinear classifier is included in the softmax function, classifier training is carried out on orthodontic input training data, and therefore probability values of matching of the orthodontic input training data and the orthodontic output training data are determined.
4. The method according to claim 1, characterized in that:
the inputting the three-dimensional digital model of the tooth and the representation of the orthodontic scheme in encoded form into a third generator so as to obtain a predicted result of the orthodontic scheme comprises:
animation is drawn based on movement data for each stage of the orthodontic scheme and the time required for each stage to complete.
5. An orthodontic prediction device based on machine learning, comprising:
the original oral cavity CT image data acquisition module is used for acquiring original oral cavity CT image data;
the marked oral cavity CT image data acquisition module is used for inputting the original oral cavity CT image data into a pre-trained first generator to obtain marked oral cavity CT image data, wherein tooth areas are circled in marked form on each frame of image, each circled area is marked with its corresponding tooth number, and non-tooth areas are set to 0;
the orthodontic scheme acquisition module is used for inputting the marked oral CT image data into a second generator to obtain an orthodontic scheme characterized in a coding form;
the prediction module is used for inputting the tooth three-dimensional digital model and the orthodontics scheme represented in the coding form into a third generator so as to obtain a prediction result of the orthodontics scheme;
The training method of the first generator comprises the following steps:
acquiring training data, wherein the training data comprises original oral cavity CT image data of a pre-stored patient and labeled oral cavity CT image data corresponding to the original oral cavity CT image data;
inputting the training data into a GAN network to train a generator network and a discriminator network in the GAN network until the discriminator cannot distinguish the marked oral CT image data acquired by the generator network from the marked oral CT image data marked by a professional doctor or based on a semi-automatic method;
taking the trained generator network as a first generator;
the labeled oral cavity CT image data in the training data are obtained by a semi-automatic labeling method, and the semi-automatic labeling method comprises the following steps: obtaining a plurality of slices from the original oral cavity CT image data, thereby obtaining voxel data in a three-dimensional coordinate system;
acquiring a classification threshold value, wherein the classification threshold value is used for classifying the voxel data;
grouping the voxel data based on positional relationship and the classification threshold, wherein the grouping includes collecting adjacent voxels with similar values into groups based on their adjacency;
analyzing independent tooth areas from the grouping result according to a preset rule;
marking the independent tooth areas.
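The three-generator pipeline of the device above can be sketched as a simple chain. The function name and interfaces below are assumptions for illustration only; the claim does not specify them.

```python
def predict_orthodontics(raw_ct, tooth_model_3d, first_gen, second_gen, third_gen):
    """Hypothetical chaining of the device's three trained generators."""
    labeled_ct = first_gen(raw_ct)                 # label tooth areas per frame
    scheme_code = second_gen(labeled_ct)           # orthodontic scheme in coded form
    return third_gen(tooth_model_3d, scheme_code)  # predicted treatment outcome
```

Any callables with compatible inputs and outputs can play the three generator roles, which is what lets each generator be trained and replaced independently.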
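A minimal sketch of the adversarial training described for the first generator, framed as a conditional GAN over (CT slice, labeling) pairs. The tiny network architectures, the PyTorch framing, and the patch-averaged discriminator score are all assumptions; the claim does not specify the network details.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Assumed toy architectures: G maps a CT slice to a label map,
# D judges a (CT, labeling) pair (2 input channels).
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def d_logits(ct, labels):
    # Average the discriminator's patch logits to one score per sample.
    return D(torch.cat([ct, labels], dim=1)).mean(dim=(2, 3))

def train_step(ct, expert_labels):
    real = torch.ones(ct.size(0), 1)
    fake_target = torch.zeros(ct.size(0), 1)
    # Discriminator step: expert-labeled pairs vs generator-labeled pairs.
    fake = G(ct).detach()
    d_loss = (bce(d_logits(ct, expert_labels), real)
              + bce(d_logits(ct, fake), fake_target))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: produce labelings the discriminator accepts as expert.
    g_loss = bce(d_logits(ct, G(ct)), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Training would repeat `train_step` over batches until the discriminator can no longer separate generated labelings from expert ones, at which point `G` serves as the first generator.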
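The semi-automatic labeling steps (threshold classification, grouping adjacent similar voxels, extracting independent tooth areas by a preset rule) can be sketched with connected-component labeling. The minimum-size rule standing in for the claim's "preset rule", and the function itself, are assumptions.

```python
import numpy as np
from scipy import ndimage

def semi_automatic_label(volume, threshold, min_voxels=50):
    """volume: 3D array of CT intensities (stacked slices)."""
    # Classify voxels against the threshold: candidate tooth tissue vs background.
    mask = volume >= threshold
    # Group adjacent above-threshold voxels (26-connectivity) into components.
    components, n = ndimage.label(mask, structure=np.ones((3, 3, 3)))
    labeled = np.zeros_like(components)
    tooth_id = 0
    for comp in range(1, n + 1):
        region = components == comp
        # Assumed preset rule: keep only components large enough to be a tooth.
        if region.sum() >= min_voxels:
            tooth_id += 1
            labeled[region] = tooth_id  # tooth number; non-tooth voxels stay 0
    return labeled
```

The output follows the labeling convention of the claim: each independent tooth area carries its own number and everything else is 0, so it can serve directly as a training target for the first generator.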
CN201811516625.8A 2018-12-12 2018-12-12 Orthodontic prediction method and device based on machine learning Active CN109712703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811516625.8A CN109712703B (en) 2018-12-12 2018-12-12 Orthodontic prediction method and device based on machine learning

Publications (2)

Publication Number Publication Date
CN109712703A CN109712703A (en) 2019-05-03
CN109712703B true CN109712703B (en) 2023-08-25

Family

ID=66255661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811516625.8A Active CN109712703B (en) 2018-12-12 2018-12-12 Orthodontic prediction method and device based on machine learning

Country Status (1)

Country Link
CN (1) CN109712703B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020133180A1 (en) * 2018-12-28 2020-07-02 上海牙典软件科技有限公司 Orthodontic method and apparatus based on artificial intelligence
CN110136830B (en) * 2019-05-20 2020-01-14 哈尔滨理工大学 Method for establishing dynamic correction torque prediction model of auxiliary arch for depression
CN111265317B (en) * 2020-02-10 2022-06-17 上海牙典医疗器械有限公司 Tooth orthodontic process prediction method
CN111798445B (en) * 2020-07-17 2023-10-31 北京大学口腔医院 Tooth image caries identification method and system based on convolutional neural network
CN113223010B (en) * 2021-04-22 2024-02-27 北京大学口腔医学院 Method and system for multi-tissue full-automatic segmentation of oral cavity image
CN114049350B (en) * 2021-12-15 2023-04-07 四川大学 Generation method, prediction method and device of alveolar bone contour prediction model
CN115796306B (en) * 2023-02-07 2023-04-18 四川大学 Training of permanent tooth maturity grading model and permanent tooth maturity grading method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1686058A (en) * 2005-04-28 2005-10-26 上海隐齿丽医学技术有限公司 Computer assisted hidden tooth abnormal correction system
CN101256627A (en) * 2008-01-25 2008-09-03 浙江工业大学 Method for analysis of picture distortion based on constant moment
CN107005720A (en) * 2014-08-08 2017-08-01 皇家飞利浦有限公司 Method and apparatus for encoding HDR image
CN107863149A (en) * 2017-11-22 2018-03-30 中山大学 A kind of intelligent dentist's system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ITPD20130010A1 (en) * 2013-01-23 2014-07-24 Amato Dott Aldo PROCEDURE FOR THE AESTHETIC ANALYSIS OF THE DENTAL INSTRUMENT IN THE SMILE AREA AND FOR THE SUPPORT FOR THE IDENTIFICATION OF DENTISTRY AND DENTAL TECHNICAL AESTHETIC TREATMENTS

Similar Documents

Publication Publication Date Title
CN109528323B (en) Orthodontic method and device based on artificial intelligence
CN109712703B (en) Orthodontic prediction method and device based on machine learning
US11232573B2 (en) Artificially intelligent systems to manage virtual dental models using dental images
US20200350059A1 (en) Method and system of teeth alignment based on simulating of crown and root movement
JP5671734B2 (en) Computer-aided creation of custom tooth setup using facial analysis
WO2020185527A1 (en) Foreign object identification and image augmentation and/or filtering for intraoral scanning
KR101590330B1 (en) Method for deriving shape information
CN105354426B (en) Smile designer
EP3859671A1 (en) Segmentation device and method of generating learning model
CN113223010B (en) Method and system for multi-tissue full-automatic segmentation of oral cavity image
KR102320857B1 (en) Method for orthodontic treatment and apparatus thereof
Singi et al. Extended arm of precision in prosthodontics: Artificial intelligence
Jang et al. Fully automatic integration of dental CBCT images and full-arch intraoral impressions with stitching error correction via individual tooth segmentation and identification
Deleat-Besson et al. Automatic segmentation of dental root canal and merging with crown shape
CN111275808B (en) Method and device for establishing tooth orthodontic model
CN112201349A (en) Orthodontic operation scheme generation system based on artificial intelligence
CN112419476A (en) Method and system for creating three-dimensional virtual image of dental patient
KR102448169B1 (en) Method and apparatus for predicting orthodontic treatment result based on deep learning
JP7269587B2 (en) segmentation device
WO2020133180A1 (en) Orthodontic method and apparatus based on artificial intelligence
Sornam Artificial Intelligence in Orthodontics-An exposition
Orlowska et al. Virtual tooth extraction from cone beam computed tomography scans
Ma et al. Accurate 3D Prediction of Missing Teeth in Diverse Patterns for Precise Dental Implant Planning
Wang et al. Influence of intraoral scanning coverage on the accuracy of digital implant impressions–An in vitro study
KR20230129859A (en) Orthodontic automatic diagnosis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant