CN118297968A - Method and device for dividing three-dimensional dental model, electronic equipment and storage medium

Info

Publication number
CN118297968A
Authority
CN
China
Prior art keywords
dimensional
tooth
dental
model
dimensional dental
Prior art date
Legal status
Pending
Application number
CN202410403302.7A
Other languages
Chinese (zh)
Inventor
王嘉磊
江腾飞
邱凯佳
张健
Current Assignee
Shining 3D Technology Co Ltd
Original Assignee
Shining 3D Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shining 3D Technology Co Ltd
Priority to CN202410403302.7A
Publication of CN118297968A

Landscapes

  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The application relates to the technical field of three-dimensional digitization, and provides a method and a device for segmenting a three-dimensional dental model, an electronic device and a storage medium. The method comprises: projecting the three-dimensional dental model to obtain a two-dimensional dental image; identifying the two-dimensional dental image to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted tooth label corresponding to each two-dimensional dental mask, where a two-dimensional dental mask is the mask region corresponding to a single tooth in the two-dimensional dental image; and segmenting the three-dimensional dental model, based on the correspondence between the two-dimensional dental image and the three-dimensional dental model and the predicted tooth labels corresponding to the two-dimensional dental masks, to obtain three-dimensional single-tooth models, where a three-dimensional single-tooth model is the three-dimensional digital model corresponding to a single tooth. The method is highly robust, is not disturbed by the variation and wear of individual tooth shapes or by the quality of the mesh surface, and achieves fully automatic, high-precision instance segmentation of single-tooth regions.

Description

Method and device for dividing three-dimensional dental model, electronic equipment and storage medium
Technical Field
The present application relates to the field of three-dimensional digitizing technology, and in particular, to a method and apparatus for segmenting a three-dimensional dental model, an electronic device, and a storage medium.
Background
Digital intraoral impressions are widely used in the field of orthodontics because they are efficient and safe. Orthodontists need to segment single-tooth regions from the digital impression data of the teeth in order to formulate personalized orthodontic plans, and in the fields of restoration and denture design, instance segmentation of single teeth can assist the design of crowns, dental braces and dentures.
In the related art, the intraoral-scan digital mesh is usually converted into point cloud data, and single-tooth instance segmentation is performed on the point cloud according to differential geometric features such as curvature and normals.
Such methods place certain requirements on the number of vertices and patches in the mesh, the shape of individual teeth and the quality of the mesh surface, and noise, surface defects, smoothing operations and the like can degrade the robustness of the algorithm.
Disclosure of Invention
To solve the above technical problems, the application provides a method, an apparatus, an electronic device and a storage medium for segmenting a three-dimensional dental model that are highly robust, are not disturbed by the variation and wear of individual tooth shapes or by the quality of the mesh surface, and achieve fully automatic, high-precision instance segmentation of single-tooth regions.
In a first aspect, an embodiment of the present application provides a method for segmenting a three-dimensional dental model, including: projecting the three-dimensional dental model to obtain a two-dimensional dental image; identifying the two-dimensional dental image to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted tooth label corresponding to each two-dimensional dental mask, where a two-dimensional dental mask is the mask region corresponding to a single tooth in the two-dimensional dental image; and segmenting the three-dimensional dental model, based on the correspondence between the two-dimensional dental image and the three-dimensional dental model and the predicted tooth labels corresponding to the two-dimensional dental masks, to obtain a three-dimensional single-tooth model, where the three-dimensional single-tooth model refers to the three-dimensional digital model corresponding to a single tooth.
In a second aspect, an embodiment of the present application provides a segmentation apparatus for a three-dimensional dental model, including: a two-dimensional dental image determining module, configured to project the three-dimensional dental model to obtain a two-dimensional dental image; a two-dimensional dental image recognition module, configured to identify the two-dimensional dental image to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted tooth label corresponding to each two-dimensional dental mask, where a two-dimensional dental mask is the mask region corresponding to a single tooth in the two-dimensional dental image; and a three-dimensional single-tooth model determining module, configured to segment the three-dimensional dental model, based on the correspondence between the two-dimensional dental image and the three-dimensional dental model and the predicted tooth labels corresponding to the two-dimensional dental masks, to obtain a three-dimensional single-tooth model and a target tooth label associated with it, where the three-dimensional single-tooth model refers to the three-dimensional digital model corresponding to a single tooth.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory storing a computer program and a processor implementing the steps of any one of the methods of the first aspect when the processor executes the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor implements the steps of the method of any of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program or instructions which, when executed by a processor, performs the steps of the method according to any of the first aspects described above.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
The embodiment of the application provides a method, an apparatus, an electronic device and a storage medium for segmenting a three-dimensional dental model. First, the three-dimensional dental model is projected to obtain a two-dimensional dental image; then, the two-dimensional dental image is identified to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted tooth label corresponding to each two-dimensional dental mask, where a two-dimensional dental mask is the mask region corresponding to a single tooth in the two-dimensional dental image; finally, the three-dimensional dental model is segmented, based on the correspondence between the two-dimensional dental image and the three-dimensional dental model and the predicted tooth labels corresponding to the two-dimensional dental masks, to obtain a three-dimensional single-tooth model, which refers to the three-dimensional digital model corresponding to a single tooth. Because mature two-dimensional deep learning techniques are used to imitate how the human eye perceives a three-dimensional object, and the digital impression mesh is recognized from multiple viewing angles with the results fused, the algorithm is highly robust, is not affected by the variation and wear of individual tooth shapes or disturbed by the quality of the mesh surface, and fully automatic, high-precision instance segmentation of single-tooth regions can be achieved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of a segmentation method of a three-dimensional dental model according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for segmenting a three-dimensional dental model according to an embodiment of the present application;
FIG. 3 is a flow chart of another method for segmenting a three-dimensional dental model provided by an embodiment of the present application;
FIG. 4 is a schematic view of a three-dimensional dental model according to an embodiment of the present application;
FIG. 5 is a schematic view of a three-dimensional dental model at 6 perspectives provided by an embodiment of the present application;
FIG. 6 is a schematic illustration of a two-dimensional dental mask identified in 6 two-dimensional dental images provided in an embodiment of the present application;
FIG. 7 is a schematic view of a structure including 13 three-dimensional single tooth models provided by an embodiment of the present application;
FIG. 8 is a schematic flow chart of three-dimensional single tooth model segmentation provided by an embodiment of the present application;
FIG. 9a is a schematic diagram of a three-dimensional single tooth model with burrs at boundaries provided by an embodiment of the present application;
FIG. 9b is a schematic illustration of a validation region and a non-validation region at a boundary of a three-dimensional single tooth model provided by an embodiment of the present application;
FIG. 9c is a schematic diagram of a validation region and a non-validation region at a boundary of an optimized three-dimensional single tooth model according to an embodiment of the present application;
Fig. 10 is a schematic structural view of a segmentation apparatus for a three-dimensional dental model according to an embodiment of the present application;
Fig. 11 is a schematic structural view of a segmentation apparatus for a three-dimensional dental model according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the application may be more clearly understood, the application is described in further detail below. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application; the application may, however, also be practiced in ways other than those described herein. It is apparent that the embodiments in the specification are only some, rather than all, of the embodiments of the application.
In the field of oral medical restoration, the introduction of digitization technology has brought profound changes. Creating a digital oral impression with an intraoral scanner is faster and more efficient than working with conventional physical models and plaster impressions, since there is no need to wait for a model to dry and cure. In addition, the digital impression can be used directly for virtual design and numerically controlled manufacturing, reducing manufacturing time and the need for repeated follow-up visits.
In the field of digital orthodontics, segmenting single-tooth regions from a three-dimensional dental model helps the orthodontist analyse the position, inclination and interrelation of each tooth accurately, establish an accurate reference frame and formulate more personalized correction plans. In the fields of restoration and denture design, instance segmentation of single teeth yields the position and shape characteristics of each tooth, so that a more precise digital impression can be created to assist the design of crowns, dental braces and dentures.
In the related art, the methods for segmenting single-tooth regions from a three-dimensional dental model mainly fall into the following three categories:
(1) Manual labeling
Based on digital mapping software tools, single-tooth regions are identified manually, directly on the digital impression, which requires a great deal of time and effort. It also requires the support of dedicated marking tools and software, and the learning cost is high. Compared with traditional work on plaster models, marking and editing on a three-dimensional digital impression is no small challenge for the designer, and it is difficult to achieve consistency and standardization because of subjective human factors.
(2) By interactive (semi-automatic) means:
The user selects a number of single-tooth marker points on the digital impression, and threshold segmentation and edge detection are performed based on geometric curvature and normal information. This approach relies on the robustness of the algorithm, places demands on the skill of the operator, and still consumes a certain amount of manual interaction and operation time.
(3) By means of fully automatic generation
The intraoral-scan mesh is usually converted into a point cloud, and single-tooth instance segmentation is performed on the point cloud with artificial intelligence and related techniques according to differential geometric features such as curvature and normals. Such methods place certain requirements on the number of vertices and patches in the mesh, the shape of individual teeth and the quality of the mesh surface, and noise, surface defects, smoothing operations and the like can degrade the robustness of the algorithm.
To solve at least one of the above technical problems, an embodiment of the present application provides a method, an apparatus, a device and a storage medium for segmenting a three-dimensional dental model. First, the three-dimensional dental model is projected to obtain a two-dimensional dental image; then, the two-dimensional dental image is identified to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted tooth label corresponding to each two-dimensional dental mask, where a two-dimensional dental mask is the mask region corresponding to a single tooth in the two-dimensional dental image; finally, the three-dimensional dental model is segmented, based on the correspondence between the two-dimensional dental image and the three-dimensional dental model and the predicted tooth labels corresponding to the two-dimensional dental masks, to obtain a three-dimensional single-tooth model, which refers to the three-dimensional digital model corresponding to a single tooth.
Because mature two-dimensional deep learning techniques are used to imitate how the human eye perceives a three-dimensional object, the digital impression mesh is recognized from multiple viewing angles and the results are fused, so that recognition errors at a few individual viewing angles do not affect the overall recognition accuracy. The algorithm is therefore highly robust, is not affected by the variation and wear of individual tooth shapes or disturbed by the quality of the mesh surface, and fully automatic, high-precision instance segmentation of single-tooth regions can be achieved.
The following describes in detail a method, an apparatus, a device and a storage medium for dividing a three-dimensional dental model according to an embodiment of the present application with reference to the accompanying drawings.
Fig. 1 is an application scenario schematic diagram of a segmentation method of a three-dimensional dental model according to an embodiment of the present application. It should be noted that fig. 1 is only an example of an application scenario where an embodiment of the present application may be applied, so as to help those skilled in the art understand the technical content of the present application, but it does not mean that the embodiment of the present application may not be applied to other devices, systems, environments, or scenarios.
As shown in fig. 1, the application scenario 100 of this embodiment may include a plurality of terminal devices 110, a network 120, a server 130, and a database 140. For example, the application scenario 100 may be adapted to implement the method of segmentation of a three-dimensional dental model according to any of the embodiments of the present application.
The terminal device 110 may be various electronic devices including a display screen and installed with various client applications including, but not limited to, smartphones, tablet computers, portable computers, desktop computers, and the like. The terminal device 110 may be installed, for example, with denture design applications, orthodontic solution design applications, digital oral scanning applications, and the like.
The above application installed in the terminal device 110 may be provided with a man-machine interaction interface, for example. The user can submit a tooth region segmentation request through the man-machine interaction interface. The three-dimensional dental model to be processed and the one or more three-dimensional single-tooth models after segmentation can be displayed in the man-machine interaction interface. Other information of the three-dimensional dental model can also be displayed in the man-machine interaction interface, for example: tooth number, tooth position number, number of missing teeth, location of missing teeth, etc.
According to an embodiment of the present application, the application scenario may further include, for example, a server that is communicatively connected to the terminal device 110 and is capable of performing tooth instance segmentation in response to the tooth region segmentation request sent by the terminal device 110.
It will be appreciated that the method of segmenting a three-dimensional dental model applied according to embodiments of the present application may be performed by the terminal device 110 or by a server 130 communicatively coupled to the terminal device 110. Accordingly, the segmentation apparatus of the three-dimensional dental model applied in the embodiment of the present application may be disposed in the terminal device 110 or disposed in the server 130 communicatively connected to the terminal device 110.
Network 120 may be a single network or a combination of at least two different networks. For example, network 120 may include, but is not limited to, one or a combination of a local area network, a wide area network, a public network, a private network, and the like. The network 120 may be a computer network such as the Internet and/or various telecommunications networks (e.g., 3G/4G/5G mobile communication networks, Wi-Fi, Bluetooth, ZigBee, etc.), to which embodiments of the application are not limited.
The server 130 may be a single server, or a group of servers, or a cloud server, with each server within the group of servers being connected via a wired or wireless network. A server farm may be centralized, such as a data center, or distributed. The server 130 may be local or remote. The server 130 may communicate with the terminal device 110 through a wired or wireless network. Embodiments of the present application are not limited to the hardware system and software system of server 130.
Database 140 may refer broadly to a device having a storage function. The database 140 is mainly used to store the various data utilized, generated and output by the terminal device 110 and the server 130 in operation. Database 140 may be local or remote. The database 140 may include various memories, such as random access memory (RAM) and read-only memory (ROM). The storage devices mentioned above are merely examples, and the storage devices that may be used by the system 100 are not limited in this regard. Embodiments of the present application are not limited to the hardware system and software system of database 140, which may be, for example, a relational database or a non-relational database.
Database 140 may be interconnected or in communication with server 130 or a portion thereof via network 120, or directly with server 130, or a combination thereof.
In some examples, database 140 may be a stand-alone device. In other examples, database 140 may also be integrated in at least one of terminal device 110 and server 130. For example, the database 140 may be provided on the terminal device 110 or on the server 130. For another example, the database 140 may be distributed, with one portion being provided on the terminal device 110 and another portion being provided on the server 130.
Fig. 2 is a flowchart of a method for segmenting a three-dimensional dental model according to an embodiment of the present application. The method is applicable to processing a three-dimensional dental model and may be performed by a segmentation apparatus for a three-dimensional dental model, which may be implemented in software and/or hardware; the method may be executed by the terminal device 110 or the server 130 in fig. 1.
As shown in fig. 2, the method for segmenting a three-dimensional dental model according to the embodiment of the present application mainly includes steps S101 to S103.
S101, projecting the three-dimensional dental model to obtain a two-dimensional dental image.
The three-dimensional dental model refers to a three-dimensional digital model that includes teeth and gums. It may be a three-dimensional model of the upper jaw (upper teeth and gums), a three-dimensional model of the lower jaw, or a three-dimensional model containing both. Further, the three-dimensional dental model may also be only part of an upper-jaw or lower-jaw three-dimensional model, for example a three-dimensional digital model of the left half of the upper jaw.
The three-dimensional dental model is composed of a number of mesh patches built from mesh vertices; a mesh patch is usually a triangular patch containing three mesh vertices. A high-precision three-dimensional dental digital model contains a large number of triangular patches, for example more than 100,000, or even more than 150,000.
The three-dimensional dental model is obtained by scanning the patient's oral cavity with a preset three-dimensional data scanning device, which may be a high-precision intraoral scanner or the like. The intraoral scanner scans the oral cavity directly with an optical scanning probe inserted into the mouth: based on the structured-light triangulation imaging principle, a digital projection system projects an active light pattern onto the teeth, gums, mucosa and the like in the oral cavity, the camera acquisition system captures the pattern, and three-dimensional reconstruction and stitching are performed by algorithmic processing to obtain the three-dimensional dental model. The data of the three-dimensional dental model are displayed on a computer or server through a display screen, so that a doctor can view the model conveniently.
The two-dimensional dental image may be an image obtained by projecting a three-dimensional dental model onto a single plane. The two-dimensional dental image includes dental regions and non-dental regions. Wherein the non-dental region includes a gingival region and a background region. The projection method may be orthogonal projection.
In some possible embodiments, a coordinate transformation between the three-dimensional dental model and the two-dimensional dental image is determined according to the projection viewing angle. The coordinates of each mesh vertex in the three-dimensional dental model are converted into the coordinates of the corresponding two-dimensional pixel according to this transformation; after all mesh vertices have been converted, the coordinates of the pixels of the two-dimensional dental image are obtained, and thus the two-dimensional dental image itself.
During the projection of the three-dimensional dental model into the two-dimensional dental image, the coordinate correspondence between each triangular patch in the three-dimensional dental model and the pixels in the two-dimensional dental image is recorded, so that the identified two-dimensional dental masks can later be mapped back into the three-dimensional dental model.
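For illustration only, the sketch below shows one way such an orthographic projection and vertex-to-pixel correspondence could be recorded for a single viewing angle; the array names (vertices, faces), the normalization into a fixed-size image and the use of a rotation matrix to encode the viewing angle are assumptions of the sketch, not details taken from the patent.

```python
# Minimal sketch: orthographic projection of a triangle mesh into a 2D image plane,
# recording the vertex -> pixel correspondence so that 2D masks can later be mapped
# back onto mesh faces. Names and conventions here are illustrative assumptions.
import numpy as np

def project_mesh(vertices, faces, view_rotation, image_size=512):
    """vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices;
    view_rotation: (3, 3) rotation matrix defining the projection viewing angle."""
    # Rotate the mesh into the camera frame and drop the depth axis (orthographic).
    cam = vertices @ view_rotation.T            # (V, 3)
    xy = cam[:, :2]
    # Normalise the projected coordinates into pixel space.
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    pix = (xy - lo) / (hi - lo + 1e-9) * (image_size - 1)
    pix = pix.astype(np.int32)                  # (V, 2): one pixel coordinate per vertex
    # Record the correspondence used later for back-projection: for every triangular
    # patch, the pixel coordinates of its three projected vertices.
    face_to_pixels = {f_idx: pix[faces[f_idx]] for f_idx in range(len(faces))}
    return pix, face_to_pixels
```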
S102, identifying the two-dimensional dental image to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted dental label corresponding to each two-dimensional dental mask.
A two-dimensional dental mask refers to the mask region corresponding to one tooth in the two-dimensional dental image. A tooth label can be understood as an identifier used to distinguish two-dimensional dental masks, for example label1, label2 and so on. A predicted tooth label is the label predicted for a mask when the two-dimensional dental image is identified.
Specifically, the trained deep learning model is used to identify the two-dimensional dental image, obtaining at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted tooth label corresponding to each two-dimensional dental mask. The two-dimensional dental image is input into the trained deep learning model, which identifies the image; the output of the trained deep learning model is a number of two-dimensional dental masks together with the predicted tooth label corresponding to each mask.
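As an illustration of this step, the sketch below runs an off-the-shelf instance-segmentation network on one projected image. The patent does not name a specific network, so torchvision's Mask R-CNN, the number of classes (33, assuming 32 tooth positions plus background) and the checkpoint file name are all assumptions of the sketch.

```python
# Sketch: running an instance-segmentation network on a two-dimensional dental image
# to obtain per-tooth masks and predicted tooth labels. Mask R-CNN is a stand-in only;
# the patent does not specify the network architecture or its weights.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None, num_classes=33)
model.load_state_dict(torch.load("tooth_seg_weights.pth"))  # hypothetical checkpoint
model.eval()

def identify_teeth(image_tensor, score_threshold=0.5):
    """image_tensor: (3, H, W) float tensor holding one two-dimensional dental image."""
    with torch.no_grad():
        out = model([image_tensor])[0]          # dict with 'masks', 'labels', 'scores'
    keep = out["scores"] > score_threshold
    masks = out["masks"][keep, 0] > 0.5         # (N, H, W) boolean two-dimensional masks
    labels = out["labels"][keep]                # predicted tooth label per mask
    return masks, labels
```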
S103, based on the corresponding relation between the two-dimensional dental image and the three-dimensional dental model and the predicted tooth label corresponding to the two-dimensional dental mask, dividing the three-dimensional dental model to obtain a three-dimensional single-tooth model, wherein the three-dimensional single-tooth model refers to a three-dimensional digital model corresponding to a single tooth.
Here, a three-dimensional single-tooth model refers to the three-dimensional digital model of a single tooth. Obtaining the three-dimensional single-tooth models by segmenting the three-dimensional dental model can be understood as making clear which tooth each patch in the three-dimensional dental model belongs to. In one embodiment, the entire three-dimensional dental model is displayed on the interactive interface, but the segmentation step has already been completed, defining which tooth each patch in the three-dimensional dental model belongs to. In one embodiment, the entire three-dimensional dental model is displayed on the interactive interface, and the user can choose to display a single three-dimensional single-tooth model by voice instruction or mouse click.
In addition, when the three-dimensional dental model is segmented based on the correspondence between the two-dimensional dental image and the three-dimensional dental model, the dental regions in the two-dimensional dental image are segmented correspondingly to obtain the three-dimensional single-tooth models. The part of the three-dimensional dental model that remains after the three-dimensional single-tooth models are removed may be called the non-dental model; it corresponds to the non-dental region in the two-dimensional dental image and may represent the patient's gums, mucosa and the like.
In one embodiment, the non-dental model may be left unprocessed and not segmented, as required by the user. In one embodiment, the non-dental model may be obtained by a Boolean operation between the three-dimensional dental model and all three-dimensional single-tooth models. In one embodiment, the non-dental model may be obtained by segmenting the three-dimensional dental model based on the correspondence between the two-dimensional dental image and the three-dimensional dental model. In one embodiment, the non-dental model is further segmented according to user requirements into a gingival model and a background model: the gingival region and the background region are first identified from the non-dental region of the two-dimensional dental image, and then segmented based on the correspondence between these regions and the three-dimensional dental model.
In one possible implementation, the triangular patch area corresponding to the two-dimensional dental mask is determined based on the correspondence between pixels in the two-dimensional dental image and triangular patches in the three-dimensional dental model, and the boundary of the triangular patch area is then segmented to obtain the three-dimensional single-tooth model.
The method for determining the triangular patch area corresponding to the two-dimensional dental mask is mainly composed of the following 2 methods.
(1) Coordinate conversion is performed when the two-dimensional dental image is obtained by projecting the three-dimensional dental model. The inverse of this coordinate conversion can therefore be applied to the coordinates of each pixel in the two-dimensional tooth mask to obtain the vertex coordinates of the triangular patch corresponding to that pixel. After this conversion has been performed for every pixel in the two-dimensional tooth mask, the vertex coordinates of the triangular patches are obtained, and the region constructed from these vertex coordinates is taken as the triangular patch area.
(2) While step S101 is executed, the correspondence between the vertex coordinates of each patch in the three-dimensional dental model and the pixel coordinates in the two-dimensional dental image is recorded, for example: the coordinates of vertex 1 of patch 1 correspond to the coordinates of pixel 1, and the coordinates of vertex 1 of patch 2 correspond to the coordinates of pixel 2. The vertex coordinates of the triangular patch corresponding to each pixel in the two-dimensional tooth mask are then looked up directly in this correspondence. After the lookup has been performed for every pixel in the two-dimensional tooth mask, the region constructed from the vertex coordinates of these triangular patches is taken as the triangular patch area.
After determining the triangular patch area corresponding to the two-dimensional tooth mask, directly dividing the boundary of the triangular patch area to obtain a three-dimensional single-tooth model.
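A minimal sketch of method (2) is given below. It assumes that face_to_pixels is the vertex-to-pixel correspondence recorded during projection (as in the earlier projection sketch) and that the mask has the same resolution as the projected image; the patch-collection rule used here (all three projected vertices inside the mask) is a simplification for illustration.

```python
# Sketch: collecting the triangular patch area covered by one two-dimensional tooth mask
# by querying the vertex-to-pixel correspondence recorded during projection.
import numpy as np

def mask_to_patch_region(mask, face_to_pixels):
    """mask: (H, W) boolean array of a single tooth mask; face_to_pixels maps a face
    index to the (3, 2) pixel coordinates of its projected vertices."""
    patch_area = set()
    for f_idx, pix in face_to_pixels.items():
        xs, ys = pix[:, 0], pix[:, 1]
        if mask[ys, xs].all():                  # all three projected vertices lie inside the mask
            patch_area.add(f_idx)
    return patch_area                           # face indices forming the triangular patch area
```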
Further, after the above operations have been performed for each two-dimensional tooth mask obtained in S102, the corresponding three-dimensional single-tooth models are obtained. At this point, the three-dimensional dental model has been divided into a number of three-dimensional single-tooth models, i.e. into a number of complete single-tooth instances.
In one possible implementation, a tooth label can be understood as identification information marking the position of a tooth. For example, the tooth labels corresponding to the teeth of the upper jaw are, from left to right, 11, 12, 13, …, 1N, …, 113, and the tooth labels corresponding to the teeth of the mandible are, from left to right, 21, 22, 23, …, 2N, …, 214. Tooth labels may also be represented by other marks, which is not specifically limited in the embodiments of the application. In particular, the tooth label may be a tooth position number, for example: the predicted tooth label is a predicted tooth position number, and the target tooth label is a target tooth position number.
When training the deep learning model, the tooth position numbers are used as its annotation information, so that the deep learning model outputs, together with each two-dimensional tooth mask, the predicted tooth position number corresponding to that mask. Correspondingly, the predicted tooth position number of the two-dimensional tooth mask is assigned to the corresponding three-dimensional single-tooth model as the target tooth position number, which further improves the accuracy of the tooth positions after tooth instance segmentation.
In summary, the three-dimensional dental model is first projected to obtain a two-dimensional dental image; the two-dimensional dental image is then identified to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted tooth label corresponding to each two-dimensional dental mask, where a two-dimensional dental mask is the mask region corresponding to a single tooth in the two-dimensional dental image; finally, the three-dimensional dental model is segmented, based on the correspondence between the two-dimensional dental image and the three-dimensional dental model and the predicted tooth labels corresponding to the two-dimensional dental masks, to obtain a three-dimensional single-tooth model, which refers to the three-dimensional digital model corresponding to a single tooth. Because mature two-dimensional deep learning techniques are used to imitate how the human eye perceives a three-dimensional object, and the digital impression mesh is recognized from multiple viewing angles with the results fused, the algorithm is highly robust, is not affected by the variation and wear of individual tooth shapes or disturbed by the quality of the mesh surface, and fully automatic, high-precision instance segmentation of single-tooth regions can be achieved.
On the basis of the above embodiment, the method for segmenting a three-dimensional dental model according to the embodiment of the present application is further optimized, as shown in fig. 3, and the optimized method for segmenting a three-dimensional dental model includes the following steps:
S201, performing multi-view projection on the three-dimensional dental model to obtain a plurality of two-dimensional dental images.
The three-dimensional dental model is projected from different viewing angles to obtain the two-dimensional dental images corresponding to those viewing angles; the number of two-dimensional dental images is the same as the number of projection viewing angles. For example, fig. 4 shows a three-dimensional dental model, and fig. 5 shows the 6 two-dimensional dental images obtained by projecting the three-dimensional dental model from 6 viewing angles.
The projection view angle may be set by a doctor in the actual application process, or may be set according to an empirical value, which is not particularly limited in this embodiment.
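For illustration, the single-view projection sketch above could simply be reused for several viewing angles, for example as follows; the six rotations chosen here are placeholders, since the patent leaves the choice of viewing angles open.

```python
# Sketch: multi-view projection by calling project_mesh (from the earlier sketch)
# once per preset viewing angle. The specific angles are illustrative only.
import numpy as np

def rot_x(deg):
    a = np.radians(deg)
    return np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])

def rot_y(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])

view_rotations = [np.eye(3), rot_x(45), rot_x(-45), rot_y(45), rot_y(-45), rot_x(90)]
# vertices and faces are the mesh arrays assumed in the earlier projection sketch
projections = [project_mesh(vertices, faces, rot) for rot in view_rotations]
```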
S202, identifying each two-dimensional dental image, and obtaining at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted dental label corresponding to each two-dimensional dental mask.
In the embodiment of the present application, step S102 of the above embodiment is performed for each two-dimensional dental image; for the specific execution flow, reference may be made to the description in the above example, which is not repeated here.
By way of example, taking 6 two-dimensional dental images shown in fig. 5 as an example, after each dental image is identified, a two-dimensional dental mask corresponding to each single tooth and a predicted dental label corresponding to each two-dimensional dental mask included in each two-dimensional dental image can be obtained.
As shown in fig. 6, 13 two-dimensional dental masks are identified in each of the 6 two-dimensional dental images. This example takes the case in which every two-dimensional dental mask is identified; in practical applications, an individual two-dimensional dental mask in an individual two-dimensional dental image may fail to be identified correctly because of a defective tooth or the influence of the projection viewing angle, but this does not affect the subsequent segmentation of the three-dimensional single-tooth models in this embodiment.
Further, taking the first two-dimensional dental image in fig. 6 as an example, two-dimensional dental mask 11 corresponds to predicted tooth label label10, mask 12 to label20, mask 13 to label30, mask 14 to label40, mask 15 to label50, mask 16 to label60, mask 17 to label70, mask 18 to label80, mask 19 to label90, mask 110 to label100, mask 111 to label110, mask 112 to label120, and mask 113 to label130.
As described above, each two-dimensional dental image can identify a plurality of two-dimensional dental masks and predicted tooth labels corresponding to each two-dimensional dental mask.
S203, determining triangular patch areas corresponding to the two-dimensional dental masks based on the corresponding relation between the pixel points in the two-dimensional dental images and the triangular patches in the three-dimensional dental model for each two-dimensional dental image.
Based on the correspondence between the two-dimensional dental image and the three-dimensional dental model, triangular patch areas corresponding to each two-dimensional dental mask can be determined, and the manner of determining the triangular patch areas corresponding to the two-dimensional dental mask is the same as that of the above embodiment, and specific reference may be made to the description in the above embodiment, which is not repeated in this embodiment.
In the embodiment of the present application, as shown in fig. 6, there are 6 two-dimensional dental images, each of which includes 13 two-dimensional dental masks, and accordingly, each of the two-dimensional dental images can determine 13 triangular patch areas corresponding to the two-dimensional dental masks based on the 13 two-dimensional dental masks included in the two-dimensional dental images. For example, a first two-dimensional dental image may determine its corresponding 13 triangular patch areas, a second two-dimensional dental image may determine its corresponding 13 triangular patch areas, a third two-dimensional dental image may determine its corresponding 13 triangular patch areas, a fourth two-dimensional dental image may determine its corresponding 13 triangular patch areas, a fifth two-dimensional dental image may determine its corresponding 13 triangular patch areas, and a sixth two-dimensional dental image may determine its corresponding 13 triangular patch areas.
S204, assigning the predicted tooth label corresponding to the two-dimensional tooth mask to the triangular patch area corresponding to the two-dimensional tooth mask.
The predicted tooth label corresponding to a two-dimensional tooth mask is assigned to the triangular patch area corresponding to that mask; in other words, the predicted tooth label corresponding to the triangular patch area is determined.
For example, taking the first two-dimensional dental image in fig. 6, two-dimensional dental mask 11 corresponds to predicted tooth label label10 and to triangular patch area 1, so predicted tooth label label10 is assigned to triangular patch area 1, i.e. a relationship is established between the three-dimensional single-tooth model and the predicted tooth label. Further, performing this assignment for the two-dimensional dental masks in all the two-dimensional dental images yields the predicted tooth labels corresponding to each triangular patch area.
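This assignment step can be pictured as accumulating, for every triangular patch, the predicted tooth labels it receives from all viewing angles. The sketch below assumes a per_view_results structure pairing each mask's patch area with its predicted label; the names are illustrative only.

```python
# Sketch: assigning each view's predicted tooth label to the triangular patch area it
# maps to, accumulating one label list per patch across all viewing angles.
from collections import defaultdict

def assign_labels(per_view_results):
    """per_view_results: iterable of (patch_area, predicted_label) pairs, where
    patch_area is the set of face indices obtained from one two-dimensional mask."""
    face_votes = defaultdict(list)              # face index -> predicted labels from all views
    for patch_area, label in per_view_results:
        for f_idx in patch_area:
            face_votes[f_idx].append(label)
    return face_votes
```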
S205, voting the predicted tooth labels corresponding to the triangular patch areas.
And counting the predicted tooth labels corresponding to the three-dimensional single tooth model by taking the three-dimensional single tooth model as a reference.
The two-dimensional dental masks in the individual two-dimensional dental images are obtained by multi-view projection of the same three-dimensional dental model; taking a triangular patch area as the reference, that patch area has been projected from multiple viewing angles. In other words, two-dimensional dental masks in different two-dimensional dental images may correspond to the same triangular patch area.
Specifically, a predicted tooth label set corresponding to the triangular patch area is determined, wherein the predicted tooth label set comprises predicted tooth labels corresponding to the triangular patch area in each two-dimensional dental image; a target tooth label is selected from the set of predicted tooth labels.
As shown in fig. 6, each of the two-dimensional dental mask 11, the two-dimensional dental mask 21, the two-dimensional dental mask 31, the two-dimensional dental mask 41, the two-dimensional dental mask 51, and the two-dimensional dental mask 61 is obtained by performing multi-view projection of the triangular patch region 1.
Further, two-dimensional dental mask 11 is identified in the first two-dimensional dental image and corresponds to predicted tooth label label10; mask 21 is identified in the second image and corresponds to label10; mask 31 is identified in the third image and corresponds to label10; mask 41 is identified in the fourth image and corresponds to label10; mask 51 is identified in the fifth image and corresponds to label10; and mask 61 is identified in the sixth image and corresponds to label10. The predicted tooth label set corresponding to triangular patch area 1 therefore contains 6 predicted tooth labels label10.
The above example assumes that the predicted tooth label identified in every two-dimensional dental image is label10. In practical applications, the predicted tooth label corresponding to the two-dimensional dental mask identified in a particular image may not be label10; for example, the predicted tooth label corresponding to the two-dimensional dental mask in the third two-dimensional dental image may be identified as label20. In that case, the predicted tooth label set corresponding to three-dimensional single-tooth model 1 contains 5 predicted tooth labels label10 and 1 predicted tooth label label20.
And counting a corresponding predicted tooth label set of the three-dimensional single tooth model obtained for each two-dimensional tooth mask according to the mode.
The predicted tooth label set comprises a plurality of predicted tooth labels, and one predicted tooth label is selected from the plurality of predicted tooth labels to serve as a target tooth label corresponding to the three-dimensional single tooth model.
In one possible implementation, selecting the target tooth label from the set of predicted tooth labels includes: counting the number of identical predicted tooth labels in the predicted tooth label set, and taking the identical predicted tooth label with the largest count as the target tooth label.
If all the predicted tooth labels in the predicted tooth label set are identical, that predicted tooth label is taken as the target tooth label corresponding to the three-dimensional single-tooth model. For example, in the example above the set contains 6 predicted tooth labels label10, so label10 is the target tooth label corresponding to three-dimensional single-tooth model 1.
When the predicted tooth label set contains different predicted tooth labels, the identical predicted tooth label with the largest count is taken as the target tooth label corresponding to the three-dimensional single-tooth model. For example, in the example above the set contains 5 predicted tooth labels label10 and 1 predicted tooth label label20, so label10 is the target tooth label corresponding to three-dimensional single-tooth model 1.
Further, if more than one predicted tooth label shares the largest count, for example the predicted tooth label set corresponding to three-dimensional single-tooth model 1 contains 6 predicted tooth labels with 2 occurrences of each of three different labels, a failure of the three-dimensional single-tooth model segmentation is reported and the segmentation method provided by the embodiment of the application is executed again.
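A sketch of this voting rule, including the tie case treated as a segmentation failure, might look as follows; the label strings such as 'label10' are illustrative only.

```python
# Sketch: selecting the target tooth label from a predicted tooth label set by majority
# vote; a tie among the most frequent labels is reported as a segmentation failure.
from collections import Counter

def vote_target_label(predicted_labels):
    """predicted_labels: predicted tooth labels collected for one patch area, e.g.
    ['label10', 'label10', 'label10', 'label10', 'label10', 'label20']."""
    counts = Counter(predicted_labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        raise ValueError("segmentation failed: tie between predicted tooth labels")
    return counts[0][0]                         # the most frequent label becomes the target
```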
In one possible implementation, after selecting the target tooth tag from the set of predicted tooth tags, further comprising: judging whether target tooth labels associated with two adjacent three-dimensional single tooth models are the same or not, and if yes, sending out prompt information. If not, the operation is not performed.
The prompt information is mainly used for prompting a doctor that the single tooth model segmentation at the position possibly has errors. The prompt information is displayed in any one or more forms of sound, light, vibration, text and the like.
In practice, a patient may present various abnormal conditions, such as damaged teeth, supernumerary teeth or retained deciduous teeth that have not fallen out. Issuing a prompt when the target tooth labels associated with two adjacent three-dimensional single-tooth models are the same therefore allows a manual check of whether the teeth have been separated correctly, and reminds the doctor to pay attention to the abnormality during treatment.
S206, dividing the three-dimensional dental model according to the voting result to obtain a three-dimensional single-tooth model, wherein the three-dimensional single-tooth model is associated with the target tooth label.
In the example above, when there are different predicted tooth labels in the predicted tooth label set, for example 5 predicted tooth labels label10 and 1 predicted tooth label label20 with label10 being the target tooth label corresponding to three-dimensional single-tooth model 1, only the region corresponding to label10 is segmented out as the three-dimensional single-tooth model and an association is established between that model and target tooth label label10; the region corresponding to predicted tooth label label20 is not included in this three-dimensional single-tooth model. Segmenting the three-dimensional dental model according to the voting result avoids the influence of a few single-view recognition errors, improves the robustness of the algorithm and ensures the segmentation accuracy of the three-dimensional single-tooth model.
In one embodiment, after step S206, the method further includes a step of optimizing the segmentation boundary of the three-dimensional single-tooth model, so as to correct isolated points or patches, and improve edge details.
Fig. 7 is a view showing a plurality of three-dimensional single tooth models obtained by dividing the three-dimensional dental model of fig. 4, each three-dimensional single tooth model being represented by a different color.
Fig. 8 is a flowchart of three-dimensional single-tooth model recognition provided in this embodiment. As shown in fig. 8, after 3-view projection of the triangular patches in the three-dimensional dental model, one triangular patch is projected into the two-dimensional dental images corresponding to the different viewing angles, and the two-dimensional dental images are identified to obtain the two-dimensional dental masks and corresponding predicted tooth labels in each image, for example the same predicted tooth label label10 in the first, second and third two-dimensional dental images. According to the rules provided above, this identical predicted tooth label label10 is taken as the target tooth label of the three-dimensional single-tooth model.
On the basis of the above embodiment, as shown in fig. 9a, because of noise and occlusion during the projection of the three-dimensional dental model, the edges of the resulting three-dimensional single-tooth model may contain jagged artifacts and erroneous segmentation.
In this regard, the embodiment of the present application further optimizes the above segmentation method. After the three-dimensional dental model has been segmented into at least one three-dimensional single-tooth model, the optimized method further includes: dividing the three-dimensional single-tooth model into a confirmed region and a non-confirmed region, and optimizing the segmentation boundary of the three-dimensional single-tooth model based on the confirmed region and the non-confirmed region. It should be noted that this boundary optimization scheme may be applied to each of the three-dimensional single-tooth models obtained in the above embodiment.
The segmentation boundary of the three-dimensional single-tooth model is optimized, so that the edge of the three-dimensional single-tooth model is smoother, saw teeth and wrong segmentation are avoided, and the edge detail is improved.
Specifically, optimizing the segmentation boundary of the three-dimensional single-tooth model based on the confirmed region and the non-confirmed region includes: traversing the triangular patches of the non-confirmed region starting from the boundary of the confirmed region; for a traversed target triangular patch, when the difference between its normal vector and the normal vector of an adjacent triangular patch is smaller than a first preset value and the difference between its curvature and the curvature of the adjacent triangular patch is smaller than a second preset value, assigning the target triangular patch to the confirmed region; and when the difference between its normal vector and the normal vector of the adjacent triangular patch is greater than or equal to the first preset value, and/or the difference between its curvature and the curvature of the adjacent triangular patch is greater than or equal to the second preset value, taking the target triangular patch as a boundary patch of the three-dimensional single-tooth model, where the adjacent triangular patch is a triangular patch in the confirmed region.
As shown in fig. 9b, according to the rough result of the deep-learning instance segmentation, the three-dimensional single-tooth model of a single tooth is divided into a confirmed region and a non-confirmed region. The traversal proceeds from the confirmed region into the non-confirmed region according to the normal directions and curvatures of locally adjacent patches; when the change in normal direction or curvature is large, the traversal stops, and the region where it stops is the boundary region of the single tooth.
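The following sketch illustrates this region-growing style of boundary refinement. The concrete threshold values, the precomputed per-face normals and curvatures and the face adjacency structure are assumptions for illustration; the patent only specifies the two preset thresholds abstractly.

```python
# Sketch: grow the confirmed region outwards over the non-confirmed region while the
# normal and curvature differences to an already-confirmed neighbour stay below the
# two preset thresholds; faces where the change is large become the tooth boundary.
from collections import deque
import numpy as np

def refine_boundary(confirmed, non_confirmed, face_normals, face_curvatures,
                    face_adjacency, normal_thresh=0.3, curvature_thresh=0.1):
    """confirmed / non_confirmed: sets of face indices; face_adjacency maps a face
    index to the indices of its edge-adjacent faces (all values assumed precomputed)."""
    confirmed = set(confirmed)
    boundary = set()
    queue = deque(confirmed)                    # traverse outwards from the confirmed region
    while queue:
        f = queue.popleft()
        for nb in face_adjacency[f]:
            if nb not in non_confirmed or nb in confirmed or nb in boundary:
                continue
            normal_diff = np.linalg.norm(face_normals[nb] - face_normals[f])
            curvature_diff = abs(face_curvatures[nb] - face_curvatures[f])
            if normal_diff < normal_thresh and curvature_diff < curvature_thresh:
                confirmed.add(nb)               # smooth continuation: absorb into the tooth
                queue.append(nb)
            else:
                boundary.add(nb)                # sharp change: stop, boundary patch of the tooth
    return confirmed, boundary
```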
FIG. 9c is an optimized three-dimensional single tooth model, and it can be seen that the boundary area is smoother and edge details of the three-dimensional single tooth model are improved compared with the three-dimensional single tooth model in FIG. 9 a.
Based on the above embodiment, in the embodiment of identifying two-dimensional dental images to obtain a plurality of two-dimensional dental masks and predicted dental labels corresponding to the two-dimensional dental masks, the plurality of two-dimensional dental masks and predicted dental labels corresponding to the two-dimensional dental masks included in each two-dimensional dental image may be identified based on a pre-trained deep learning model, that is, each two-dimensional dental image is input into the pre-trained deep learning model to obtain a plurality of two-dimensional dental masks output by the deep learning model and predicted dental labels corresponding to the two-dimensional dental masks, and the training process of the deep learning model is described below in combination with the embodiment: acquiring a sample dental image, wherein the sample dental image is a two-dimensional image; determining a sample dental mask included in the sample dental image and a sample dental label of each sample dental mask; training model parameters of a pre-constructed deep learning model based on the determined sample dental mask and sample dental images of sample dental labels of the sample dental mask to obtain a trained deep learning model. The trained deep learning model is used for identifying the two-dimensional dental image to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and predicted dental labels corresponding to the two-dimensional dental masks.
In one embodiment of the present disclosure, a sample three-dimensional tooth model is obtained, a plurality of sample dental images corresponding to a plurality of viewing angles of the three-dimensional tooth model are obtained, the two-dimensional mask corresponding to each tooth in each sample dental image is determined, and a sample tooth label is marked in the central region of each two-dimensional tooth mask, so that the sample tooth label determined later is guaranteed to lie inside the two-dimensional tooth mask.
Further, a labeled sample dental image is obtained according to the sample tooth label corresponding to each two-dimensional dental mask, and the pre-constructed deep learning model is trained with the labeled sample dental images.
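As a hedged illustration of this training step, the sketch below uses an off-the-shelf instance segmentation network (torchvision's Mask R-CNN) merely as a stand-in for the pre-constructed deep learning model, since this embodiment does not prescribe a particular architecture; load_labeled_samples() and the class count of 33 (background plus 32 tooth positions) are assumptions made only for illustration.

import torch
import torchvision

num_classes = 33  # background + 32 tooth positions (an assumption, not prescribed here)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None,
                                                           num_classes=num_classes)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

model.train()
for images, targets in load_labeled_samples():  # hypothetical loader of labeled sample dental images
    # images: list of 3xHxW float tensors; targets: list of dicts with
    # "boxes" (N,4), "labels" (N,) and "masks" (N,H,W) per sample dental image
    loss_dict = model(images, targets)           # returns a dict of training losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()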
The embodiment of the application provides a way of training the model, by which a trained deep learning model can be obtained to realize the identification of the two-dimensional tooth masks and improve processing efficiency.
Compared with some deep learning schemes based on point clouds and meshes, the pixel-by-pixel segmentation combined with the traditional geometric method in the embodiment of the application achieves higher segmentation precision at the single-tooth boundary.
Compared with some deep learning schemes based on point clouds and meshes, the embodiment of the application depends little on global information, does not need a complete pair of jaws (upper jaw and lower jaw) as input, and can also reliably identify an independent upper jaw, an independent lower jaw, or a half jaw.
Fig. 10 is a schematic structural view of a segmentation apparatus for a three-dimensional dental model according to the present embodiment; the device is configured in the electronic equipment, and can realize the segmentation method of the three-dimensional dental model of any embodiment of the application. As shown in fig. 10, a segmentation apparatus 1000 for a three-dimensional dental model according to an embodiment of the present application mainly includes:
The two-dimensional dental image determining module 101 is configured to project the three-dimensional dental model to obtain a two-dimensional dental image; the two-dimensional dental image recognition module 102 is configured to recognize the two-dimensional dental image to obtain at least one two-dimensional dental mask included in the two-dimensional dental image and a predicted tooth label corresponding to each two-dimensional dental mask, where the two-dimensional dental mask is the dental mask area corresponding to a single tooth in the two-dimensional dental image; the three-dimensional single-tooth model segmentation module 103 is configured to segment the three-dimensional dental model to obtain at least one three-dimensional single-tooth model based on the correspondence between the two-dimensional dental image and the three-dimensional dental model and the predicted tooth label corresponding to the two-dimensional dental mask, where the three-dimensional single-tooth model refers to the three-dimensional digital model corresponding to a single tooth.
In one possible implementation manner, the two-dimensional dental image determining module 101 is specifically configured to perform multi-view projection on the three-dimensional dental model to obtain a plurality of two-dimensional dental images; the three-dimensional single-tooth model segmentation module 103 is specifically configured to, for each two-dimensional dental image, determine the triangular patch region corresponding to the two-dimensional dental mask based on the correspondence between pixel points in the two-dimensional dental image and triangular patches in the three-dimensional dental model; assign the predicted tooth label corresponding to the two-dimensional dental mask to the triangular patch region corresponding to the two-dimensional dental mask; vote on the predicted tooth labels corresponding to the triangular patch region; and divide the three-dimensional dental model according to the voting result to obtain the three-dimensional single-tooth model.
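As a rough illustration of how the predicted tooth labels can be gathered per triangular patch across the projected views, the sketch below assumes that each view provides a per-pixel face-index map (for example, from a rasterizer's face-id buffer) recording which triangular patch each pixel corresponds to; the names used are illustrative only.

from collections import defaultdict

def collect_face_votes(views):
    # views: iterable of (face_index_map, masks), where face_index_map[y, x] holds the
    # index of the triangular patch visible at that pixel (or -1 for background), and
    # masks is a list of (binary_mask, predicted_tooth_label) pairs for that view.
    votes = defaultdict(list)   # triangular patch index -> predicted tooth labels from all views
    for face_index_map, masks in views:
        for mask, label in masks:
            ys, xs = mask.nonzero()        # pixels covered by this two-dimensional dental mask
            for y, x in zip(ys, xs):
                face = int(face_index_map[y, x])
                if face >= 0:              # the pixel maps to an actual triangular patch
                    votes[face].append(label)
    return votes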
In one possible implementation manner, the three-dimensional single-tooth model segmentation module 103 is specifically configured to determine a set of predicted tooth labels corresponding to the triangular patch area, where the set of predicted tooth labels includes predicted tooth labels corresponding to the triangular patch area in each two-dimensional dental image; selecting the target tooth label from the set of predicted tooth labels; and dividing the boundary of the triangular patch area corresponding to the target tooth label to obtain a three-dimensional single tooth model, wherein the three-dimensional single tooth model is associated with the target tooth label.
In one possible implementation, the three-dimensional single-tooth model segmentation module 103 is configured to count the number of identical predicted tooth labels in the set of predicted tooth labels, and to take the identical predicted tooth label with the largest count as the target tooth label.
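A short sketch of this label selection follows; the FDI-style tooth numbers in the example are illustrative only.

from collections import Counter

def select_target_label(predicted_labels):
    # predicted_labels: predicted tooth labels gathered for one triangular patch region
    return Counter(predicted_labels).most_common(1)[0][0]

# e.g. select_target_label([11, 11, 12, 11]) returns 11, the most frequent predicted tooth label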
In one possible implementation, the apparatus further includes: the label judging module is used for judging whether the target tooth labels associated with the two adjacent three-dimensional single tooth models are the same after the target tooth labels are selected from the predicted tooth label set, and if yes, prompting information is sent out.
In one possible implementation, the apparatus further includes: the boundary optimization module is used for dividing the three-dimensional dental model into at least one three-dimensional single-tooth model and dividing the three-dimensional single-tooth model into a confirmation area and a non-confirmation area; optimizing the segmentation boundary of the three-dimensional single tooth model based on the confirmed region and the non-confirmed region.
In one possible implementation manner, the boundary optimization module is specifically configured to traverse the triangular patches of the non-confirmed area of the three-dimensional single-tooth model starting from the boundary of the confirmed area; for a traversed target triangular patch, divide the target triangular patch into the confirmed area when the difference between the normal vector of the target triangular patch and the normal vector of an adjacent triangular patch is smaller than a first preset value and the difference between the curvature of the target triangular patch and the curvature of the adjacent triangular patch is smaller than a second preset value; and take the target triangular patch as a boundary patch of the three-dimensional single-tooth model when the difference between the normal vector of the target triangular patch and the normal vector of the adjacent triangular patch is larger than or equal to the first preset value, and/or when the difference between the curvature of the target triangular patch and the curvature of the adjacent triangular patch is larger than or equal to the second preset value, wherein the adjacent triangular patch is a triangular patch in the confirmed area.
In one possible implementation, the predicted tooth label is a predicted tooth position number and the target tooth label is a target tooth position number.
In one possible implementation, the apparatus further includes: the model training module is used for acquiring a sample dental image, wherein the sample dental image is a two-dimensional image; determining a sample dental mask included in the sample dental image and a sample dental label of each sample dental mask; training model parameters of a pre-constructed deep learning model based on the determined sample tooth mask and sample dental images of sample tooth labels of the sample tooth mask to obtain a trained deep learning model; the trained deep learning model is used for identifying the two-dimensional dental image to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and predicted dental labels corresponding to the two-dimensional dental masks.
The device for dividing the three-dimensional dental model provided by the embodiment of the invention can execute the method for dividing the three-dimensional dental model provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Fig. 11 is a schematic structural view of a segmentation apparatus for a three-dimensional dental model according to the present embodiment. As shown in fig. 11, the segmentation apparatus 1100 for the three-dimensional dental model includes a processor 1110, a memory 1120, an input device 1130, and an output device 1140; the number of processors 1110 in the electronic device may be one or more, and one processor 1110 is taken as an example in fig. 11; the processor 1110, the memory 1120, the input device 1130, and the output device 1140 in the electronic device may be connected by a bus or in other ways, and connection by a bus is taken as an example in fig. 11.
The memory 1120, as a computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for segmenting a three-dimensional dental model in the embodiments of the present invention. The processor 1110 executes various functional applications and data processing of the electronic device by running the software programs, instructions, and modules stored in the memory 1120, that is, implements the method for segmenting a three-dimensional dental model provided by the embodiments of the present invention.
The memory 1120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the terminal, etc. In addition, memory 1120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 1120 may further include memory remotely located relative to processor 1110, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 1130 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device, and may include a keyboard, a mouse, and the like. The output device 1140 may include a display device such as a display screen.
The present embodiment also provides a storage medium containing computer executable instructions which, when executed by a computer processor, are used to implement the method of segmentation of a three-dimensional dental model provided by embodiments of the present invention.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform the related operations in the method for segmenting a three-dimensional dental model provided in any embodiment of the present invention.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk, or an optical disk of a computer, etc., and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present invention.
It should be noted that, in the above-mentioned embodiments of the segmentation apparatus, the units and modules included are only divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only used to distinguish them from each other, and are not used to limit the protection scope of the present invention.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A method for segmenting a three-dimensional dental model, comprising:
projecting the three-dimensional dental model to obtain a two-dimensional dental image;
Identifying the two-dimensional dental image to obtain at least one two-dimensional dental mask included in the two-dimensional dental image and a predicted dental label corresponding to each two-dimensional dental mask, wherein the two-dimensional dental mask is a dental mask area corresponding to a single tooth in the two-dimensional dental image;
Based on the corresponding relation between the two-dimensional dental image and the three-dimensional dental model and the predicted tooth label corresponding to the two-dimensional dental mask, the three-dimensional dental model is segmented to obtain a three-dimensional single-tooth model, and the three-dimensional single-tooth model refers to a three-dimensional digital model corresponding to a single tooth.
2. The method of claim 1, wherein projecting the three-dimensional dental model to obtain a two-dimensional dental image comprises:
Performing multi-view projection on the three-dimensional dental model to obtain a plurality of two-dimensional dental images;
The method for dividing the three-dimensional dental model based on the corresponding relation between the two-dimensional dental image and the three-dimensional dental model and the predicted dental label corresponding to the two-dimensional dental mask to obtain a three-dimensional single-tooth model comprises the following steps:
Determining triangular patch areas corresponding to the two-dimensional dental masks based on the corresponding relation between pixel points in the two-dimensional dental images and triangular patches in the three-dimensional dental model aiming at each two-dimensional dental image;
Assigning the predicted tooth label corresponding to the two-dimensional tooth mask to the triangular patch area corresponding to the two-dimensional tooth mask;
voting the predicted tooth label corresponding to the triangular patch area;
And dividing the three-dimensional dental model according to the voting result to obtain a three-dimensional single-tooth model.
3. The method of claim 2, wherein the voting on the predicted tooth label corresponding to the triangular patch region and the dividing of the three-dimensional dental model according to the voting result to obtain the three-dimensional single-tooth model comprise:
determining a predicted tooth label set corresponding to the triangular patch region, wherein the predicted tooth label set comprises predicted tooth labels corresponding to the triangular patch region in each two-dimensional dental image;
selecting a target tooth label from the set of predicted tooth labels;
And dividing the boundary of the triangular patch area corresponding to the target tooth label to obtain a three-dimensional single tooth model, wherein the three-dimensional single tooth model is associated with the target tooth label.
4. The method of claim 3, wherein the selecting the target tooth tag from the set of predicted tooth tags comprises:
Counting the number of identical predicted tooth labels in the predicted tooth label set;
and taking the identical predicted tooth label with the largest number as the target tooth label.
5. The method of claim 4, wherein after the selecting the target tooth tag from the set of predicted tooth tags, further comprising:
Judging whether the target tooth labels associated with the two adjacent three-dimensional single tooth models are the same or not, and if yes, sending out prompt information.
6. The method according to any one of claims 1 to 5, further comprising:
dividing the three-dimensional single-tooth model into a confirmation area and a non-confirmation area after the three-dimensional dental model is divided into at least one three-dimensional single-tooth model;
Optimizing the segmentation boundary of the three-dimensional single tooth model based on the confirmed region and the non-confirmed region.
7. The method of claim 6, wherein optimizing the segmentation boundary of the three-dimensional single tooth model based on the identified region and the non-identified region comprises:
Traversing the triangular patches of the non-confirmation area from the boundary of the confirmation area;
for the traversed target triangular patch, dividing the target triangular patch into a confirmation area when the difference between the normal vector of the target triangular patch and the normal vector of the adjacent triangular patch is smaller than a first preset value and the difference between the curvature of the target triangular patch and the curvature of the adjacent triangular patch is smaller than a second preset value;
And taking the target triangular patch as a boundary patch of the three-dimensional single-tooth model when the difference value between the normal vector of the target triangular patch and the normal vector of the adjacent triangular patch is larger than or equal to a first preset value and/or when the difference value between the curvature of the target triangular patch and the curvature of the adjacent triangular patch is larger than or equal to a second preset value, wherein the adjacent triangular patch is a triangular patch in a confirmation area.
8. The method of claim 3, wherein the predicted tooth label is a predicted tooth position number and the target tooth label is a target tooth position number.
9. The method according to claim 1, wherein the method further comprises:
Acquiring a sample dental image, wherein the sample dental image is a two-dimensional image;
Determining a sample dental mask included in the sample dental image and a sample dental label of each sample dental mask;
Training model parameters of a pre-constructed deep learning model based on the determined sample tooth mask and sample dental images of sample tooth labels of the sample tooth mask to obtain a trained deep learning model;
The trained deep learning model is used for identifying the two-dimensional dental image to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted dental label corresponding to each two-dimensional dental mask.
10. A device for segmenting a three-dimensional dental model, comprising:
The two-dimensional dental image determining module is used for projecting the three-dimensional dental model to obtain a two-dimensional dental image;
The two-dimensional dental image recognition module is used for recognizing the two-dimensional dental image to obtain at least one two-dimensional dental mask contained in the two-dimensional dental image and a predicted dental label corresponding to each two-dimensional dental mask, wherein the two-dimensional dental mask is a dental mask area corresponding to a single tooth in the two-dimensional dental image;
the three-dimensional single-tooth model determining module is used for segmenting the three-dimensional dental model to obtain at least one three-dimensional single-tooth model based on the corresponding relation between the two-dimensional dental image and the three-dimensional dental model and the predicted tooth label corresponding to the two-dimensional dental mask, wherein the three-dimensional single-tooth model refers to a three-dimensional digital model corresponding to a single tooth.
11. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 9 when the computer program is executed.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 9.
CN202410403302.7A 2024-04-03 2024-04-03 Method and device for dividing three-dimensional dental model, electronic equipment and storage medium Pending CN118297968A (en)

