CN117670956A - Automatic extraction method of tooth boundary, image correction method and tooth treatment system


Info

Publication number
CN117670956A
Authority
CN
China
Prior art keywords
image
tooth
feature
boundaries
extracting
Prior art date
Legal status
Pending
Application number
CN202311671789.9A
Other languages
Chinese (zh)
Inventor
任春霞
田庆
Current Assignee
Beijing Ruiyibo Technology Co., Ltd.
Beijing Baihui Weikang Technology Co., Ltd.
Original Assignee
Beijing Ruiyibo Technology Co., Ltd.
Beijing Baihui Weikang Technology Co., Ltd.
Priority date: 2023-12-07
Filing date: 2023-12-07
Publication date: 2024-03-08
Application filed by Beijing Ruiyibo Technology Co., Ltd. and Beijing Baihui Weikang Technology Co., Ltd.
Priority to CN202311671789.9A
Publication of CN117670956A
Current legal status: Pending

Landscapes

  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an automatic tooth boundary extraction method, an image correction method and a tooth treatment system. A tooth surface feature image and a tooth anatomical feature image are acquired; the tooth surface feature image is registered onto the tooth anatomical feature image to obtain a registered image; real tooth boundaries are extracted from the registered image as supervision signals; tooth boundaries are extracted from the tooth anatomical feature image based on a first image processing model; model parameters of the first image processing model are adjusted based on the supervision signals and the tooth boundaries actually extracted by the model until training is completed; and tooth boundaries are automatically extracted from a target tooth anatomical feature image based on the trained first image processing model. In this way, CBCT images are processed automatically, without an operator having to label images manually, which improves the accuracy and efficiency of image processing.

Description

Automatic extraction method of tooth boundary, image correction method and tooth treatment system
Technical Field
The application relates to the technical field of medical robots, in particular to an automatic tooth boundary extraction method, an image correction method and a tooth treatment system.
Background
With the continuous development of dental diagnostic techniques, cone beam computed tomography (CBCT) has become a major three-dimensional imaging tool in dental clinics. It noninvasively acquires high-resolution oral anatomical information and is of great value for diagnosis and surgical treatment planning.
However, artifacts such as motion artifacts and severe metal artifacts from dental restorations can arise during CBCT image generation. These degrade image quality, so that CBCT images cannot accurately reproduce the actual anatomy and physiology of the oral cavity, such as tooth boundaries.
In current CBCT image correction solutions, an operator is often required to label images manually, which limits correction accuracy and efficiency.
Disclosure of Invention
The purpose of the application is to provide an automatic tooth boundary extraction method, an image correction method and a tooth treatment system that solve or mitigate the above technical problems in the prior art.
An automatic extraction method of tooth boundaries, comprising:
acquiring a tooth surface feature image and a tooth anatomy feature image;
registering the tooth surface feature image onto the tooth anatomical feature image to obtain a registered image;
extracting real tooth boundaries from the registered image as supervision signals;
extracting tooth boundaries from the tooth anatomic feature images based on a first image processing model;
based on the supervision signals and the tooth boundaries actually extracted by the first image processing model, adjusting model parameters of the first image processing model until training is completed;
and automatically extracting tooth boundaries from the target tooth anatomical feature image based on the trained first image processing model.
An image correction method, comprising:
acquiring a first modality image and a second modality image, wherein the first modality image represents tooth surface features and the second modality image represents tooth anatomical features;
mapping the first modality image onto the second modality image to obtain a registered image;
extracting real tooth boundaries from the registered image;
extracting tooth boundaries from the second modality image based on a set image correction model;
based on the real tooth boundaries and the tooth boundaries actually extracted by the image correction model, adjusting model parameters of the image correction model until training is completed;
and automatically extracting tooth boundaries from the second modality image to be corrected based on the trained image correction model, and removing artifacts from the second modality image to be corrected.
A method of training an image correction model, comprising:
acquiring a first modality image and a second modality image, wherein the first modality image represents tooth surface features and the second modality image represents tooth anatomical features;
mapping the first modality image onto the second modality image to obtain a registered image;
extracting real tooth boundaries from the registered image as supervision signals;
extracting tooth boundaries from the second modality image based on a set image correction model;
and adjusting model parameters of the image correction model based on the supervision signals and the tooth boundaries actually extracted by the image correction model until training is completed.
An image correction method, comprising:
invoking an image correction model trained according to any of the embodiments of the present application;
and inputting the second modality image to be corrected into the image correction model for correction, so as to extract tooth boundaries from it and remove its artifacts.
A dental treatment system, comprising a robotic arm and a host, the host being configured to execute the method according to any one of the embodiments of the application to automatically extract tooth boundaries from a target tooth anatomical feature image of a patient, and to control the robotic arm to perform a set target action on the patient's target tooth based on the automatically extracted tooth boundaries.
In the scheme provided by the embodiments of the application, a tooth surface feature image and a tooth anatomical feature image are acquired; the tooth surface feature image is registered onto the tooth anatomical feature image to obtain a registered image; real tooth boundaries are extracted from the registered image as supervision signals; tooth boundaries are extracted from the tooth anatomical feature image based on a first image processing model; model parameters of the first image processing model are adjusted based on the supervision signals and the tooth boundaries actually extracted by the model until training is completed; and tooth boundaries are automatically extracted from a target tooth anatomical feature image based on the trained first image processing model. CBCT images are thus processed automatically, without an operator having to label images manually, which improves the accuracy and efficiency of image processing.
Drawings
Some specific embodiments of the present application will be described in detail below by way of example and not by way of limitation with reference to the accompanying drawings. The same reference numbers will be used throughout the drawings to refer to the same or like parts or portions. It will be appreciated by those skilled in the art that the drawings are not necessarily drawn to scale. In the accompanying drawings:
fig. 1 is a flowchart of an automatic extraction method of tooth boundaries according to an embodiment of the present application.
Fig. 2 is a flowchart of an image correction method according to an embodiment of the present application.
Fig. 3 is a schematic view of a dental treatment system according to an embodiment of the present application.
Detailed Description
To better understand the technical solutions in the embodiments of the present application, these solutions are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application shall fall within the scope of protection of the present application.
Fig. 1 is a flowchart of an automatic extraction method of tooth boundaries according to an embodiment of the present application. As shown in fig. 1, it includes:
s101, acquiring a tooth surface feature image and a tooth anatomy feature image;
s102, registering the tooth surface feature image to the tooth anatomical feature image to obtain a registered image;
s103, extracting real tooth boundaries from the registration images to serve as supervision signals;
s104, extracting tooth boundaries from the tooth anatomical feature images based on a first image processing model;
s105, adjusting model parameters of the first image processing model based on the supervision signals and the tooth boundaries actually extracted by the first image processing model until training is completed;
s106, automatically extracting tooth boundaries from the target tooth anatomical feature image based on the trained first image processing model.
Optionally, in this embodiment, a three-dimensional intraoral scanner may be used to scan the user's oral cavity to obtain an intraoral scan image, which serves as the tooth surface feature image. For the tooth anatomical feature image, a CBCT device may, for example, be used to image the user's oral cavity to obtain a CBCT image, which serves as the tooth anatomical feature image.
Here, the intraoral scan image is merely one example of a tooth surface feature image, and the CBCT image is merely one example of a tooth anatomical feature image; neither is limiting.
Since this embodiment trains a model, tooth surface feature images and tooth anatomical feature images may be obtained from multiple users. To this end, a one-to-one correspondence among tooth surface feature image, tooth anatomical feature image and user may be established by means of a user ID.
In this embodiment, before step S102 is performed, the tooth surface feature image and the tooth anatomical feature image may first be brought into coarse geometric alignment, after which the finer registration processing of step S102 is performed; both may be implemented based on the point cloud processing described below.
In this embodiment, in performing step S102, registering the tooth surface feature image onto the tooth anatomical feature image to obtain a registered image includes:
extracting feature descriptors from the tooth surface feature images to obtain at least one first feature descriptor;
extracting feature descriptors from the tooth anatomical feature images to obtain at least one second feature descriptor;
registering the tooth surface feature image onto the tooth anatomical feature image based on the first feature descriptor and the second feature descriptor to obtain a registered image.
In this way, by extracting feature descriptors from the tooth surface feature image and the tooth anatomical feature image and registering based on them, the accuracy and stability of registration can be improved. Feature descriptors typically describe local image features precisely, which helps improve registration accuracy. Registration based on feature descriptors is also more robust to image variation and interference: even when the tooth surface feature image or the tooth anatomical feature image exhibits some degree of deformation, rotation or noise, descriptor-based registration copes well with these situations. Finally, extracting descriptors and registering on their basis enables automatic registration, improves registration efficiency, and allows rapid registration on large-scale data; moreover, because feature descriptors preserve local feature information well, descriptor-based registration better retains image details and features, which facilitates subsequent analysis and processing.
Optionally, the extracting the feature descriptors from the tooth surface feature image to obtain at least one first feature descriptor may include: and carrying out convolution operation on the tooth surface feature image to extract feature descriptors so as to obtain at least one first feature descriptor.
Optionally, performing a convolution operation on the tooth surface feature image to extract feature descriptors and obtain at least one first feature descriptor includes (see the sketch after this list):
convolving the tooth surface feature image to generate a first stereoscopic feature map of the tooth, the first stereoscopic feature map comprising spatial and structural features of the tooth;
performing residual convolution on the first stereoscopic feature map of the tooth to generate a detail feature map of the tooth;
the detailed feature map of the tooth is upsampled to obtain at least one first feature descriptor.
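For illustration only, the following is a minimal PyTorch sketch of such a convolution → residual convolution → upsampling pipeline. The patent does not disclose a concrete network, so every layer type, channel size and name here (ResidualBlock, DescriptorNet) is an assumption.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # residual convolution refining the stereoscopic feature map into detail features
        def __init__(self, ch):
            super().__init__()
            self.conv1 = nn.Conv3d(ch, ch, 3, padding=1)
            self.conv2 = nn.Conv3d(ch, ch, 3, padding=1)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(x + self.conv2(self.relu(self.conv1(x))))

    class DescriptorNet(nn.Module):
        # hypothetical descriptor extractor: convolution -> residual convolution -> upsampling
        def __init__(self, in_ch=1, feat_ch=32, desc_dim=64):
            super().__init__()
            self.stem = nn.Sequential(  # stereoscopic (3D) feature map: spatial + structural features
                nn.Conv3d(in_ch, feat_ch, 3, stride=2, padding=1),
                nn.ReLU(inplace=True))
            self.detail = ResidualBlock(feat_ch)  # detail feature map of the tooth
            self.up = nn.Sequential(  # upsample back to input resolution
                nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False),
                nn.Conv3d(feat_ch, desc_dim, 1))

        def forward(self, vol):  # vol: (B, 1, D, H, W) image volume
            return self.up(self.detail(self.stem(vol)))  # (B, desc_dim, D, H, W) descriptors

The same structure would apply, under the same caveats, to the extraction of the second feature descriptors from the tooth anatomical feature image described below.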
Optionally, extracting the feature descriptors from the tooth surface feature image to obtain the first feature descriptors includes: extracting point cloud data from the tooth surface feature image to obtain a tooth surface feature point cloud, and extracting feature descriptors from the tooth surface feature image based on the tooth surface feature point cloud to obtain the first feature descriptors.
Optionally, performing a convolution operation on the tooth anatomical feature image to extract feature descriptors and obtain at least one second feature descriptor includes:
convolving the tooth anatomical feature image to generate a second stereoscopic feature map of the tooth, the second stereoscopic feature map comprising spatial and structural features of the tooth;
performing residual convolution on the second stereoscopic feature map of the tooth to generate a detail feature map of the tooth;
the detailed feature map of the tooth is upsampled to obtain at least one second feature descriptor.
Optionally, extracting the feature descriptors from the tooth surface feature image to obtain the first feature descriptors may further include: removing noise from the tooth surface feature point cloud.
Illustratively, removing noise from the tooth surface feature point cloud may include: for each point in the point cloud, calculating the actual distances and the average distance from that point to its K nearest points; calculating the mean and standard deviation of all points' average distances, and deriving a distance threshold from the mean and the standard deviation; and removing a point as noise if its average distance falls outside the distance threshold, where K is a positive integer whose size is set according to the scene.
In this way, noise points can be accurately identified and removed by computing each point's distances to its neighboring points and judging against the distance threshold. Moreover, because the threshold is derived from the mean and standard deviation of the average distances, the noise removal criterion adapts to the specific point cloud data set, giving the method applicability and robustness across different data sets. Finally, points whose average distance lies outside the threshold are removed as noise.
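A minimal NumPy/SciPy sketch of this statistical outlier removal follows. The function name and the weighting factor alpha (threshold = mean + alpha × standard deviation) are assumptions; the text only states that the threshold is computed from the mean and the standard deviation, and that K is scene-dependent.

    import numpy as np
    from scipy.spatial import cKDTree

    def remove_outliers(points, k=8, alpha=1.0):
        # points: (N, 3) tooth surface feature point cloud
        tree = cKDTree(points)
        # query returns each point itself at distance 0, so ask for k + 1
        # neighbours and drop the first column
        dists, _ = tree.query(points, k=k + 1)
        avg_dist = dists[:, 1:].mean(axis=1)         # average distance per point
        mu, sigma = avg_dist.mean(), avg_dist.std()  # mean/std of all average distances
        threshold = mu + alpha * sigma               # distance threshold
        return points[avg_dist <= threshold]         # points outside are removed as noise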
Optionally, extracting the feature descriptors from the tooth anatomical feature image to obtain the second feature descriptors includes: extracting point cloud data from the tooth anatomical feature image to obtain a tooth anatomical feature point cloud, and extracting feature descriptors from the tooth anatomical feature image based on the tooth anatomical feature point cloud to obtain the second feature descriptors.
In this embodiment, point cloud data may be extracted from the tooth anatomical feature image through voxelization or three-dimensional reconstruction to obtain a tooth anatomical feature point cloud.
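As one hypothetical illustration of the voxelization route, the CBCT volume can be thresholded and the surviving voxel indices mapped to physical coordinates; the intensity threshold and the (x, y, z) ordering of spacing and origin below are assumptions.

    import numpy as np

    def cbct_to_point_cloud(volume, spacing, origin, threshold=1200.0):
        # keep voxels whose intensity suggests hard tissue, then map the voxel
        # indices (z, y, x) to physical (x, y, z) coordinates in millimetres
        idx = np.argwhere(volume >= threshold)   # (N, 3) voxel indices
        return origin + idx[:, ::-1] * spacing   # (N, 3) anatomical point cloud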
Optionally, the registering the tooth surface feature image onto the tooth anatomical feature image based on the first feature descriptor and the second feature descriptor to obtain a registered image includes:
determining a transformation matrix that registers the tooth surface feature image onto the tooth anatomical feature image based on the first feature descriptor and the second feature descriptor;
registering the tooth surface feature image onto the tooth anatomical feature image based on the transformation matrix to obtain a registered image.
In this way, determining the registration transformation matrix from the first and second feature descriptors enables high-precision image registration: the tooth surface feature image is accurately aligned with the tooth anatomical feature image, yielding a high-quality registered image. Registering with feature descriptors also enables an automated registration flow, improving registration efficiency and consistency. In addition, feature descriptors are robust and largely invariant to transformations such as rotation and translation, so descriptor-based registration adapts better to variation between images and is more robust.
Optionally, the determining, based on the first feature descriptor and the second feature descriptor, a transformation matrix that registers the tooth surface feature image onto the tooth anatomical feature image includes:
the following steps are performed for each first feature descriptor to determine its matching point on the tooth anatomical feature image, so that the transformation matrix is determined from the matching points of all first feature descriptors:
calculating the similarity between the first feature descriptor and all second feature descriptors;
and taking the second feature descriptor with the greatest similarity as the matching point of that first feature descriptor on the tooth anatomical feature image.
Thus, by computing the similarity between each first feature descriptor and all second feature descriptors, the matching point of each first feature descriptor on the tooth anatomical feature image can be determined efficiently, providing accurate correspondence information for subsequent registration. Determining the transformation matrix from the matching points of all first feature descriptors takes multiple correspondences into account jointly, yielding a more accurate and stable transformation matrix and thus high-quality image registration. The tooth anatomical feature image may contain complex structures and textures; taking the most similar second feature descriptor as the matching point copes well with such complexity and improves the robustness and accuracy of matching. Finally, because matching points are determined adaptively from descriptor similarity, the method is not tied to a specific scene or image variation and has strong applicability and generality.
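A brute-force sketch of this matching step follows, taking similarity as negative Euclidean distance in line with the Euclidean-distance example mentioned shortly below; for large point clouds a KD-tree over the descriptors would replace the dense distance matrix.

    import numpy as np

    def match_descriptors(src_desc, tgt_desc):
        # src_desc: (N, d) first feature descriptors (tooth surface image)
        # tgt_desc: (M, d) second feature descriptors (tooth anatomy image)
        d2 = ((src_desc[:, None, :] - tgt_desc[None, :, :]) ** 2).sum(-1)  # (N, M)
        return d2.argmin(axis=1)  # most similar second descriptor per first descriptor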
Optionally, the method further comprises:
taking the tooth surface feature point cloud as the source point cloud and the tooth anatomical feature point cloud as the target point cloud.
Further, when computing the similarity between a first feature descriptor and all second feature descriptors, the similarity (e.g., the Euclidean distance) is computed between that first feature descriptor in the source point cloud and all second feature descriptors in the target point cloud.
Optionally, the method further comprises:
forming a matching pair from each first feature descriptor and its matching point on the tooth anatomical feature image;
and performing credibility verification on each matching pair to select the correctly matched pairs, so that the transformation matrix is determined from the first feature descriptors of the correctly matched pairs and their matching points.
Specifically, the credibility verification of each matching pair includes the following steps: for a given first feature descriptor, determining, based on a candidate transformation matrix, the other second feature descriptors that satisfy that transformation;
counting the number of second feature descriptors whose similarity with any of those other second feature descriptors exceeds a set similarity threshold;
and deeming the corresponding matching pair credible if that number exceeds a set count threshold.
Optionally, in this embodiment, the transformation matrix describes the parameters, such as rotation and translation, required to transform the first feature descriptors into the coordinate frame of the tooth anatomical feature image.
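For illustration, the rotation R and translation t can be estimated from credible matching pairs with the Kabsch/Procrustes method, and a candidate transform can be scored by counting how many matched points satisfy it within a tolerance, in the spirit of the credibility verification above. The use of Kabsch and the tolerance value are assumptions; the patent does not fix the estimation algorithm.

    import numpy as np

    def rigid_transform(src_pts, tgt_pts):
        # least-squares rotation R and translation t mapping src_pts onto tgt_pts
        cs, ct = src_pts.mean(0), tgt_pts.mean(0)
        H = (src_pts - cs).T @ (tgt_pts - ct)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # repair an improper rotation (reflection)
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, ct - R @ cs

    def count_inliers(R, t, src_pts, tgt_pts, tol=0.5):
        # number of matched points satisfying the transform within tol (mm, assumed)
        err = np.linalg.norm(src_pts @ R.T + t - tgt_pts, axis=1)
        return int((err < tol).sum())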
Optionally, extracting the real tooth boundaries from the registered image includes:
performing image segmentation, on the registered image, of the registration-transformed tooth surface feature image, so as to extract the projected contours of the teeth of the tooth surface feature image, and taking these projected contours as the real tooth boundaries.
Optionally, if the registered image is obtained based on the above point cloud processing, the segmentation of the registration-transformed tooth surface feature image may likewise be performed by point cloud processing.
Optionally, image segmentation based on point cloud processing includes (see the sketch after this list):
encoding the registered image to compute self-attention over its point cloud data and derive a self-attention vector containing the key features of the teeth;
adding the positions of the corresponding point cloud data of the registered image to the self-attention vector to obtain an embedded vector;
and encoding and decoding the embedded vector to obtain the projected contour of the teeth of the tooth surface feature image on the tooth anatomical feature image.
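A hypothetical PyTorch sketch of these three steps follows; the dimensions, the use of nn.MultiheadAttention and the per-point sigmoid head are all assumptions, since the text only names the self-attention, positional-embedding and encode/decode stages.

    import torch
    import torch.nn as nn

    class PointContourNet(nn.Module):
        def __init__(self, dim=64, heads=4):
            super().__init__()
            self.embed = nn.Linear(3, dim)  # lift xyz coordinates into feature space
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.pos = nn.Linear(3, dim)    # positional encoding of the point locations
            self.encdec = nn.Sequential(    # per-point encode/decode head
                nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

        def forward(self, pts):  # pts: (B, N, 3) point cloud of the registered image
            x = self.embed(pts)
            attn_vec, _ = self.attn(x, x, x)  # self-attention vector with key tooth features
            z = attn_vec + self.pos(pts)      # embedded vector = attention + position
            return torch.sigmoid(self.encdec(z)).squeeze(-1)  # (B, N) contour probability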
Optionally, the method further comprises: correcting the tooth anatomical feature image based on the tooth boundaries extracted from the tooth anatomical feature image. In this way, the corrected tooth anatomical feature image displays the tooth anatomy more accurately, and the unreliable tooth boundaries on the tooth anatomical feature image are corrected on the basis of the reliable tooth boundaries from the tooth surface feature image.
Optionally, the method further comprises: performing auxiliary supervision on the training process of the first image processing model so as to improve the accuracy of model training.
Optionally, performing auxiliary supervision on the training process of the first image processing model includes:
performing, based on the trained first discriminator, auxiliary supervision on the training process of the first image processing model according to the corrected tooth anatomical feature image and the tooth surface feature image.
Optionally, performing auxiliary supervision on the training process of the first image processing model based on the trained first discriminator, according to the corrected tooth anatomical feature image and the tooth surface feature image, includes:
extracting features of the corrected tooth anatomical feature image and the tooth surface feature image based on the trained first discriminator, and judging from the extracted features whether the two images belong to the same target user, thereby providing auxiliary supervision for the training of the first image processing model.
In this embodiment, the first discriminator is trained with positive and negative sample pairs of corrected tooth anatomical feature images and tooth surface feature images, so that its model parameters allow it to distinguish whether a corrected tooth anatomical feature image and a tooth surface feature image belong to the same target user.
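A minimal sketch of such a discriminator update follows, assuming pairs from the same user serve as positive samples and pairs mixing two users as negative samples; the interface of D1 (two image inputs, one probability output) is hypothetical.

    import torch
    import torch.nn.functional as F

    def discriminator_step(D1, opt_d, corrected, surface_same, surface_other):
        pos = D1(corrected, surface_same)    # same user -> label 1
        neg = D1(corrected, surface_other)   # different users -> label 0
        loss = (F.binary_cross_entropy(pos, torch.ones_like(pos)) +
                F.binary_cross_entropy(neg, torch.zeros_like(neg)))
        opt_d.zero_grad()
        loss.backward()
        opt_d.step()
        return float(loss)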
Optionally, performing auxiliary supervision on the training process of the first image processing model includes:
synthesizing the tooth surface feature image and the corrected tooth anatomical feature image based on the second image processing model to obtain a synthesized surface feature image;
and performing, based on the trained second discriminator, auxiliary supervision on the training process of the first image processing model according to the synthesized surface feature image and the tooth anatomical feature image.
Here, only one of the first discriminator and the second discriminator may be selected to perform the above supervision.
Optionally, performing auxiliary supervision on the training process of the first image processing model based on the trained second discriminator, according to the synthesized surface feature image and the tooth anatomical feature image, includes:
extracting features of the synthesized surface feature image and the tooth anatomical feature image based on the trained second discriminator, and judging from the extracted features whether the two images belong to the same target user, thereby providing auxiliary supervision for the training of the first image processing model.
In this embodiment, the second discriminator is trained with positive and negative sample pairs of synthesized surface feature images and tooth anatomical feature images, so that its model parameters allow it to distinguish whether a synthesized surface feature image and a tooth anatomical feature image belong to the same target user.
Here, the first and second image processing models may be generator networks (Generator Network), and both may adopt an encoder-decoder structure.
To this end, the method may further comprise: encoding and decoding the corrected tooth anatomical feature image based on the second image processing model to obtain a restored tooth anatomical feature image (e.g., a Cycle-CBCT image), so as to compare it with the tooth anatomical feature image before correction and assist in judging the training effect of the first image processing model.
Further, for this purpose, the method may further comprise: encoding and decoding the synthesized surface feature image based on the first image processing model to obtain a restored tooth surface feature image (e.g., a Cycle intraoral-scan image), so as to compare it with the tooth surface feature image before synthesis and assist in judging the training effect of the first image processing model.
In the above embodiments, the first discriminator and the second discriminator are trained discriminator networks (Discriminator Network).
To this end, the first image processing model, the second image processing model, the first discriminator and the second discriminator may, for example, be implemented based on a generative adversarial network (GAN) model.
Here, the functions described above for the first image processing model, the second image processing model, the first discriminator and the second discriminator are only some of the functions these components implement, not all of them.
During training, the first image processing model, the second image processing model, the first discriminator and the second discriminator all participate together to train the first image processing model; for the automatic extraction of tooth boundaries after training, only the trained first image processing model is required.
In this way, feature extraction from the corrected tooth anatomical feature image and the tooth surface feature image by the trained first discriminator, together with the target user judgment, supervises the training of the first image processing model and improves the accuracy and robustness of the model. Using the first and second discriminators for feature extraction and target user judgment also makes supervision of the training process automatic, improving training efficiency and consistency. Judging whether the corrected tooth anatomical feature image and the tooth surface feature image belong to the same target user strengthens the model's ability to recognize and distinguish different users' features, improving robustness and generalization. Performing feature extraction and target user judgment on the synthesized surface feature image and the tooth anatomical feature image increases the relevance of the data and keeps the training data closer to the actual application scenario, improving the practicality and reliability of the model. Moreover, introducing discriminators for auxiliary supervision adds an adversarial element that sharpens the model's ability to distinguish different users' features, improving its security and privacy protection capability.
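As a minimal sketch of how the supervision signal and the discriminator-based auxiliary supervision might combine in one generator update: the interfaces of G1 and D1 and the loss weight are assumptions, since the text names the roles of the models but not their implementation.

    import torch
    import torch.nn.functional as F

    def generator_step(G1, D1, opt_g, cbct, surface, real_boundary):
        corrected, pred_boundary = G1(cbct)  # corrected anatomy image + extracted boundary
        loss_sup = F.binary_cross_entropy(pred_boundary, real_boundary)   # supervision signal
        score = D1(corrected, surface)       # probability that both images share one user
        loss_aux = F.binary_cross_entropy(score, torch.ones_like(score))  # push toward "same user"
        loss = loss_sup + 0.1 * loss_aux     # the 0.1 weight is an assumption
        opt_g.zero_grad()
        loss.backward()
        opt_g.step()
        return float(loss)

A symmetric update with G2 and D2, plus the cycle-consistency comparisons described above, would complete the adversarial training loop.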
Fig. 2 is a flowchart of an image correction method according to an embodiment of the present application. As shown in fig. 2, it includes:
s201, acquiring a first mode image and a second mode image, wherein the first mode image is used for representing tooth surface characteristics, and the second mode image is used for representing tooth anatomical characteristics;
s202, mapping the first modal image to the second modal image to obtain a registration image;
s203, extracting real tooth boundaries from the registration images;
s204, extracting tooth boundaries from the second mode image based on the set image correction model;
s205, adjusting model parameters of the image correction model based on the real tooth boundary and the tooth boundary actually extracted by the image correction model until training is completed;
s206, automatically extracting tooth boundaries of the second-mode image to be corrected based on the trained image correction model, and removing artifacts on the second-mode image to be corrected.
In this embodiment, the artifacts include, but are not limited to, motion artifacts and severe metal artifacts from dental restorations.
In this embodiment, the first modality image may be the above-mentioned tooth surface feature image, and the second modality image may be the above-mentioned tooth anatomical feature image.
In this embodiment, the image correction model may be, for example, the first image processing model described above.
Here, an exemplary description of the relevant steps of the present embodiment can be found in the description of fig. 1 above.
The embodiment of the application provides a training method of an image correction model, which comprises the following steps:
acquiring a first modality image and a second modality image, wherein the first modality image represents tooth surface features and the second modality image represents tooth anatomical features;
mapping the first modality image onto the second modality image to obtain a registered image;
extracting real tooth boundaries from the registered image as supervision signals;
extracting tooth boundaries from the second modality image based on a set image correction model;
and adjusting model parameters of the image correction model based on the supervision signals and the tooth boundaries actually extracted by the image correction model until training is completed.
In this embodiment, the first modality image may be the above-mentioned tooth surface feature image, and the second modality image may be the above-mentioned tooth anatomical feature image.
In this embodiment, the image correction model may be, for example, the first image processing model described above.
An image correction method of an embodiment of the application includes:
invoking an image correction model trained according to the above embodiments of the present application;
and inputting the second modality image to be corrected into the image correction model for correction, so as to extract tooth boundaries from it and remove its artifacts.
In the above model training processes, when model parameters need to be adjusted, a loss value may, for example, be computed with a set loss function to estimate the degree of match between the real tooth boundaries and the actually extracted tooth boundaries, and the model parameters are then adjusted by gradient descent according to that degree of match.
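For example, a soft Dice loss is one possible choice of such a loss function for measuring how well the extracted boundary overlaps the real one; choosing Dice rather than, say, cross-entropy is an assumption.

    import torch

    def boundary_loss(pred, target, eps=1e-6):
        # 1 - Dice coefficient between the extracted boundary map (pred, in [0, 1])
        # and the real boundary map (target, binary); eps avoids division by zero
        inter = (pred * target).sum()
        return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    # usage in one gradient-descent step (model and optimizer assumed):
    #   loss = boundary_loss(model(cbct), real_boundary)
    #   loss.backward()
    #   optimizer.step()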
Fig. 3 is a schematic view of a dental treatment system according to an embodiment of the present application. As shown in fig. 3, the system includes a robotic arm 301 and a host 302, wherein the host is used to execute the above method to automatically extract tooth boundaries from a target tooth anatomical feature image of a patient, and to control the robotic arm to perform a set target action on the patient's target tooth based on the automatically extracted tooth boundaries.
The detailed technical implementation of this embodiment can be found in the description of fig. 1.
The terms "first," "second," "the first," or "the second," as used in various embodiments of the present disclosure, may modify various components without regard to order and/or importance, but these terms do not limit the corresponding components. The above description is only configured for the purpose of distinguishing an element from other elements. For example, the first user device and the second user device represent different user devices, although both are user devices. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being "coupled" (operatively or communicatively) to "another element (e.g., a second element) or" connected "to another element (e.g., a second element), it is understood that the one element is directly connected to the other element or the one element is indirectly connected to the other element via yet another element (e.g., a third element). In contrast, it will be understood that when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (a second element), then no element (e.g., a third element) is interposed therebetween.
The foregoing description covers only the preferred embodiments of the present application and illustrates the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of features described above, and is intended to cover other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example embodiments formed by interchanging the above features with technical features of similar function disclosed in this application (but not limited thereto).

Claims (11)

1. An automatic extraction method of tooth boundaries, comprising:
acquiring a tooth surface feature image and a tooth anatomy feature image;
registering the tooth surface feature image onto the tooth anatomical feature image to obtain a registered image;
extracting real tooth boundaries from the registered image as supervision signals;
extracting tooth boundaries from the tooth anatomic feature images based on a first image processing model;
based on the supervision signals and the tooth boundaries actually extracted by the first image processing model, adjusting model parameters of the first image processing model until training is completed;
and automatically extracting tooth boundaries from the target tooth anatomical feature image based on the trained first image processing model.
2. The method of claim 1, wherein said registering the tooth surface feature image onto the tooth anatomical feature image to obtain a registered image comprises:
extracting feature descriptors from the tooth surface feature images to obtain at least one first feature descriptor;
extracting feature descriptors from the tooth anatomical feature images to obtain at least one second feature descriptor;
registering the tooth surface feature image onto the tooth anatomical feature image based on the first feature descriptor and the second feature descriptor to obtain a registered image.
3. The method according to claim 2, wherein the extracting the feature descriptors from the tooth surface feature image to obtain the first feature descriptors includes: extracting point cloud data from the tooth surface feature image to obtain tooth surface feature point cloud, and extracting feature descriptors from the tooth surface feature image based on the tooth surface feature point cloud to obtain first feature descriptors;
and/or, extracting the feature descriptors from the tooth anatomical feature image to obtain the second feature descriptors includes: extracting point cloud data from the tooth anatomical feature image to obtain a tooth anatomical feature point cloud, and extracting feature descriptors from the tooth anatomical feature image based on the tooth anatomical feature point cloud to obtain the second feature descriptors.
4. The method of claim 2, wherein the registering the tooth surface feature image onto the tooth anatomical feature image based on the first feature descriptor and the second feature descriptor to obtain a registered image comprises:
determining a transformation matrix that registers the tooth surface feature image onto the tooth anatomical feature image based on the first feature descriptor and the second feature descriptor;
registering the tooth surface feature image onto the tooth anatomical feature image based on the transformation matrix to obtain a registered image.
5. The method of claim 4, wherein the determining a transformation matrix to register the tooth surface feature image onto the tooth anatomical feature image based on the first feature descriptor and the second feature descriptor comprises:
the following steps are performed for each first feature descriptor to determine its matching point on the tooth anatomical feature image, so that the transformation matrix is determined from the matching points of all first feature descriptors:
calculating the similarity between the first feature descriptor and all second feature descriptors;
and taking the second feature descriptor with the greatest similarity as the matching point of that first feature descriptor on the tooth anatomical feature image.
6. The method of claim 1, wherein the extracting real tooth boundaries from the registered image comprises:
performing image segmentation, on the registered image, of the registration-transformed tooth surface feature image, so as to extract the projected contours of the teeth of the tooth surface feature image, and taking these projected contours as the real tooth boundaries.
7. The method of claim 6, further comprising: correcting the tooth anatomical feature image based on the tooth boundaries extracted from the tooth anatomical feature image.
8. An image correction method, comprising:
acquiring a first modality image and a second modality image, wherein the first modality image represents tooth surface features and the second modality image represents tooth anatomical features;
mapping the first modality image onto the second modality image to obtain a registered image;
extracting real tooth boundaries from the registered image;
extracting tooth boundaries from the second modality image based on a set image correction model;
based on the real tooth boundaries and the tooth boundaries actually extracted by the image correction model, adjusting model parameters of the image correction model until training is completed;
and automatically extracting tooth boundaries from the second modality image to be corrected based on the trained image correction model, and removing artifacts from the second modality image to be corrected.
9. A method of training an image correction model, comprising:
acquiring a first modality image and a second modality image, wherein the first modality image represents tooth surface features and the second modality image represents tooth anatomical features;
mapping the first modality image onto the second modality image to obtain a registered image;
extracting real tooth boundaries from the registered image as supervision signals;
extracting tooth boundaries from the second modality image based on a set image correction model;
and adjusting model parameters of the image correction model based on the supervision signals and the tooth boundaries actually extracted by the image correction model until training is completed.
10. An image correction method, comprising:
invoking the image correction model trained according to the method of claim 9;
and inputting the second modality image to be corrected into the image correction model for correction, so as to extract tooth boundaries from it and remove its artifacts.
11. A dental treatment system, comprising: a robotic arm and a host, the host being configured to perform the method of any one of claims 1-7 to automatically extract tooth boundaries from a target tooth anatomical feature image of a patient, and to control the robotic arm to perform a set target action on the patient's target tooth based on the automatically extracted tooth boundaries.
CN202311671789.9A (priority date 2023-12-07, filing date 2023-12-07) — Automatic extraction method of tooth boundary, image correction method and tooth treatment system — Pending — CN117670956A

Priority Applications (1)

CN202311671789.9A — priority date 2023-12-07, filing date 2023-12-07 — Automatic extraction method of tooth boundary, image correction method and tooth treatment system

Applications Claiming Priority (1)

CN202311671789.9A — priority date 2023-12-07, filing date 2023-12-07 — Automatic extraction method of tooth boundary, image correction method and tooth treatment system

Publications (1)

CN117670956A — published 2024-03-08

Family

ID=90070988

Family Applications (1)

CN202311671789.9A — priority date 2023-12-07, filing date 2023-12-07 — Automatic extraction method of tooth boundary, image correction method and tooth treatment system

Country Status (1)

CN: CN117670956A


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination