CN114332123A - Automatic caries grading method and system based on panoramic film - Google Patents

Automatic caries grading method and system based on panoramic film

Info

Publication number
CN114332123A
CN114332123A (application CN202111652116.XA)
Authority
CN
China
Prior art keywords
tooth
caries
dentin
panoramic
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111652116.XA
Other languages
Chinese (zh)
Inventor
陈庆光
黄俊超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202111652116.XA priority Critical patent/CN114332123A/en
Publication of CN114332123A publication Critical patent/CN114332123A/en
Pending legal-status Critical Current

Landscapes

  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of intelligent diagnosis of oral cavity images, and particularly relates to a panoramic film-based automatic caries grading method and system. The method comprises: S1, performing instance segmentation and cropping on the teeth of a panoramic film to obtain an instance segmentation sample of each individual tooth; S2, classifying the obtained tooth instance segmentation samples into a restored state and a non-restored state using a trained tooth coarse classification model; S3, passing the non-restored tooth samples through a trained tooth tissue semantic segmentation model to obtain the different tissue regions of each segmented tooth; and S4, grading each tooth into 4 levels, healthy, shallow caries, medium caries, and deep caries, via a caries grading model according to caries grade diagnostic criteria, based on the obtained tissue conditions of the different regions of each tooth. The invention classifies tooth health status, grades caries automatically, and reduces the workload of clinicians.

Description

Automatic caries grading method and system based on panoramic film
Technical Field
The invention belongs to the technical field of intelligent diagnosis of oral cavity images, and particularly relates to a panoramic film-based automatic caries grading method and a panoramic film-based automatic caries grading system.
Background
Oral health is an important component of overall health and an important marker of residents' physical and mental health and level of civilization. Oral diseases not only cause toothache and directly affect physiological functions such as chewing and pronunciation, but are also closely related to systemic diseases such as stroke, heart disease, diabetes, and digestive disorders. Digital medical imaging technologies in the oral field include periapical radiographs, panoramic X-ray films, optical coherence tomography, cone beam computed tomography, intraoral three-dimensional scanning, and the like. Caries is a progressive lesion caused by demineralization of hard dental tissues; it is recognized by the World Health Organization (WHO) as one of the three major oral diseases, with a high incidence and wide age coverage. Clinical diagnosis of caries relies mainly on visual inspection and probing and is therefore highly subjective. The structure of a tooth consists mainly of enamel, dentin, and the pulp cavity; according to which of these tissues the demineralized lesion has invaded, caries can be divided into 4 different grades such as shallow caries, medium caries, and deep caries. Caries lesions at different stages require different treatments, so diagnosing the caries level is clinically important.
The panoramic film is a simple and fast technique based on the principle of narrow-slit, arc-trajectory tomography: with a single exposure it can completely and clearly display the full appearance of the maxilla and mandible, the maxillary and mandibular dentition, and the alveolar bone on one film. It is widely used clinically because of its simple operation, wide examination range, and low radiation dose. The curved-surface tomogram can show, in one exposure, anatomical structures such as the full dentition, jaw bones, nasal cavity, maxillary sinuses, and temporomandibular joints. With its wide display range it is commonly used for symmetric observation of jaw-bone morphology, jaw-bone lesions, the growth of the jaws and teeth, and the degree of alveolar bone resorption in generalized periodontal disease, and it is an important auxiliary examination in clinical oral practice.
Because panoramic films are widely available in primary community hospitals, their large-range, wide-angle, high-throughput imaging makes it easy to diagnose the health status of all teeth in the oral cavity; in particular, different degrees of caries lesions can be judged from the distribution of gray levels in the panoramic film, which makes it possible to establish oral health records for all teeth at the primary community level. At the same time, the tissues in the oral cavity are complex, including hard tissues such as the jaw bones and soft tissues such as muscles; their spatial arrangement is interleaved, and the X-ray projections overlap during panoramic imaging, so the contrast is not high. Traditionally, dental health has been diagnosed mainly by visual examination by an oral clinician. With the deep integration and rapid development of artificial intelligence and medical imaging, the self-learning ability of neural networks combined with high-quality image annotations can be expected to yield automatic diagnosis results, provide diagnostic assistance, and improve the intelligence of clinical decision-making.
Therefore, it is necessary to design a panoramic-based automatic caries grading method and system.
For example, Chinese patent application No. CN202110159089.6 describes an artificial-intelligence-based caries diagnosis system composed of an image acquisition module, a data transmission module, an intelligent diagnosis module, and a diagnosis result display module. Using stored big data of caries cases, the system trains an artificial-intelligence caries diagnosis model on a cloud computer; dental image information of a user is acquired by shooting with a mobile phone or tablet camera, a user client and flash are used as the data transmission media, image processing and caries diagnosis are performed by the trained model, and the diagnosis result is displayed as an image on the user interface of the client. Although it is simple and convenient to operate, low in cost, and can effectively find caries patients early so that they can be treated in time, its drawback is that it cannot provide a visual diagnosis process for clinicians and cannot assist dentists in clinical diagnosis, which increases the workload of clinicians.
Disclosure of Invention
In order to overcome the problems that conventional clinical caries diagnosis relies mainly on visual inspection and probing, is highly subjective, and cannot provide classification of tooth health status or automatic grading of caries, the invention provides a panoramic-film-based automatic caries grading method and system that can classify tooth health status, grade caries automatically, and reduce the workload of clinicians.
In order to achieve the purpose, the invention adopts the following technical scheme:
the automatic caries grading method based on panoramic film includes the following steps:
s1, segmenting and cutting the tooth examples of the panoramic picture to obtain the example segmentation samples of single tooth in the panoramic picture;
s2, segmenting the tooth example obtained in the step S1 into samples by utilizing the trained tooth rough classification model, and classifying the tooth example into a restoration state and a non-restoration state;
s3, obtaining a plurality of different regional tissues of the segmented teeth after the non-restored tooth sample in the step S2 passes through the trained tooth tissue semantic segmentation model;
s4, classifying the tooth into 4 grades of healthy, shallow caries, medium caries and deep caries according to caries grade diagnosis standards through a caries grade model according to the tissue conditions of different areas of each tooth obtained in the step S3.
Preferably, step S1 comprises the following steps:
S11, manually annotating tooth instances in the collected panoramic film data set;
S12, training a Mask R-CNN instance segmentation model on the manually annotated panoramic film data set;
S13, performing tooth instance segmentation on the panoramic film with the trained Mask R-CNN instance segmentation model to obtain the outer contour of each tooth;
S14, computing the maximum rectangular bounding box of each obtained tooth outer contour;
S15, expanding the boundaries of the maximum rectangular bounding box outward along the long and short axes and cropping, to obtain the instance segmentation sample of each individual tooth in the panoramic film.
Preferably, step S2 further comprises the following steps:
S21, designing a lightweight convolutional neural network framework whose output classifies a tooth as restored or non-restored;
S22, labeling each tooth instance segmentation sample obtained in step S1 as restored or non-restored;
S23, training the neural network model with the labeled tooth instance segmentation samples to obtain the tooth coarse classification model;
S24, classifying the tooth instance segmentation samples with the trained tooth coarse classification model into two categories, the restored state and the non-restored state.
Preferably, step S3 comprises the following steps:
S31, labeling the non-restored tooth samples distinguished by the tooth coarse classification model to obtain the enamel, dentin, pulp cavity, and carious region of each tooth sample;
S32, training a U-Net deep learning segmentation model with the tooth samples labeled in step S31 to obtain the trained tooth tissue semantic segmentation model;
S33, performing tissue segmentation on the tooth samples with the trained tooth tissue semantic segmentation model to obtain the enamel, dentin, pulp cavity, and carious region of each tooth sample.
Preferably, step S4 comprises the following steps:
S41, for each tooth instance segmentation sample of the panoramic film, computing second moments from the obtained tooth outer contour, determining the direction of the tooth's main axis, and establishing a coordinate system for each tooth with the center of gravity of the tooth contour as the origin and the main and secondary axes as the coordinate axes;
S42, with the center of gravity of the pulp cavity region as the origin and the radius of the circumscribed circle of the tooth contour as the radius, generating scanning lines starting from the origin at angular intervals of Δθ over the range (0, π);
S43, traversing the tooth tissue region to which each point on each scanning line belongs and, combining the enamel, dentin, and pulp cavity regions obtained by the tooth tissue semantic segmentation model, obtaining the inner and outer boundary lines of the dentin, where the inner and outer dentin boundary lines comprise the enamel-dentin boundary line and the dentin-pulp cavity boundary line;
S44, extracting the dentin 1/2 centerline from the obtained inner and outer dentin boundary lines using a unit-circle rolling tracking method;
S45, taking the enamel outer contour line, the enamel-dentin boundary line, the dentin 1/2 centerline, and the pulp cavity outer contour line as edges and connecting the first and last end points of these boundary curves to form closed regions: the enamel region, the outer 1/2 dentin, and the inner 1/2 dentin; the enamel region is defined as the shallow caries decision region, the outer 1/2 dentin as the medium caries decision region, and the inner 1/2 dentin as the deep caries decision region;
S46, performing a spatial AND operation between the caries region obtained by the tooth tissue semantic segmentation model and, in turn, the deep, medium, and shallow caries decision regions obtained in step S45, judging whether the caries region overlaps each decision region, and determining the caries level diagnosis result.
Preferably, step S43 comprises the following step:
if the points immediately before and after the current traversal point on the scanning line belong to different regions, the current traversal point is regarded as a boundary point.
Preferably, step S46 further comprises the following steps:
if the result of the AND operation is not 0, it is judged that overlap exists and the caries grade diagnosis result of the corresponding level is determined; if the caries region overlaps none of the deep, medium, and shallow caries decision regions, the tooth is judged to be healthy.
The invention also provides a panoramic-film-based automatic caries grading system, comprising:
a tooth instance segmentation and cropping module, for performing instance segmentation and cropping on the teeth of the panoramic film to obtain an instance segmentation sample of each individual tooth;
a tooth coarse classification module, for classifying the obtained tooth instance segmentation samples into a restored state and a non-restored state using the trained tooth coarse classification model;
a tooth tissue semantic segmentation module, for passing the non-restored tooth samples through the trained tooth tissue semantic segmentation model to obtain the different tissue regions of each segmented tooth;
and a caries grading module, for grading each tooth as healthy, shallow caries, medium caries, or deep caries according to the caries grading model and caries grade diagnostic criteria, based on the obtained tissue conditions of the different regions of each tooth.
Compared with the prior art, the invention has the following beneficial effects: (1) it classifies tooth health status and grades caries automatically; (2) it provides a visual diagnosis process for clinicians, can assist clinicians in clinical diagnosis, and helps promote the establishment and management of residents' oral health records.
Drawings
FIG. 1 is a flow chart of the panoramic-based automatic caries grading method of the present invention;
FIG. 2 is a schematic diagram of the annotation result of the panorama data according to the method of the present invention;
FIG. 3 is a schematic diagram of a Mask R-CNN network in the method of the present invention;
FIG. 4 is a diagram illustrating the tooth instance segmentation result on a panoramic film in the method of the present invention;
FIG. 5 is a schematic representation of the results of the cutting of a single tooth in the method of the invention;
FIG. 6 is a schematic representation of a restored tooth sample in accordance with the method of the invention;
FIG. 7 is a schematic illustration of the results of labeling a region of tooth tissue according to the method of the present invention;
FIG. 8 is a schematic diagram of a framework of a U-Net semantic segmentation model in the method of the present invention;
FIG. 9 is a schematic representation of the coordinate system and representative scan lines on a cropped tooth sample in the method of the present invention;
FIG. 10 is a schematic diagram of points on the contour line of the inner and outer boundaries of dentin extracted by the method of the present invention;
FIG. 11 is a schematic representation of the extraction of the centerline of dentin 1/2 in accordance with the method of the present invention;
FIG. 12 is a schematic view of the process of the present invention for the automatic grading of caries based on panoramic film.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention, the following description will explain the embodiments of the present invention with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
Example 1:
As shown in FIG. 1, the invention uses deep learning to obtain the instance segmentation of the teeth in a panoramic film and the tissue segmentation of each tooth, and on this basis uses the overlap between the carious region and the different regions of healthy tooth tissue to grade caries automatically. The specific flow is shown in FIG. 1.
The automatic caries grading method based on panoramic film comprises the following steps:
S1, performing instance segmentation and cropping on the teeth of the panoramic film to obtain an instance segmentation sample of each individual tooth in the panoramic film;
S2, classifying the tooth instance segmentation samples obtained in step S1 into a restored state and a non-restored state using a trained tooth coarse classification model;
S3, passing the non-restored tooth samples from step S2 through a trained tooth tissue semantic segmentation model to obtain the different tissue regions of each segmented tooth;
S4, grading each tooth into 4 levels, healthy, shallow caries, medium caries, and deep caries, via a caries grading model according to caries grade diagnostic criteria, based on the tissue conditions of the different regions of each tooth obtained in step S3.
Step S1 specifically includes the following steps:
collecting the image data of the oral panoramic film, cleaning all the data, and eliminating bad samples with the defects of overall over-dark, over-bright, poor imaging effect, serious tooth loss, incapability of distinguishing by human eyes and the like. 1000 samples were finally obtained. Two doctors with abundant clinical experience use VGG Image annotor labeling software to label to obtain the external contour and health condition of each tooth. Wherein the health status is classified into 5 categories of healthy, deep caries, shallow caries, middle caries, and repaired. The labeling results are shown in FIG. 2.
Mask R-CNN is an effective instance segmentation deep learning framework: it introduces an FPN (Feature Pyramid Network) into the backbone of Faster R-CNN, improves ROI Pooling into ROI Align, and adds an FCN branch to realize mask segmentation. The Mask R-CNN network framework and its main components are shown in FIG. 3.
the model training uses SGD (stored Gradient decision) optimizer, the initial learning rate is set to be 0.025, the momentum parameter is set to be 0.9, the regular term parameter of the loss function is 0.0001, and the learning rate adjusting strategy adopts a linear hot start and equidistant attenuation mode. The training process uses four video cards, the number of training pictures per card is 2, i.e. the batch size is 8. A total of 30 rounds of training were performed to fit the network parameters continuously.
FIG. 4 shows the instance segmentation result on a panoramic film obtained with the trained Mask R-CNN instance segmentation model.
For the tooth contour obtained from each instance segmentation result, the maximum rectangular bounding box of the tooth outer contour is computed; the result is the rectangular box shown in FIG. 4. To crop out each tooth sample, let the boundary point coordinates of the maximum rectangular bounding box of the contour be P_i(x_i, y_i) and let the outward expansion be 25 pixels. The boundaries are computed as follows:
x_start = min(x_i) - 25, i = 0, 1, ..., n
y_start = min(y_i) - 25, i = 0, 1, ..., n
w = max(x_i) - min(x_i) + 50, i = 0, 1, ..., n
h = max(y_i) - min(y_i) + 50, i = 0, 1, ..., n
where (x_start, y_start) are the coordinates of the starting point of the upright rectangular crop box, and w and h are its width and height.
Using the computed crop-box starting point (x_start, y_start) and the corresponding width w and height h, the original panoramic film is cropped to obtain a crop of each tooth sample; a typical result is shown in FIG. 5.
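A minimal sketch of this cropping step (assuming the contour is an (N, 2) array of (x, y) pixel coordinates taken from the Mask R-CNN mask, and the panoramic film is a grayscale NumPy array):

```python
import numpy as np

def crop_tooth(panorama: np.ndarray, contour: np.ndarray, margin: int = 25) -> np.ndarray:
    """Expand the tooth's bounding box by `margin` pixels on each side and crop it."""
    xs, ys = contour[:, 0], contour[:, 1]
    x_start = max(int(xs.min()) - margin, 0)   # clamp to the image border
    y_start = max(int(ys.min()) - margin, 0)
    w = int(xs.max() - xs.min()) + 2 * margin  # matches max(x_i) - min(x_i) + 50 for margin = 25
    h = int(ys.max() - ys.min()) + 2 * margin
    x_end = min(x_start + w, panorama.shape[1])
    y_end = min(y_start + h, panorama.shape[0])
    return panorama[y_start:y_end, x_start:x_end]
```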
Step S2 specifically includes the following steps:
For the binary classification of restored versus non-restored teeth, restored teeth have obvious highlighted regions and distinct features, so a lightweight convolutional neural network framework is adopted. The framework has a clear structure, few parameters, and low training complexity. The designed lightweight convolutional neural network has 15 layers in total, including 9 convolutional layers, 4 max-pooling layers, and 2 fully connected layers, and the final output layer gives the binary classification result. Each convolutional layer is followed by a BN layer and a ReLU activation. During training, Dropout layers with probability 0.5 are inserted at the fully connected layers to reduce overfitting, and the cross-entropy loss is minimized as the optimization objective. Table 1 below gives the kernel size, number, stride, padding, and output of each layer.
Table 1 Lightweight convolutional neural network framework
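Since the per-layer parameters of Table 1 are only available as images in the original publication, the sketch below is an illustrative reconstruction of a network of the kind described above (9 convolutional layers, 4 max-pooling layers, 2 fully connected layers, BN + ReLU after each convolution, Dropout 0.5 at the fully connected layers); the channel widths and the assumed 128x128 grayscale input are hypothetical.

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class ToothCoarseClassifier(nn.Module):
    """9 conv + 4 max-pool + 2 fully connected layers; binary restored/non-restored output."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            conv_bn_relu(1, 16), conv_bn_relu(16, 16), nn.MaxPool2d(2),
            conv_bn_relu(16, 32), conv_bn_relu(32, 32), nn.MaxPool2d(2),
            conv_bn_relu(32, 64), conv_bn_relu(64, 64), nn.MaxPool2d(2),
            conv_bn_relu(64, 128), conv_bn_relu(128, 128), conv_bn_relu(128, 128),
            nn.MaxPool2d(2),
        )
        # With an assumed 128x128 grayscale input, four 2x2 poolings give a 128 x 8 x 8 map.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5), nn.Linear(128 * 8 * 8, 256), nn.ReLU(inplace=True),
            nn.Dropout(0.5), nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# criterion = nn.CrossEntropyLoss()  # cross-entropy as the optimization objective
```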
The cropped tooth samples are labeled by category: tooth samples with root canal treatment, crown restoration, and the like are labeled as the restored state, while samples of other tooth types, including healthy and carious teeth, are labeled as the non-restored state. A restored tooth sample is shown in FIG. 6.
The lightweight convolutional neural network of Table 1 is trained on the labeled data set to obtain a binary classification model that distinguishes the restored state from the non-restored state.
The trained model is applied to the segmented tooth samples to predict each tooth's class, distinguishing restored from non-restored tooth samples.
Step S3 specifically includes the following steps:
the method comprises the steps of marking a tooth tissue region of a cut single tooth sample by a clinician, wherein the marking mode includes that a closed contour is drawn along the boundary of an enamel region, a dentin region, a pulp cavity region and a carious region, and the marking mode includes four regions of the enamel region, the dentin region, the pulp cavity region and the carious region. As shown in particular in fig. 7.
On the basis of the labeled data, a U-Net semantic segmentation model is used to segment the different tissue regions of a single tooth. The U-shaped structure of U-Net is shown in FIG. 8: four downsampling stages progressively extract high-level semantic features, and correspondingly four upsampling stages restore the features to the resolution of the original image. Skip connections fuse feature maps of the same scale between the downsampling and upsampling paths, so that the upsampled feature maps incorporate low-level and multi-scale features; since features at different scales all contribute to the segmentation of medical images, fusing them makes the information recovered by upsampling more precise.
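As a point of reference (a minimal sketch, not the patent's network), the following PyTorch module implements a U-Net with four downsampling and four upsampling stages joined by skip connections, as described above; the channel widths, BatchNorm blocks, single-channel input, and the five output classes (background, enamel, dentin, pulp cavity, caries) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 convolutions, each followed by BatchNorm and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class UNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=5, base=64):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]          # encoder widths: 64, 128, 256, 512
        self.enc = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.enc.append(DoubleConv(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = DoubleConv(chs[-1], chs[-1] * 2)   # 1024 channels at 1/16 resolution
        self.up = nn.ModuleList()
        self.dec = nn.ModuleList()
        for c in reversed(chs):
            self.up.append(nn.ConvTranspose2d(c * 2, c, 2, stride=2))
            self.dec.append(DoubleConv(c * 2, c))
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):                                    # H and W must be divisible by 16
        skips = []
        for enc in self.enc:                                 # four downsampling stages
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)                                        # upsample by 2
            x = dec(torch.cat([skip, x], dim=1))             # skip connection fuses same-scale features
        return self.head(x)                                  # per-pixel class logits
```

A forward pass on a cropped tooth image returns per-pixel class logits, which can be argmax-ed into the enamel/dentin/pulp-cavity/caries label map used in the later steps.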
During training, a stochastic gradient descent optimizer is used with an initial learning rate of 0.01, a momentum parameter of 0.9, a loss-function regularization (weight decay) parameter of 0.0004, and a batch size of 4. Cosine annealing with warm restarts is used as the learning-rate schedule: the number of iterations before the first restart is set to 750, and the multiplier on the number of iterations between two warm restarts in SGDR (Stochastic Gradient Descent with warm Restarts) is set to 2.
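A minimal sketch of this training setup using PyTorch's built-in SGDR scheduler (the pixel-wise cross-entropy loss is an assumption; the text does not name the segmentation loss):

```python
import torch

def build_unet_training(unet):
    optimizer = torch.optim.SGD(unet.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=0.0004)
    # SGDR: first restart after T_0 = 750 iterations, each period twice as long as the last.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
        optimizer, T_0=750, T_mult=2)
    criterion = torch.nn.CrossEntropyLoss()   # assumed pixel-wise segmentation loss
    return optimizer, scheduler, criterion
```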
Step S4 specifically includes the following steps:
For the tooth samples obtained by instance segmentation and cropping of the panoramic film, second moments are computed from the obtained tooth outer contour to determine the tooth's main-axis direction, and the covariance matrix is constructed as follows:
μ20 = Σ (x - x̄)² · I(x, y)
μ02 = Σ (y - ȳ)² · I(x, y)
μ11 = Σ (x - x̄) · (y - ȳ) · I(x, y)
C = [ μ20  μ11 ]
    [ μ11  μ02 ]
where I(x, y) represents the intensity of a grayscale image pixel and (x̄, ȳ) are the geometric center coordinates of the contour. The contour orientation angle is obtained from the angle of the eigenvector associated with the largest eigenvalue, i.e. the axis direction closest to the tooth's long axis. The specific calculation formula is:
θ = (1/2) · arctan( 2 · μ11 / (μ20 - μ02) )
After the main-axis direction θ is determined, the secondary axis follows from being perpendicular to the main axis. A coordinate system is established with the center of gravity of the tooth contour as the origin and the main and secondary axes as the coordinate axes. Then, with the center of gravity of the pulp cavity region as the origin and the radius of the circumscribed circle of the tooth contour as the radius, scanning lines starting from the origin are generated at 1-degree intervals over the range (0, π). Scan lines at two representative angles are shown in FIG. 9: FIG. 9(a) shows the coordinate system established on the cropped tooth sample and the scan line in the 100-degree direction, while FIGS. 9(b) and (c) show the scan line intersecting the inner and outer dentin boundaries, respectively.
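As a rough illustration of this step (not the patent's code), the sketch below computes the intensity-weighted second moments of the cropped tooth to get the main-axis angle and generates scan lines from the pulp-cavity center of gravity at 1-degree intervals; `img`, `mask`, `pulp_mask`, and the circumscribed-circle `radius` are assumed inputs.

```python
import numpy as np

def principal_axis_angle(img: np.ndarray, mask: np.ndarray) -> float:
    """Intensity-weighted second moments of the tooth region -> main-axis orientation."""
    ys, xs = np.nonzero(mask)
    w = img[ys, xs].astype(np.float64)
    xb, yb = np.average(xs, weights=w), np.average(ys, weights=w)   # geometric center (x̄, ȳ)
    mu20 = np.sum(w * (xs - xb) ** 2)
    mu02 = np.sum(w * (ys - yb) ** 2)
    mu11 = np.sum(w * (xs - xb) * (ys - yb))
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

def scan_lines(pulp_mask: np.ndarray, radius: float, step_deg: float = 1.0):
    """Yield pixel coordinates of scan lines starting at the pulp-cavity center of gravity."""
    ys, xs = np.nonzero(pulp_mask)
    cx, cy = xs.mean(), ys.mean()
    r = np.linspace(0.0, radius, max(int(radius), 2))
    for theta in np.arange(step_deg, 180.0, step_deg):       # angles in the open range (0, 180) degrees
        t = np.deg2rad(theta)
        px = np.clip((cx + r * np.cos(t)).astype(int), 0, pulp_mask.shape[1] - 1)
        py = np.clip((cy - r * np.sin(t)).astype(int), 0, pulp_mask.shape[0] - 1)
        yield theta, px, py
```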
Each pixel on the scan lines at the different angles is examined according to the tooth tissue region it belongs to in the segmentation result: when the search point detects a jump from the dentin label to the enamel label, the point is regarded as an outer boundary point; similarly, when a jump between the pulp cavity and the dentin occurs, the point is regarded as an inner boundary point. The inner and outer boundary points are stored and displayed, with the results shown in FIG. 10.
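A sketch of the boundary-point extraction along one scan line (the integer label codes and the `labels` map from the semantic segmentation are assumptions; `px`, `py` are the pixel coordinates produced by the scan-line generator above):

```python
import numpy as np

ENAMEL, DENTIN, PULP = 1, 2, 3   # illustrative label codes in the segmentation map

def dentin_boundary_points(labels: np.ndarray, px: np.ndarray, py: np.ndarray):
    """Scan outward from the pulp cavity; return (inner, outer) dentin boundary points."""
    line = labels[py, px]
    inner, outer = None, None
    for i in range(1, len(line)):
        prev, cur = line[i - 1], line[i]
        if prev == PULP and cur == DENTIN and inner is None:
            inner = (px[i], py[i])    # pulp -> dentin jump: inner (dentin-pulp cavity) boundary
        if prev == DENTIN and cur == ENAMEL and outer is None:
            outer = (px[i], py[i])    # dentin -> enamel jump: outer (enamel-dentin) boundary
    return inner, outer
```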
The extracted points on the inner and outer dentin boundaries are connected to obtain the inner and outer dentin boundary contour lines, respectively. The dentin 1/2 centerline is then extracted using a unit-circle rolling tracking method. In the implementation, the radius of the search circle is set to 10 pixels; the termination condition of a single search step is that the difference between the closest distances from a point on the circle to the two boundaries is below 2 pixels; the first search direction is set to the positive direction of the tooth main axis, and thereafter the search direction is updated each step to the direction from the previous circle center to the current circle center. With these initial parameters, the iteration continues until the termination point falls within the search radius of the unit circle. FIG. 11 shows the extraction result of the dentin 1/2 centerline.
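One possible reading of the rolling-circle tracking described above, sketched under assumptions (the patent's exact search and termination details may differ): `inner` and `outer` are (N, 2) arrays of dentin boundary points, `seed` is a starting centerline point, `direction` is the positive main-axis direction, and `end_point` is a supplied termination point.

```python
import numpy as np

def nearest_dist(p, boundary):
    return float(np.min(np.linalg.norm(boundary - p, axis=1)))

def half_dentin_centerline(seed, direction, inner, outer, end_point,
                           radius=10.0, tol=2.0, max_steps=500):
    center = np.asarray(seed, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    path = [center.copy()]
    for _ in range(max_steps):
        # candidate points on the forward half of the search circle
        base = np.arctan2(direction[1], direction[0])
        angles = base + np.linspace(-np.pi / 2, np.pi / 2, 91)
        cands = center + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
        diffs = [abs(nearest_dist(c, inner) - nearest_dist(c, outer)) for c in cands]
        if min(diffs) > tol:                   # no (approximately) equidistant point found
            break
        best = cands[int(np.argmin(diffs))]    # next point on the dentin 1/2 centerline
        direction = (best - center) / np.linalg.norm(best - center)
        center = best
        path.append(center.copy())
        if np.linalg.norm(center - np.asarray(end_point, dtype=float)) <= radius:
            break                              # termination point lies inside the search circle
    return np.array(path)
```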
The caries region obtained from the tooth tissue semantic segmentation model is intersected (spatial AND operation) in turn with the deep, medium, and shallow caries decision regions obtained in the previous step, to judge whether the caries region overlaps each of them: if the sum of the AND result is not 0, overlap exists and the diagnosis result of the corresponding level is determined; if the caries region overlaps none of the three decision regions, the tooth is judged to be healthy. In this way, classification into the different caries levels, healthy, shallow caries, medium caries, and deep caries, is achieved.
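A minimal sketch of this grading decision with boolean masks (names are illustrative):

```python
import numpy as np

def grade_caries(caries_mask: np.ndarray,
                 deep_region: np.ndarray,
                 medium_region: np.ndarray,
                 shallow_region: np.ndarray) -> str:
    """Intersect the caries mask with each decision region, from deep to shallow."""
    for grade, region in (("deep caries", deep_region),
                          ("medium caries", medium_region),
                          ("shallow caries", shallow_region)):
        if np.logical_and(caries_mask, region).sum() != 0:   # AND result is not 0: overlap exists
            return grade
    return "healthy"                                         # no overlap with any decision region
```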
In summary, the automatic caries grading strategy of the invention is as follows: on the basis of panoramic-film tooth instance segmentation and single-tooth tissue region segmentation obtained with deep learning segmentation models, a coordinate system is established from the tooth morphology, search lines for the inner and outer dentin boundaries are generated, discrete inner and outer dentin boundary points are searched, the dentin 1/2 centerline is obtained, the caries level decision regions are generated, and the caries level decision is made. The intermediate results of each stage are shown in FIG. 12.
The invention also provides a panoramic-film-based automatic caries grading system, comprising:
a tooth instance segmentation and cropping module, for performing instance segmentation and cropping on the teeth of the panoramic film to obtain an instance segmentation sample of each individual tooth;
a tooth coarse classification module, for classifying the obtained tooth instance segmentation samples into a restored state and a non-restored state using the trained tooth coarse classification model;
a tooth tissue semantic segmentation module, for passing the non-restored tooth samples through the trained tooth tissue semantic segmentation model to obtain the different tissue regions of each segmented tooth;
and a caries grading module, for grading each tooth as healthy, shallow caries, medium caries, or deep caries according to the caries grading model and caries grade diagnostic criteria, based on the obtained tissue conditions of the different regions of each tooth.
The four modules correspond to the four steps of the method respectively.
For the panoramic films widely used in community hospitals, the invention provides an automatic caries grading technique combined with deep learning. A panoramic film is input; each tooth is segmented with a deep learning instance segmentation model and then tissue-segmented to determine the different regions such as the lesion region, enamel, dentin, and pulp cavity. Considering that restored teeth show an obvious brightness difference with distinct, easily distinguishable features, a two-stage hierarchical classification network is designed to classify teeth into 5 different states: healthy, shallow caries, medium caries, deep caries, and restored. First, a CNN model is constructed to distinguish the restored state from the non-restored state and to pick out restored teeth with obvious brightness characteristics. The remaining tooth samples are tissue-segmented to determine the different tissue regions such as enamel, dentin, pulp cavity, and caries. The boundary lines of these regions are then determined, providing region-level decision support for automatic caries grading.
On the basis of tooth tissue segmentation, the invention extracts the boundary lines of each tooth tissue region using a ray-scanning intersection method and realizes automatic caries grading by judging the intersection of the caries region with each tooth tissue region. Unlike the black-box character of deep-learning classification schemes, it provides a visual, interpretable diagnosis process for clinicians and can assist them in clinical diagnosis. In particular, the system fills a gap in caries grading on panoramic images and has guiding significance for subsequent work on caries grading. The system and method greatly reduce the workload of clinicians, help promote the establishment of residents' oral health records and intelligent, big-data-based oral health management, and are of great significance to the high-quality development of the oral health service industry.
The foregoing has outlined rather broadly the preferred embodiments and principles of the present invention and it will be appreciated that those skilled in the art may devise variations of the present invention that are within the spirit and scope of the appended claims.

Claims (8)

1. An automatic caries grading method based on panoramic film, characterized by comprising the following steps:
S1, performing instance segmentation and cropping on the teeth of the panoramic film to obtain an instance segmentation sample of each individual tooth in the panoramic film;
S2, classifying the tooth instance segmentation samples obtained in step S1 into a restored state and a non-restored state using a trained tooth coarse classification model;
S3, passing the non-restored tooth samples from step S2 through a trained tooth tissue semantic segmentation model to obtain the different tissue regions of each segmented tooth;
S4, grading each tooth into 4 levels, healthy, shallow caries, medium caries, and deep caries, via a caries grading model according to caries grade diagnostic criteria, based on the tissue conditions of the different regions of each tooth obtained in step S3.
2. The automatic caries grading method based on panoramic film according to claim 1, characterized in that step S1 comprises the following steps:
S11, manually annotating tooth instances in the collected panoramic film data set;
S12, training a Mask R-CNN instance segmentation model on the manually annotated panoramic film data set;
S13, performing tooth instance segmentation on the panoramic film with the trained Mask R-CNN instance segmentation model to obtain the outer contour of each tooth;
S14, computing the maximum rectangular bounding box of each obtained tooth outer contour;
S15, expanding the boundaries of the maximum rectangular bounding box outward along the long and short axes and cropping, to obtain the instance segmentation sample of each individual tooth in the panoramic film.
3. The automatic caries grading method based on panoramic film according to claim 1, characterized in that step S2 further comprises the following steps:
S21, designing a lightweight convolutional neural network framework whose output classifies a tooth as restored or non-restored;
S22, labeling each tooth instance segmentation sample obtained in step S1 as restored or non-restored;
S23, training the neural network model with the labeled tooth instance segmentation samples to obtain the tooth coarse classification model;
S24, classifying the tooth instance segmentation samples with the trained tooth coarse classification model into two categories, the restored state and the non-restored state.
4. The automatic caries grading method based on panoramic film according to claim 1, characterized in that step S3 comprises the following steps:
S31, labeling the non-restored tooth samples distinguished by the tooth coarse classification model to obtain the enamel, dentin, pulp cavity, and carious region of each tooth sample;
S32, training a U-Net deep learning segmentation model with the tooth samples labeled in step S31 to obtain the trained tooth tissue semantic segmentation model;
S33, performing tissue segmentation on the tooth samples with the trained tooth tissue semantic segmentation model to obtain the enamel, dentin, pulp cavity, and carious region of each tooth sample.
5. The automatic caries grading method based on panoramic film according to claim 1, characterized in that step S4 comprises the following steps:
S41, for each tooth instance segmentation sample of the panoramic film, computing second moments from the obtained tooth outer contour, determining the direction of the tooth's main axis, and establishing a coordinate system for each tooth with the center of gravity of the tooth contour as the origin and the main and secondary axes as the coordinate axes;
S42, with the center of gravity of the pulp cavity region as the origin and the radius of the circumscribed circle of the tooth contour as the radius, generating scanning lines starting from the origin at angular intervals of Δθ over the range (0, π);
S43, traversing the tooth tissue region to which each point on each scanning line belongs and, combining the enamel, dentin, and pulp cavity regions obtained by the tooth tissue semantic segmentation model, obtaining the inner and outer boundary lines of the dentin, where the inner and outer dentin boundary lines comprise the enamel-dentin boundary line and the dentin-pulp cavity boundary line;
S44, extracting the dentin 1/2 centerline from the obtained inner and outer dentin boundary lines using a unit-circle rolling tracking method;
S45, taking the enamel outer contour line, the enamel-dentin boundary line, the dentin 1/2 centerline, and the pulp cavity outer contour line as edges and connecting the first and last end points of these boundary curves to form closed regions: the enamel region, the outer 1/2 dentin, and the inner 1/2 dentin; the enamel region is defined as the shallow caries decision region, the outer 1/2 dentin as the medium caries decision region, and the inner 1/2 dentin as the deep caries decision region;
S46, performing a spatial AND operation between the caries region obtained by the tooth tissue semantic segmentation model and, in turn, the deep, medium, and shallow caries decision regions obtained in step S45, judging whether the caries region overlaps each decision region, and determining the caries level diagnosis result.
6. The automatic caries grading method based on panoramic film according to claim 5, characterized in that step S43 comprises the following step:
if the points immediately before and after the current traversal point on the scanning line belong to different regions, the current traversal point is regarded as a boundary point.
7. The automatic caries grading method based on panoramic film according to claim 5, characterized in that step S46 further comprises the following steps:
if the result of the AND operation is not 0, it is judged that overlap exists and the caries grade diagnosis result of the corresponding level is determined; if the caries region overlaps none of the deep, medium, and shallow caries decision regions, the tooth is judged to be healthy.
8. A panoramic-film-based automatic caries grading system, characterized by comprising:
a tooth instance segmentation and cropping module, for performing instance segmentation and cropping on the teeth of the panoramic film to obtain an instance segmentation sample of each individual tooth in the panoramic film;
a tooth coarse classification module, for classifying the obtained tooth instance segmentation samples into a restored state and a non-restored state using the trained tooth coarse classification model;
a tooth tissue semantic segmentation module, for passing the non-restored tooth samples through the trained tooth tissue semantic segmentation model to obtain the different tissue regions of each segmented tooth;
and a caries grading module, for grading each tooth as healthy, shallow caries, medium caries, or deep caries according to the caries grading model and caries grade diagnostic criteria, based on the obtained tissue conditions of the different regions of each tooth.
CN202111652116.XA 2021-12-30 2021-12-30 Automatic caries grading method and system based on panoramic film Pending CN114332123A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111652116.XA CN114332123A (en) 2021-12-30 2021-12-30 Automatic caries grading method and system based on panoramic film

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111652116.XA CN114332123A (en) 2021-12-30 2021-12-30 Automatic caries grading method and system based on panoramic film

Publications (1)

Publication Number Publication Date
CN114332123A true CN114332123A (en) 2022-04-12

Family

ID=81018623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111652116.XA Pending CN114332123A (en) 2021-12-30 2021-12-30 Automatic caries grading method and system based on panoramic film

Country Status (1)

Country Link
CN (1) CN114332123A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023246463A1 (en) * 2022-06-24 2023-12-28 杭州朝厚信息科技有限公司 Oral panoramic radiograph segmentation method
CN116309522A (en) * 2023-04-06 2023-06-23 浙江大学 Panorama piece periodontitis intelligent grading system based on two-stage deep learning model
CN116309522B (en) * 2023-04-06 2024-01-26 浙江大学 Panorama piece periodontitis intelligent grading system based on two-stage deep learning model
CN116596861A (en) * 2023-04-28 2023-08-15 中山大学 Dental lesion recognition method, system, equipment and storage medium
CN116596861B (en) * 2023-04-28 2024-02-23 中山大学 Dental lesion recognition method, system, equipment and storage medium
CN117252825A (en) * 2023-09-08 2023-12-19 深圳市罗湖区人民医院 Dental caries identification method and device based on oral panoramic image

Similar Documents

Publication Publication Date Title
Jader et al. Deep instance segmentation of teeth in panoramic X-ray images
US11464467B2 (en) Automated tooth localization, enumeration, and diagnostic system and method
Panetta et al. Tufts dental database: a multimodal panoramic x-ray dataset for benchmarking diagnostic systems
CN114332123A (en) Automatic caries grading method and system based on panoramic film
US11443423B2 (en) System and method for constructing elements of interest (EoI)-focused panoramas of an oral complex
JP7152455B2 (en) Segmentation device and learning model generation method
Kumar et al. Descriptive analysis of dental X-ray images using various practical methods: A review
US20220084267A1 (en) Systems and Methods for Generating Quick-Glance Interactive Diagnostic Reports
Hou et al. Teeth U-Net: A segmentation model of dental panoramic X-ray images for context semantics and contrast enhancement
Hosntalab et al. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set
Sheng et al. Transformer-based deep learning network for tooth segmentation on panoramic radiographs
Chen et al. Missing teeth and restoration detection using dental panoramic radiography based on transfer learning with CNNs
Sonavane et al. Dental cavity classification of using convolutional neural network
US20220361992A1 (en) System and Method for Predicting a Crown and Implant Feature for Dental Implant Planning
CN114119950A (en) Artificial intelligence-based oral cavity curved surface fault layer dental image segmentation method
Huang et al. Uncertainty-based active learning by Bayesian U-Net for multi-label cone-beam CT segmentation
Yeshua et al. Automatic detection and classification of dental restorations in panoramic radiographs
Kim et al. Automatic and quantitative measurement of alveolar bone level in OCT images using deep learning
US20220358740A1 (en) System and Method for Alignment of Volumetric and Surface Scan Images
Ahmad et al. The effect of sharp contrast-limited adaptive histogram equalization (SCLAHE) on intra-oral dental radiograph images
Chen et al. Automatic and visualized grading of dental caries using deep learning on panoramic radiographs
Jaiswal et al. A cropping algorithm for automatically extracting regions of ınterest from panoramic radiographs based on maxilla and mandible parts
Chen et al. Detection of Various Dental Conditions on Dental Panoramic Radiography Using Faster R-CNN
Wu et al. Clinical tooth segmentation based on local enhancement
Ahn et al. Using artificial intelligence methods for dental image analysis: state-of-the-art reviews

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination