CN117132596A - Deep-learning-based method and system for identifying the impaction type of mandibular third molars - Google Patents


Info

Publication number
CN117132596A
CN117132596A (application CN202311394030.0A)
Authority
CN
China
Prior art keywords
molar
mandibular
cusp
line
deep learning
Prior art date
Legal status
Granted
Application number
CN202311394030.0A
Other languages
Chinese (zh)
Other versions
CN117132596B (en)
Inventor
黄昕
郭宏磊
张雅婧
李家宁
焦曦影
Current Assignee
STOMATOLOGICAL HOSPITAL TIANJIN MEDICAL UNIVERSITY
Original Assignee
STOMATOLOGICAL HOSPITAL TIANJIN MEDICAL UNIVERSITY
Priority date
Filing date
Publication date
Application filed by STOMATOLOGICAL HOSPITAL TIANJIN MEDICAL UNIVERSITY filed Critical STOMATOLOGICAL HOSPITAL TIANJIN MEDICAL UNIVERSITY
Priority to CN202311394030.0A priority Critical patent/CN117132596B/en
Publication of CN117132596A publication Critical patent/CN117132596A/en
Application granted granted Critical
Publication of CN117132596B publication Critical patent/CN117132596B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of image analysis and discloses a deep-learning-based method and system for identifying the impaction type of mandibular third molars, wherein the method comprises the following steps: acquiring a full-jaw digital panoramic tomogram and preprocessing it; inputting the preprocessed panoramic tomogram into a deep learning model to segment images of the mandibular first molar, the mandibular second molar, and the mandibular third molar; determining the long axis of the mandibular third molar, the buccal cusp line of the first molar, the buccal cusp line of the second molar, and a target angle from the segmented images; and judging the impaction type of the mandibular third molar according to the target angle. Because tooth image features are extracted by the deep learning model from the panoramic tomogram, the impaction type of the mandibular third molar can be identified accurately and efficiently from the image features of the first, second, and third molars.

Description

Deep-learning-based method and system for identifying the impaction type of mandibular third molars
Technical Field
The application relates to the technical field of image analysis, and in particular to a deep-learning-based method and system for identifying the impaction type of mandibular third molars.
Background
Extraction of the mandibular third molar is one of the most important operations in oral and maxillofacial surgery. Because the position of the mandibular third molar and its contact relationship with adjacent teeth vary considerably, a young surgeon may find it difficult to assess the tooth accurately once the patient is positioned on the treatment chair, and misjudging the intraoral situation can easily turn the extraction into a more complex alveolar surgery with higher technical demands. Although panoramic radiographs are widely used to help surgeons choose the extraction approach and assess risk preoperatively, making a reliably accurate judgment still requires substantial clinical experience, which poses no small challenge to young surgeons who partially lack such experience.
If the surgeon misjudges the impaction type of the mandibular third molar, serious consequences can follow, such as prolonged operation time, greater intraoperative trauma, and even retained roots, root displacement, neurovascular injury, and other severe postoperative complications. The prior art discloses schemes for identifying the third-molar type with a deep learning model, for example "Panoramic X-ray wisdom tooth diagnosis research based on machine learning" (Wu Zhen), which uses a CNN model to classify the wisdom tooth type by extracting morphological features of the wisdom tooth from the image and identifying the impaction type from those features. However, that approach needs to obtain many morphological features of the wisdom tooth in the image, and different choices of identification features may lead to different conclusions, so its recognition accuracy is not high and its efficiency is low because many morphological features must be obtained.
Therefore, there is a need for a deep-learning-based method and system that can identify the impaction type of the mandibular third molar accurately and efficiently from a full-jaw digital panoramic tomogram.
Disclosure of Invention
In order to solve the above technical problems, the application provides a deep-learning-based method and system for identifying the impaction type of the mandibular third molar, which can identify the impaction type accurately and efficiently from a full-jaw digital panoramic tomogram.
The application provides a deep-learning-based method for identifying the impaction type of the mandibular third molar, comprising the following steps:
S1, acquiring a full-jaw digital panoramic tomogram and preprocessing it;
S2, inputting the preprocessed panoramic tomogram into a deep learning model to segment images of the mandibular first molar, the mandibular second molar, and the mandibular third molar;
S3, determining the long axis of the mandibular third molar, the buccal cusp line of the first molar, the buccal cusp line of the second molar, and a target angle from the images of the mandibular first, second, and third molars;
and S4, judging the impaction type of the mandibular third molar according to the target angle.
Further, in S1, preprocessing the full-jaw digital panoramic tomogram comprises:
S11, cropping the panoramic tomogram;
S12, normalizing the HU values of the cropped panoramic tomogram.
Further, in S2, the deep learning model is trained as follows:
acquiring a dataset of full-jaw digital panoramic tomograms;
preprocessing the dataset, where preprocessing comprises image cropping and HU value normalization;
annotating the contours of the mandibular first molar, the mandibular second molar, and the mandibular third molar in the dataset;
and inputting the annotated dataset into the deep learning model and training the model until it converges.
Further, acquiring the dataset of full-jaw digital panoramic tomograms comprises:
applying data augmentation to the panoramic tomograms to obtain the dataset; the augmentation includes spatial flipping, spatial scaling, angular rotation, and translation.
Further, in S3, determining the long axis of the mandibular third molar, the buccal cusp line of the first molar, the buccal cusp line of the second molar, and the target angle from the images of the mandibular first, second, and third molars comprises:
S31, determining the long axis of the mandibular third molar from its image; the long axis passes through the center of the tooth in the crown-to-root direction;
S32, determining two buccal cusp points on the mandibular first molar and two buccal cusp points on the mandibular second molar from their images, determining the buccal cusp line of the first molar from its two buccal cusp points, and determining the buccal cusp line of the second molar from its two buccal cusp points;
S33, determining a target angle reference line according to the positional relationship between the two buccal cusp lines;
S34, taking the angle formed counterclockwise between the target angle reference line and the long axis of the mandibular third molar as the target angle.
Further, in S33, determining the target angle reference line according to the positional relationship between the two buccal cusp lines comprises:
when the two buccal cusp lines coincide, taking the coincident line as the target angle reference line;
when the two buccal cusp lines are parallel but not coincident, taking the line midway between them as the target angle reference line;
when the two buccal cusp lines intersect, taking the bisector of the angle between them as the target angle reference line.
Further, in S4, judging the impaction type of the mandibular third molar according to the target angle comprises:
S41, when the target angle is greater than or equal to 0° and less than 45°, determining the impaction type of the mandibular third molar to be horizontal impaction;
S42, when the target angle is greater than or equal to 45° and less than 90°, determining the impaction type to be mesial impaction;
S43, when the target angle equals 90°, determining the impaction type to be vertical impaction;
and S44, when the target angle is greater than 90°, determining the impaction type to be distal impaction.
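The four angle thresholds above can be sketched as a small classifier. This is an illustrative sketch only; the function name and return labels are assumptions, not part of the patent text.

```python
def impaction_type(angle_deg: float) -> str:
    """Map the target angle (degrees, measured counterclockwise from the
    target angle reference line to the long axis of the mandibular third
    molar) to an impaction type, following rules S41-S44."""
    if 0 <= angle_deg < 45:
        return "horizontal"   # S41
    if 45 <= angle_deg < 90:
        return "mesial"       # S42
    if angle_deg == 90:
        return "vertical"     # S43
    if angle_deg > 90:
        return "distal"       # S44
    raise ValueError("target angle must be non-negative")
```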
The application further provides a deep-learning-based system for identifying the impaction type of the mandibular third molar, comprising:
an acquisition module for acquiring a full-jaw digital panoramic tomogram and preprocessing it;
a segmentation module for inputting the preprocessed panoramic tomogram into a deep learning model and segmenting images of the mandibular first molar, the mandibular second molar, and the mandibular third molar;
an identification module for determining the long axis of the mandibular third molar, the buccal cusp line of the first molar, the buccal cusp line of the second molar, and a target angle from the segmented images;
and a judgment module for judging the impaction type of the mandibular third molar according to the target angle.
Further, the acquisition module comprises:
a preprocessing module for cropping the panoramic tomogram and normalizing the HU values of the cropped image.
Further, the identification module is also configured to determine the target angle reference line:
when the two buccal cusp lines coincide, taking the coincident line as the target angle reference line;
when the two buccal cusp lines are parallel but not coincident, taking the line midway between them as the target angle reference line;
when the two buccal cusp lines intersect, taking the bisector of the angle between them as the target angle reference line.
The embodiments of the application have the following technical effects:
images of the mandibular first, second, and third molars are segmented from a full-jaw digital panoramic tomogram by a deep learning model, and the impaction type of the mandibular third molar is judged from the angle formed between the long axis of the mandibular third molar and the reference line derived from the buccal cusp lines of the first and second molars, improving both recognition accuracy and recognition efficiency.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a deep-learning-based method for identifying the impaction type of the mandibular third molar according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the framework of a U-NET model according to an embodiment of the present application;
FIG. 3 is a schematic view of the angle formed between the long axis of the mandibular third molar and the target angle reference line according to an embodiment of the present application;
FIG. 4 is a schematic illustration of a mandibular third molar whose impaction type is horizontal impaction according to the present application;
FIG. 5 is a schematic illustration of a mandibular third molar whose impaction type is mesial impaction according to an embodiment of the present application;
FIG. 6 is a schematic illustration of a mandibular third molar whose impaction type is vertical impaction according to an embodiment of the present application;
FIG. 7 is a schematic illustration of a mandibular third molar whose impaction type is distal impaction according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a deep-learning-based system for identifying the impaction type of the mandibular third molar according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the technical solutions of the present application are described clearly and completely below. The described embodiments are obviously only some, not all, of the embodiments of the application; all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of the application.
Fig. 1 is a flowchart of a deep-learning-based method for identifying the impaction type of the mandibular third molar according to an embodiment of the present application. Referring to fig. 1, the method specifically comprises:
s1, acquiring a full-jaw digital curved surface broken sheet image, and preprocessing the full-jaw digital curved surface broken sheet image.
Specifically, the full jaw digital curved surface broken layer tablet is the most commonly used imaging examination method in clinic, and provides more visual basis for the scheme formulation of the tooth extraction, the resistance analysis, the evaluation of the preoperative operation risk and the communication between doctors and patients. Through checking the digital curved surface broken layer sheet of the whole jaw before the third molar extraction operation of the lower jaw, doctors can be helped to make a more reasonable operation scheme, the preparation before the operation is perfected, the operation wound is reduced, the operation time is shortened, and the smooth operation is ensured to the greatest extent.
S11, cropping the panoramic tomogram.
Specifically, cropping extracts the region of interest desired by the physician. For example, after cropping, the original 1935×2400 panoramic tomogram is reduced to a 1600×1600 local region, which effectively removes useless information from the image and reduces the resources consumed by subsequent model computation.
S12, normalizing the HU values of the cropped panoramic tomogram.
Specifically, HU value normalization converts the pixel values of the panoramic tomogram to a standardized scale, which facilitates analysis and comparison of images from different imaging settings. The cropped panoramic tomogram is read, its pixel values are converted to HU values, and the HU values are then normalized. The pixel values of the cropped panoramic tomogram are converted to HU values by the formula:
HU = pixel_value × slope + intercept (1)
where pixel_value is a pixel value of the cropped panoramic tomogram, slope is the first conversion factor (rescale slope), and intercept is the second conversion factor (rescale intercept).
The formula for HU value normalization is:
HU_norm = (HU − HU_min) / (HU_max − HU_min) (2)
where HU_norm is the normalized HU value, and HU_min and HU_max are the minimum and maximum HU values in the cropped panoramic tomogram.
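The cropping and normalization steps can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the crop offsets, the random test image, and the slope/intercept values (which in practice would come from the image's DICOM RescaleSlope/RescaleIntercept tags) are assumptions.

```python
import numpy as np

def normalize_hu(pixels: np.ndarray, slope: float, intercept: float) -> np.ndarray:
    """Convert raw pixel values to HU (equation 1) and min-max normalize
    the result to [0, 1] (equation 2)."""
    hu = pixels.astype(np.float64) * slope + intercept        # equation (1)
    return (hu - hu.min()) / (hu.max() - hu.min())            # equation (2)

# S11: crop a 1600x1600 region of interest out of a 1935x2400 tomogram,
# then S12: normalize the crop (synthetic image for illustration).
rng = np.random.default_rng(0)
img = rng.integers(0, 4096, size=(1935, 2400))
crop = img[100:1700, 400:2000]                                # 1600x1600 crop
norm = normalize_hu(crop, slope=1.0, intercept=-1024.0)
```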
S2, inputting the preprocessed panoramic tomogram into the deep learning model to segment images of the mandibular first molar, the mandibular second molar, and the mandibular third molar.
For example, the deep learning model may be a U-NET model. Fig. 2 is a schematic diagram of the framework of the U-NET model according to an embodiment of the present application. Referring to fig. 2, the U-NET model consists of an encoding sub-network, a decoding sub-network, and skip connections; the network structure resembles the letter U, hence the name. The U-NET network contains 23 convolutional layers and performs 4 encoding and 4 decoding stages. Each encoding stage involves two 3×3 convolutions and one 2×2 pooling operation, which halves the spatial size of the preprocessed panoramic tomogram and doubles the number of feature channels. Each decoding stage involves only two 3×3 convolutions and doubles the size of the image reduced during encoding, so the final image has the same size as the original while the number of feature channels is halved. Finally, a 1×1 convolutional layer is added to the U-NET network to map the features extracted during encoding and decoding to their corresponding classes. The encoding process extracts high-dimensional features of the input image through successive convolution and pooling operations. The mutual mapping between the encoding and decoding networks is characteristic of U-NET: during decoding, features from the corresponding encoding layer are fused in to repair missing edge information, improving the accuracy of edge prediction.
The skip connections between the U-NET encoder and decoder copy the information of the input image and pass it forward, helping the network recover the information lost through downsampling. The convolution operation can be expressed mathematically as:
y_ij = Σ_{u=1}^{U} Σ_{v=1}^{V} w_uv · x_{i+u−1, j+v−1} (3)
where y_ij is the convolution value at row i, column j of the feature map; U and V are the height and width of the convolution kernel matrix; u and v are the indices ranging over the kernel; w_uv is the kernel weight at row u, column v; and x_{i+u−1, j+v−1} is the pixel value at row i+u−1, column j+v−1 of the feature image.
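Equation (3) can be checked with a direct, unoptimized implementation. A minimal sketch, assuming plain nested lists as inputs (the helper name is illustrative; real frameworks use optimized kernels):

```python
def conv2d_valid(x, w):
    """Direct implementation of equation (3):
    y[i][j] = sum over (u, v) of w[u][v] * x[i+u-1][j+v-1]
    (1-based indices in the text; 0-based here). 'Valid' mode: no padding,
    so a U x V kernel on an H x W image yields (H-U+1) x (W-V+1) outputs."""
    U, V = len(w), len(w[0])
    H, W = len(x), len(x[0])
    return [[sum(w[u][v] * x[i + u][j + v]
                 for u in range(U) for v in range(V))
             for j in range(W - V + 1)]
            for i in range(H - U + 1)]
```

For example, a 2×2 identity-diagonal kernel over a 3×3 image sums each pixel with its lower-right neighbor, producing a 2×2 feature map.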
Specifically, the deep learning model is trained as follows:
A dataset of full-jaw digital panoramic tomograms is acquired.
Specifically, a deep learning network model generally needs a large amount of training data, but full-jaw digital panoramic tomograms are currently in relatively short supply. To improve the robustness of the trained model and reduce overfitting, the dataset is obtained by applying data augmentation to the panoramic tomograms. The augmentation includes spatial flipping, spatial scaling, angular rotation, translation, and the like. For example, after horizontal and/or vertical flipping, rotation, scaling, and shifting of a panoramic tomogram, the deep learning network treats each augmented image as a picture different from the original, so a large amount of training data is obtained to form the dataset.
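The augmentations named above can be sketched with array operations. A minimal sketch assuming a 2-D grayscale tomogram; the specific rotation angle, zoom factor, and shift amounts are illustrative choices, not values from the patent:

```python
import numpy as np

def augment(img: np.ndarray) -> list:
    """Produce augmented variants of one tomogram: flips, a 90-degree
    rotation, a crude 2x zoom of the central region, and a translation.
    Each variant is treated as a new training sample."""
    h, w = img.shape
    return [
        np.fliplr(img),                                   # horizontal flip
        np.flipud(img),                                   # vertical flip
        np.rot90(img),                                    # angular rotation
        np.kron(img[h // 4: 3 * h // 4, w // 4: 3 * w // 4],
                np.ones((2, 2))),                         # 2x spatial scaling
        np.roll(img, shift=(h // 10, w // 10), axis=(0, 1)),  # translation
    ]
```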
The dataset is preprocessed; preprocessing comprises image cropping and HU value normalization.
Specifically, the images in the dataset are preprocessed in the same way as in step S1, which is not repeated here.
The contours of the mandibular first molar, mandibular second molar, and mandibular third molar are annotated in the dataset.
The annotated dataset is input into the deep learning model, and the model is trained until it converges.
Specifically, the annotated dataset is input into the deep learning model and the model is trained until it converges, i.e., until the contours of the mandibular first, second, and third molars predicted by the model are close to the contours actually annotated in the dataset. For example, convergence can be judged from the loss function: if the loss value is smaller than a preset threshold, the model is judged to have converged.
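A convergence test of the kind described above can be sketched as follows. The function name, threshold, and patience window are assumptions for illustration, not values given in the patent:

```python
def has_converged(losses, threshold=0.01, patience=3):
    """Treat the model as converged once the training loss has stayed
    below a preset threshold for `patience` consecutive epochs."""
    recent = losses[-patience:]
    return len(recent) == patience and all(l < threshold for l in recent)
```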
And S3, determining the long axis of the mandibular third molar, the buccal cusp line of the first molar, the buccal cusp line of the second molar, and the target angle from the images of the mandibular first, second, and third molars.
S31, determining the long axis of the mandibular third molar from its image; the long axis passes through the center of the tooth in the crown-to-root direction.
S32, determining two buccal cusp points on the mandibular first molar and two buccal cusp points on the mandibular second molar from their images, determining the buccal cusp line of the first molar from its two buccal cusp points, and determining the buccal cusp line of the second molar from its two buccal cusp points.
S33, determining the target angle reference line according to the positional relationship between the two buccal cusp lines.
Specifically, when the two buccal cusp lines coincide, the coincident line is taken as the target angle reference line. When the two lines are parallel but not coincident, the line midway between them is taken as the target angle reference line. When the two lines intersect, the bisector of the angle between them is taken as the target angle reference line.
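The three cases can be sketched with basic 2-D geometry. This is an illustrative sketch only: the function names and the (point, direction) line representation are assumptions, not part of the patent.

```python
import math

def _unit(v):
    """Normalize a 2-D vector to unit length."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def reference_line(p1, q1, p2, q2, tol=1e-9):
    """Target angle reference line from two buccal cusp lines, each given
    by its two cusp points; returns (point, unit direction)."""
    d1 = _unit((q1[0] - p1[0], q1[1] - p1[1]))
    d2 = _unit((q2[0] - p2[0], q2[1] - p2[1]))
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < tol:                            # parallel directions
        # perpendicular offset of line 2 from line 1
        off = (p2[0] - p1[0]) * (-d1[1]) + (p2[1] - p1[1]) * d1[0]
        if abs(off) < tol:                          # coincident: the line itself
            return p1, d1
        # parallel but distinct: shift line 1 halfway toward line 2
        mid = (p1[0] - d1[1] * off / 2, p1[1] + d1[0] * off / 2)
        return mid, d1
    # intersecting: angle bisector through the intersection point
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / cross
    x = (p1[0] + t * d1[0], p1[1] + t * d1[1])
    if d1[0] * d2[0] + d1[1] * d2[1] < 0:           # keep the acute bisector
        d2 = (-d2[0], -d2[1])
    return x, _unit((d1[0] + d2[0], d1[1] + d2[1]))
```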
S34, taking the angle formed counterclockwise between the target angle reference line and the long axis of the mandibular third molar as the target angle.
Specifically, fig. 3 is a schematic diagram of the angle formed between the long axis of the mandibular third molar and the target angle reference line according to an embodiment of the present application. Referring to fig. 3, which takes as an example the reference line obtained when the two buccal cusp lines coincide, the angle formed counterclockwise between the reference line and the long axis of the mandibular third molar is taken as the target angle.
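Measuring the counterclockwise angle between two lines can be sketched with atan2. An illustrative sketch, assuming direction vectors in a coordinate system with y pointing up; since both lines are undirected, the angle is folded into [0°, 180°):

```python
import math

def target_angle(ref_dir, axis_dir):
    """Counterclockwise angle in degrees from the target angle reference
    line to the long axis of the mandibular third molar, each given as a
    2-D direction vector."""
    a = math.atan2(axis_dir[1], axis_dir[0]) - math.atan2(ref_dir[1], ref_dir[0])
    return math.degrees(a % (2 * math.pi)) % 180  # fold to [0, 180)
```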
And S4, judging the impaction type of the mandibular third molar according to the target angle.
S41, when the target angle is greater than or equal to 0° and less than 45°, determining the impaction type of the mandibular third molar to be horizontal impaction.
Specifically, fig. 4 is a schematic diagram of horizontal impaction of the mandibular third molar according to an embodiment of the present application. Referring to fig. 4, taking as an example the target angle reference line for the case where the first-molar and second-molar buccal cusp lines coincide, when the counterclockwise angle between the target angle reference line and the long axis of the mandibular third molar is greater than or equal to 0° and less than 45°, the impaction type of the mandibular third molar is determined to be horizontal impaction.
S42, when the target angle is greater than or equal to 45° and less than 90°, determining the impaction type of the mandibular third molar to be mesioangular impaction.
Specifically, fig. 5 is a schematic diagram of mesioangular impaction of the mandibular third molar according to an embodiment of the present application. Referring to fig. 5, taking as an example the target angle reference line for the case where the first-molar and second-molar buccal cusp lines coincide, when the counterclockwise angle between the target angle reference line and the long axis of the mandibular third molar is greater than or equal to 45° and less than 90°, the impaction type of the mandibular third molar is determined to be mesioangular impaction.
S43, when the target angle is equal to 90°, determining the impaction type of the mandibular third molar to be vertical impaction.
Specifically, fig. 6 is a schematic diagram of vertical impaction of the mandibular third molar according to an embodiment of the present application. Referring to fig. 6, taking as an example the target angle reference line for the case where the first-molar and second-molar buccal cusp lines coincide, when the counterclockwise angle between the target angle reference line and the long axis of the mandibular third molar is equal to 90°, the impaction type of the mandibular third molar is determined to be vertical impaction.
And S44, when the target angle is greater than 90°, determining the impaction type of the mandibular third molar to be distoangular impaction.
Specifically, fig. 7 is a schematic diagram of distoangular impaction of the mandibular third molar according to an embodiment of the present application. Referring to fig. 7, taking as an example the target angle reference line for the case where the first-molar and second-molar buccal cusp lines coincide, when the counterclockwise angle between the target angle reference line and the long axis of the mandibular third molar is greater than 90°, the impaction type of the mandibular third molar is determined to be distoangular impaction.
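Steps S41 to S44 amount to a simple threshold table. The sketch below is illustrative only; it assumes the angle has already been reduced to [0°, 180°), and note that the patent's vertical class uses exact equality with 90°, which a practical implementation would normally relax to a small tolerance band.

```python
def impaction_type(angle_deg: float) -> str:
    """Classify the mandibular third molar impaction type from the target
    angle in degrees, following the thresholds of steps S41-S44."""
    if 0.0 <= angle_deg < 45.0:
        return "horizontal"
    if angle_deg < 90.0:
        return "mesioangular"
    if angle_deg == 90.0:          # exact equality as stated in the patent;
        return "vertical"          # real code would use a tolerance band
    return "distoangular"
```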
In the embodiment of the application, images of the mandibular first molar, the mandibular second molar, and the mandibular third molar are segmented from the full-jaw digital panoramic tomographic image by the deep learning model, and the impaction type of the mandibular third molar is then judged from the angle between the long axis of the mandibular third molar and the reference line derived from the first-molar and second-molar buccal cusp lines, which improves both the accuracy and the efficiency of recognition.
Fig. 8 is a schematic structural diagram of a deep learning-based mandibular third molar impaction type recognition system according to an embodiment of the present application. Referring to fig. 8, the system specifically includes:
The acquisition module 1 is used to acquire the full-jaw digital panoramic tomographic image and to preprocess it.
The segmentation module 2 is used to input the preprocessed full-jaw digital panoramic tomographic image into the deep learning model and to segment out images of the mandibular first molar, the mandibular second molar, and the mandibular third molar.
The identification module 3 is used to determine the long axis of the mandibular third molar, the first-molar buccal cusp line, the second-molar buccal cusp line, and the target angle from the images of the mandibular first, second, and third molars.
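The patent does not specify how the identification module extracts the tooth long axis from a segmented tooth image. One plausible realization, shown here purely as an assumed sketch (the helper `tooth_long_axis` is not named in the patent), is to take the first principal component of the mask pixel coordinates, which gives a line through the center of the tooth along its crown-to-root extent.

```python
import numpy as np

def tooth_long_axis(mask):
    """Estimate the long axis of a segmented tooth as the first principal
    component of its binary-mask pixel coordinates. Returns (center, unit
    direction); the direction sign is arbitrary for an undirected axis."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack((xs, ys)).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
    axis = v[:, np.argmax(w)]           # eigenvector of the largest eigenvalue
    return center, axis / np.linalg.norm(axis)
```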
The judging module 4 is used to judge the impaction type of the mandibular third molar according to the target angle.
Further, the acquisition module 1 comprises a preprocessing module 11, used to perform image cropping on the full-jaw digital panoramic tomographic image and HU value normalization on the cropped image. The identification module 3 is further configured to determine the target angle reference line as follows: when the first-molar buccal cusp line and the second-molar buccal cusp line coincide, that common line is taken as the target angle reference line; when the two lines are parallel but not coincident, the bisector of the distance between them is taken as the target angle reference line; when the two lines intersect, the bisector of the angle between them is taken as the target angle reference line.
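A minimal sketch of the preprocessing performed by module 11 follows. The crop-box convention and the HU window bounds are illustrative assumptions, not values given in the patent.

```python
import numpy as np

def preprocess(img, crop_box, hu_min=-1000.0, hu_max=3000.0):
    """Crop the panoramic image to crop_box = (y0, y1, x0, x1), clip HU
    values to an assumed window, and min-max normalize to [0, 1]."""
    y0, y1, x0, x1 = crop_box
    roi = img[y0:y1, x0:x1].astype(float)
    roi = np.clip(roi, hu_min, hu_max)
    return (roi - hu_min) / (hu_max - hu_min)
```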
In the embodiment of the application, images of the mandibular first molar, the mandibular second molar, and the mandibular third molar are segmented from the full-jaw digital panoramic tomographic image by the deep learning model, and the impaction type of the mandibular third molar is then judged from the angle between the long axis of the mandibular third molar and the reference line derived from the first-molar and second-molar buccal cusp lines, which improves both the accuracy and the efficiency of recognition.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in this specification, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, or apparatus that comprises that element.
It should also be noted that orientation or positional terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the apparatus or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application. Unless expressly specified or limited otherwise, the terms "mounted", "connected", and the like are to be construed broadly: the connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or an internal communication between two elements. The specific meaning of the above terms in the present application will be understood by those of ordinary skill in the art according to the specific circumstances.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present application. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, without such modifications or substitutions causing the corresponding technical solutions to depart from the essence of the technical solutions of the embodiments of the present application.

Claims (10)

1. A deep learning-based mandibular third molar impaction type recognition method, comprising the following steps:
S1, acquiring a full-jaw digital panoramic tomographic image and preprocessing the full-jaw digital panoramic tomographic image;
S2, inputting the preprocessed full-jaw digital panoramic tomographic image into a deep learning model and segmenting out images of the mandibular first molar, the mandibular second molar, and the mandibular third molar;
S3, determining the long axis of the mandibular third molar, the first-molar buccal cusp line, the second-molar buccal cusp line, and a target angle according to the images of the mandibular first molar, the mandibular second molar, and the mandibular third molar;
S4, judging the impaction type of the mandibular third molar according to the target angle.
2. The deep learning-based mandibular third molar impaction type recognition method according to claim 1, wherein in S1 the preprocessing of the full-jaw digital panoramic tomographic image comprises:
S11, performing image cropping on the full-jaw digital panoramic tomographic image;
S12, performing HU value normalization on the cropped full-jaw digital panoramic tomographic image.
3. The deep learning-based mandibular third molar impaction type recognition method according to claim 2, wherein the training method of the deep learning model comprises:
acquiring a dataset of full-jaw digital panoramic tomographic images;
preprocessing the dataset, the preprocessing comprising image cropping and HU value normalization;
labeling the contours of the mandibular first molar, the mandibular second molar, and the mandibular third molar in the dataset;
inputting the labeled dataset into the deep learning model and training the deep learning model until it converges.
4. The deep learning-based mandibular third molar impaction type recognition method according to claim 3, wherein acquiring the dataset of full-jaw digital panoramic tomographic images comprises:
performing data enhancement on the full-jaw digital panoramic tomographic images to obtain the dataset, the data enhancement comprising spatial flipping, spatial scaling, angular rotation, and translation.
5. The deep learning-based mandibular third molar impaction type recognition method according to claim 1, wherein in S3 the determining of the long axis of the mandibular third molar, the first-molar buccal cusp line, the second-molar buccal cusp line, and the target angle according to the images of the mandibular first molar, the mandibular second molar, and the mandibular third molar comprises:
S31, determining the long axis of the mandibular third molar according to the image of the mandibular third molar, the long axis passing through the center of the tooth in the direction from crown to root;
S32, determining the two buccal cusp points of the mandibular first molar and the two buccal cusp points of the mandibular second molar according to the images of the mandibular first molar and the mandibular second molar, determining the first-molar buccal cusp line from the two buccal cusp points of the mandibular first molar, and determining the second-molar buccal cusp line from the two buccal cusp points of the mandibular second molar;
S33, determining a target angle reference line according to the positional relationship between the first-molar buccal cusp line and the second-molar buccal cusp line;
S34, taking the angle formed in the counterclockwise direction between the target angle reference line and the long axis of the mandibular third molar as the target angle.
6. The deep learning-based mandibular third molar impaction type recognition method according to claim 5, wherein S33, the determining of the target angle reference line according to the positional relationship between the first-molar buccal cusp line and the second-molar buccal cusp line, comprises:
when the first-molar buccal cusp line and the second-molar buccal cusp line coincide, taking that common line as the target angle reference line;
when the first-molar buccal cusp line and the second-molar buccal cusp line are parallel but not coincident, taking the bisector of the distance between them as the target angle reference line;
when the first-molar buccal cusp line and the second-molar buccal cusp line intersect, taking the bisector of the angle between them as the target angle reference line.
7. The deep learning-based mandibular third molar impaction type recognition method according to claim 1, wherein S4, the judging of the impaction type of the mandibular third molar according to the target angle, comprises:
S41, when the target angle is greater than or equal to 0° and less than 45°, determining the impaction type of the mandibular third molar to be horizontal impaction;
S42, when the target angle is greater than or equal to 45° and less than 90°, determining the impaction type of the mandibular third molar to be mesioangular impaction;
S43, when the target angle is equal to 90°, determining the impaction type of the mandibular third molar to be vertical impaction;
S44, when the target angle is greater than 90°, determining the impaction type of the mandibular third molar to be distoangular impaction.
8. A deep learning-based mandibular third molar impaction type recognition system, comprising:
an acquisition module, used to acquire a full-jaw digital panoramic tomographic image and preprocess the full-jaw digital panoramic tomographic image;
a segmentation module, used to input the preprocessed full-jaw digital panoramic tomographic image into a deep learning model and segment out images of the mandibular first molar, the mandibular second molar, and the mandibular third molar;
an identification module, used to determine the long axis of the mandibular third molar, the first-molar buccal cusp line, the second-molar buccal cusp line, and a target angle according to the images of the mandibular first molar, the mandibular second molar, and the mandibular third molar;
a judging module, used to judge the impaction type of the mandibular third molar according to the target angle.
9. The deep learning-based mandibular third molar impaction type recognition system according to claim 8, wherein the acquisition module comprises:
a preprocessing module, used to perform image cropping on the full-jaw digital panoramic tomographic image and HU value normalization on the cropped full-jaw digital panoramic tomographic image.
10. The deep learning-based mandibular third molar impaction type recognition system according to claim 8, wherein the identification module is further configured to determine the target angle reference line as follows:
when the first-molar buccal cusp line and the second-molar buccal cusp line coincide, taking that common line as the target angle reference line;
when the first-molar buccal cusp line and the second-molar buccal cusp line are parallel but not coincident, taking the bisector of the distance between them as the target angle reference line;
when the first-molar buccal cusp line and the second-molar buccal cusp line intersect, taking the bisector of the angle between them as the target angle reference line.
CN202311394030.0A 2023-10-26 2023-10-26 Deep learning-based mandibular third molar impaction type recognition method and system Active CN117132596B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311394030.0A CN117132596B (en) 2023-10-26 2023-10-26 Deep learning-based mandibular third molar impaction type recognition method and system


Publications (2)

Publication Number Publication Date
CN117132596A true CN117132596A (en) 2023-11-28
CN117132596B CN117132596B (en) 2024-01-12

Family

ID=88854938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311394030.0A Active CN117132596B (en) 2023-10-26 2023-10-26 Deep learning-based mandibular third molar impaction type recognition method and system

Country Status (1)

Country Link
CN (1) CN117132596B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239649A (en) * 2016-11-28 2017-10-10 可丽尔医疗科技(常州)有限公司 Method for parameterized measurement of the oral cavity
CN110503652A (en) * 2019-08-23 2019-11-26 北京大学口腔医学院 Mandibular kinesiography and adjacent teeth and mandibular canal relationship determine method, apparatus, storage medium and terminal
CN209790043U (en) * 2018-12-04 2019-12-17 阳江市人民医院 Guiding mechanism applied to crown-sectioning extraction of mandibular third molars
CN110895816A (en) * 2019-10-14 2020-03-20 广州医科大学附属口腔医院(广州医科大学羊城医院) Method for measuring alveolar bone grinding amount before mandibular bone planting plan operation
CN112927225A (en) * 2021-04-01 2021-06-08 潘俞欢 Wisdom tooth growth state auxiliary detection system based on artificial intelligence
CN113449426A (en) * 2021-07-01 2021-09-28 正雅齿科科技(上海)有限公司 Digital tooth arrangement method, system, apparatus and medium
CN114429070A (en) * 2022-01-26 2022-05-03 华侨大学 Optimal design method for structure of molar prosthesis implant
CN115137505A (en) * 2022-07-01 2022-10-04 四川大学 Manufacturing process of standard dental retention guide plate based on digitization technology
CN218676304U (en) * 2022-08-03 2023-03-21 同济大学附属口腔医院 Three-dimensional visual impacted tooth model of extracing tooth
CN116503389A (en) * 2023-06-25 2023-07-28 南京邮电大学 Automatic detection method for external absorption of tooth root

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIN HUANG ET AL.: "Fault Feature Extraction of Gearbox Based on Kurtosis-Weighted Singular Values", 2018 Prognostics and System Health Management Conference, pages 1274-1279 *
HUANG Xin et al.: "Evaluation of the application of three-dimensional anatomy software in curriculum integration for oral and maxillofacial surgery", China Higher Medical Education, pages 117-119 *

Also Published As

Publication number Publication date
CN117132596B (en) 2024-01-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant