CN115170531A - Method and system for processing mandibular impacted wisdom tooth image - Google Patents

Method and system for processing mandibular impacted wisdom tooth image

Info

Publication number
CN115170531A
CN115170531A (application CN202210856504.8A)
Authority
CN
China
Prior art keywords
oral cavity
image
image data
wisdom tooth
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210856504.8A
Other languages
Chinese (zh)
Inventor
谢祥雨
沈复民
申恒涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Koala Youran Technology Co ltd
Original Assignee
Chengdu Koala Youran Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Koala Youran Technology Co ltd filed Critical Chengdu Koala Youran Technology Co ltd
Priority to CN202210856504.8A priority Critical patent/CN115170531A/en
Publication of CN115170531A publication Critical patent/CN115170531A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth

Abstract

The invention discloses a method and a system for processing mandibular impacted wisdom tooth images. The method acquires an oral cavity image to be identified; inputs the oral cavity image into a trained first network model, which outputs the type of the impacted wisdom tooth in the image; and inputs the same image into a trained second network model, which outputs the position of the impacted wisdom tooth in the image. The first network model is trained on an optimized ConvNext network and combines four classification criteria for impacted wisdom teeth to determine the impaction type; the second network model is trained on a target detection network and combines the same four criteria to locate the impacted wisdom tooth in the image. The impaction types identified by the invention are finer-grained and the located positions are more accurate, which better assists doctors in diagnosis.

Description

Method and system for processing mandibular impacted wisdom tooth image
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a system for processing mandibular impacted wisdom tooth images.
Background
Wisdom teeth are the four third molars located at the innermost positions of the alveolar bone in the human oral cavity, upper and lower, left and right. They are called wisdom teeth because they begin to erupt at around twenty years of age, when the body's physical and psychological development is nearly mature.
An impacted wisdom tooth is a wisdom tooth that cannot erupt normally, and will not erupt later, because gum, bone or adjacent-tooth resistance interferes with its eruption. Impaction can be classified in many different ways; each impaction pattern has its own characteristics and is correspondingly prone to causing specific oral diseases.
At present, impacted wisdom tooth types are identified from oral cavity images using one of a VGG (Visual Geometry Group) network, a ResNet (Residual Network), a DenseNet (Densely Connected Convolutional Network) or an EfficientNet network. The type identified is one of: vertical impaction; horizontal impaction; mesioangular impaction; distoangular impaction; buccoangular impaction; linguoangular impaction; inverted impaction. The data labels are generally annotated according to the relationship between the long axis of the impacted wisdom tooth and the long axis of the second molar. However, these models still have considerable room for improvement in speed and accuracy, their classification of impaction types is not fine-grained enough, and they cannot express the three-dimensional position of the impacted wisdom tooth, so a doctor cannot give an accurate risk assessment when diagnosing.
Disclosure of Invention
In view of this, the present invention provides a method and a system for processing images of mandibular impacted wisdom teeth, and aims to solve the technical problems of inaccurate identification of the type of impacted wisdom teeth and inaccurate position location.
In order to solve the technical problems, the invention provides a method for processing the mandibular impacted wisdom tooth image, which comprises the following steps:
acquiring an oral cavity image to be identified;
inputting the oral cavity image to be identified into a trained first network model, and outputting the impacted wisdom tooth type in the oral cavity image;
and inputting the oral cavity image to be recognized into the trained second network model, and outputting the position of the impacted wisdom tooth in the oral cavity image.
Optionally, the training method of the first network model includes:
constructing a convolutional neural network, and optimizing the convolutional neural network to generate an optimized convolutional neural network;
and inputting the training data in the training data set into the optimized convolutional neural network model for training, and outputting a first network model.
Optionally, the establishing a convolutional neural network, and optimizing the convolutional neural network to generate an optimized convolutional neural network, includes:
establishing a ConvNext network;
optimizing the first convolution kernel of the ConvNext network into two convolution kernels of sizes (1, 4) and (4, 1);
and adding an attention mechanism module in the ConvNext module to generate an optimized ConvNext network.
Optionally, the training method of the second network model includes:
and constructing a target detection network, inputting training data in the training data set into a target detection network model for training, and outputting a second network model.
Optionally, the method for acquiring the training data set includes:
generating additional oral high-quality image data based on the oral raw image data to form a data set;
marking the original image data of the oral cavity and the high-quality image data of the oral cavity in the data set according to the classification standard of the impacted wisdom teeth;
and generating a training data set based on the marked oral cavity original image data and the oral cavity high-quality image data.
Optionally, the generating additional oral cavity high quality image data based on the oral cavity raw image data includes:
the data set is augmented with an enhanced super-resolution generation countermeasure network to generate additional oral high quality image data on the oral raw image data.
Optionally, the training data set is divided into a training set, a verification set and a test set.
Optionally, the marking of the original image data of the oral cavity and the high-quality image data of the oral cavity in the data set according to the classification standard of the impacted wisdom teeth comprises:
marking the original image data of the oral cavity and the high-quality image data of the oral cavity in the data set according to the relation between the long axis of the impacted wisdom tooth and the long axis of the second molar;
marking the original image data of the oral cavity and the high-quality image data of the oral cavity in the data set according to the relationship between the impacted wisdom teeth, the mandibular branch and the second molar;
marking the original image data of the oral cavity and the high-quality image data of the oral cavity in the data set according to the depth of the impacted wisdom teeth in the jaw bone;
and marking the position of the oral cavity original image data and the oral cavity high-quality image data in the data set in the dentition according to the impacted wisdom teeth.
Optionally, the mandibular impacted wisdom tooth image processing method performs data enhancement on the training data using mosaic augmentation, scaling and translation.
In addition, to achieve the above object, the present invention further provides a mandibular impacted wisdom tooth image processing system, comprising:
the image acquisition module is used for acquiring an oral cavity image to be identified;
the type output module is used for inputting the oral cavity image to be identified into a trained first network model and outputting the type of the impacted wisdom tooth in the oral cavity image;
and the position output module is used for inputting the oral cavity image to be identified into a trained second network model and outputting the position of the impacted wisdom tooth in the oral cavity image.
Compared with the prior art, the invention discloses a method and a system for processing mandibular impacted wisdom tooth images: an oral cavity image to be identified is acquired; the image is input into a trained first network model, which outputs the type of the impacted wisdom tooth; and the image is input into a trained second network model, which outputs the position of the impacted wisdom tooth. The first network model is trained on an optimized ConvNext network and combines four classification criteria for impacted wisdom teeth to determine the impaction type; the second network model is trained on a target detection network and combines the same four criteria to locate the impacted wisdom tooth. Optimizing the ConvNext network in this way makes the model more robust and improves accuracy while maintaining speed; combining the four classification criteria makes the identified impaction type finer-grained and the located position more accurate, which facilitates diagnosis by doctors.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic step diagram of a method for processing a mandibular impacted wisdom tooth image according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an optimized convolutional neural network according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the universal classification of four classes of impacted wisdom teeth according to one embodiment of the present invention;
figure 4 is a diagram of an impacted wisdom tooth combined with partial impacted wisdom tooth classification criteria provided by an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In the prior art, impacted wisdom teeth are generally labelled with only one classification criterion, so the impaction type and position obtained are not precise enough and can only be described roughly. When a doctor diagnoses from such a type and position, the difficulty of diagnosis inevitably increases and there remains a risk of misdiagnosis. Accurately identifying the type of an impacted wisdom tooth and accurately locating its three-dimensional position is therefore the key problem in reducing the difficulty of diagnosis.
Referring to fig. 1, a schematic step diagram of a method for processing a mandibular impacted wisdom tooth image according to an embodiment of the present invention includes:
and S11, acquiring an oral cavity image to be identified.
In this embodiment, the oral cavity image to be identified is acquired with CBCT (cone beam computed tomography). The biggest difference from helical (spiral) CT is that helical CT acquires one-dimensional projection data, reconstructs two-dimensional image data, stacks many consecutive two-dimensional slices into a three-dimensional image, and suffers from heavy metal artifacts. CBCT projection data are two-dimensional, and a three-dimensional image is obtained directly after reconstruction. CBCT therefore acquires three-dimensional image data quickly with high isotropic spatial resolution, so the three-dimensional position of the impacted wisdom tooth in the image can be obtained more accurately and in finer detail.
S12, inputting the oral cavity image to be recognized into the trained first network model, and outputting the impacted wisdom tooth type in the oral cavity image.
The first network model is obtained by training an optimized convolutional neural network. A ConvNext network is selected as the convolutional neural network, and the original ConvNext network is improved in two respects:
one is as follows: the first convolution kernel after the oral cavity image is input is changed from the original length and width (4, 4) to two convolution kernels of the length and width (1, 4) and (4, 1), and the step size is not changed. By doing so, parameters can be reduced, nonlinear mapping can be increased, and performance can be improved.
Second: an SE (Squeeze-and-Excitation) block is added inside each ConvNext block. This adds an attention mechanism to the network, letting it focus on the regions of interest when extracting features, and improves network performance.
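A minimal sketch of the squeeze-and-excitation idea, written in NumPy for self-containment (the reduction ratio and weight shapes are illustrative; the actual SE block also carries bias terms and sits inside the ConvNext block):

```python
import numpy as np

def se_block(x, w1, w2):
    """Apply a bias-free SE gate to a (C, H, W) feature map.

    w1: (C // r, C) channel-reduction weights; w2: (C, C // r)
    expansion weights, for some reduction ratio r.
    """
    z = x.mean(axis=(1, 2))                # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)            # excitation: bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # sigmoid gate, one value per channel
    return x * s[:, None, None]            # reweight each channel

# With all-zero weights every gate is sigmoid(0) = 0.5, halving the map.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
y = se_block(x, np.zeros((2, 4)), np.zeros((4, 2)))
```

The gates learned in training emphasize informative channels and suppress the rest, which is the attention effect the description refers to.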
It should be noted that, in model selection:
VGG network: simple structure and broad applicability, but model degradation occurs as network depth increases, so accuracy is limited;
ResNet network: introduces a residual structure, so deepening the network no longer causes degradation, but it tends to overfit on small data sets;
DenseNet network: borrows the residual idea of ResNet and uses fewer parameters than ResNet, but at a larger computation cost;
EfficientNet network: training is very slow when the training images are large, and its depthwise convolutions in shallow layers can be slow;
ConvNext network: optimized on the basis of a ResNet network, drawing on the strengths of the Swin Transformer network structure; it also borrows the grouped convolution of ResNeXt, adopting a more aggressive depthwise convolution in which the number of groups equals the number of channels, and uses Layer Normalization instead of Batch Normalization. The result preserves the recognition speed of the model while improving its recognition accuracy.
Therefore, in terms of both recognition speed and recognition accuracy, the ConvNext network clearly stands out; and after further optimization of the ConvNext network, a more robust model is obtained, which this embodiment combines with the four classification criteria for impacted wisdom teeth to identify the impacted wisdom tooth accurately and describe its three-dimensional position precisely.
Further, in training the optimized ConvNext network model to output the first network model: first, an enhanced super-resolution generative adversarial network (ESRGAN) is used to generate additional high-quality CBCT image data from the original CBCT images, expanding the data set; second, the original data and the generated data are labelled according to the four classification criteria for impacted wisdom teeth and divided into a training set, a validation set and a test set; the training data is then further enhanced with mosaic augmentation, scaling and translation, which alleviates the overfitting caused by scarce training data and improves network performance.
The data set is divided into a training set, a validation set and a test set in a 6:2:2 ratio.
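That split can be sketched as follows (a minimal illustration, reading the stated ratio as 6:2:2 for training, validation and test):

```python
import random

def split_dataset(samples, seed=0):
    """Shuffle and split a sample list 6:2:2 into train/val/test."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # fixed seed for a reproducible split
    n = len(items)
    a, b = int(n * 0.6), int(n * 0.8)
    return items[:a], items[a:b], items[b:]

train, val, test = split_dataset(range(100))  # 60 / 20 / 20 samples
```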
1. First, the model is trained with the training set: a loss function computes the loss, which is backpropagated to update the model's weights and biases.
2. Then the model is validated with the validation set, and the hyper-parameters used during training are adjusted according to the validation result (hyper-parameters are the parameters other than weights and biases involved in training, such as the optimizer settings and the learning rate).
3. The adjusted model information is recorded.
Steps 1, 2 and 3 are repeated (one round of steps 1-3 is one epoch of training); after a certain number of epochs, a trained model is obtained.
After the trained model is obtained, the test set is run on it to obtain a test result, which intuitively evaluates the quality of the trained model.
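The train-validate-test cycle in steps 1-3 can be illustrated with a toy model standing in for the network (a one-parameter logistic regression on synthetic data; this is not the patent's model, only the loop structure):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(200)
y = (X > 0).astype(float)          # synthetic labels: sign of the feature
train, val, test = (X[:120], y[:120]), (X[120:160], y[120:160]), (X[160:], y[160:])

w, b, lr = 0.0, 0.0, 0.5           # weight, bias, learning rate (a hyper-parameter)
for epoch in range(50):            # one pass of steps 1-3 per epoch
    # step 1: forward pass, then backpropagated weight/bias update
    p = 1.0 / (1.0 + np.exp(-(w * train[0] + b)))
    w -= lr * ((p - train[1]) * train[0]).mean()
    b -= lr * (p - train[1]).mean()
    # step 2: monitor the validation set; in a real run the learning rate
    # or other hyper-parameters would be adjusted from this signal
    val_acc = ((1.0 / (1.0 + np.exp(-(w * val[0] + b))) > 0.5) == val[1]).mean()
    # step 3: record the adjusted model information (omitted here)

# after training, a single evaluation on the held-out test set
test_acc = ((1.0 / (1.0 + np.exp(-(w * test[0] + b))) > 0.5) == test[1]).mean()
```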
It should be noted that, in the prior art, when identifying the impaction type from an oral image, only one of the four classification criteria for impacted wisdom teeth is used, specifically labelling by the relationship between the long axis of the impacted wisdom tooth and the long axis of the second molar. As shown in fig. 3, the categories under this criterion are: vertical impaction, horizontal impaction, mesioangular impaction, distoangular impaction, buccoangular impaction, linguoangular impaction, and inverted impaction. The other three classification criteria for impacted wisdom teeth are as follows:
1. According to the relationship between the impacted wisdom tooth, the mandibular ramus and the second molar, specifically:
class I: between the front edge of the mandible branch and the far middle surface of the second molar, enough clearance can accommodate the near and far middle diameters of the crown of the impacted wisdom tooth.
Class II: the space between the anterior border of the mandibular ramus and the distal surface of the second molar is too small to accommodate the mesiodistal diameter of the wisdom tooth.
Class III: all or most of the impacted wisdom tooth is located within the mandibular ramus.
2. According to the depth of the impacted wisdom tooth in the jaw bone, impaction is divided into high (position A), middle (position B) and low (position C), specifically:
high steric hindrance, the highest position of the impacted wisdom tooth is parallel to or higher than the plane of the dental arch.
The middle position of the impacted wisdom tooth is lower than the plane but higher than the neck of the second molar.
The highest position of the impacted wisdom teeth is lower than the neck of the second molar. Bone burial life-stopping (i.e. the tooth is all embedded in the bone)
3. According to the position in the dentition, specifically: buccal, lingual or neutral.
Referring to fig. 4, which shows an impacted wisdom tooth labelled with a combination of classification criteria, specifically criteria one and two combined, refining the classification beyond any single criterion. On this basis, identifying the impaction type with all four classification criteria combined yields a more precise type and a more accurate position description. For example, one identification result might read: enough space between the anterior border of the mandibular ramus and the distal surface of the second molar to accommodate the mesiodistal diameter of the crown; high impaction; mesioangular impaction; lingually displaced.
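Composing one report from the four criteria can be sketched as below (the function, field names and wording are illustrative, not taken verbatim from the patent):

```python
def describe_impaction(ramus_class, depth, angulation, side):
    """Join the four classification criteria into one report string."""
    ramus = {1: "Class I (sufficient retromolar space)",
             2: "Class II (insufficient retromolar space)",
             3: "Class III (within the mandibular ramus)"}[ramus_class]
    return f"{ramus}, position {depth}, {angulation} impaction, {side} displacement"

label = describe_impaction(1, "A", "mesioangular", "lingual")
```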
S13, inputting the oral cavity image to be recognized into the trained second network model, and outputting the position of the impacted wisdom tooth in the oral cavity image.
The second network model is obtained by training a target detection network; the YOLOv5 target detection model is selected. YOLOv5 is an advanced stage of improvement and innovation based on YOLOv4, and its predecessors also include YOLOv1, YOLOv2 and YOLOv3. Compared with those versions, YOLOv5 converges faster, is easier to customize to a data set, and achieves a higher recognition rate.
In this embodiment, the YOLOv5 target detection model is trained on the same training data used for the optimized ConvNext network, labelled according to the four classification criteria for impacted wisdom teeth. It outputs a three-dimensional position description of the impacted wisdom tooth in the oral cavity image, locates the impacted wisdom tooth and the mandibular nerve, and computes the distance between the two. This describes the specific position of the impacted wisdom tooth more accurately, making it easier for a doctor to diagnose and give subsequent medical advice based on the position of the impacted wisdom tooth.
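The distance between the detected impacted-wisdom-tooth box and the mandibular-nerve box could be computed as below (a hypothetical helper: the description states that the distance is calculated but not how, so an axis-aligned minimum gap between 2-D boxes stands in here):

```python
def box_gap(a, b):
    """Smallest gap between two axis-aligned boxes (x1, y1, x2, y2).

    Returns 0.0 when the boxes touch or overlap.
    """
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    dx = max(bx1 - ax2, ax1 - bx2, 0.0)   # horizontal separation, if any
    dy = max(by1 - ay2, ay1 - by2, 0.0)   # vertical separation, if any
    return (dx * dx + dy * dy) ** 0.5

d = box_gap((0, 0, 10, 10), (13, 0, 20, 10))   # boxes 3 units apart
```

A real implementation on CBCT volumes would use three-dimensional boxes and the voxel spacing to report a physical distance.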
According to the method for processing the mandibular impacted wisdom tooth image provided by the embodiment of the invention, an oral cavity image to be identified is acquired; the image is input into a trained first network model, which outputs the type of the impacted wisdom tooth; and the image is input into a trained second network model, which outputs the position of the impacted wisdom tooth. The first network model is trained on an optimized ConvNext network and combines four classification criteria for impacted wisdom teeth to determine the impaction type; the second network model is trained on a target detection network and combines the same four criteria to locate the impacted wisdom tooth. The identified impaction types are finer-grained and the located positions more accurate, which better assists doctors in diagnosis.
Further, an embodiment of the present invention provides a mandibular impacted wisdom tooth image processing system, including:
the image acquisition module is used for acquiring an oral cavity image to be identified;
the type output module is used for inputting the oral cavity image to be identified into the trained first network model and outputting the type of the impacted wisdom tooth in the oral cavity image;
and the position output module is used for inputting the oral cavity image to be identified into the trained second network model and outputting the position of the impacted wisdom tooth in the oral cavity image.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of additional like elements in a process, method, article, or system that comprises the element.
The above is only a preferred embodiment of the present invention, and it should be noted that the above preferred embodiment should not be considered as limiting the present invention, and the protection scope of the present invention should be subject to the scope defined by the claims. It will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the spirit and scope of the invention, and these modifications and adaptations should be considered within the scope of the invention.

Claims (10)

1. A method for processing mandibular impacted wisdom tooth images is characterized by comprising the following steps:
acquiring an oral cavity image to be identified;
inputting the oral cavity image to be identified into a trained first network model, and outputting the impacted wisdom tooth type in the oral cavity image;
and inputting the oral cavity image to be recognized into the trained second network model, and outputting the position of the impacted wisdom tooth in the oral cavity image.
2. The method for image processing of mandibular impacted wisdom tooth according to claim 1, wherein the method for training the first network model comprises:
constructing a convolutional neural network, and optimizing the convolutional neural network to generate an optimized convolutional neural network;
and inputting the training data in the training data set into the optimized convolutional neural network model for training, and outputting a first network model.
3. The method for processing the mandibular impacted wisdom tooth image according to claim 2, wherein said creating a convolutional neural network and optimizing said convolutional neural network to generate an optimized convolutional neural network comprises:
establishing a ConvNext network;
optimizing the first convolution kernel of the ConvNext network into two convolution kernels of sizes (1, 4) and (4, 1);
and adding an attention mechanism module in the ConvNext module to generate an optimized ConvNext network.
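The idea behind replacing one 4×4 kernel with a (1, 4) kernel followed by a (4, 1) kernel is kernel factorization: when the 4×4 kernel is separable (an outer product of a column and a row vector), the two passes compute exactly the same output with fewer weights. The NumPy sketch below demonstrates this equivalence for a single channel at stride 1 (an assumption for clarity; ConvNext's actual stem convolution uses stride 4 and multiple channels).

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2D cross-correlation (no padding, stride 1)."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))    # toy single-channel image
col = rng.standard_normal((4, 1))    # (4, 1) kernel
row = rng.standard_normal((1, 4))    # (1, 4) kernel

full = conv2d(x, col @ row)              # one separable 4x4 kernel
factored = conv2d(conv2d(x, row), col)   # (1, 4) pass, then (4, 1) pass
```

The factored form uses 8 weights instead of 16 per output channel, which is the usual motivation for this kind of decomposition.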
4. The method for image processing of mandibular impacted wisdom teeth according to claim 1, wherein the method for training the second network model comprises:
and constructing a target detection network, inputting training data in the training data set into the target detection network for training, and outputting a second network model.
5. The method for image processing of mandibular impacted wisdom teeth according to claim 2 or 4, wherein the method for acquiring the training data set comprises:
generating additional oral high-quality image data based on the oral raw image data to form a data set;
marking the original image data of the oral cavity and the high-quality image data of the oral cavity in the data set according to the classification standard of the impacted wisdom teeth;
and generating a training data set based on the marked oral original image data and the oral high-quality image data.
6. The method of image processing of mandibular impacted wisdom teeth according to claim 5, wherein said generating additional oral high quality image data based on the oral raw image data comprises:
the data set is augmented with an enhanced super-resolution generation countermeasure network to generate additional oral high quality image data on the oral raw image data.
7. The method for image processing of mandibular impacted wisdom tooth according to claim 5, wherein the training data set is divided into a training set, a validation set, and a test set.
8. The method for processing the mandibular impacted wisdom tooth image according to claim 5, wherein said labeling the oral cavity raw image data and the oral cavity high quality image data in the data set according to classification criteria of impacted wisdom teeth comprises:
marking the original image data of the oral cavity and the high-quality image data of the oral cavity in the data set according to the relation between the long axis of the impacted wisdom tooth and the long axis of the second molar;
marking the original image data of the oral cavity and the high-quality image data of the oral cavity in the data set according to the relationship between the impacted wisdom teeth, the mandibular branch and the second molar;
marking the original image data of the oral cavity and the high-quality image data of the oral cavity in the data set according to the depth of the impacted wisdom teeth in the jaw bone;
and marking the oral cavity original image data and the oral cavity high-quality image data in the data set according to the position of the impacted wisdom teeth in the dentition.
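The first labeling criterion of claim 8, the relation between the long axes of the impacted wisdom tooth and the second molar, corresponds to an angulation-based classification. The sketch below shows one hypothetical way to turn that angle into a class label; the threshold values are illustrative conventions, not figures taken from the patent.

```python
def angulation_class(wisdom_axis_deg, molar_axis_deg):
    """Label an impaction by the angle (in degrees) between the wisdom
    tooth long axis and the second molar long axis. Thresholds are
    illustrative only."""
    angle = wisdom_axis_deg - molar_axis_deg
    if angle >= 80:
        return "horizontal"
    if angle >= 11:
        return "mesioangular"
    if angle >= -10:
        return "vertical"
    return "distoangular"
```

Each labeled image would then carry this class alongside the depth-in-jaw and mandibular-ramus criteria listed in the other sub-steps.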
9. The method of image processing of mandibular impacted wisdom tooth according to claim 5, wherein the training data is augmented using mosaic, zooming, and translation operations.
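Of the augmentations named in claim 9, mosaic is the least self-explanatory: it stitches four training images into a single composite. A minimal NumPy sketch, assuming all four images share the same (H, W) shape:

```python
import numpy as np

def mosaic(imgs):
    """Mosaic augmentation: tile four same-sized images into one 2x2 grid."""
    a, b, c, d = imgs
    top = np.hstack([a, b])       # top-left, top-right
    bottom = np.hstack([c, d])    # bottom-left, bottom-right
    return np.vstack([top, bottom])

tiles = [np.full((4, 4), v) for v in range(4)]
m = mosaic(tiles)
```

In detection training, the bounding-box labels would be shifted and merged along with the tiles, which this single-array sketch omits.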
10. A mandibular impacted wisdom tooth image processing system, comprising:
the image acquisition module is used for acquiring an oral cavity image to be identified;
the type output module is used for inputting the oral cavity image to be identified into the trained first network model and outputting the type of the impacted wisdom teeth in the oral cavity image;
and the position output module is used for inputting the oral cavity image to be recognized into the trained second network model and outputting the position of the impacted wisdom tooth in the oral cavity image.
CN202210856504.8A 2022-07-21 2022-07-21 Method and system for processing mandibular impacted wisdom tooth image Pending CN115170531A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210856504.8A CN115170531A (en) 2022-07-21 2022-07-21 Method and system for processing mandibular impacted wisdom tooth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210856504.8A CN115170531A (en) 2022-07-21 2022-07-21 Method and system for processing mandibular impacted wisdom tooth image

Publications (1)

Publication Number Publication Date
CN115170531A true CN115170531A (en) 2022-10-11

Family

ID=83494924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210856504.8A Pending CN115170531A (en) 2022-07-21 2022-07-21 Method and system for processing mandibular impacted wisdom tooth image

Country Status (1)

Country Link
CN (1) CN115170531A (en)

Similar Documents

Publication Publication Date Title
US11398013B2 (en) Generative adversarial network for dental image super-resolution, image sharpening, and denoising
US11464467B2 (en) Automated tooth localization, enumeration, and diagnostic system and method
US11367188B2 (en) Dental image synthesis using generative adversarial networks with semantic activation blocks
US11189028B1 (en) AI platform for pixel spacing, distance, and volumetric predictions from dental images
US20210118132A1 (en) Artificial Intelligence System For Orthodontic Measurement, Treatment Planning, And Risk Assessment
US11443423B2 (en) System and method for constructing elements of interest (EoI)-focused panoramas of an oral complex
US20210357688A1 (en) Artificial Intelligence System For Automated Extraction And Processing Of Dental Claim Forms
CN106572831A (en) Identification of areas of interest during intraoral scans
CN109712703B (en) Orthodontic prediction method and device based on machine learning
Rangel et al. Integration of digital dental casts in cone-beam computed tomography scans
US20210217170A1 (en) System and Method for Classifying a Tooth Condition Based on Landmarked Anthropomorphic Measurements.
US20220084267A1 (en) Systems and Methods for Generating Quick-Glance Interactive Diagnostic Reports
CN109767841A (en) A kind of scale model search method and device based on cranium jaw face three-dimensional configuration database
CN113223010A (en) Method and system for fully automatically segmenting multiple tissues of oral cavity image
Gerken et al. Objective computerised assessment of residual ridge resorption in the human maxilla and maxillary sinus pneumatisation
CN113112477A (en) Anterior tooth immediate planting measurement and analysis method based on artificial intelligence
CN116823729A (en) Alveolar bone absorption judging method based on SegFormer and oral cavity curved surface broken sheet
Hettiarachchi et al. Linear and volumetric analysis of maxillary sinus pneumatization in a sri lankan population using cone beam computer tomography
CN115170531A (en) Method and system for processing mandibular impacted wisdom tooth image
KR102448169B1 (en) Method and apparatus for predicting orthodontic treatment result based on deep learning
JP7269587B2 (en) segmentation device
US20230240800A1 (en) Method for constructing and displaying 3d computer models of the temporomandibular joints
Maruta et al. Automatic machine learning-based classification of mandibular third molar impaction status
Gerasimenko et al. Results of processing CT scans of the jaw and preparing it for searching for the mandibular canal
Bansod et al. Artificial intelligence & its contemporary applications in dentistry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination