CN110503652A - Method, apparatus, storage medium and terminal for determining the relationship between mandibular impacted wisdom teeth, adjacent teeth and the mandibular canal - Google Patents

Method, apparatus, storage medium and terminal for determining the relationship between mandibular impacted wisdom teeth, adjacent teeth and the mandibular canal

Info

Publication number
CN110503652A
CN110503652A (application number CN201910785246.7A; granted publication CN110503652B)
Authority
CN
China
Prior art keywords
mandibular
segmentation
image
training
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910785246.7A
Other languages
Chinese (zh)
Other versions
CN110503652B (en
Inventor
傅开元
刘木清
毛伟玉
徐子能
丁鹏
白海龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Feather Care Cabbage Information Technology Co Ltd
Peking University School of Stomatology
Original Assignee
Beijing Feather Care Cabbage Information Technology Co Ltd
Peking University School of Stomatology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Feather Care Cabbage Information Technology Co Ltd, Peking University School of Stomatology filed Critical Beijing Feather Care Cabbage Information Technology Co Ltd
Priority to CN201910785246.7A priority Critical patent/CN110503652B/en
Publication of CN110503652A publication Critical patent/CN110503652A/en
Application granted granted Critical
Publication of CN110503652B publication Critical patent/CN110503652B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The invention discloses a method, an apparatus, a storage medium and a terminal for determining the relationship between mandibular impacted wisdom teeth, adjacent teeth and the mandibular canal. The method comprises: establishing a 3D segmentation convolutional neural network model; acquiring an actually sampled CBCT image sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal; and inputting the actually sampled CBCT image sequence into the 3D segmentation convolutional neural network model to obtain the segmentation result sequence of the actually sampled mandibular impacted wisdom tooth, adjacent second molar and mandibular canal output by the model. The solution of the present invention can solve the problem that judging the relationship between the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal from dental X-ray films or panoramic radiographs has poor reliability, and achieves the effect of improving the reliability of the judgment.

Description

Method and device for determining the relationship between mandibular impacted wisdom teeth, adjacent teeth and the mandibular canal, storage medium and terminal
Technical Field
The invention belongs to the technical field of computers and relates to an image processing method, apparatus, storage medium and terminal; in particular to a method, apparatus, storage medium and terminal for determining the relationship between mandibular impacted wisdom teeth, adjacent teeth and the mandibular canal; and more particularly to a method, apparatus, storage medium and terminal for automatically evaluating and dynamically displaying, based on deep learning, the CBCT three-dimensional image relationship between mandibular impacted wisdom teeth, adjacent teeth and the mandibular canal.
Background
Wisdom tooth extraction is one of the most common procedures in oral and maxillofacial surgery. Accurate preoperative assessment of the positions of the tooth root and the mandibular canal can effectively avoid postoperative complications such as nerve injury. The conventional imaging examination before wisdom tooth extraction is a periapical film or a panoramic radiograph, while Cone Beam Computed Tomography (CBCT) is usually adopted to accurately evaluate the root form of the mandibular impacted wisdom tooth (impacted third molar), its impaction position and direction, its relationship with the adjacent teeth, and the relationship between the root and the mandibular canal. To analyze and evaluate these, the physician needs to continuously observe a large number of slices in different directions and at different layers.
In conventional imaging examination, the periapical film or panoramic radiograph is a two-dimensional image: when the images of the tooth root and the mandibular canal overlap, their relationship cannot be accurately judged, and the accuracy and specificity for predicting intraoperative exposure of the inferior alveolar neurovascular bundle are low.
CBCT, as a three-dimensional sectional imaging modality, avoids image overlap and provides comprehensive, accurate information about the teeth, jaw bones and mandibular canal, but it also brings burden and risk to image diagnosis. First, CBCT data carries a large amount of information, browsing and diagnosis are time-consuming, and physicians can make mistakes due to factors such as fatigue and emotion. Second, most oral physicians have not received professional craniomaxillofacial image diagnosis training and are unfamiliar with the anatomy and image diagnosis outside the immediate region of interest, i.e. outside the teeth, so misdiagnosis and missed diagnosis of diseases often occur.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The present invention aims to overcome the above drawbacks and provide a method, an apparatus, a storage medium and a terminal for determining the relationship between mandibular impacted wisdom teeth, adjacent teeth and the mandibular canal, so as to solve the problem of poor reliability when judging the positions of the root apex and the mandibular canal from a periapical film or a panoramic radiograph, and to achieve the effect of improving the reliability of the judgment.
The invention provides a method for determining the relationship between mandibular impacted wisdom teeth, adjacent teeth and the mandibular canal, comprising: establishing a 3D segmentation convolutional neural network model, wherein the model is used for outputting, based on a CBCT image sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in a sample, a segmentation result sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in the sample, the segmentation result sequence comprising the relationship between the mandibular impacted wisdom tooth and the mandibular canal and the relationship between the mandibular impacted wisdom tooth and the adjacent second molar; acquiring an actually sampled CBCT image sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal; and inputting the actually sampled CBCT image sequence into the 3D segmentation convolutional neural network model to obtain the segmentation result sequence of the actually sampled mandibular impacted wisdom tooth, adjacent second molar and mandibular canal output by the model.
Optionally, establishing the 3D segmentation convolutional neural network model includes: acquiring a first set quantity of sample data, taking one part of the sample data as training samples and the other part as test samples, wherein the sample data comprises original CBCT images on which the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal have been delineated; performing deep learning processing on the training samples based on a set 3D segmentation convolutional neural network structure to obtain a training model of that structure; testing the training model with the test samples to obtain a test result; and correcting the training model according to the error between the input-output correspondence in the test result and the set standard, to obtain a corrected model of the 3D segmentation convolutional neural network structure, which serves as the 3D segmentation convolutional neural network model.
Optionally, performing deep learning processing on the training samples includes: performing data preprocessing, image down-sampling, image normalization and/or image augmentation on the original CBCT images of the training samples to obtain an image processing result; performing model segmentation training on the 3D segmentation convolutional neural network structure based on the image processing result to obtain a model segmentation result; performing morphological processing on the model segmentation result to obtain a smoothed segmentation result; and taking the smoothed segmentation result as the input of the 3D segmentation convolutional neural network structure, performing model training on it, and outputting a model training result containing at least the relationship between the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal. The relationship between the mandibular impacted wisdom tooth and the mandibular canal includes: a bone septum between the root of the mandibular impacted wisdom tooth and the mandibular canal; the root attached to the mandibular canal; and the root compressing the mandibular canal. The relationship between the mandibular impacted wisdom tooth and the adjacent teeth includes: the mandibular impacted wisdom tooth closely attached to the adjacent tooth; and the mandibular impacted wisdom tooth compressing the adjacent tooth, causing resorption.
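For illustration, the relationship categories enumerated above can be encoded as simple enumerations. The class and member names below are hypothetical, not taken from the patent:

```python
from enum import Enum

class CanalRelation(Enum):
    """Root of the impacted wisdom tooth vs. the mandibular canal."""
    BONE_SEPTUM = "bone interval between root and canal"
    CONTACT = "root attached to the canal"
    COMPRESSION = "root compressing the canal"

class AdjacentToothRelation(Enum):
    """Impacted wisdom tooth vs. adjacent second molar."""
    CLOSE_CONTACT = "closely attached to the adjacent tooth"
    RESORPTION = "compressing the adjacent tooth, causing resorption"

print(len(CanalRelation), len(AdjacentToothRelation))  # 3 2
```

Such an encoding makes the model's final output a pair of discrete labels, one per relationship, which matches the categories the patent lists.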
Optionally, the data preprocessing includes: mapping pixel values of the original CBCT images of the training samples into a set pixel range using a look-up table (LUT), wherein the LUT is configured to map pixel values smaller than the lower limit or larger than the upper limit of the set pixel range to 0, and to map the remaining pixel values into the set pixel range according to a linear relationship. And/or, the image down-sampling includes: when the total number of layers of an original CBCT image of a training sample reaches the set number of layers, intercepting the set number of middle layers as the input sample for subsequent model training; when the total number of layers does not reach the set number, repeatedly copying set layers until the total reaches the set number; and down-sampling the resolution of each layer to a set image sample size using bilinear interpolation. And/or, the image gray-value normalization includes: normalizing the original CBCT images of the training samples according to a set convergence rate. And/or, the image augmentation includes: rotating each CBCT sample in the training samples by a set angle with a set probability, and adding Gaussian noise and left-right mirror images to increase the number of training samples. And/or, performing model segmentation training on the 3D segmentation convolutional neural network structure includes: selecting a second set number of samples as a validation set based on the image processing result; determining initial model parameters by random initialization and iteratively training the 3D segmentation convolutional neural network structure with a gradient descent algorithm; and determining the target model parameters of the structure according to the Dice value on the validation set, as the model segmentation result.
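The Dice value used above to select target model parameters on the validation set can be computed as in this minimal NumPy sketch (the function name and epsilon are illustrative assumptions, not from the patent):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary 3D segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two 2x2x2 cubes of voxels with a 4-voxel overlap.
a = np.zeros((4, 4, 4), dtype=np.uint8)
b = np.zeros((4, 4, 4), dtype=np.uint8)
a[1:3, 1:3, 1:3] = 1   # 8 voxels
b[2:4, 1:3, 1:3] = 1   # 8 voxels, 4 shared with a
print(round(dice_coefficient(a, b), 3))  # 2*4/(8+8) = 0.5
```

Tracking this score on held-out volumes after each training epoch, and keeping the parameters with the best score, is a standard way to realize the model-selection step described above.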
In accordance with the above method, another aspect of the present invention provides an apparatus for determining the relationship between mandibular impacted wisdom teeth, adjacent teeth and the mandibular canal, comprising: a determining unit for establishing a 3D segmentation convolutional neural network model, wherein the model is used for outputting, based on a CBCT image sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in a sample, a segmentation result sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in the sample, the segmentation result sequence comprising the relationship between the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal; and an acquiring unit for acquiring an actually sampled CBCT image sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal; the determining unit is further configured to input the actually sampled CBCT image sequence into the 3D segmentation convolutional neural network model to obtain the segmentation result sequence of the actually sampled mandibular impacted wisdom tooth, adjacent second molar and mandibular canal output by the model.
Optionally, the determining unit establishes the 3D segmentation convolutional neural network model by: acquiring a first set quantity of sample data, taking one part of the sample data as training samples and the other part as test samples, wherein the sample data comprises original CBCT images on which the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal have been delineated; performing deep learning processing on the training samples based on a set 3D segmentation convolutional neural network structure to obtain a training model of that structure; testing the training model with the test samples to obtain a test result; and correcting the training model according to the error between the input-output correspondence in the test result and the set standard, to obtain a corrected model of the 3D segmentation convolutional neural network structure, which serves as the 3D segmentation convolutional neural network model.
Optionally, the determining unit performs deep learning processing on the training samples by: performing data preprocessing, image down-sampling, image normalization and/or image augmentation on the original CBCT images of the training samples to obtain an image processing result; performing model segmentation training on the 3D segmentation convolutional neural network structure based on the image processing result to obtain a model segmentation result; performing morphological processing on the model segmentation result to obtain a smoothed segmentation result; and taking the smoothed segmentation result as the input of the 3D segmentation convolutional neural network structure, performing model training on it, and outputting a model training result containing at least the relationship between the wisdom tooth root and the mandibular canal. The relationship between the mandibular impacted wisdom tooth and the mandibular canal includes: a bone septum between the root of the mandibular impacted wisdom tooth and the mandibular canal; the root attached to the mandibular canal; and the root compressing the mandibular canal. The relationship between the mandibular impacted wisdom tooth and the adjacent teeth includes: the mandibular impacted wisdom tooth closely attached to the adjacent tooth; and the mandibular impacted wisdom tooth compressing the adjacent tooth, causing resorption.
Optionally, the data preprocessing performed by the determining unit includes: mapping pixel values of the original CBCT images of the training samples into a set pixel range using a look-up table (LUT), wherein the LUT is configured to map pixel values smaller than the lower limit or larger than the upper limit of the set pixel range to 0, and to map the remaining pixel values into the set pixel range according to a linear relationship. And/or, the image down-sampling performed by the determining unit includes: when the total number of layers of an original CBCT image of a training sample reaches the set number of layers, intercepting the set number of middle layers as the input sample for subsequent model training; when the total number of layers does not reach the set number, repeatedly copying set layers until the total reaches the set number; and down-sampling the resolution of each layer to a set image sample size using bilinear interpolation. And/or, the image gray-value normalization performed by the determining unit includes: normalizing the original CBCT images of the training samples according to a set convergence rate. And/or, the image augmentation performed by the determining unit includes: rotating each CBCT sample in the training samples by a set angle with a set probability, and adding Gaussian noise and left-right mirror images to increase the number of training samples. And/or, the model segmentation training performed by the determining unit on the 3D segmentation convolutional neural network structure includes: selecting a second set number of samples as a validation set based on the image processing result; determining initial model parameters by random initialization and iteratively training the 3D segmentation convolutional neural network structure with a gradient descent algorithm; and determining the target model parameters of the structure according to the Dice value on the validation set, as the model segmentation result.
In accordance with the above apparatus, a further aspect of the present invention provides a terminal, including: the device for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal is described above.
In accordance with the above method, a further aspect of the present invention provides a storage medium, wherein: the storage medium stores a plurality of instructions; and the instructions are adapted to be loaded by a processor to execute the above method for determining the relationship between the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal.
In accordance with the above method, a further aspect of the present invention provides a terminal, including: a processor for executing a plurality of instructions; a memory to store a plurality of instructions; the instructions are stored in the memory, and loaded and executed by the processor to perform the method for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal.
According to the scheme, the automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal are realized, the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal is automatically judged, the amount of manual labor is small, and the accuracy and reliability of the judgment can be guaranteed.
Further, according to the scheme of the invention, by adopting deep-learning-based artificial intelligence technology, intelligent automatic evaluation and three-dimensional dynamic display of the CBCT three-dimensional image relationship between the mandibular impacted wisdom tooth and the adjacent teeth and between the wisdom tooth and the mandibular canal are realized, and the reliability of judging these relationships is improved.
Further, according to the scheme of the invention, the relation between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal is automatically judged, and the automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent second molar teeth and the mandibular canal are realized, so that the relation between the mandibular impacted wisdom tooth and the adjacent teeth and the relation between the mandibular impacted wisdom tooth and the mandibular canal can be accurately evaluated, and the efficiency is high.
Further, according to the scheme of the invention, an application software is developed by utilizing an AI technology and a CBCT image to automatically judge and display the three-dimensional relationship between the mandibular impacted wisdom tooth and the adjacent teeth as well as the mandibular canal, so that the relationship between the mandibular impacted wisdom tooth and the adjacent teeth as well as between the mandibular impacted wisdom tooth and the mandibular canal can be accurately evaluated, and the method is accurate and efficient.
Further, according to the scheme of the invention, the automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal are realized by adopting an artificial intelligence technology based on deep learning, and the reliability of judging the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and between the mandibular impacted wisdom tooth and the mandibular canal can be improved.
Therefore, according to the scheme of the invention, by realizing automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal and automatically judging the relationship between the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal, the problem of poor reliability of judging the positions of the root apex and the mandibular canal from a periapical film or a panoramic radiograph is solved, so that the effect of improving the reliability of the judgment is achieved; the amount of manual labor is small, and the processing efficiency is high.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of the method for determining the relationship between mandibular impacted wisdom teeth, adjacent teeth and mandibular canal according to the present invention;
FIG. 2 is a schematic flow chart illustrating one embodiment of modeling a 3D segmented convolutional neural network in accordance with the present invention;
FIG. 3 is a flowchart illustrating an embodiment of deep learning processing performed on the training samples according to the method of the present invention;
FIG. 4 is a schematic flow chart illustrating one embodiment of model segmentation training for a 3D segmented convolutional neural network structure in the method of the present invention;
fig. 5 is a schematic structural diagram of an embodiment of the device for determining the relationship between mandibular impacted wisdom teeth and adjacent teeth and mandibular canal of the present invention;
FIG. 6 is a schematic diagram showing a comparison between an original image and a mapped image based on a mapping process, wherein the original image is on the left side and the mapped image is on the right side;
fig. 7 is a schematic diagram showing a comparison of images before and after the segmentation process, wherein the left side is the original image and the right side is the segmentation result image (from top to bottom: the adjacent teeth, the mandibular impacted wisdom tooth and the mandibular canal);
FIG. 8 is another schematic diagram showing the comparison of the images before and after segmentation based on the segmentation process, wherein the left side is the segmentation result based on the threshold value, and the right side is the algorithm segmentation result of the present invention (from top to bottom, the adjacent teeth, the mandibular impacted wisdom teeth, and the mandibular canal);
fig. 9 is a schematic flow chart of the process of evaluating and three-dimensionally dynamically displaying, based on deep learning, the CBCT three-dimensional image relationship between mandibular impacted wisdom teeth, adjacent teeth and the mandibular canal;
fig. 10 is a schematic structural diagram of a 3D convolutional neural network model.
The reference numbers in the embodiments of the present invention are as follows, in combination with the accompanying drawings:
102-a determination unit; 104-acquisition unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
According to an embodiment of the present invention, a method for determining the relationship between mandibular impacted wisdom teeth and adjacent teeth and mandibular canal is provided, as shown in fig. 1, which is a schematic flow chart of an embodiment of the method of the present invention. The method for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal can be used to determine the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the relationship between the mandibular impacted wisdom tooth and the mandibular canal, and can include steps S110 to S130.
At step S110, a 3D segmentation convolutional neural network model is built. The model can be used for outputting, based on a CBCT image sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in a sample, a segmentation result sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in the sample, so as to judge the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the relationship between the wisdom tooth and the mandibular canal. The segmentation result sequence may include: the relationship between the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal.
Optionally, the specific process of building the 3D segmented convolutional neural network model in step S110 may be further described with reference to a schematic flow chart of an embodiment of building the 3D segmented convolutional neural network model in the method shown in fig. 2, where the specific process may include: step S210 to step S240.
Step S210, obtaining a first set amount of sample data, and using one part of the sample data as training samples and another part as test samples. The sample data may include: original CBCT images in which the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal have been delineated.
For example: of the 200 collected data, 150 were used as training samples and 50 were used as test samples.
And step S220, performing deep learning processing on the training sample based on the set 3D segmentation convolutional neural network structure to obtain a training model of the 3D segmentation convolutional neural network structure.
For example: because the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal are all three-dimensional structures and the image layers have a spatial relationship, a segmentation model SNet is designed using a 3D segmentation convolutional neural network (CNN) structure; the input of the model is a CBCT image sequence, and the output is the corresponding segmentation result sequence.
More optionally, with reference to a schematic flow chart of an embodiment of performing deep learning processing on the training sample in the method of the present invention shown in fig. 3, a specific process of performing deep learning processing on the training sample in step S220 is further described, which may include: step S310 to step S340.
And S310, performing data preprocessing, image down-sampling processing, image normalization processing and/or image amplification processing on the original CBCT image of the training sample to obtain an image processing result.
Still alternatively, the specific manner of each processing in step S310 may include one or more of the following processing scenarios.
The first processing scenario: the data preprocessing may include: mapping pixel values of the original CBCT image of the training sample into a set pixel range using a LUT.
Wherein the LUT is configured to: map the pixel values of pixels whose values are smaller than the lower limit or larger than the upper limit of the set pixel range in the original CBCT image of the training sample to 0, and map the pixel values of the remaining pixels into the set pixel range according to a linear relationship.
For example: data preprocessing: the pixel values of the original CBCT image are 16-bit signed integers, i.e. the value range is [-2^15, 2^15). Using a look-up table (LUT), the pixel values are mapped to 8-bit integers, i.e. the value range [0, 255]. The design of the LUT may include: pixel values less than -2000 or greater than 5000 are mapped to 0, and the remaining pixels are mapped linearly to [0, 255]; the before and after mapping is shown in fig. 6.
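The LUT-based preprocessing above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the thresholds -2000 and 5000 come from the text, and the function names are hypothetical:

```python
import numpy as np

def build_lut(lower=-2000, upper=5000):
    """Build a LUT mapping 16-bit signed CBCT values to 8-bit.
    Values below `lower` or above `upper` map to 0; the rest map
    linearly into [0, 255], per the example thresholds in the text."""
    values = np.arange(-2**15, 2**15, dtype=np.int32)  # full 16-bit signed domain
    lut = np.zeros(values.shape, dtype=np.uint8)
    inside = (values >= lower) & (values <= upper)
    lut[inside] = ((values[inside] - lower) / (upper - lower) * 255).astype(np.uint8)
    return lut

def apply_lut(image_i16, lut):
    """Apply the LUT to a 16-bit image; shift so value -2**15 maps to index 0."""
    return lut[image_i16.astype(np.int32) + 2**15]
```

A LUT makes the mapping a single vectorized indexing operation per image, which is cheap to apply across a whole CBCT volume.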
The second processing scenario: the image down-sampling process may include: when the total number of layers of the original CBCT image of the training sample reaches a set number of layers, extracting that set number of middle layers of the original CBCT image as the input sample for subsequent model training; when the total number of layers does not reach the set number, repeatedly copying a set layer of the original CBCT image until the total number of layers reaches the set number.
The resolution of each layer is down-sampled directly to the set image sample size using bilinear interpolation.
For example: image down-sampling: considering the hardware performance limits of model training, the number of layers and the image size (i.e. resolution) of the original CBCT image need to be down-sampled. Because the mandibular impacted wisdom teeth, the adjacent second molars and the mandibular canal are mainly concentrated in the middle layers of the CBCT image, occupying about 40-80 layers, the middle 96 layers of the CBCT are directly extracted as input samples for subsequent model training; when the total number of layers is less than 96, the first layer is repeatedly copied until the number of layers reaches 96. The resolution of each layer is down-sampled directly to 176 × 176 using bilinear interpolation, so the size of the final down-sampled CBCT image sample is 176 × 176 × 96.
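The crop/pad-and-resize step above can be sketched as below. This is an illustrative reimplementation, not the patent's code; any bilinear resampler would do, and the hand-rolled interpolation here is only to keep the sketch self-contained:

```python
import numpy as np

def standardize_volume(volume, target_layers=96, target_size=176):
    """Standardize a CBCT volume of shape (layers, H, W):
    take the middle `target_layers` layers (or repeat-copy the first layer
    when the volume is too short), then bilinearly resize each slice."""
    layers = volume.shape[0]
    if layers >= target_layers:
        start = (layers - target_layers) // 2
        volume = volume[start:start + target_layers]
    else:
        # pad by repeating the first layer, as described in the text
        pad = np.repeat(volume[:1], target_layers - layers, axis=0)
        volume = np.concatenate([pad, volume], axis=0)
    # per-slice bilinear interpolation onto a target_size x target_size grid
    out = np.empty((target_layers, target_size, target_size), dtype=volume.dtype)
    ys = np.linspace(0, volume.shape[1] - 1, target_size)
    xs = np.linspace(0, volume.shape[2] - 1, target_size)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, volume.shape[1] - 1)
    x1 = np.minimum(x0 + 1, volume.shape[2] - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    for i, sl in enumerate(volume):
        top = sl[np.ix_(y0, x0)] * (1 - wx) + sl[np.ix_(y0, x1)] * wx
        bot = sl[np.ix_(y1, x0)] * (1 - wx) + sl[np.ix_(y1, x1)] * wx
        out[i] = top * (1 - wy) + bot * wy
    return out
```

The result always has the fixed shape 96 × 176 × 176 expected by the 3D network input.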
The third processing scenario: the image gray-value normalization processing may include: performing normalization processing on the original CBCT image of the training sample so that model training converges at the set rate.
For example: normalizing the gray value of the image: in order to make the model training converge quickly, the image needs to be normalized according to the following formula (where x represents the original gray value and y represents the normalized gray value): y = (x / 255 − 0.5) × 2.
wherein, 0.5 and 2 are used as normalization processing coefficients, and other values can be selected according to actual requirements.
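The normalization with coefficients 0.5 and 2 can be written as a one-liner. Note the exact formula is not printed in this copy of the text; the form below, mapping 8-bit values to [-1, 1], is a reconstruction implied by the stated coefficients:

```python
def normalize(x):
    """Map an 8-bit gray value (0..255) to [-1, 1] using the
    coefficients 0.5 and 2 mentioned in the text (reconstructed form)."""
    return (x / 255.0 - 0.5) * 2
```

Centering inputs around zero in this way is a common trick to speed up gradient-descent convergence.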
The fourth processing scenario: the image augmentation processing may include: rotating each CBCT sample in the training samples by a set angle with a set probability, and adding Gaussian noise and left-right mirror images to increase the number of training samples.
For example: image augmentation: each CBCT sample in the training samples is rotated with a certain probability by a random angle (-10 degrees to 10 degrees), and Gaussian noise and left-right mirror images are added to increase the number of training samples and meet the demand of deep learning for training data volume.
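The three augmentations above can be sketched as follows. The probability and noise level are assumptions for illustration; the rotation uses scipy.ndimage.rotate when available and is otherwise skipped, since arbitrary-angle rotation is not worth hand-rolling here:

```python
import numpy as np

def augment(volume, rng, max_angle=10.0, noise_sigma=0.02, p=0.5):
    """Augmentation sketch following the text (parameter values are assumptions):
    - with probability p, rotate slices by a random angle in [-10, 10] degrees
    - add Gaussian noise
    - with probability p, mirror left-right
    volume: float array of shape (layers, H, W), already normalized."""
    v = volume.copy()
    if rng.random() < p:
        try:
            from scipy.ndimage import rotate
            angle = rng.uniform(-max_angle, max_angle)
            v = rotate(v, angle, axes=(1, 2), reshape=False, order=1, mode="nearest")
        except ImportError:
            pass  # rotation needs scipy; the other augmentations still apply
    v = v + rng.normal(0.0, noise_sigma, size=v.shape)  # Gaussian noise
    if rng.random() < p:
        v = v[:, :, ::-1]  # left-right mirror
    return v
```

Applying the function several times to each training volume multiplies the effective sample count, which matters given only 150 training cases.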
Therefore, the quality of the training samples can be improved through these specific processing steps, which further improves the reliability of the results obtained by model training.
And S320, performing model segmentation training on the 3D segmentation convolutional neural network structure based on the image processing result to obtain a model segmentation result.
Further optionally, with reference to a flowchart of an embodiment of training the model segmentation of the 3D segmented convolutional neural network structure in the method of the present invention shown in fig. 4, a specific process of training the model segmentation of the 3D segmented convolutional neural network structure in step S320 is further described, which may include: step S410 to step S430.
Step S410, based on the image processing result, selecting a second set number of samples as a verification set. For example: and the verification set is a set number of samples selected from the image processing result.
And step S420, determining initial model parameters by adopting a random initialization method, and performing iterative training on the 3D segmentation convolution neural network structure by adopting a gradient descent algorithm.
And step S430, determining target model parameters of the 3D segmentation convolutional neural network structure according to the Dice value on the verification set as a model segmentation result.
For example: model training: 20% of the training samples, i.e. 30 samples, are taken as the verification set. The model parameters are initialized with the Kaiming He (kaiming_he) random initialization method and iteratively trained with the Adam gradient descent algorithm, and the optimal parameters of the model are determined according to the Dice value on the verification set (a standard for evaluating segmentation results, calculated as 2·S1/S2, where S1 is the intersection of the segmentation result region and the real region, and S2 is the union of the segmentation result region and the real region).
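The Dice metric used for model selection can be computed as below. This sketch uses the standard Dice definition, in which the denominator is the sum of the two region sizes; the text's "union" phrasing is read here in that loose sense:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient of two binary masks: 2*|A ∩ B| / (|A| + |B|).
    Returns 1.0 for a perfect match and 0.0 for disjoint masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())
```

During training the Dice value on the 30-sample verification set would be tracked per epoch, and the parameters with the best value kept.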
Therefore, a verification set is selected from the image processing results of the original CBCT images of the training samples, iterative training is carried out on the set 3D segmentation convolutional neural network structure by adopting a gradient descent algorithm based on the initial model parameters, and then target model parameters are determined according to the Dice value on the verification set to serve as model segmentation results, so that the determination mode of the target model parameters is simple and convenient, and the determination results are reliable.
And step S330, performing morphological processing on the model segmentation result to obtain a smooth segmentation result.
For example: segmentation result processing: a morphological opening operation (erosion followed by dilation) is performed on the model segmentation result, mainly to smooth it; the segmentation result is shown in fig. 7, and fig. 8 compares the three-dimensional visualization of the whole-CBCT-image segmentation result with the threshold segmentation result based on a manually selected region.
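Morphological opening on a 3D binary mask can be sketched as below, using a 6-connected structuring element. This is an illustrative implementation for clarity (a library routine such as scipy.ndimage.binary_opening would normally be used):

```python
import numpy as np

def _shift(mask, axis, step, fill):
    """Shift a boolean array by one voxel along `axis`, filling borders."""
    out = np.full_like(mask, fill)
    src = [slice(None)] * mask.ndim
    dst = [slice(None)] * mask.ndim
    if step > 0:
        dst[axis] = slice(step, None); src[axis] = slice(None, -step)
    else:
        dst[axis] = slice(None, step); src[axis] = slice(-step, None)
    out[tuple(dst)] = mask[tuple(src)]
    return out

def erode(mask):
    """Binary erosion: a voxel survives only if all 6 neighbours are set."""
    out = mask.copy()
    for axis in range(mask.ndim):
        for step in (1, -1):
            out &= _shift(mask, axis, step, False)
    return out

def dilate(mask):
    """Binary dilation: a voxel is set if it or any 6-neighbour is set."""
    out = mask.copy()
    for axis in range(mask.ndim):
        for step in (1, -1):
            out |= _shift(mask, axis, step, False)
    return out

def opening(mask):
    """Morphological opening = erosion followed by dilation."""
    return dilate(erode(mask))
```

Opening removes isolated speckle voxels and thin protrusions from the predicted masks while largely preserving the bulk of each structure, which is exactly the smoothing effect described.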
And step S340, taking the smooth segmentation result as the input of the 3D segmentation convolutional neural network structure, performing model training on the 3D segmentation convolutional neural network structure, and outputting a model training result at least containing the relation between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal.
Wherein, the relationship between the mandibular impacted wisdom tooth and the mandibular canal includes: a bone septum between the root of the mandibular impacted wisdom tooth and the mandibular canal, the root of the mandibular impacted wisdom tooth contacting the mandibular canal, and the root of the mandibular impacted wisdom tooth compressing the mandibular canal. The relationship between the mandibular impacted wisdom tooth and the adjacent teeth includes: the mandibular impacted wisdom tooth closely contacting the adjacent tooth, and the mandibular impacted wisdom tooth compressing the adjacent tooth and causing resorption.
For example: judging the spatial relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal: the segmentation result map is taken as the input of a self-designed 3D classification convolutional neural network CNet, and the model is trained to output three classes of results, namely bone septum, mutual contact and compression. The training is similar to that of SNet, except that the Dice value is replaced by accuracy as the evaluation function. After the model training is finished, the spatial relationship between the mandibular impacted wisdom tooth and the adjacent teeth and between the mandibular impacted wisdom tooth and the mandibular canal can be identified and judged automatically.
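The evaluation function for CNet is plain classification accuracy over the three classes. A minimal sketch (label encoding is an assumption for illustration):

```python
def accuracy(preds, labels):
    """Fraction of three-way predictions (e.g. 0 = bone septum,
    1 = contact, 2 = compression) that match the ground-truth labels."""
    correct = sum(p == t for p, t in zip(preds, labels))
    return correct / len(labels)
```

As with SNet's Dice score, the parameters giving the best verification-set accuracy would be retained.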
Therefore, after the training sample is subjected to image processing, model segmentation training and morphology processing are carried out, and then model training is carried out on the 3D segmentation convolutional neural network structure, so that the reliability and accuracy of model training are improved.
And step S230, testing the training model by using the test sample to obtain a test result.
And S240, correcting the training model according to the error between the test output and the set standard for the input-output correspondence, so as to obtain a corrected model of the 3D segmentation convolutional neural network structure, which serves as the 3D segmentation convolutional neural network model.
For example: a deep learning algorithm is used to carry out automatic diagnosis and image segmentation on the 50 test samples, and the results are compared with the gold standard.
Therefore, a set number of sample data are trained and tested based on the set 3D segmentation convolutional neural network structure to obtain the 3D segmentation convolutional neural network model, so that the model is established accurately and reliably; moreover, the model can be corrected according to sample conditions, making it flexible and widely applicable.
At step S120, CBCT image sequences of the actually sampled mandibular impacted wisdom tooth, adjacent second molars and mandibular canal are acquired.
For example: the image delineation of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal is performed by an oral imaging professional; for example, the imaging software used may be ITK-SNAP 3.8.0.
At step S130, the actually sampled CBCT image sequence is input to the 3D segmented convolutional neural network model to obtain a segmentation result sequence of the actually sampled mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal output by the 3D segmented convolutional neural network model, so as to determine the relationship between the actually sampled mandibular impacted wisdom tooth and the adjacent teeth, and the relationship between the mandibular impacted wisdom tooth and the mandibular canal.
For example: an automatic evaluation and three-dimensional dynamic display scheme, based on deep learning, for the CBCT three-dimensional image relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal is provided. It can realize automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal, automatically judge the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal, and automatically make an imaging diagnosis. Application software can be developed using AI technology and CBCT images to automatically judge and display the three-dimensional relationship between the mandibular impacted wisdom tooth and the adjacent teeth as well as the mandibular canal; that is, artificial intelligence technology based on deep learning is adopted to realize automatic segmentation, three-dimensional display and automatic diagnosis of the images of the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal.
For example: the method adopts artificial intelligence technology based on deep learning to realize intelligent automatic evaluation and three-dimensional dynamic display of the CBCT three-dimensional image of the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal. That is, the method can automatically judge the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal, and realize automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal. It can help the stomatologist accurately, quickly and intuitively diagnose the condition of the mandibular impacted wisdom tooth, accurately evaluate its positional relationship to the adjacent teeth and the mandibular canal so as to guide the operation, and facilitate doctor-patient communication before the operation.
In this way, based on the established 3D segmentation convolutional neural network model, the segmentation result sequence of the actually sampled mandibular impacted wisdom tooth, adjacent second molar and mandibular canal can be obtained directly from the actually sampled CBCT image sequence; the operation process is simple, and the obtained result is reliable and accurate.
A large number of experiments verify that, by adopting the technical scheme of this embodiment, automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal are realized and the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal is judged automatically, so that the amount of manual labor is small while the accuracy and reliability of the judgment are guaranteed.
According to the embodiment of the invention, the device for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal is also provided, which corresponds to the method for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal. Referring to fig. 5, a schematic diagram of an embodiment of the apparatus of the present invention is shown. The device for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal can be used to determine the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal, and can comprise: a determination unit 102 and an acquisition unit 104.
In an alternative example, the determining unit 102 may be configured to build a 3D segmentation convolutional neural network model. The 3D segmentation convolutional neural network model can be used to output a segmentation result sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in a sample, based on a CBCT image sequence of these structures in the sample, so as to judge the relationship between the mandibular impacted wisdom tooth and the adjacent teeth as well as the relationship between the mandibular impacted wisdom tooth and the mandibular canal. The segmentation result sequence may include: the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal. The specific function and processing of the determining unit 102 are referred to in step S110.
Optionally, the determining unit 102 builds a 3D segmentation convolutional neural network model, which may include:
the determining unit 102 may be further configured to obtain a first set number of sample data, and use one part of the sample data as training samples and another part as test samples. The sample data may include: original CBCT images in which the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal have been delineated. The specific function and processing of the determining unit 102 are also referred to in step S210.
For example: of the 200 collected data, 150 were used as training samples and 50 were used as test samples.
The determining unit 102 may be further configured to perform deep learning processing on the training sample based on a set 3D segmented convolutional neural network structure to obtain a training model of the 3D segmented convolutional neural network structure. The specific function and processing of the determination unit 102 are also referred to in step S220.
For example: because the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal are all three-dimensional structures and the image layers have a spatial relationship, a segmentation model SNet is designed using a 3D segmentation convolutional neural network (CNN) structure; the input of the model is a CBCT image sequence, and the output is the corresponding segmentation result sequence.
More optionally, the determining unit 102 performs deep learning processing on the training samples, which may include:
the determining unit 102 may be further configured to perform data preprocessing, image downsampling, image normalization, and/or image augmentation on the original CBCT image of the training sample to obtain an image processing result. The specific function and processing of the determination unit 102 are also referred to in step S310.
Still alternatively, the specific manner in which the determining unit 102 performs data preprocessing, image down-sampling, image normalization and/or image augmentation on the original CBCT image of the training sample to obtain the image processing result may include one or more of the following processing scenarios.
The first processing scenario: the data preprocessing of the determining unit 102 may include: mapping pixel values of the original CBCT image of the training sample into a set pixel range using a LUT.
Wherein the LUT is configured to: map the pixel values of pixels whose values are smaller than the lower limit or larger than the upper limit of the set pixel range in the original CBCT image of the training sample to 0, and map the pixel values of the remaining pixels into the set pixel range according to a linear relationship.
For example: data preprocessing: the pixel values of the original CBCT image are 16-bit signed integers, i.e. the value range is [-2^15, 2^15). Using a look-up table (LUT), the pixel values are mapped to 8-bit integers, i.e. the value range [0, 255]. The design of the LUT may include: pixel values less than -2000 or greater than 5000 are mapped to 0, and the remaining pixels are mapped linearly to [0, 255]; the before and after mapping is shown in fig. 6.
The second processing scenario: the image down-sampling process of the determining unit 102 may include: when the total number of layers of the original CBCT image of the training sample reaches a set number of layers, extracting that set number of middle layers of the original CBCT image as the input sample for subsequent model training; when the total number of layers does not reach the set number, repeatedly copying a set layer of the original CBCT image until the total number of layers reaches the set number.
The resolution of each layer is down-sampled directly to the set image sample size using bilinear interpolation.
For example: image down-sampling: considering the hardware performance limits of model training, the number of layers and the image size (i.e. resolution) of the original CBCT image need to be down-sampled. Because the mandibular impacted wisdom teeth, the adjacent second molars and the mandibular canal are mainly concentrated in the middle layers of the CBCT image, occupying about 40-80 layers, the middle 96 layers of the CBCT are directly extracted as input samples for subsequent model training; when the total number of layers is less than 96, the first layer is repeatedly copied until the number of layers reaches 96. The resolution of each layer is down-sampled directly to 176 × 176 using bilinear interpolation, so the size of the final down-sampled CBCT image sample is 176 × 176 × 96.
The third processing scenario: the image gray-value normalization processing of the determining unit 102 may include: performing normalization processing on the original CBCT image of the training sample so that model training converges at the set rate.
For example: normalizing the gray value of the image: in order to make the model training converge quickly, the image needs to be normalized according to the following formula (where x represents the original gray value and y represents the normalized gray value): y = (x / 255 − 0.5) × 2.
wherein, 0.5 and 2 are used as normalization processing coefficients, and other values can be selected according to actual requirements.
The fourth processing scenario: the image augmentation processing of the determining unit 102 may include: rotating each CBCT sample in the training samples by a set angle with a set probability, and adding Gaussian noise and left-right mirror images to increase the number of training samples.
For example: image augmentation: each CBCT sample in the training samples is rotated with a certain probability by a random angle (-10 degrees to 10 degrees), and Gaussian noise and left-right mirror images are added to increase the number of training samples and meet the demand of deep learning for training data volume.
Therefore, the quality of the training samples can be improved through these specific processing steps, which further improves the reliability of the results obtained by model training.
The determining unit 102 may be further configured to perform model segmentation training on the 3D segmentation convolutional neural network structure based on the image processing result, so as to obtain a model segmentation result. The specific function and processing of the determination unit 102 are also referred to in step S320.
Still further optionally, the determining unit 102 performs model segmentation training on the 3D segmented convolutional neural network structure, which may include:
the determining unit 102 may be further configured to select a second set number of samples as a verification set based on the image processing result. For example: and the verification set is a set number of samples selected from the image processing result.
The determining unit 102 may be further configured to determine an initial model parameter by using a random initialization method, and perform iterative training on the 3D segmented convolutional neural network structure by using a gradient descent algorithm.
The determining unit 102 may be further configured to determine, as a model segmentation result, a target model parameter of the 3D segmented convolutional neural network structure according to the Dice value on the verification set.
For example: model training: 20% of the training samples, i.e. 30 samples, are taken as the verification set. The model parameters are initialized with the Kaiming He (kaiming_he) random initialization method and iteratively trained with the Adam gradient descent algorithm, and the optimal parameters of the model are determined according to the Dice value on the verification set (a standard for evaluating segmentation results, calculated as 2·S1/S2, where S1 is the intersection of the segmentation result region and the real region, and S2 is the union of the segmentation result region and the real region).
Therefore, a verification set is selected from the image processing results of the original CBCT images of the training samples, iterative training is carried out on the set 3D segmentation convolutional neural network structure by adopting a gradient descent algorithm based on the initial model parameters, and then target model parameters are determined according to the Dice value on the verification set to serve as model segmentation results, so that the determination mode of the target model parameters is simple and convenient, and the determination results are reliable.
The determining unit 102 may be further configured to perform morphological processing on the model segmentation result to obtain a smooth segmentation result. The specific function and processing of the determination unit 102 are also referred to in step S330.
For example: segmentation result processing: a morphological opening operation (erosion followed by dilation) is performed on the model segmentation result, mainly to smooth it; the segmentation result is shown in fig. 7, and fig. 8 compares the three-dimensional visualization of the whole-CBCT-image segmentation result with the threshold segmentation result based on a manually selected region.
The determining unit 102 may be further configured to use the smooth segmentation result as the input of the 3D segmentation convolutional neural network structure, perform model training on the 3D segmentation convolutional neural network structure, and output a model training result at least containing the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal. The specific function and processing of the determining unit 102 are also referred to in step S340.
Wherein, the relationship between the mandibular impacted wisdom tooth and the mandibular canal includes: a bone septum between the root of the mandibular impacted wisdom tooth and the mandibular canal, the root of the mandibular impacted wisdom tooth contacting the mandibular canal, and the root of the mandibular impacted wisdom tooth compressing the mandibular canal. The relationship between the mandibular impacted wisdom tooth and the adjacent teeth includes: the mandibular impacted wisdom tooth closely contacting the adjacent tooth, and the mandibular impacted wisdom tooth compressing the adjacent tooth and causing resorption.
For example: judging the spatial relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal: the segmentation result map is taken as the input of a self-designed 3D classification convolutional neural network CNet, and the model is trained to output three classes of results, namely bone septum, mutual contact and compression. The training is similar to that of SNet, except that the Dice value is replaced by accuracy as the evaluation function. After the model training is finished, the spatial relationship between the mandibular impacted wisdom tooth and the adjacent teeth and between the mandibular impacted wisdom tooth and the mandibular canal can be identified and judged automatically.
Therefore, after the training sample is subjected to image processing, model segmentation training and morphology processing are carried out, and then model training is carried out on the 3D segmentation convolutional neural network structure, so that the reliability and accuracy of model training are improved.
The determining unit 102 may be further configured to utilize the test sample to test the training model to obtain a test result. The specific function and processing of the determination unit 102 are also referred to in step S230.
The determining unit 102 may be further configured to correct the training model according to the error between the test output and the set standard for the input-output correspondence, so as to obtain a corrected model of the 3D segmentation convolutional neural network structure, which serves as the 3D segmentation convolutional neural network model. The specific function and processing of the determining unit 102 are also referred to in step S240.
For example: a deep learning algorithm is used to carry out automatic diagnosis and image segmentation on the 50 test samples, and the results are compared with the gold standard.
Therefore, a set number of sample data are trained and tested based on the set 3D segmentation convolutional neural network structure to obtain the 3D segmentation convolutional neural network model, so that the model is established accurately and reliably; the model can also be corrected according to the sample conditions, making the approach flexible and widely applicable.
In an alternative example, the acquisition unit 104 may be used to acquire a sequence of CBCT images of an actually sampled mandibular impacted wisdom tooth, adjacent second molars and mandibular canal. The specific function and processing of the acquisition unit 104 are referred to in step S120.
For example: the image delineation of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal is performed by an oral imaging professional; for example, the imaging software used may be ITK-SNAP 3.8.0 (itksnap.org).
In an optional example, the determining unit 102 may be further configured to input the actually sampled CBCT image sequence into the 3D segmented convolutional neural network model to obtain a segmentation result sequence of the actually sampled mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal output by the 3D segmented convolutional neural network model, so as to achieve the determination of the relationship between the actually sampled mandibular impacted wisdom tooth and the adjacent teeth and mandibular canal. The specific function and processing of the determination unit 102 are also referred to in step S130.
For example: a deep-learning-based scheme for automatic evaluation and three-dimensional dynamic display of the CBCT three-dimensional image relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal is provided. The scheme can realize automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal, automatically judge the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal, and automatically make an imaging diagnosis. Using AI technology and CBCT images, application software can be developed to automatically judge and display the three-dimensional relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal; that is, an artificial intelligence technology based on deep learning is adopted to realize automatic segmentation, three-dimensional display and automatic diagnosis of the images of the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal.
For example: an artificial intelligence technology based on deep learning is adopted to realize intelligent automatic evaluation and three-dimensional dynamic display of the CBCT three-dimensional image of the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal. That is, the method can automatically judge the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal, and realize automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal. By accurately evaluating the positions of the mandibular impacted wisdom tooth relative to the adjacent teeth and the mandibular canal, it can help the stomatologist diagnose the condition of the mandibular impacted wisdom tooth accurately, quickly and intuitively, guide the operation, and facilitate the doctor's preoperative communication.
Thus, based on the established 3D segmentation convolutional neural network model, the segmentation result sequence of the actually sampled mandibular impacted wisdom tooth, adjacent second molar and mandibular canal can be obtained directly from the actually sampled CBCT image sequence; the operation process is simple, and the obtained result is reliable and accurate.
Since the processes and functions implemented by the apparatus of this embodiment substantially correspond to the embodiments, principles and examples of the method shown in fig. 1 to 4, the description of this embodiment is not detailed, and reference may be made to the related descriptions in the foregoing embodiments, which are not repeated herein.
Through a large number of tests, the technical scheme of the invention is adopted, and the artificial intelligence technology based on deep learning is adopted, so that the CBCT three-dimensional image intelligent automatic evaluation and three-dimensional dynamic display of the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal are realized, and the reliability of judging the positions of the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal is improved.
According to an embodiment of the present invention, there is also provided a terminal corresponding to a mandibular impacted wisdom tooth and adjacent teeth and mandibular canal relationship determining apparatus. The terminal may include: the device for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal is described above.
In an optional embodiment, the invention provides a deep-learning-based scheme for automatic evaluation and three-dimensional dynamic display of the CBCT (cone-beam computed tomography) three-dimensional image relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal. The scheme adopts an artificial intelligence technology based on deep learning to realize intelligent automatic evaluation and three-dimensional dynamic display of the CBCT three-dimensional image of the relationship between the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal. That is, the scheme can automatically judge the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal, and realize automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal. By accurately evaluating the positions of the mandibular impacted wisdom tooth relative to the adjacent teeth and the mandibular canal, it can help the stomatologist diagnose the condition of the mandibular impacted wisdom tooth accurately, quickly and intuitively, guide the operation, and facilitate the doctor's preoperative communication.
The scheme of the invention can realize automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal, automatically judge the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal, and automatically make an imaging diagnosis.
In an alternative specific example, a specific implementation process of the scheme of the present invention may be exemplarily described with reference to examples shown in fig. 6 to 9.
As shown in fig. 9, in the solution of the present invention, the process of automatically evaluating and three-dimensionally and dynamically displaying the CBCT three-dimensional image relationship between the mandibular impacted wisdom tooth and the adjacent teeth and mandibular canal based on deep learning may include:
1) case selection
200 patients who underwent CBCT examination of mandibular impacted wisdom teeth in the radiology department of a hospital from January 1, 2018 were selected consecutively. The scanner model was 3D Accuitomo (J. Morita Mfg. Corp., Kyoto, Japan); voltage: 90 kV, current: 5 mA, scanning range: 6 cm × 6 cm. The image data of each case were exported in DICOM format.
Among them, DICOM (Digital Imaging and Communications in Medicine) is the international standard for medical images and related information (ISO 12052); it defines a medical image format suitable for data exchange with quality meeting clinical needs.
Inclusion criteria: (1) the image is clear, with no motion artifact or hardening artifact affecting observation of the mandibular impacted wisdom tooth; (2) the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in the molar area are completely scanned.
Exclusion criteria: (1) motion artifacts are present; (2) hardening artifacts caused by prostheses within the scanning range affect observation of the mandibular impacted wisdom tooth.
2) Labeling of images
Of the 200 collected data, 150 were used as training samples and 50 were used as test samples.
(1) Labeling of training samples: first, oral imaging professional A performs image delineation of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal (for example, the imaging software may be ITK-SNAP 3.8.0 (itksnap.org)), and judges the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal.
Optionally, the relationship between the mandibular impacted wisdom tooth and the mandibular canal may include: a bone interval between the root of the mandibular impacted wisdom tooth and the mandibular canal, the root attached to the mandibular canal, and the root pressing on the mandibular canal; the relationship between the mandibular impacted wisdom tooth and the adjacent teeth includes: the mandibular impacted wisdom tooth closely attached to the adjacent tooth, and the mandibular impacted wisdom tooth pressing on the adjacent tooth and causing its resorption.
Then another oral imaging physician, B, revises and confirms the delineated images and the judged relationships between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal.
(2) Labeling of test samples: first, oral imaging professional A performs image delineation of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal, and judges the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal. Two other oral imaging physicians, B and C, then confirm the delineated images and the judged relationships, which are used as the gold standard for image segmentation and diagnosis.
3) And (5) deep learning model training.
The 150 training samples are subjected to deep learning model training.
(1) Deep learning model design: because the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal are all three-dimensional structures and the image layers have a spatial structural relationship, a 3D Convolutional Neural Network structure is adopted to design the segmentation model SNet. SNet takes an Encoder-Decoder form: convolutional feature extraction is first performed on the image, then the extracted features are fused at multiple scales and up-sampled to the size of the input image to form the segmentation result; the model structure diagram is shown in fig. 10. The input of the model is a CBCT image sequence, and the output is the corresponding segmentation result sequence.
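The patent does not disclose the layer-level details of SNet. As a rough illustration only, the following numpy sketch traces the encoder-decoder shape flow described above (downsampling during feature extraction, multi-scale fusion, and upsampling back to the input size), with max pooling and nearest-neighbour repetition standing in for the undisclosed convolution and upsampling layers; the pooling factors and the fusion by addition are assumptions.

```python
import numpy as np

def max_pool3d(x, k=2):
    """Downsample a (D, H, W) volume by factor k with max pooling."""
    d, h, w = (s // k for s in x.shape)
    return x[:d * k, :h * k, :w * k].reshape(d, k, h, k, w, k).max(axis=(1, 3, 5))

def upsample3d(x, k=2):
    """Upsample a (D, H, W) volume by factor k with nearest-neighbour repetition."""
    return x.repeat(k, axis=0).repeat(k, axis=1).repeat(k, axis=2)

# One CBCT input volume at the size stated later in the patent: 96 layers of 176 x 176.
volume = np.random.rand(96, 176, 176)

# Encoder: two pooling stages stand in for strided convolutional feature extraction.
f1 = max_pool3d(volume)   # shape (48, 88, 88)
f2 = max_pool3d(f1)       # shape (24, 44, 44)

# Decoder: upsample and fuse with the earlier feature map (multi-scale fusion),
# then upsample again so the segmentation output matches the input size.
fused = upsample3d(f2) + f1
seg = upsample3d(fused)
```

The real SNet would output one channel per delineated structure (wisdom tooth, second molar, mandibular canal); this sketch only demonstrates that the output resolution matches the input, as the text requires.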
(2) Data preprocessing: the pixel values of the original CBCT image are 16-bit integers, i.e. in the value range [-2^15, 2^15). A look-up table (LUT) is used to map the pixel values to 8-bit integers, i.e. into the value range [0, 255].
Optionally, the design of the LUT may be: pixel values less than -2000 or greater than 5000 are mapped to 0, and the remaining pixels are mapped linearly to [0, 255]; a comparison before and after mapping is shown in FIG. 6. The left image in FIG. 6 is the original image; since it is a 16-bit image it does not display clearly on an ordinary 8-bit display, so it needs to be processed into the right image.
The LUT is essentially a RAM: after data are written into the RAM in advance, each input signal acts as an address used to look up the table, and the content corresponding to that address is found and output. Here the LUT is a mapping table of pixel gray values: through transformations such as thresholding, inversion, binarization, contrast adjustment and linear transformation, it changes each actually sampled gray value into another corresponding gray value, which highlights useful information in the image and enhances its contrast.
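As a sketch of the mapping just described (values below -2000 or above 5000 mapped to 0, the remainder mapped linearly to [0, 255]), the LUT can be built once and then indexed with the raw pixel values; the function and variable names below are illustrative, not from the patent.

```python
import numpy as np

def build_lut(lo=-2000, hi=5000):
    """Table mapping every possible 16-bit value to an 8-bit display value.

    Values below lo or above hi map to 0; the rest map linearly to [0, 255],
    as in the LUT design described above.
    """
    values = np.arange(-2**15, 2**15, dtype=np.int32)
    lut = np.zeros(values.shape, dtype=np.uint8)
    inside = (values >= lo) & (values <= hi)
    lut[inside] = ((values[inside] - lo) * 255.0 / (hi - lo)).astype(np.uint8)
    return lut

def apply_lut(image16, lut):
    """Look up each raw pixel; the +2**15 offset turns a value into a table index."""
    return lut[image16.astype(np.int32) + 2**15]

lut = build_lut()
raw = np.array([[-3000, -2000, 0, 5000, 6000]], dtype=np.int16)
mapped = apply_lut(raw, lut)
print(mapped.tolist())  # → [[0, 0, 72, 255, 0]]
```

Building the full 65536-entry table once and indexing into it is exactly the RAM-like usage the preceding paragraph describes.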
(3) Image down-sampling: considering the hardware performance limits of model training, the number of layers and the image size (i.e. resolution) of the original CBCT image need to be down-sampled. Because the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal are mainly concentrated in the middle layers of the CBCT image, occupying about 40-80 layers, the middle 96 layers of the CBCT are directly extracted as input samples for subsequent model training; when the total number of CBCT layers is less than 96, the first layer is repeatedly copied until the number of layers reaches 96. The resolution of each layer is directly down-sampled to 176 × 176 using bilinear interpolation, so the final down-sampled CBCT image sample size is 176 × 176 × 96.
In down-sampling, one sample is taken every few samples of a sequence; the new sequence thus obtained is a down-sampled version of the original sequence.
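The layer handling and resizing of step (3) can be sketched as follows. This is an illustrative reconstruction: the patent does not state whether the copied first layers are prepended or appended when a scan has fewer than 96 layers, so the padding side here is an assumption, and the bilinear interpolation is a minimal plain-numpy version.

```python
import numpy as np

def select_layers(volume, target=96):
    """Keep the middle `target` layers; if the scan is shorter, repeatedly
    copy the first layer until the layer count reaches `target` (padding
    side is an assumption; the patent only says the first layer is copied)."""
    n = volume.shape[0]
    if n >= target:
        start = (n - target) // 2
        return volume[start:start + target]
    pad = np.repeat(volume[:1], target - n, axis=0)
    return np.concatenate([pad, volume], axis=0)

def bilinear_resize(layer, out_h=176, out_w=176):
    """Resize one 2D layer with bilinear interpolation."""
    h, w = layer.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = layer[np.ix_(y0, x0)] * (1 - wx) + layer[np.ix_(y0, x1)] * wx
    bot = layer[np.ix_(y1, x0)] * (1 - wx) + layer[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def preprocess(volume):
    """Full down-sampling step: final sample size 96 layers of 176 x 176."""
    return np.stack([bilinear_resize(s) for s in select_layers(volume)])
```

For example, `preprocess(np.random.rand(120, 240, 240))` yields a `(96, 176, 176)` array, matching the 176 × 176 × 96 sample size stated in the text.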
(4) Normalizing the gray value of the image: in order to make the model training converge quickly, the image needs to be normalized according to the following formula (where x represents the original gray value and y represents the normalized gray value):
Here, 0.5 and 2 are used as the normalization coefficients; other values may be chosen according to actual requirements.
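The normalization formula itself is not reproduced in this text. One common normalization consistent with the stated coefficients 0.5 and 2 maps 8-bit gray values into [-1, 1]; it is sketched below as an assumption, not as the patent's exact formula.

```python
import numpy as np

def normalize(x, center=0.5, scale=2.0):
    """Assumed reconstruction: y = (x / 255 - 0.5) * 2, mapping [0, 255] to [-1, 1].

    The patent names 0.5 and 2 as the normalization coefficients, but the
    formula is missing from this text, so this is a plausible guess only.
    """
    return (np.asarray(x, dtype=float) / 255.0 - center) * scale

print(normalize([0, 127.5, 255]))  # → [-1.  0.  1.]
```

Centering the inputs around zero in this way is a standard step for making gradient-based training converge faster, which matches the stated purpose.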
(5) Image augmentation: each CBCT sample in the training set is rotated with a certain probability by a random angle between -10 degrees and 10 degrees, and Gaussian noise and left-right mirroring are added, to increase the number of training samples and meet the demand of deep learning for training data.
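A minimal numpy sketch of the augmentations just listed; rotation by a random angle in [-10, 10] degrees is noted in a comment but omitted from the code to keep it numpy-only (scipy.ndimage.rotate would be one way to add it), and the probability and noise scale are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def augment(volume, p=0.5, noise_sigma=0.01):
    """Randomly mirror left-right and add Gaussian noise, each with probability p.

    A rotation by a random angle in [-10, 10] degrees would also be applied
    here per the patent; it is omitted to keep this sketch numpy-only.
    """
    out = volume.astype(float).copy()
    if rng.random() < p:
        out = out[:, :, ::-1]                              # left-right mirror
    if rng.random() < p:
        out = out + rng.normal(0.0, noise_sigma, size=out.shape)
    return out

# Each call yields a (possibly) different variant of the same sample.
augmented = [augment(np.zeros((96, 176, 176))) for _ in range(4)]
```

Applying such transforms on the fly during training effectively multiplies the 150 labeled samples without extra annotation work.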
(6) Model training: 20% of the training samples, i.e. 30 samples, are taken as the validation set; the model parameters are iteratively trained using the kaiming_he random initialization method and the gradient descent algorithm Adam, and the optimal model parameters are determined according to the Dice value on the validation set (the metric for evaluating segmentation results, computed as 2S1/S2, where S1 is the intersection of the segmentation result region and the real region, and S2 is the sum of the sizes of the two regions).
The Dice value is a common index for medical images, measuring how well a segmentation result agrees with the ground truth. In machine learning, "ground truth" refers to the verified reference labels of the training set used in supervised learning.
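Using the standard definition (twice the intersection divided by the sum of the two region sizes), the Dice value on the validation set can be computed as below; the binary-mask representation is an assumption about how the segmentation results are stored.

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient: 2 * |pred AND truth| / (|pred| + |truth|).

    1.0 means the segmentation matches the ground truth exactly;
    0.0 means there is no overlap at all.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5
```

The same function applies unchanged to flattened 3D masks, which is how it would be evaluated per structure on the validation set.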
(7) Processing of the segmentation result: a morphological opening operation (erosion followed by dilation) is applied to the model segmentation result, mainly to smooth it. The segmentation result is shown in fig. 7, and fig. 8 compares the three-dimensional visualization of the whole-CBCT segmentation result with a threshold-based segmentation of a manually selected region.
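The opening operation of step (7) (erosion followed by dilation) can be sketched in plain numpy with a cubic structuring element; the 3x3x3 element size is an assumption, since the patent does not specify one.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def erode(mask, k=3):
    """Binary erosion: a voxel survives only if its whole k^3 neighbourhood
    is foreground; borders are padded with background."""
    p = k // 2
    padded = np.pad(mask, p, constant_values=0)
    return sliding_window_view(padded, (k, k, k)).min(axis=(-3, -2, -1))

def dilate(mask, k=3):
    """Binary dilation: a voxel becomes foreground if any neighbour is."""
    p = k // 2
    padded = np.pad(mask, p, constant_values=0)
    return sliding_window_view(padded, (k, k, k)).max(axis=(-3, -2, -1))

def opening(mask, k=3):
    """Opening = erosion then dilation: removes small speckles and smooths
    the boundary of the segmentation result."""
    return dilate(erode(mask, k), k)

seg = np.zeros((8, 8, 8), dtype=np.uint8)
seg[1, 1, 1] = 1           # an isolated noise voxel
seg[3:7, 3:7, 3:7] = 1     # a solid 4 x 4 x 4 structure
smoothed = opening(seg)    # the speckle disappears, the structure survives
```

In practice scipy.ndimage.binary_opening offers the same operation; the explicit erode/dilate pair here mirrors the "erosion and then dilation" wording of the text.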
(8) Judging the spatial relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal: the segmentation result map is used as the input of a self-designed 3D classification convolutional neural network, CNet, whose output is one of three results: bone spacing, contact (mutual attachment) or compression. CNet is trained in the same way as SNet, except that the Dice value is replaced by accuracy as the evaluation function. After model training is finished, the spatial relationships between the mandibular impacted wisdom tooth and the adjacent teeth, and between the mandibular impacted wisdom tooth and the mandibular canal, are identified and judged automatically.
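The patent does not disclose CNet's layers either; the sketch below only illustrates the final decision step (mapping the network's three output scores to the classes bone spacing, contact and compression) and the accuracy evaluation that replaces the Dice value. The logits and labels are hypothetical.

```python
import numpy as np

CLASSES = ["bone spacing", "contact", "compression"]

def predict(logits):
    """Pick the highest-scoring of the three relationship classes per case."""
    return [CLASSES[i] for i in np.argmax(logits, axis=1)]

def accuracy(predicted, truth):
    """Evaluation function used for CNet instead of the Dice value."""
    return float(np.mean([p == t for p, t in zip(predicted, truth)]))

# Hypothetical CNet outputs for four cases (rows) over the three classes (columns).
logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 1.5, 0.1],
                   [0.0, 0.3, 2.2],
                   [1.1, 1.0, 0.2]])
labels = ["bone spacing", "contact", "compression", "contact"]
print(accuracy(predict(logits), labels))  # → 0.75
```

The same accuracy function is what would later be reported for the 50-sample test comparison against the gold standard.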
4) Testing the deep learning model and comparing it with imaging physicians and clinicians.
(1) The deep learning algorithm performs automatic diagnosis and image segmentation on the 50 test samples, and the results are compared with the gold standard. The diagnosis accuracy (judging the spatial relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal: bone spacing, contact or compression), the tooth and mandibular canal segmentation accuracy, and the time required for diagnosis are calculated.
(2) The imaging physician D diagnoses and segments the images of 50 test samples, compares the results with the gold standard, and calculates the diagnosis accuracy, segmentation accuracy, and time required for diagnosis.
(3) The oral surgeon F diagnosed and image-segmented 50 test specimens and compared the results to the gold standard. And calculating the diagnosis accuracy, the segmentation accuracy and the time required by diagnosis.
According to the scheme, application software can be developed using AI (artificial intelligence) technology and CBCT (cone-beam computed tomography) images to automatically judge and display the three-dimensional relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal; that is, an artificial intelligence technology based on deep learning is adopted to realize automatic segmentation, three-dimensional display and automatic diagnosis of the images of the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal. Moreover, clinical popularization and application show that, after rigorous data training, the artificial intelligence system can quickly process a large amount of CBCT data, help clinicians efficiently and accurately judge the relationship between mandibular impacted wisdom teeth and the adjacent teeth and the mandibular canal, improve working efficiency, save working time and increase diagnostic accuracy.
Since the processes and functions implemented by the terminal of this embodiment substantially correspond to the embodiments, principles, and examples of the apparatus shown in fig. 5, reference may be made to the related descriptions in the foregoing embodiments for details which are not described in detail in the description of this embodiment, and no further description is given here.
Through a large number of tests, the technical scheme of the invention can accurately estimate the positions of the mandibular impacted wisdom tooth, the adjacent teeth and the mandibular canal by automatically judging the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal and realizing the automatic segmentation and three-dimensional display of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal images, and has high efficiency.
There is also provided, in accordance with an embodiment of the present invention, a storage medium corresponding to the method for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal. The storage medium stores a plurality of instructions; the instructions are adapted to be loaded by a processor and to execute the method for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal.
Since the processing and functions implemented by the storage medium of this embodiment substantially correspond to the embodiments, principles, and examples of the methods shown in fig. 1 to fig. 4, details are not described in the description of this embodiment, and reference may be made to the related descriptions in the foregoing embodiments, which are not described herein again.
Through a large number of tests, with the technical scheme of the invention, application software is developed using AI technology and CBCT images to automatically judge and display the three-dimensional relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal, so that the positions of the mandibular impacted wisdom tooth relative to the adjacent teeth and the mandibular canal can be accurately evaluated; the method is accurate and efficient.
According to the embodiment of the invention, a terminal corresponding to the method for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal is also provided. The terminal can include: a processor for executing a plurality of instructions; a memory to store a plurality of instructions; the instructions are stored in the memory, and loaded and executed by the processor to perform the method for determining the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal.
Since the processing and functions implemented by the terminal of this embodiment substantially correspond to the embodiments, principles, and examples of the methods shown in fig. 1 to fig. 4, details are not described in the description of this embodiment, and reference may be made to the related descriptions in the foregoing embodiments, which are not described herein again.
Through a large number of tests, with the technical scheme of the invention, the artificial intelligence technology based on deep learning realizes automatic segmentation and three-dimensional display of the images of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal, which can improve the reliability of judging the positions of the mandibular impacted wisdom tooth relative to the adjacent teeth and the mandibular canal.
In summary, it is readily understood by those skilled in the art that the advantageous modes described above can be freely combined and superimposed without conflict.
The above description is only an example of the present invention, and is not intended to limit the present invention, and it is obvious to those skilled in the art that various modifications and variations can be made in the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (10)

1. A method for determining the relationship between mandibular impacted wisdom teeth and adjacent teeth and mandibular canal is characterized by comprising the following steps:
establishing a 3D segmentation convolution neural network model; the 3D segmentation convolutional neural network model is used for outputting a segmentation result sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in the sample based on a CBCT image sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in the sample; the segmentation result sequence comprises: relationship between mandibular impacted wisdom tooth and adjacent teeth and mandibular canal;
acquiring a CBCT image sequence of actually sampled mandibular impacted wisdom teeth, adjacent second molars and mandibular canal;
and inputting the actually sampled CBCT image sequence into the 3D segmentation convolution neural network model to obtain a segmentation result sequence of the actually sampled mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal output by the 3D segmentation convolution neural network model.
2. The method of claim 1, wherein building a 3D segmented convolutional neural network model comprises:
acquiring sample data of a first set quantity, and taking one part of the sample data as a training sample and the other part of the sample data as a test sample; wherein the sample data comprises: original CBCT images sketched by images of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal;
based on a set 3D segmentation convolutional neural network structure, carrying out deep learning processing on the training sample to obtain a training model of the 3D segmentation convolutional neural network structure;
testing the training model by using the test sample to obtain a test result;
and correcting the training model according to the error between the set standards of the corresponding relation between the input and the output in the test result to obtain a corrected model of the 3D segmentation convolutional neural network structure, wherein the corrected model is used as the 3D segmentation convolutional neural network model.
3. The method of claim 2, wherein performing deep learning processing on the training samples comprises:
carrying out data preprocessing, image down-sampling processing, image normalization processing and/or image augmentation processing on the original CBCT image of the training sample to obtain an image processing result;
performing model segmentation training on the 3D segmentation convolutional neural network structure based on the image processing result to obtain a model segmentation result;
performing morphological processing on the model segmentation result to obtain a smooth segmentation result;
taking the smooth segmentation result as the input of a 3D segmentation convolution neural network structure, carrying out model training on the 3D segmentation convolution neural network structure, and outputting a model training result at least containing the relation between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal;
wherein the relationship between the mandibular impacted wisdom tooth and the mandibular canal includes: a bone interval between the root of the mandibular impacted wisdom tooth and the mandibular canal, the root attached to the mandibular canal, and the root pressing on the mandibular canal; the relationship between the mandibular impacted wisdom tooth and the adjacent teeth includes: the mandibular impacted wisdom tooth closely attached to the adjacent tooth, and the mandibular impacted wisdom tooth pressing on the adjacent tooth and causing its resorption.
4. The method of claim 3,
the data preprocessing comprises the following steps: mapping pixel values of an original CBCT image of the training sample into a set pixel range by utilizing an LUT;
wherein the LUT is configured to: mapping the pixel values of the pixels of which the pixel values are smaller than the lower limit of a set pixel range or larger than the upper limit of the set pixel range in the original CBCT image of the training sample to 0, and mapping the pixel values of the rest pixels to the set pixel range according to a linear relation;
and/or,
the image down-sampling process comprises the following steps: intercepting the set layer number of the middle layer of the original CBCT image of the training sample as an input sample of the subsequent model training under the condition that the total layer number of the original CBCT image of the training sample reaches the set layer number; under the condition that the total number of layers of the original CBCT images of the training sample does not reach the set number of layers, repeatedly copying the set layers in the total number of layers of the original CBCT images of the training sample until the total number of layers reaches the set number of layers;
the resolution of each layer of image is directly down-sampled to a set image sample size by using bilinear interpolation;
and/or,
the image gray value normalization processing comprises the following steps: normalizing the original CBCT image of the training sample according to a set convergence rate;
and/or,
the image augmentation process includes: rotating each CBCT sample in the training samples by a set angle with a set probability, and adding Gaussian noise and left-right mirror images, to increase the number of training samples;
and/or,
performing model segmentation training on the 3D segmented convolutional neural network structure, including:
selecting a second set number of samples as a verification set based on the image processing result;
determining initial model parameters by adopting a random initialization method, and performing iterative training on the 3D segmentation convolution neural network structure by adopting a gradient descent algorithm;
and determining target model parameters of the 3D segmentation convolutional neural network structure according to the Dice value on the verification set as a model segmentation result.
5. A device for determining the relationship between mandibular impacted wisdom teeth, adjacent teeth and mandibular canal, comprising:
the determining unit is used for establishing a 3D segmentation convolutional neural network model; the 3D segmentation convolutional neural network model is used for outputting a segmentation result sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in the sample based on a CBCT image sequence of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal in the sample; the segmentation result sequence comprises: relationship between mandibular impacted wisdom tooth and adjacent teeth and mandibular canal;
the acquisition unit is used for acquiring a CBCT image sequence of actually sampled mandibular impacted wisdom teeth, adjacent second molars and mandibular canal;
the determining unit is further configured to input the actually sampled CBCT image sequence to the 3D segmentation convolutional neural network model to obtain a segmentation result sequence of the actually sampled mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal output by the 3D segmentation convolutional neural network model.
6. The apparatus of claim 5, wherein the determining unit builds a 3D segmented convolutional neural network model comprising:
acquiring sample data of a first set quantity, and taking one part of the sample data as a training sample and the other part of the sample data as a test sample; wherein the sample data comprises: original CBCT images sketched by images of the mandibular impacted wisdom tooth, the adjacent second molar and the mandibular canal;
based on a set 3D segmentation convolutional neural network structure, carrying out deep learning processing on the training sample to obtain a training model of the 3D segmentation convolutional neural network structure;
testing the training model by using the test sample to obtain a test result;
and correcting the training model according to the error between the set standards of the corresponding relation between the input and the output in the test result to obtain a corrected model of the 3D segmentation convolutional neural network structure, wherein the corrected model is used as the 3D segmentation convolutional neural network model.
7. The apparatus of claim 6, wherein the deep learning processing performed by the determining unit on the training samples comprises:
carrying out data preprocessing, image down-sampling processing, image normalization processing and/or image augmentation processing on the original CBCT image of the training sample to obtain an image processing result;
performing model segmentation training on the 3D segmentation convolutional neural network structure based on the image processing result to obtain a model segmentation result;
performing morphological processing on the model segmentation result to obtain a smooth segmentation result;
taking the smooth segmentation result as the input of the 3D segmentation convolutional neural network structure, carrying out model training on the 3D segmentation convolutional neural network structure, and outputting a model training result at least containing the relationship between the mandibular impacted wisdom tooth and the adjacent teeth and the mandibular canal;
wherein the relationship between the mandibular impacted wisdom tooth and the mandibular canal includes: a bone septum exists between the root of the mandibular impacted wisdom tooth and the mandibular canal, the root of the mandibular impacted wisdom tooth is in contact with the mandibular canal, and the root of the mandibular impacted wisdom tooth compresses the mandibular canal; the relationship between the mandibular impacted wisdom tooth and the adjacent teeth includes: the mandibular impacted wisdom tooth is closely attached to the adjacent tooth, and the mandibular impacted wisdom tooth compresses the adjacent tooth, causing root resorption.
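The morphological processing step in claim 7 (turning a raw voxel-wise model segmentation into a smooth segmentation result) might, for instance, combine a binary closing with largest-connected-component selection. This is a hedged sketch using SciPy, not the patent's actual procedure; the function name and the 3×3×3 structuring element are assumptions:

```python
import numpy as np
from scipy import ndimage

def smooth_segmentation(mask):
    """Close small holes in a 3D binary segmentation mask, then keep
    only the largest connected component (an assumed smoothing recipe)."""
    closed = ndimage.binary_closing(mask.astype(bool), structure=np.ones((3, 3, 3)))
    labeled, n = ndimage.label(closed)
    if n == 0:
        return closed
    # Voxel count of each labeled component; keep the biggest one.
    sizes = ndimage.sum(closed, labeled, index=range(1, n + 1))
    return labeled == (int(np.argmax(sizes)) + 1)
```

Dropping small spurious components is one common way to suppress isolated false-positive voxels before the result is fed back into the network for further training.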
8. The apparatus of claim 7, wherein the data preprocessing by the determining unit comprises: mapping pixel values of the original CBCT image of the training sample into a set pixel range by utilizing a look-up table (LUT);
wherein the LUT is configured to: mapping the pixel values of the pixels of which the pixel values are smaller than the lower limit of a set pixel range or larger than the upper limit of the set pixel range in the original CBCT image of the training sample to 0, and mapping the pixel values of the rest pixels to the set pixel range according to a linear relation;
and/or,
the image down-sampling processing by the determination unit includes: when the total number of layers of the original CBCT image of the training sample reaches a set number of layers, taking the set number of middle layers of the original CBCT image of the training sample as an input sample for subsequent model training; when the total number of layers of the original CBCT image of the training sample does not reach the set number of layers, repeatedly copying set layers among the total layers of the original CBCT image of the training sample until the total number of layers reaches the set number of layers;
and down-sampling the resolution of each layer of the image to a set image sample size by using bilinear interpolation;
and/or,
the image gray value normalization processing by the determining unit comprises: normalizing the gray values of the original CBCT image of the training sample according to a set convergence rate;
and/or,
the image augmentation processing by the determination unit includes: rotating each CBCT sample in the training samples by a set angle with a set probability, and adding Gaussian noise and left-right mirror images to increase the number of training samples;
and/or,
the model segmentation training performed by the determining unit on the 3D segmentation convolutional neural network structure comprises the following steps:
selecting a second set number of samples as a verification set based on the image processing result;
determining initial model parameters by a random initialization method, and performing iterative training on the 3D segmentation convolutional neural network structure by a gradient descent algorithm;
and determining target model parameters of the 3D segmentation convolutional neural network structure according to the Dice value on the verification set, as the model segmentation result.
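The LUT-based preprocessing in claim 8 (pixels outside a set range mapped to 0, the rest mapped linearly into the range) can be sketched with NumPy; the window bounds and the 8-bit output scale used here are illustrative assumptions, since the patent does not specify the set pixel range:

```python
import numpy as np

def lut_map(image, lo, hi, out_max=255.0):
    """Map pixels with values < lo or > hi to 0, and map the remaining
    pixels linearly into [0, out_max] (window bounds are assumptions)."""
    img = np.asarray(image, dtype=np.float32)
    out = np.zeros_like(img)
    inside = (img >= lo) & (img <= hi)
    out[inside] = (img[inside] - lo) / (hi - lo) * out_max
    return out
```

Clamping out-of-window values to 0 discards intensities that carry no anatomical information for the teeth and canal, while the linear ramp preserves contrast inside the window.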
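The Dice value used above for selecting target model parameters on the verification set is the standard overlap measure between a predicted mask and its ground-truth delineation. A minimal sketch follows; the function name, the smoothing constant, and the checkpoint names/scores are hypothetical, not taken from the patent:

```python
import numpy as np

def dice_value(pred, target, eps=1e-6):
    """Dice similarity between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|), with eps to avoid division by zero."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Model selection: keep the parameters whose validation Dice is highest
# (checkpoint names and scores here are illustrative only).
val_dice = {"checkpoint_10": 0.81, "checkpoint_20": 0.89, "checkpoint_30": 0.87}
best_checkpoint = max(val_dice, key=val_dice.get)
```

Selecting parameters by validation Dice rather than training loss guards against picking an overfitted iteration of the gradient-descent training.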
9. A terminal, comprising: the apparatus for determining the relationship between a mandibular impacted wisdom tooth and adjacent teeth and the mandibular canal as claimed in any one of claims 5 to 8;
or,
a processor for executing a plurality of instructions;
a memory to store a plurality of instructions;
wherein the instructions are stored by the memory, and loaded and executed by the processor to perform the method for determining the relationship between a mandibular impacted wisdom tooth and adjacent teeth and the mandibular canal according to any one of claims 1 to 5.
10. A storage medium having a plurality of instructions stored therein, the plurality of instructions being loaded and executed by a processor to perform the method for determining the relationship between a mandibular impacted wisdom tooth and adjacent teeth and the mandibular canal according to any one of claims 1 to 5.
CN201910785246.7A 2019-08-23 2019-08-23 Method and device for determining relationship between mandibular impacted wisdom tooth and adjacent teeth and mandibular canal, storage medium and terminal Active CN110503652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910785246.7A CN110503652B (en) 2019-08-23 2019-08-23 Method and device for determining relationship between mandibular impacted wisdom tooth and adjacent teeth and mandibular canal, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN110503652A true CN110503652A (en) 2019-11-26
CN110503652B CN110503652B (en) 2022-02-25

Family

ID=68589205


Country Status (1)

Country Link
CN (1) CN110503652B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986123A (en) * 2017-06-01 2018-12-11 无锡时代天使医疗器械科技有限公司 The dividing method of tooth jaw three-dimensional digital model
CN109360196A (en) * 2018-09-30 2019-02-19 北京羽医甘蓝信息技术有限公司 Method and device based on deep learning processing oral cavity radiation image
CN109903396A (en) * 2019-03-20 2019-06-18 洛阳中科信息产业研究院(中科院计算技术研究所洛阳分所) A kind of tooth three-dimensional model automatic division method based on surface parameterization
CN109978841A (en) * 2019-03-12 2019-07-05 北京羽医甘蓝信息技术有限公司 The method and apparatus of whole scenery piece impacted tooth identification based on deep learning


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FAUSTO MILLETARI et al.: "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation", 2016 Fourth International Conference on 3D Vision *
YUMA MIKI et al.: "Classification of teeth in cone-beam CT using deep convolutional neural network", Computers in Biology and Medicine *
LIU Dongxu: "Research on the Application of Three-dimensional CT in Orthodontics", China Doctoral Dissertations Full-text Database, Medicine and Health Sciences *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021212940A1 (en) * 2020-04-21 2021-10-28 宁波深莱医疗科技有限公司 Segmentation method for dental jaw three-dimensional digital model
US11989934B2 (en) 2020-04-21 2024-05-21 Ningbo Shenlai Medical Technology Co., Ltd. Method for segmenting 3D digital model of jaw
CN111967539A (en) * 2020-09-29 2020-11-20 北京大学口腔医学院 Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment
CN113658679A (en) * 2021-07-13 2021-11-16 南京邮电大学 Automatic evaluation method for alveolar nerve injury risk under medical image
CN113658679B (en) * 2021-07-13 2024-02-23 南京邮电大学 Automatic assessment method for risk of alveolar nerve injury under medical image
CN115281709A (en) * 2022-10-08 2022-11-04 太原理工大学 C-shaped root canal detection device and method for mandibular second molar
CN115830034A (en) * 2023-02-24 2023-03-21 淄博市中心医院 Data analysis system for oral health management
CN116543914A (en) * 2023-05-12 2023-08-04 广州医科大学附属口腔医院(广州医科大学羊城医院) Model construction method for evaluating second molar tooth root absorption degree based on CBCT image
CN117036365A (en) * 2023-10-10 2023-11-10 南京邮电大学 Third molar tooth root number identification method based on deep attention network
CN117132596A (en) * 2023-10-26 2023-11-28 天津医科大学口腔医院 Mandibular third molar generation-retarding type identification method and system based on deep learning
CN117132596B (en) * 2023-10-26 2024-01-12 天津医科大学口腔医院 Mandibular third molar generation-retarding type identification method and system based on deep learning

Also Published As

Publication number Publication date
CN110503652B (en) 2022-02-25

Similar Documents

Publication Publication Date Title
CN110503652B (en) Method and device for determining relationship between mandibular impacted wisdom tooth and adjacent teeth and mandibular canal, storage medium and terminal
US11398013B2 (en) Generative adversarial network for dental image super-resolution, image sharpening, and denoising
US11366985B2 (en) Dental image quality prediction platform using domain specific artificial intelligence
US11189028B1 (en) AI platform for pixel spacing, distance, and volumetric predictions from dental images
US11367188B2 (en) Dental image synthesis using generative adversarial networks with semantic activation blocks
US20210118132A1 (en) Artificial Intelligence System For Orthodontic Measurement, Treatment Planning, And Risk Assessment
US11464467B2 (en) Automated tooth localization, enumeration, and diagnostic system and method
KR101839789B1 (en) System for generating interpretation data of dental image
US20220012815A1 (en) Artificial Intelligence Architecture For Evaluating Dental Images And Documentation For Dental Procedures
US11276151B2 (en) Inpainting dental images with missing anatomy
US10997475B2 (en) COPD classification with machine-trained abnormality detection
US20210357688A1 (en) Artificial Intelligence System For Automated Extraction And Processing Of Dental Claim Forms
US11357604B2 (en) Artificial intelligence platform for determining dental readiness
US20200411167A1 (en) Automated Dental Patient Identification And Duplicate Content Extraction Using Adversarial Learning
CN110246580B (en) Cranial image analysis method and system based on neural network and random forest
US11217350B2 (en) Systems and method for artificial-intelligence-based dental image to text generation
US20200387829A1 (en) Systems And Methods For Dental Treatment Prediction From Cross-Institutional Time-Series Information
US20220084267A1 (en) Systems and Methods for Generating Quick-Glance Interactive Diagnostic Reports
US20210358604A1 (en) Interface For Generating Workflows Operating On Processing Dental Information From Artificial Intelligence
US11311247B2 (en) System and methods for restorative dentistry treatment planning using adversarial learning
Leonardi et al. An evaluation of cellular neural networks for the automatic identification of cephalometric landmarks on digital images
CN111127430A (en) Method and device for determining medical image display parameters
Chen et al. Detection of various dental conditions on dental panoramic radiography using Faster R-CNN
CN113658679A (en) Automatic evaluation method for alveolar nerve injury risk under medical image
US20230252748A1 (en) System and Method for a Patch-Loaded Multi-Planar Reconstruction (MPR)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant