CN115205469A - Tooth and alveolar bone reconstruction method, equipment and medium based on CBCT - Google Patents

Tooth and alveolar bone reconstruction method, equipment and medium based on CBCT

Info

Publication number
CN115205469A
CN115205469A (Application CN202211082339.1A)
Authority
CN
China
Prior art keywords
tooth
cbct
segmentation
alveolar bone
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211082339.1A
Other languages
Chinese (zh)
Inventor
王都洋
陈雨晴
王艳福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hansf Hangzhou Medical Technology Co ltd
Original Assignee
Hansf Hangzhou Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hansf Hangzhou Medical Technology Co ltd filed Critical Hansf Hangzhou Medical Technology Co ltd
Priority to CN202211082339.1A priority Critical patent/CN115205469A/en
Publication of CN115205469A publication Critical patent/CN115205469A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
      • G06 — COMPUTING; CALCULATING OR COUNTING
        • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
        • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 — Computing arrangements based on biological models
            • G06N 3/02 — Neural networks
              • G06N 3/08 — Learning methods
        • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 — Arrangements for image or video recognition or understanding
            • G06V 10/20 — Image preprocessing
              • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
              • G06V 10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
            • G06V 10/40 — Extraction of image or video features
              • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
            • G06V 10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
              • G06V 10/764 — using classification, e.g. of video objects
              • G06V 10/82 — using neural networks
    • A — HUMAN NECESSITIES
      • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 6/00 — Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
            • A61B 6/02 — Arrangements for diagnosis sequentially in different planes; stereoscopic radiation diagnosis
              • A61B 6/03 — Computed tomography [CT]
                • A61B 6/032 — Transmission computed tomography [CT]
            • A61B 6/50 — specially adapted for specific body parts; specially adapted for specific clinical applications
              • A61B 6/51 — for dentistry
            • A61B 6/52 — Devices using data or image processing specially adapted for radiation diagnosis
              • A61B 6/5211 — involving processing of medical diagnostic data

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Computing Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Artificial Intelligence (AREA)
  • Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Optics & Photonics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Pulmonology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a CBCT-based tooth and alveolar bone reconstruction method, device and medium. A CBCT tomographic image is preprocessed by an adaptive algorithm, reducing the influence of CBCT data produced by different manufacturers on the algorithm. A loose and a compact 3D tooth region of interest are obtained with a target detection algorithm and a 3D segmentation algorithm respectively, enabling accurate 3D tooth and alveolar segmentation. Tooth detection is performed by a target detection algorithm that classifies teeth by locating the bounding box of each tooth, solving the problems of tooth misclassification and unclear boundaries between adjacent, similar teeth. Finally, all teeth and the upper and lower alveolar bones are segmented from the detected tooth region, the segmented data are converted to point clouds, and all teeth and the upper and lower alveolar bones are reconstructed. The invention effectively addresses the low accuracy of CBCT tooth and alveolar bone segmentation, adapts well to data produced by different CBCT manufacturers, and achieves a good tooth and alveolar bone reconstruction effect.

Description

Tooth and alveolar bone reconstruction method, equipment and medium based on CBCT
Technical Field
The invention relates to the technical field of image processing, in particular to a tooth and alveolar bone reconstruction method, equipment and a medium based on CBCT.
Background
According to oral disease survey reports, nearly 90% of people worldwide suffer from oral problems to some degree, many of which require dental treatment. More and more people also seek to improve their facial appearance by improving their dental health. In China, the oral health and orthodontics industry has gradually become one of the most promising industries, and promoting its digitalization and informatization brooks no delay.
With the rapid innovation of artificial intelligence, CBCT, intraoral and facial scanners and dental three-dimensional (3D) printing have developed quickly. Digital dentistry improves efficiency and improves the accuracy of orthodontic diagnosis, treatment planning and surgical guides. One of the basic components of digital dentistry is the 3D segmentation of teeth and alveolar sockets from CBCT images. In addition, accurate digital models of individual tooth and socket geometry facilitate simulated prosthesis evaluation, cephalometric analysis, computer-assisted digital implant planning and malocclusion prediction. Among current imaging modalities, only CBCT imaging can acquire complete tooth and alveolar bone data. Therefore, accurate reconstruction of 3D tooth models from cone-beam computed tomography (CBCT) image data is particularly important in dentistry.
Artificial intelligence and image recognition technology have already shown strong vitality in many industries; with the aid of computer vision, dental specialists can greatly reduce their workload and improve the efficiency of oral treatment. The positions of the teeth and alveolar bones in a CBCT image can be rapidly identified using artificial intelligence and image recognition. However, automatic and accurate 3D single-tooth segmentation from CBCT images remains a difficult task, for the following reasons: (1) the similarity between the tooth root and the surrounding alveolar bone is high; (2) the adhesion boundary between adjacent teeth at the crown is blurred; (3) the output data of CBCT devices from different manufacturers lacks a uniform standard.
In recent years, scholars at home and abroad have studied CBCT segmentation and reconstruction algorithms intensively. Traditional methods are largely based on level-set methods, which unfortunately have fundamental limitations in achieving fully automatic segmentation. Recently, deep learning methods have been applied to 3D tooth segmentation, but these methods suffer from misclassification caused by the similarity of adjacent teeth. Most existing algorithms also perform differently on CBCT data from different manufacturers and are difficult to adapt to all of them. Improving both the accuracy of CBCT tooth and alveolar bone segmentation and reconstruction and its adaptability to different manufacturers' CBCT data is therefore an urgent problem.
Therefore, there is a need for further improvements in CBCT-based dental and alveolar bone reconstruction methods, apparatuses and media to solve the above-mentioned problems.
Disclosure of Invention
The purpose of the application is to provide a CBCT-based tooth and alveolar bone reconstruction method, device and medium that effectively solve the problem of low accuracy in CBCT tooth and alveolar bone segmentation, adapt effectively to data produced by different CBCT manufacturers, and achieve a good tooth and alveolar bone reconstruction effect.
The technical scheme is a CBCT-based tooth and alveolar bone reconstruction method, characterized in that it comprises the following steps:
S1: making a CBCT reconstruction data set;
S2: data preprocessing: performing window width/window level adjustment and normalization on the CBCT data;
S3: extracting the ROI region of the CBCT data: detecting the ROI region of the CBCT data obtained in step S2 with a target detection algorithm and a 3D segmentation algorithm respectively, to obtain an accurate ROI region R;
S4: CBCT two-dimensional tooth positioning and classification: performing tooth detection with a target detection algorithm, locating the bounding box of each tooth to classify the teeth, and adding a CA position attention mechanism, to obtain bounding boxes B and classification results C;
S5: CBCT two-dimensional tooth instance segmentation: performing instance segmentation of all teeth within the tooth ROI region R_t, and combining the result with the tooth classifications C and bounding boxes B obtained in step S4 to obtain a high-precision segmentation result S;
S6: CBCT upper and lower alveolar bone segmentation: segmenting the upper and lower alveolar bones of the ROI region obtained in step S3 with a U-net segmentation algorithm;
S7: point cloud processing of the segmentation results: performing point cloud processing on the tooth segmentation result obtained in step S5 and the upper and lower alveolar bone segmentation results obtained in step S6;
wherein step S4 specifically comprises:
S41: according to the accurate tooth ROI region R_t obtained in step S3, cropping the corresponding ROI region I_roi from the original two-dimensional image and its label;
S42: training the target detection algorithm on the ROI region I_roi to obtain the bounding box and category of each tooth.
Preferably, step S1 specifically comprises: collecting CBCT data from different manufacturers, labeling the CBCT tomographic slices, and obtaining the shape annotation information of each tooth and of the upper and lower alveolar bones, together with the category information of each tooth.
Preferably, step S2 specifically comprises:
S21: acquiring original CBCT image data and storing it in DICOM or NII.GZ format;
S22: fitting the CBCT data of different manufacturers, adjusting the window width/window level of the CBCT data to [a, b], and normalizing each tomographic slice to a standard image in [0, 255] according to the following formula:

P(i, j) = (clip(H(i, j), a, b) − a) / (b − a) × 255

wherein P(i, j) represents the pixel value in row i, column j of the normalized CBCT slice, H(i, j) represents the HU value in row i, column j of the CBCT slice before normalization, and the HU value reflects the degree of CT absorption of different human tissues;
S23: dividing the adjusted data and labels into a training set, a validation set and a test set at a ratio of 8:1:1;
S24: storing the two-dimensional pictures of the divided training, validation and test sets separately, the labels being stored in PNG format, with the label of each tooth being the pixel value corresponding to that tooth.
Preferably, step S3 specifically comprises:
S31: generating target detection labels from the two-dimensional pictures and labels of step S2, the label format being a bounding box for every tooth, each label having the form

label = [x_min, y_min, x_max, y_max, c], c ∈ {1, …, N_t}

wherein N_t represents the number of tooth categories;
S32: detecting the tooth and jaw regions with the single-stage target detection algorithm YOLOv5 and extracting them to obtain a loose ROI region R_c;
S33: generating semantic segmentation labels from the three-dimensional data and labels of step S2, wherein the label pixel value of every tooth is 1, the jaw label is set to 2, and the pixel values of the other regions are 0;
S34: performing coarse tooth segmentation and jaw segmentation on the region R_c detected in step S32 with a 3D V-Net segmentation algorithm to obtain the segmentation boundary of each tooth, and fitting the overall tooth boundary curve using morphology to obtain the accurate tooth ROI region R_t and jaw ROI region R_j.
Preferably, in the target detection algorithm an attenuation equation is applied to the NMS algorithm to adapt it to the filtering of tooth bounding boxes. In the NMS attenuation equation, IoU(x, y) denotes the intersection-over-union of boxes x and y, and two hyper-parameters govern the attenuation: one represents the weight and the other represents the attenuation weight.
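The patent's exact NMS attenuation equation appears only as an image. As an assumed stand-in, the following sketch implements the standard Soft-NMS Gaussian decay, which matches the described idea of attenuating, rather than discarding, the scores of overlapping tooth boxes; the decay form exp(−IoU²/σ) and the `sigma` value are assumptions, not the patent's formula:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Decay the scores of boxes overlapping the current best box by
    exp(-IoU^2 / sigma) instead of suppressing them outright."""
    boxes = [list(b) for b in boxes]; scores = list(scores)
    keep = []
    while boxes:
        best = int(np.argmax(scores))
        keep.append((boxes[best], scores[best]))
        best_box = boxes.pop(best); scores.pop(best)
        for i in range(len(boxes)):
            scores[i] *= np.exp(-iou(best_box, boxes[i]) ** 2 / sigma)
        boxes = [b for i, b in enumerate(boxes) if scores[i] > score_thresh]
        scores = [s for s in scores if s > score_thresh]
    return keep

# Two heavily overlapping tooth boxes plus one disjoint box.
results = soft_nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)],
                   [0.9, 0.8, 0.7])
```

Because adjacent teeth legitimately produce overlapping boxes, soft attenuation keeps the neighbouring detection alive at a reduced score instead of deleting it, which is the motivation the text gives for replacing hard NMS.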
preferably, the attention mechanism formula in step S4 is expressed as follows:
Figure 370580DEST_PATH_IMAGE026
wherein
Figure DEST_PATH_IMAGE027
Is shown in the c-th channel
Figure 337399DEST_PATH_IMAGE028
The output of the feature at the location is,
Figure DEST_PATH_IMAGE029
in order to be an input, the user can select,
Figure 203986DEST_PATH_IMAGE030
for the attention weight of the feature map in the height direction,
Figure DEST_PATH_IMAGE031
the attention weight in the width direction is indicated.
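The coordinate-attention recombination described above (output = input feature scaled by a height-direction weight and a width-direction weight) can be sketched with NumPy broadcasting. The array shapes and the constant example weights are illustrative assumptions:

```python
import numpy as np

def ca_output(x, g_h, g_w):
    """Coordinate-attention style recombination.

    x:   (C, H, W) input feature map.
    g_h: (C, H) attention weights along the height direction.
    g_w: (C, W) attention weights along the width direction.
    Returns y with y[c, i, j] = x[c, i, j] * g_h[c, i] * g_w[c, j].
    """
    return x * g_h[:, :, None] * g_w[:, None, :]

C, H, W = 2, 3, 4
x = np.ones((C, H, W))
g_h = np.full((C, H), 0.5)   # height-direction weights
g_w = np.full((C, W), 0.25)  # width-direction weights
y = ca_output(x, g_h, g_w)
```

Broadcasting expands g_h over columns and g_w over rows, so every output position is modulated by both directional weights, which is how the mechanism encodes positional information.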
Preferably, step S5 specifically comprises:
S51: instance segmentation uses U-net++ as the base network with a ResNet34 backbone; segmentation training yields an instance segmentation result S_0;
S52: the instance segmentation result S_0 obtained in step S51 is merged with the bounding boxes B obtained in step S4 to obtain the final segmentation boundary S_b;
S53: the classes of the instance segmentation result S_b are reassigned according to the tooth classifications C obtained in step S4 to obtain the final segmentation classification S.
Preferably, step S7 specifically comprises:
S71: performing mean filtering on the tooth segmentation result obtained in step S5 and the upper and lower alveolar bone segmentation results obtained in step S6 to obtain a filtered result F, wherein the filter kernel size is 3×3;
S72: performing Sobel edge detection on the filtered result F to obtain an edge detection result E; the Sobel operator is computed as

G_x = K_x * A, G_y = K_y * A, G = sqrt(G_x^2 + G_y^2)

wherein G_x and G_y represent the gradient values in the x and y directions respectively, K_x and K_y represent the convolution kernels in the x and y directions, and A represents the original image.
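A minimal NumPy sketch of the Sobel step in S72, with the standard kernels K_x and K_y and the gradient magnitude G. The naive "valid" convolution and the synthetic step-edge test image are illustrative assumptions:

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # x-direction kernel
KY = KX.T                                                          # y-direction kernel

def convolve2d(img, kernel):
    """Naive 'valid' 2-D convolution (kernel flipped, as in true convolution)."""
    k = np.flipud(np.fliplr(kernel))
    h, w = img.shape; kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def sobel_magnitude(img):
    """G = sqrt(G_x^2 + G_y^2) with G_x = K_x * A and G_y = K_y * A."""
    gx = convolve2d(img, KX)
    gy = convolve2d(img, KY)
    return np.sqrt(gx ** 2 + gy ** 2)

# A vertical step edge: the gradient is strong along the step, zero elsewhere.
img = np.zeros((5, 5)); img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

On a segmentation mask, the nonzero responses of G trace the tooth or alveolar-bone contour, which is exactly the per-slice edge map that S73 stacks into a point cloud.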
S73: the edge detection results E are stacked along the Z axis of the image to obtain the final point cloud data P, wherein E_a denotes the alveolar bone edge detection result, E_t denotes the tooth edge detection result, N_z denotes the number of Z-axis slices of the image, and N_t denotes the number of tooth categories;
S74: Poisson reconstruction is performed on the obtained point cloud data P to obtain the final reconstructed alveolar bone and per-tooth instance results.
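The stacking of per-slice edge maps in S73 can be sketched as follows: each nonzero edge pixel becomes an (x, y, z) point, with the tomographic slice index supplying the z coordinate. The array layout is an assumption:

```python
import numpy as np

def edges_to_point_cloud(edge_stack):
    """Convert stacked binary edge maps into (x, y, z) point coordinates.

    edge_stack: (Nz, H, W) array, nonzero where an edge pixel was detected
    on that tomographic slice; the slice index becomes the z coordinate.
    Returns an (N, 3) integer array of points.
    """
    zs, ys, xs = np.nonzero(edge_stack)
    return np.stack([xs, ys, zs], axis=1)

# Two slices, one edge pixel each.
stack = np.zeros((2, 3, 3))
stack[0, 1, 2] = 1
stack[1, 0, 0] = 1
pts = edges_to_point_cloud(stack)
```

The resulting point set (one per tooth category plus one for the alveolar bone, in the patent's scheme) is what a surface-reconstruction step such as Poisson reconstruction would then consume.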
The present invention also provides an electronic device, comprising: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the CBCT-based tooth and alveolar bone reconstruction method provided by the present invention.
The present invention also provides a computer readable storage medium storing a computer program executable by a computer processor to implement a CBCT-based tooth and alveolar bone reconstruction method as described in any one of the above.
Compared with the prior art, the present application has the following obvious advantages and effects:
1. Tooth detection is performed by a target detection algorithm that applies an attenuation formula to the NMS algorithm and classifies teeth by locating the bounding box of each tooth, solving the problems of tooth misclassification and unclear boundaries between adjacent, similar teeth.
2. A CA attention mechanism is added to the target detection, solving the problem of inaccurate tooth classification.
3. Target detection and instance segmentation are combined to obtain a more accurate tooth segmentation result.
4. 2D target detection and 3D semantic segmentation are combined to obtain the ROI of the teeth and alveolar bone more accurately.
Drawings
Fig. 1 is an overall flow chart in the present application.
Fig. 2 is a schematic diagram of a network in the present application.
Fig. 3 is a CBCT image of a tooth according to the present application.
Fig. 4 is an image of the tooth after CBCT image adjustment in the present application.
FIG. 5 is a graph of the instance segmentation results of a CBCT tooth image in the present application.
FIG. 6 is a graph of the final instance segmentation results of CBCT teeth combined with target detection in the present application.
FIG. 7 is a graph of the reconstruction results of CBCT teeth in the present application.
Fig. 8 is a schematic structural diagram of an electronic device in the present application.
Reference numbers in this application:
processor 101, storage device 102, input device 103, output device 104, bus 105.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some structures related to the present invention are shown in the drawings, not all of them.
Before discussing exemplary embodiments in greater detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations (or steps) can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The CBCT-based dental and alveolar bone reconstruction method, apparatus and medium provided in the present application will be described in detail with reference to the following embodiments and alternatives thereof.
Fig. 1 is an overall flowchart of the CBCT-based tooth and alveolar bone reconstruction method provided in an embodiment of the present invention, and Fig. 2 is a network diagram of that method. The embodiment of the invention is applicable to CBCT tooth and alveolar bone reconstruction. The method can be executed by a CBCT-based tooth and alveolar bone reconstruction device, which can be implemented in software and/or hardware and integrated on any electronic device with a network communication function. As shown in Figs. 1 and 2, the CBCT-based tooth and alveolar bone reconstruction method provided in the embodiment of the present application may include the following steps:
S1: making a CBCT reconstruction data set;
As shown in fig. 3, a CBCT image of teeth in the embodiment of the present invention, step S1 specifically includes collecting CBCT data from a plurality of different manufacturers, labeling the CBCT tomographic slices, and obtaining the shape annotation information of each tooth and of the upper and lower alveolar bones, together with the category information of each tooth. The shape annotation information is the semantic segmentation label of each picture, i.e. the annotated segmentation region and target box information, and the category information assigns each tooth an individual ID. Annotation is performed with itk-snap software according to the FDI standard format, and all annotation is completed by dentists with more than 3 years of working experience. The label of each tooth is the pixel value corresponding to that tooth.
S2: data preprocessing: performing window width/window level adjustment and normalization on the CBCT data;
Fig. 4 shows the tooth image after CBCT window adjustment in the present application. In the embodiment of the application, the window width/window level ranges [a, b] of the teeth and jaw are obtained by cluster-fitting CBCT data of different manufacturers, and all CBCT data are then normalized by window width/window level. Only the image is adjusted; the label remains unchanged. Step S2 specifically includes:
S21: acquiring the original CBCT image data, uniformly resampling the voxels of data produced by different manufacturers to 0.4, and storing the data in DICOM or NII.GZ format as three-dimensional data V;
S22: cluster-fitting the CBCT data of different manufacturers to obtain the window width/window level range [a, b] of the teeth and jaw. Each volume is processed slice by slice: the window width/window level of the CBCT data is adjusted to [a, b], and each tomographic slice is normalized to a standard image in [0, 255] according to the following formula:

P(i, j) = (clip(H(i, j), a, b) − a) / (b − a) × 255

wherein P(i, j) represents the pixel value in row i, column j of the normalized CBCT slice, H(i, j) represents the HU value in row i, column j of the slice before normalization, and the HU value reflects the degree of CT absorption of different human tissues: the lower the HU value, the darker the image, indicating low-absorption, i.e. low-density, regions such as fluid-rich soft tissue, while white regions indicate high-absorption, i.e. high-density, regions such as teeth;
S23: dividing the adjusted three-dimensional data and labels into a training set T_r, a validation set V_a and a test set T_e at a ratio of 8:1:1, the three-dimensional data being used for training the 3D network;
S24: slicing the divided training set T_r, validation set V_a and test set T_e (in three-dimensional data format) into tomographic slices and storing them separately, the labels being stored in PNG format with the label of each tooth being its corresponding pixel value. Owing to the difference between the 2D and 3D data, the three sets are merged and then re-divided at a ratio of 9:1 into a training set D_tr and a validation set D_va, the two-dimensional pictures being used for training the 2D algorithms. Preprocessing the CBCT tomographic images with an adaptive algorithm avoids the influence of CBCT data produced by different manufacturers on the algorithm;
S3: extracting the ROI region of the CBCT data: first performing coarse ROI detection on the 2D CBCT data generated in step S2 with a target detection algorithm, and then segmenting the 3D CBCT data generated in step S2 with a 3D segmentation algorithm to obtain the accurate ROI region R.
In this embodiment of the present application, step S3 specifically includes:
S31: generating target detection labels from the two-dimensional pictures and labels of step S2, the label format being the bounding boxes of all teeth and jaws, a bounding box being the coordinates of its top-left and bottom-right corners. The tooth label is used to search the boundary region of all teeth according to the boundary of each tooth so as to determine the tooth bounding box, while the bounding box of the jaw is the boundary of the maxilla and mandible. Each label has the form

label = [x_min, y_min, x_max, y_max, c], c ∈ {1, …, N_t}

wherein N_t represents the number of tooth categories.
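The label generation of S31 — deriving a per-tooth bounding box from the pixel values of a segmentation mask — can be sketched as follows. The mask layout and the FDI pixel values used in the example are illustrative assumptions:

```python
import numpy as np

def masks_to_bboxes(label_mask):
    """Derive per-tooth bounding boxes from a semantic label mask.

    label_mask: 2-D integer array where pixel value t > 0 marks tooth t
    (here, the tooth's FDI number doubles as its pixel value).
    Returns {t: (x_min, y_min, x_max, y_max)} with top-left and
    bottom-right corner coordinates, as described in S31.
    """
    boxes = {}
    for t in np.unique(label_mask):
        if t == 0:          # background
            continue
        ys, xs = np.nonzero(label_mask == t)
        boxes[int(t)] = (int(xs.min()), int(ys.min()),
                         int(xs.max()), int(ys.max()))
    return boxes

mask = np.zeros((8, 8), dtype=int)
mask[1:3, 1:4] = 11     # hypothetical FDI tooth 11
mask[5:7, 4:6] = 21     # hypothetical FDI tooth 21
boxes = masks_to_bboxes(mask)
```

Each box plus its tooth category then forms one detection label of the kind fed to the target detection network in S32.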
S32, a target detection algorithm is trained on the two-dimensional data generated in step S2; the single-stage target detection algorithm YOLO v5 is selected as the network for detecting the tooth region. The labels and images generated in S31 are fed into the target detection algorithm to obtain the coarse tooth and jaw regions of each image; the two-dimensional tooth and jaw regions are then mapped onto the three-dimensional volume data, and the tooth and jaw regions are extracted to obtain a loose ROI region.
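Mapping the per-slice 2D detections back onto the volume to obtain a loose 3D ROI, as in S32, amounts to taking the union of the slice boxes over z; a minimal sketch (the box format and the margin parameter are assumptions):

```python
def loose_roi(slice_boxes, margin=0):
    """slice_boxes: {z: (x_min, y_min, x_max, y_max)} from the 2D detector.
    Returns (x_min, y_min, z_min, x_max, y_max, z_max) covering all slices,
    optionally padded by `margin` pixels in x and y."""
    zs = sorted(slice_boxes)
    x0 = min(b[0] for b in slice_boxes.values()) - margin
    y0 = min(b[1] for b in slice_boxes.values()) - margin
    x1 = max(b[2] for b in slice_boxes.values()) + margin
    y1 = max(b[3] for b in slice_boxes.values()) + margin
    return (x0, y0, zs[0], x1, y1, zs[-1])

roi = loose_roi({10: (30, 40, 90, 100), 11: (28, 42, 95, 98)}, margin=2)
```

The result is deliberately loose: the tight boundary comes only later, from the 3D segmentation in S34.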
S33, semantic segmentation labels are generated from the three-dimensional data and labels of step S2, wherein the label pixel value of every tooth is set to 1, the jaw label is set to 2, and the pixel values of all other regions are 0;
S34, segmentation training is performed on a 3D V-Net with the three-dimensional semantic segmentation data generated in step S2. The trained 3D V-Net segmentation algorithm then performs coarse tooth segmentation and jaw segmentation within the region detected in step S32. The segmentation boundary of each tooth is obtained, and morphology is used to fit the overall boundary curve of the teeth, yielding the accurate tooth ROI region. Because the jaw data form a single whole, the jaw ROI region is determined directly from the jaw segmentation result.
In the present embodiment, the boundary line of the dental arch region is extracted using morphology. Cubic-spline fitting, interpolation, and extrapolation are applied to this boundary to obtain a smooth curve that passes completely through the dental arch region. The reference curve is represented as C = {l_1, l_2, …, l_{m−1}}, where l_i is the line segment joining the consecutive fitted points p_i and p_{i+1}, and m denotes the number of points. By combining 2D target detection with 3D semantic segmentation, the ROI of the teeth and alveolar bone is obtained more accurately.
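As a simplified stand-in for the cubic-spline fitting of the dental-arch boundary, one can fit a single cubic polynomial to the boundary points and resample it densely with extrapolation margins; the boundary points below are made up for illustration, and a real pipeline would more likely use a piecewise spline such as scipy's CubicSpline:

```python
import numpy as np

# Hypothetical dental-arch boundary points (x, y), e.g. from morphology
pts_x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
pts_y = np.array([3.0, 1.2, 0.5, 1.2, 3.0])    # roughly U-shaped arch

coeffs = np.polyfit(pts_x, pts_y, deg=3)        # cubic least-squares fit
curve = np.poly1d(coeffs)

# Dense resampling of the smooth reference curve, extrapolating past the
# first and last boundary points as the text describes
xs = np.linspace(-0.5, 4.5, 101)
ys = curve(xs)
```

The dense samples can then be joined pairwise into the line segments l_i that make up the reference curve C.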
S4: CBCT two-dimensional tooth positioning and classification. In this embodiment of the application, each tooth is identified by its number under the FDI tooth notation. Tooth detection is performed with a target detection algorithm, and the bounding box of each tooth is located in order to classify it; detection by the target detection algorithm addresses the problem that the boundaries between adjacent, similar teeth are unclear, while an added CA position-attention mechanism ensures accurate classification of the tooth categories, yielding a bounding box and a classification result for every tooth. Step S4 specifically includes:
S41, according to the accurate tooth ROI region obtained in S3, the corresponding ROI region is cropped from the original two-dimensional images and labels; when cropping, the crop window is set to [80%, 120%] of the ROI area.
S42, a target detection algorithm is trained on the ROI region to obtain the bounding box and category of each tooth.
Since teeth are small targets, each tooth belongs to its own class, and neighboring teeth lie close together, the brute-force filtering of tooth bounding boxes by conventional NMS and Soft-NMS deletes some correct boxes. The target detection algorithm is therefore improved by applying a decay formula to the NMS algorithm to suit the filtering of tooth bounding boxes; the NMS decay can be written as s_i = α · s_i · e^(−β · IoU(M, b_i)²), where IoU(x, y) represents the intersection-over-union of x and y, M is the currently selected box, b_i a remaining box, and α and β are two hyperparameters: α represents the weight and β represents the attenuation weight.
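A minimal sketch of the score-decay idea described above — remaining boxes keep their score in proportion to how little they overlap the selected box, instead of being deleted outright. The exact decay expression and the α, β values are assumptions consistent with the variable descriptions, not the application's own formula:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def decay_scores(best, others, scores, alpha=1.0, beta=2.0):
    """Decay each remaining score by overlap with the selected box `best`:
    s_i <- alpha * s_i * exp(-beta * IoU(best, b_i)^2)."""
    return [alpha * s * np.exp(-beta * iou(best, b) ** 2)
            for b, s in zip(others, scores)]

best = (0, 0, 10, 10)
others = [(0, 0, 10, 10), (100, 100, 110, 110)]
new = decay_scores(best, others, [0.9, 0.8])
```

A heavily overlapping box is suppressed smoothly while a disjoint box (a neighboring tooth) keeps its full score, which is the behavior that hard NMS fails to provide for tightly packed teeth.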
In this embodiment of the application, YOLO v5 is selected as the base network. Since the positions of the teeth are relatively fixed and arranged in a definite order, an additional CA (coordinate attention) mechanism is added to prevent incorrect classification; the CA mechanism learns where to focus on the target chiefly from position information. The CA attention module is expressed as y_c(i, j) = x_c(i, j) × g_c^h(i) × g_c^w(j), where y_c(i, j) denotes the output of the feature at location (i, j) in the c-th channel, x_c(i, j) is the input, g_c^h(i) is the attention weight of the feature map in the height direction, and g_c^w(j) denotes the attention weight in the width direction.
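The per-location reweighting performed by the CA module's output stage can be sketched as below: each feature is multiplied by its height-direction and width-direction attention weights. The weight tensors here are made-up constants; in the real module they are produced by pooled-and-convolved branches of the network:

```python
import numpy as np

def apply_ca(x, g_h, g_w):
    """x: (C, H, W) features; g_h: (C, H) height-direction weights;
    g_w: (C, W) width-direction weights.
    Returns y with y[c, i, j] = x[c, i, j] * g_h[c, i] * g_w[c, j]."""
    return x * g_h[:, :, None] * g_w[:, None, :]

x = np.ones((1, 2, 2))                 # one channel, 2x2 feature map
g_h = np.array([[0.5, 1.0]])           # attention over rows
g_w = np.array([[1.0, 0.25]])          # attention over columns
y = apply_ca(x, g_h, g_w)
```

Because the two weight vectors are indexed by row and column separately, the module encodes positional emphasis along both axes, which suits the fixed, ordered arrangement of teeth.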
S5: CBCT two-dimensional tooth instance segmentation: instance segmentation is performed on all teeth within the tooth ROI region, and the result is combined with the tooth classification and bounding boxes obtained in step S4 to produce a high-precision segmentation result. Fig. 5 shows an example tooth CBCT instance segmentation result in the present application; region Z1 in fig. 5 shows that the raw instance segmentation suffers from blurred boundaries between adjacent teeth. In this embodiment of the application, step S5 specifically includes:
S51, instance segmentation adopts U-Net++ as the base network with a ResNet-34 backbone for segmentation training; the training data are the instance segmentation data generated in S2. Unlike semantic segmentation, each tooth in instance segmentation is a separate class. The tooth ROI region is sliced into two-dimensional data along the tomographic planes and fed into the trained U-Net++ to obtain the instance segmentation result.
S52, the instance segmentation result obtained in step S51 is merged with the bounding boxes obtained in step S4 to obtain the final segmentation boundary. The instance-segmentation boundary has the higher confidence and therefore serves as the reference; wherever the instance-segmentation boundary extends beyond the bounding box, the bounding-box edge is taken as the new boundary;
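The merge rule in S52 — trust the instance boundary, but fall back to the bounding-box edge wherever the mask spills outside the box — reduces to clipping each instance mask to its detected box; a sketch with an assumed mask/box layout:

```python
import numpy as np

def clip_mask_to_box(mask, box):
    """Zero out mask pixels outside box = (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = box
    clipped = np.zeros_like(mask)
    clipped[y0:y1 + 1, x0:x1 + 1] = mask[y0:y1 + 1, x0:x1 + 1]
    return clipped

mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:5, 1:5] = 1                      # instance spills past the box below
out = clip_mask_to_box(mask, (1, 1, 3, 3))
```

Pixels inside the box keep the segmentation boundary unchanged; any spill into a neighboring tooth's territory is cut at the detector's box edge.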
s53, dividing the example into results
Figure 858990DEST_PATH_IMAGE110
According to the tooth classification obtained in S4
Figure DEST_PATH_IMAGE111
Carrying out reassignment classification to obtain the final segmentation classification
Figure 206795DEST_PATH_IMAGE112
S54, instance segmentation has the advantage of accurate delineation while target detection has the advantage of accurate classification; combining the final segmentation boundary with the final segmentation classification therefore yields an accurate multi-class tooth segmentation result.
S6: CBCT upper and lower alveolar bone segmentation: jaw bone ROI obtained by S3 through U-net segmentation algorithm
Figure 319872DEST_PATH_IMAGE114
The upper and lower alveolar bone are divided to obtain the upper and lower alveolar bone division result
Figure DEST_PATH_IMAGE115
(ii) a As shown in fig. 6, which is a final example segmentation result of CBCT teeth combined with target detection in the present application, Z2 in fig. 6 indicates that the boundary of adjacent teeth is clear, and Resnet18 is selected by U-net backbone network.
S7: point cloud processing of the segmentation results: the accurate tooth segmentation result obtained in S5 and the upper and lower alveolar bone segmentation results obtained in S6 are converted to point clouds. Fig. 7 shows a CBCT tooth reconstruction result in the present application. Step S7 specifically includes:
S71, mean filtering with a 3×3 kernel is applied to the accurate tooth segmentation result obtained in step S5 and the upper and lower alveolar bone segmentation results obtained in step S6, producing the filtered result;
S72, Sobel edge detection is performed on the filtered result to obtain the edge detection result. The Sobel operator is computed as G_x = K_x * A, G_y = K_y * A, G = √(G_x² + G_y²), where G_x and G_y represent the pixel values in the x and y directions respectively, K_x and K_y represent the convolution kernels in the x and y directions, and A represents the original image.
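The Sobel step can be sketched directly in NumPy with the standard 3×3 kernels (a minimal valid-region convolution; a production pipeline would more likely call an optimized routine such as cv2.Sobel):

```python
import numpy as np

# Standard Sobel kernels: K_x responds to vertical edges, K_y = K_x transposed
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    """Gradient magnitude sqrt(Gx^2 + Gy^2) on the interior of img."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * KX)
            gy[i, j] = np.sum(win * KY)
    return np.sqrt(gx ** 2 + gy ** 2)

# A vertical step edge: left columns 0, right columns 1
img = np.zeros((5, 5))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

On a binary segmentation mask this responds only where the label value changes, i.e. exactly on the tooth or alveolar-bone contour.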
S73, the edge detection results are stacked along the Z axis of the image to obtain the final point cloud data P, expressed as P = ∪_{k=1..K} (E_b^k ∪ ∪_{n=1..N} E_{t,n}^k), where E_b denotes the alveolar bone edge detection result, E_t denotes the tooth edge detection, K denotes the number of Z-axis slices, and N denotes the number of tooth categories.
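Stacking the per-slice edge maps into 3D points, as S73 describes, can be sketched as below; the edge maps and the slice spacing are assumed values:

```python
import numpy as np

def edges_to_point_cloud(edge_slices, z_spacing=1.0):
    """edge_slices: list of 2D binary edge maps, list index = Z position.
    Returns an (N, 3) array of (x, y, z) points."""
    points = []
    for z, edges in enumerate(edge_slices):
        ys, xs = np.nonzero(edges)
        for x, y in zip(xs, ys):
            points.append((float(x), float(y), z * z_spacing))
    return np.array(points)

s0 = np.zeros((4, 4)); s0[1, 2] = 1    # one edge pixel on slice 0
s1 = np.zeros((4, 4)); s1[3, 0] = 1    # one edge pixel on slice 1
cloud = edges_to_point_cloud([s0, s1], z_spacing=0.5)
```

Running this once per tooth label and once for the alveolar bone, then concatenating, yields the union over slices and categories that the expression above describes.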
S74, Poisson reconstruction is performed on the obtained point cloud data to obtain the final reconstructed alveolar bone and per-instance tooth results.
In this embodiment of the application, the Poisson reconstruction of step S74 comprises the following steps: first, filtering the original point cloud; second, smoothing the point cloud; and finally, computing the normal vectors of the point cloud and performing the Poisson reconstruction.
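Poisson reconstruction needs oriented normals, and the normal-computation step named above is commonly done by local PCA: a point's normal is the eigenvector of its neighbourhood's covariance matrix with the smallest eigenvalue. A sketch under that assumption (a real pipeline would more likely call a library such as Open3D for both normal estimation and the Poisson solve):

```python
import numpy as np

def estimate_normal(points):
    """Normal of a local point neighbourhood: the eigenvector of the
    covariance matrix with the smallest eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # smallest-eigenvalue direction

# Points lying on the z = 0 plane: the normal should be +/- (0, 0, 1)
pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]])
n = estimate_normal(pts)
```

In practice the neighbourhood is gathered per point with a k-d tree, and the sign of each normal is flipped to a consistent orientation before the Poisson solve.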
According to the invention, the CBCT tomographic images are preprocessed by an adaptive algorithm, avoiding any influence of CBCT data produced by different manufacturers on the algorithms; a target detection algorithm and a 3D segmentation algorithm respectively yield loose and tight 3D tooth regions of interest for accurate 3D tooth and alveolar-bone segmentation; tooth detection with a target detection algorithm locates the bounding box of every tooth for classification, solving the problems of tooth misclassification and unclear boundaries between adjacent, similar teeth. Finally, all teeth and the upper and lower alveolar bone are segmented from the detected tooth region, all segmented data are converted to point clouds, and all teeth and the upper and lower alveolar bone are reconstructed. The invention effectively solves the low accuracy of CBCT tooth and alveolar bone segmentation, adapts well to data produced by different CBCT manufacturers, and achieves a good tooth and alveolar bone reconstruction effect.
The present invention further provides an electronic device, as shown in fig. 8, which is a schematic structural diagram of an electronic device in the present application, and includes one or more processors 101 and a storage device 102; the number of the processors 101 in the electronic device may be one or more, and one processor 101 is taken as an example in fig. 8; storage 102 is used to store one or more programs; the one or more programs are executable by the one or more processors 101 to cause the one or more processors 101 to implement a CBCT-based tooth and alveolar bone reconstruction method according to any one of the embodiments of the present invention.
The electronic device may further include: an input device 103 and an output device 104. The processor 101, the storage device 102, the input device 103, and the output device 104 in the electronic apparatus may be connected by a bus 105 or by other means, and fig. 8 illustrates an example in which the connection is made by the bus 105.
The storage device 102 in the electronic apparatus is used as a computer readable storage medium for storing one or more programs, which may be software programs, computer executable programs, and modules, such as program instructions/modules corresponding to the CBCT-based tooth and alveolar bone reconstruction method provided in the embodiments of the present invention. The processor 101 executes various functional applications and data processing of the electronic device by running software programs, instructions and modules stored in the storage device 102, so as to implement the CBCT-based tooth and alveolar bone reconstruction method in the above method embodiment.
The storage device 102 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. In addition, the storage device 102 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the storage 102 may further include memory located remotely from the processor 101, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 103 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus. The output device 104 may include a display device such as a display screen.
And, when one or more programs included in the above-mentioned electronic device are executed by the one or more processors 101, the programs perform the following operations:
making a CBCT reconstructed data set;
data preprocessing: performing window-width/window-level adjustment and normalization on the CBCT data;
extracting a CBCT data ROI region: using a target detection algorithm and a 3D segmentation algorithm, respectively, to detect the ROI region in the preprocessed CBCT data and obtain an accurate ROI region;
CBCT two-dimensional tooth positioning and classification: performing tooth detection with a target detection algorithm, locating the bounding box of each tooth to classify it, and adding a CA position-attention mechanism, to obtain the bounding boxes and classification results;
CBCT two-dimensional tooth instance segmentation: performing instance segmentation on all teeth within the tooth ROI region, and combining the result with the obtained tooth classifications and bounding boxes to produce a high-precision segmentation result;
CBCT upper and lower alveolar bone segmentation: utilizing a U-net segmentation algorithm to segment the upper alveolar bone and the lower alveolar bone in the ROI obtained in the S3 to obtain upper and lower alveolar bone segmentation results;
point cloud processing of segmentation results: and performing point cloud processing on the tooth segmentation result obtained in the step S5 and the upper and lower alveolar bone segmentation results obtained in the step S6.
Of course, it will be understood by those skilled in the art that when the one or more programs included in the electronic device are executed by the one or more processors 101, the programs may also perform operations associated with the tooth and alveolar bone reconstruction method of CBCT provided in any of the embodiments of the present invention.
It should be further noted that the present invention also provides a computer readable storage medium, which stores a computer program, which can be executed by a computer processor, to implement the CBCT-based tooth and alveolar bone reconstruction method of the above embodiments. The computer program may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the internet using an internet service provider).
Any modifications, equivalents, improvements, and the like made within the spirit and principles of the application are intended to be included within the scope of the claims of this application.

Claims (10)

1. A CBCT-based tooth and alveolar bone reconstruction method, characterized by comprising the following steps:
s1: making a CBCT reconstructed data set;
S2: data preprocessing: performing window-width/window-level adjustment and normalization on the CBCT data;
S3: extraction of the CBCT (cone beam computed tomography) data ROI (region of interest): detecting the ROI region in the CBCT data set of step S2 using a target detection algorithm and a 3D segmentation algorithm, respectively, to obtain an accurate ROI region;
S4: CBCT two-dimensional tooth positioning and classification: performing tooth detection with a target detection algorithm, locating the bounding box of each tooth to classify it, while adding a CA position-attention mechanism, to obtain the bounding boxes and classification results;
S5: CBCT two-dimensional tooth instance segmentation: performing instance segmentation on all teeth within the tooth ROI region, and combining the result with the tooth classifications and bounding boxes obtained in step S4 to obtain a high-precision segmentation result;
S6: CBCT upper and lower alveolar bone segmentation: utilizing a U-net segmentation algorithm to segment the upper alveolar bone and the lower alveolar bone in the ROI obtained in the S3 to obtain upper and lower alveolar bone segmentation results;
s7: point cloud processing of segmentation results: performing point cloud processing on the tooth segmentation result obtained in the step S5 and the upper and lower alveolar bone segmentation results obtained in the step S6;
wherein step S4 specifically comprises:
S41, cropping the corresponding ROI region from the original two-dimensional images and labels according to the accurate tooth ROI region obtained in step S3;
S42, training a target detection algorithm on the ROI region to obtain the bounding box and category of each tooth.
2. The CBCT-based tooth and alveolar bone reconstruction method according to claim 1, wherein step S1 specifically comprises: collecting different CBCT data and annotating the CBCT tomographic slices to obtain the shape annotation of each tooth and of the upper and lower alveolar bones, together with the category of each tooth.
3. The CBCT-based tooth and alveolar bone reconstruction method according to claim 1, wherein the step S2 specifically comprises:
s21, acquiring original CBCT image data, and storing the original CBCT image data in a DICOM or NII.GZ format;
S22, to fit CBCT data from different manufacturers, adjusting the window width and window level of the CBCT data to [a, b] and normalizing each tomographic slice to a standard image in [0, 255], according to: p'(i, j) = 255 × (clip(HU(i, j), a, b) − a) / (b − a), where p'(i, j) represents the pixel value at row i, column j of the normalized CBCT slice, HU(i, j) represents the HU value at row i, column j of the slice before normalization, and the HU value indicates the degree of CT absorption of different human tissues;
S23: dividing the adjusted three-dimensional image data into a training set, a validation set, and a test set in the proportion 8:1:1;
S24: storing two-dimensional pictures of the divided training, validation, and test sets respectively, wherein the labels are stored in PNG format and the label of each tooth is the pixel value corresponding to that tooth.
4. The CBCT-based tooth and alveolar bone reconstruction method according to claim 1, wherein step S3 specifically comprises:
S31, generating target detection labels from the two-dimensional pictures and labels of step S2, the label format being a bounding box for every tooth and for the jaws;
S32, detecting the tooth and jaw regions with the single-stage target detection algorithm YOLO v5, and extracting the tooth and jaw regions to obtain a loose ROI region;
S33, generating semantic segmentation labels from the three-dimensional image data and labels of step S2, wherein the label pixel value of every tooth is 1, the jaw label is 2, and the pixel values of all other regions are 0;
S34, performing coarse tooth segmentation and jaw segmentation within the region detected in step S32 using a 3D V-Net segmentation algorithm to obtain the segmentation boundary of each tooth, and fitting the overall tooth boundary curve with morphology to obtain an accurate tooth ROI region and a jaw ROI region.
5. The CBCT-based tooth and alveolar bone reconstruction method according to claim 1, wherein: the target detection algorithm applies a decay formula to the NMS algorithm to suit the filtering of tooth bounding boxes, the NMS decay being written as s_i = α · s_i · e^(−β · IoU(M, b_i)²), where IoU(x, y) represents the intersection-over-union of x and y, M is the currently selected box, b_i a remaining box, and α and β are two hyperparameters: α represents the weight and β represents the attenuation weight.
6. The CBCT-based tooth and alveolar bone reconstruction method according to claim 1, wherein the attention mechanism in step S4 is expressed as y_c(i, j) = x_c(i, j) × g_c^h(i) × g_c^w(j), where y_c(i, j) denotes the output of the feature at location (i, j) in the c-th channel, x_c(i, j) is the input, g_c^h(i) is the attention weight of the feature map in the height direction, and g_c^w(j) denotes the attention weight in the width direction.
7. The CBCT-based tooth and alveolar bone reconstruction method according to claim 1, wherein step S5 specifically comprises:
S51, instance segmentation adopts U-Net++ as the base network with a ResNet-34 backbone for segmentation training, obtaining an instance segmentation result;
S52, merging the instance segmentation result obtained in step S51 with the bounding boxes obtained in step S4 to obtain the final segmentation boundary;
S53, re-assigning class labels to the instance segmentation result according to the tooth classification obtained in S4 to obtain the high-precision segmentation result.
8. The CBCT-based tooth and alveolar bone reconstruction method according to claim 1, wherein step S7 specifically comprises:
S71, performing mean filtering with a 3×3 kernel on the tooth segmentation result obtained in step S5 and the upper and lower alveolar bone segmentation results obtained in step S6 to obtain the filtered result;
S72, performing Sobel edge detection on the filtered result to obtain the edge detection result, the Sobel operator being computed as G_x = K_x * A, G_y = K_y * A, G = √(G_x² + G_y²), where G_x and G_y represent the pixel values in the x and y directions respectively, K_x and K_y represent the convolution kernels in the x and y directions, and A represents the original image;
S73, stacking the edge detection results along the Z axis of the image to obtain the final point cloud data P = ∪_{k=1..K} (E_b^k ∪ ∪_{n=1..N} E_{t,n}^k), where E_b denotes the alveolar bone edge detection result, E_t denotes the tooth edge detection, K denotes the number of Z-axis slices, and N denotes the number of tooth categories;
S74, performing Poisson reconstruction on the obtained point cloud data to obtain the final reconstructed alveolar bone and per-instance tooth results.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement the CBCT-based dental and alveolar bone reconstruction method of any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program is executable by a computer processor to carry out the method according to any one of claims 1 to 8.
CN202211082339.1A 2022-09-06 2022-09-06 Tooth and alveolar bone reconstruction method, equipment and medium based on CBCT Pending CN115205469A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211082339.1A CN115205469A (en) 2022-09-06 2022-09-06 Tooth and alveolar bone reconstruction method, equipment and medium based on CBCT

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211082339.1A CN115205469A (en) 2022-09-06 2022-09-06 Tooth and alveolar bone reconstruction method, equipment and medium based on CBCT

Publications (1)

Publication Number Publication Date
CN115205469A true CN115205469A (en) 2022-10-18

Family

ID=83571937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211082339.1A Pending CN115205469A (en) 2022-09-06 2022-09-06 Tooth and alveolar bone reconstruction method, equipment and medium based on CBCT

Country Status (1)

Country Link
CN (1) CN115205469A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115601509A (en) * 2022-11-11 2023-01-13 四川大学(Cn) Extraction method of standardized alveolar bone arch form
CN115661141A (en) * 2022-12-14 2023-01-31 上海牙典医疗器械有限公司 Tooth and alveolar bone segmentation method and system based on CBCT image
CN115688461A (en) * 2022-11-11 2023-02-03 四川大学 Clustering-based device and method for evaluating abnormal degree of arch state of dental arch and alveolar bone
CN115830287A (en) * 2023-02-20 2023-03-21 汉斯夫(杭州)医学科技有限公司 Tooth point cloud fusion method, equipment and medium based on laser oral scanning and CBCT reconstruction
CN115880286A (en) * 2023-02-16 2023-03-31 极限人工智能有限公司 Method, system, medium and electronic device for intelligently planning and recommending oral implant
CN115953583A (en) * 2023-03-15 2023-04-11 山东大学 Tooth segmentation method and system based on iterative boundary optimization and deep learning
CN117095018A (en) * 2023-10-16 2023-11-21 北京朗视仪器股份有限公司 Multi-class tooth segmentation method and device based on CBCT image

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105741288A (en) * 2016-01-29 2016-07-06 北京正齐口腔医疗技术有限公司 Tooth image segmentation method and apparatus
WO2017218040A1 (en) * 2016-06-17 2017-12-21 Carestream Health, Inc. Method and system for 3d cephalometric analysis
CN109903396A (en) * 2019-03-20 2019-06-18 洛阳中科信息产业研究院(中科院计算技术研究所洛阳分所) Automatic segmentation method for three-dimensional tooth models based on surface parameterization
US20200327382A1 (en) * 2019-04-15 2020-10-15 Noblis, Inc. Adapting pre-trained classification algorithms
CN111932518A (en) * 2020-08-12 2020-11-13 杭州深睿博联科技有限公司 Deep learning panoramic dental film focus detection and segmentation method and device
CN113344950A (en) * 2021-07-28 2021-09-03 北京朗视仪器股份有限公司 CBCT image tooth segmentation method combining deep learning with point cloud semantics
CN114638852A (en) * 2022-02-25 2022-06-17 汉斯夫(杭州)医学科技有限公司 Jaw bone and soft tissue identification and reconstruction method, device and medium based on CBCT image
CN114758121A (en) * 2022-03-04 2022-07-15 杭州隐捷适生物科技有限公司 CBCT alveolar bone segmentation system and method based on deep learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUNYUN YANG et al., "Accurate and automatic tooth image segmentation model with deep convolutional neural networks and level set method", Neurocomputing *
LIU Shiwei et al., "Tooth cone-beam computed tomography image segmentation method based on local Gaussian distribution fitting", Journal of Biomedical Engineering *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115601509A (en) * 2022-11-11 2023-01-13 Sichuan University Extraction method of standardized alveolar bone arch form
CN115688461A (en) * 2022-11-11 2023-02-03 Sichuan University Clustering-based device and method for evaluating abnormal degree of arch state of dental arch and alveolar bone
CN115661141A (en) * 2022-12-14 2023-01-31 上海牙典医疗器械有限公司 Tooth and alveolar bone segmentation method and system based on CBCT image
CN115880286A (en) * 2023-02-16 2023-03-31 极限人工智能有限公司 Method, system, medium and electronic device for intelligently planning and recommending oral implant
CN115880286B (en) * 2023-02-16 2023-06-27 极限人工智能有限公司 Method, system, medium and electronic equipment for intelligently planning and recommending oral implant
CN115830287A (en) * 2023-02-20 2023-03-21 汉斯夫(杭州)医学科技有限公司 Tooth point cloud fusion method, equipment and medium based on laser oral scanning and CBCT reconstruction
CN115830287B (en) * 2023-02-20 2023-12-12 汉斯夫(杭州)医学科技有限公司 Tooth point cloud fusion method, device and medium based on laser mouth scanning and CBCT reconstruction
CN115953583A (en) * 2023-03-15 2023-04-11 山东大学 Tooth segmentation method and system based on iterative boundary optimization and deep learning
CN117095018A (en) * 2023-10-16 2023-11-21 北京朗视仪器股份有限公司 Multi-class tooth segmentation method and device based on CBCT image
CN117095018B (en) * 2023-10-16 2023-12-22 北京朗视仪器股份有限公司 Multi-class tooth segmentation method and device based on CBCT image

Similar Documents

Publication Publication Date Title
CN115205469A (en) Tooth and alveolar bone reconstruction method, equipment and medium based on CBCT
KR102581685B1 (en) Classification and 3D modeling of 3D oral and maxillofacial structures using deep learning
US11464467B2 (en) Automated tooth localization, enumeration, and diagnostic system and method
JP7412334B2 Automated classification and taxonomy of 3D tooth data using deep learning methods
BR112020012292A2 Automated prediction of 3D root shape using deep learning methods
US11443423B2 (en) System and method for constructing elements of interest (EoI)-focused panoramas of an oral complex
US10991091B2 (en) System and method for an automated parsing pipeline for anatomical localization and condition classification
Xia et al. Individual tooth segmentation from CT images scanned with contacts of maxillary and mandible teeth
US20220084267A1 (en) Systems and Methods for Generating Quick-Glance Interactive Diagnostic Reports
WO2022213654A1 (en) Ultrasonic image segmentation method and apparatus, terminal device, and storage medium
CN112785609B (en) CBCT tooth segmentation method based on deep learning
CN114638852A (en) Jaw bone and soft tissue identification and reconstruction method, device and medium based on CBCT image
KR20200058316A Automatic tracking method of cephalometric landmarks using dental artificial intelligence technology, and service system
US20220122261A1 (en) Probabilistic Segmentation of Volumetric Images
US20220358740A1 (en) System and Method for Alignment of Volumetric and Surface Scan Images
US20220361992A1 (en) System and Method for Predicting a Crown and Implant Feature for Dental Implant Planning
CN115761226A (en) Oral cavity image segmentation identification method and device, electronic equipment and storage medium
CN114037665A (en) Mandibular neural tube segmentation method, mandibular neural tube segmentation device, electronic apparatus, and storage medium
Xie et al. Automatic Individual Tooth Segmentation in Cone-Beam Computed Tomography Based on Multi-Task CNN and Watershed Transform
US20230252748A1 (en) System and Method for a Patch-Loaded Multi-Planar Reconstruction (MPR)
US20230298272A1 (en) System and Method for an Automated Surgical Guide Design (SGD)
US20220351813A1 (en) Method and apparatus for training automatic tooth charting systems
CN117152507B (en) Tooth health state detection method, device, equipment and storage medium
US20230051400A1 (en) System and Method for Fusion of Volumetric and Surface Scan Images
US20230419631A1 (en) Guided Implant Surgery Planning System and Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 2023-08-11