CN112614127A - Interactive three-dimensional CBCT tooth image segmentation algorithm based on end-to-end - Google Patents

Interactive three-dimensional CBCT tooth image segmentation algorithm based on end-to-end

Info

Publication number
CN112614127A
CN112614127A (application CN202011632774.8A)
Authority
CN
China
Prior art keywords
dimensional
tooth
image
loss function
image segmentation
Prior art date
Legal status
Pending
Application number
CN202011632774.8A
Other languages
Chinese (zh)
Inventor
左飞飞
殷金磊
李晓芸
王亚杰
吴宏新
张文宇
Current Assignee
Largev Instrument Corp ltd
Original Assignee
Largev Instrument Corp ltd
Priority date
Filing date
Publication date
Application filed by Largev Instrument Corp ltd filed Critical Largev Instrument Corp ltd
Priority to CN202011632774.8A priority Critical patent/CN112614127A/en
Publication of CN112614127A publication Critical patent/CN112614127A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30036 Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an end-to-end interactive three-dimensional CBCT dental image segmentation algorithm comprising the following steps: step one, inputting a three-dimensional CBCT tooth image, where the user marks the teeth to be segmented in the three-dimensional CBCT tooth image; step two, acquiring a three-dimensional tooth ROI image; step three, establishing an end-to-end model for three-dimensional CBCT tooth image segmentation, in which deep-level features of the three-dimensional tooth ROI image obtained in step two are extracted with a deep learning neural network (CNN), the solution is computed with a three-dimensional Random Walk algorithm, tooth characteristic prior knowledge is added to the loss function, and the CNN is optimized to obtain the optimized end-to-end model; and step four, inputting the three-dimensional CBCT tooth image into the end-to-end model to obtain the tooth image segmentation result.

Description

Interactive three-dimensional CBCT tooth image segmentation algorithm based on end-to-end
Technical Field
The invention relates to the field of image processing, in particular to an end-to-end-based interactive three-dimensional CBCT dental image segmentation algorithm.
Background
The core businesses of the rapidly developing dental field, orthodontics and implantology, both require accurate three-dimensional modeling of the interior of the oral cavity and a clear delineation of the positions of blood vessels and the thickness of the jaw bone. As the penetration of orthodontic and implant services increases, oral clinics will not be able to do without CBCT (Cone Beam CT) in the future. Various processing methods exist for oral CBCT images; among them, the Random Walk algorithm is an interactive segmentation algorithm that is widely used in 2D image segmentation. If this algorithm is used to segment three-dimensional data directly, the amount of computation becomes a problem. For example, for 512 × 512 × 512 CT data, the order of the L matrix in this method is about 134,000,000, and L is a sparse 7-diagonal matrix; storing this matrix alone would consume about 3.5 GB of memory, and solving the corresponding system of equations is even more difficult.
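As a rough check of these numbers, the short Python sketch below reproduces the order of magnitude, assuming 4-byte storage per nonzero value and a plain 7-point (6-connected) stencil; these storage assumptions are illustrative, not taken from the patent:

```python
# Back-of-the-envelope size of the random-walk Laplacian for a full
# 512 x 512 x 512 CT volume (float32 values, 7-point stencil assumed).
voxels = 512 ** 3                  # order of the L matrix, ~1.34e8
nonzeros = 7 * voxels              # sparse 7-diagonal structure
value_bytes = 4 * nonzeros         # stored values only, 4 bytes each

print(f"matrix order : {voxels:,}")
print(f"value storage: {value_bytes / 2**30:.1f} GiB")  # roughly 3.5 GiB
```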
Disclosure of Invention
In order to solve the technical problem, the invention provides an end-to-end-based interactive three-dimensional CBCT dental image segmentation algorithm, which comprises the following steps:
step one, inputting a three-dimensional CBCT tooth image, where the user marks the teeth to be segmented in the three-dimensional CBCT tooth image;
step two, acquiring a tooth three-dimensional ROI image;
step three, establishing an end-to-end model of three-dimensional CBCT tooth image segmentation, wherein deep level features of the tooth three-dimensional ROI image obtained in the step two are extracted by utilizing a deep learning neural network CNN, solving is carried out by utilizing a three-dimensional Random Walk algorithm, tooth characteristic priori knowledge is added into a loss function, and the deep learning neural network CNN is optimized to obtain the optimized end-to-end model;
and step four, inputting the three-dimensional CBCT tooth image to the end-to-end model to obtain a tooth image segmentation result.
Advantageous effects:
the invention extracts the deep level characteristics of the CBCT data by using the CNN through an end-to-end tooth three-dimensional CBCT image segmentation method combining a deep learning (CNN) of supervised learning and a Random Walk (Random Walk) algorithm, and then solves by using the Random Walk algorithm, thereby improving the efficiency of segmenting the three-dimensional image.
Drawings
FIG. 1: transverse (cross-sectional) plane labeling;
FIG. 2: sagittal plane labeling;
FIG. 3: coronal plane labeling;
FIG. 4: the deep learning neural network of the present invention (3D U-Net);
FIG. 5: the end-to-end model architecture of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The invention provides an interactive three-dimensional CBCT dental image segmentation algorithm: an end-to-end three-dimensional CBCT dental image segmentation method that combines a supervised deep learning neural network (CNN) with the Random Walk algorithm. The method comprises the following steps:
step one, inputting a three-dimensional CBCT tooth image, where the user marks the teeth to be segmented in the three-dimensional CBCT tooth image;
step two, acquiring a tooth three-dimensional ROI image;
step three, establishing an end-to-end model for three-dimensional CBCT tooth image segmentation, wherein deep level features of the tooth three-dimensional ROI image obtained in the step two are extracted by utilizing a deep learning neural network CNN, solving is carried out by utilizing a three-dimensional Random Walk algorithm, tooth characteristic knowledge is added into a loss function, and the deep learning neural network CNN is optimized to obtain the optimized end-to-end model;
and step four, inputting the three-dimensional CBCT tooth image to the end-to-end model to obtain a tooth image segmentation result.
According to an embodiment of the present invention, step one specifically includes: the user first marks the tooth region to be segmented with lines or points on a transverse, sagittal, or coronal plane of the three-dimensional CBCT tooth image; these marks serve as foreground labels, and marks placed in the region around the tooth serve as background labels. The set of marked voxel points is denoted {V_m}. As shown in FIG. 1, the tooth region to be segmented and the background region are marked on the transverse plane of the three-dimensional CBCT tooth image; as shown in FIG. 2, they are marked on the sagittal plane; and as shown in FIG. 3, on the coronal plane.
According to an embodiment of the present invention, step two includes: the ROI region in the image is determined from the coordinates of the marked points. The minimum and maximum X-axis coordinates (X_min, X_max), Y-axis coordinates (Y_min, Y_max), and Z-axis coordinates (Z_min, Z_max) in the set of marked voxel points are obtained. The ROI region is [X_min - n : X_max + n, Y_min - n : Y_max + n, Z_min - n : Z_max + n], which yields the ROI image, where n is an empirical value equal to the pixel width corresponding to the actual tooth size. According to one embodiment of the present invention, when the pixel size is 0.25 mm, n may be taken as 60.
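A minimal sketch of this ROI extraction, assuming the marked voxel coordinates are available as a NumPy array; the function name, return values, and margin handling are illustrative, not the patent's:

```python
import numpy as np

def tooth_roi(volume, marked_points, n=60):
    """Crop the tooth ROI around the user-marked voxels.

    volume        : 3D CBCT array
    marked_points : (K, 3) integer array of marked (x, y, z) coordinates
    n             : margin in voxels (assumed empirical value; e.g. 60
                    when the voxel size is 0.25 mm)
    """
    lo = np.maximum(marked_points.min(axis=0) - n, 0)
    hi = np.minimum(marked_points.max(axis=0) + n + 1, volume.shape)
    roi = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return roi, (lo, hi)
```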
According to an embodiment of the invention, step three establishes an end-to-end model for three-dimensional CBCT tooth image segmentation, as shown in FIG. 5: deep-level features of the three-dimensional tooth ROI image obtained in step two are extracted with a deep learning neural network (CNN), the solution is computed with a three-dimensional Random Walk algorithm (after the result diffusion step), and tooth characteristic prior knowledge is added to the loss function to optimize the CNN, yielding the optimized end-to-end model.
Extracting the deep-level features of the three-dimensional tooth ROI image acquired in step two with a deep learning neural network (CNN) comprises the following steps:
step (3.1), firstly, taking the ROI image as an input image, building a convolution-deconvolution network by taking a three-dimensional convolution deep learning neural network (CNN) as a frame, and mapping the ROI image onto an undirected edge weighted graph by the convolution-deconvolution network; according to an embodiment of the present invention, as shown in FIG. 4, a 3D U-Net network may be employed, which includes a convolutional layer, a max-pooling layer, a fully-connected layer, a deconvolution layer, an activation function layer, and a final output layer.
Step (3.2): the undirected edge-weighted graph is converted into a graph Laplacian matrix, the label probabilities of the unmarked point set X are computed with the random walk algorithm, and, combined with the tooth characteristic knowledge, a regularization weight penalty is added to the loss function to obtain a new loss function. The new loss function balances minimizing the basic loss function L_0 against maximizing consistency with the expected characteristic knowledge; solving it yields the optimal deep learning neural network parameters and optimizes the end-to-end model.
In step (3.2), the undirected edge-weighted graph is converted into a graph Laplacian matrix, where the undirected edge-weighted graph is G = (V, E), V is the set of points in the graph, and E is the set of edges. For adjacent points v_i, v_j ∈ V in graph G, the edge e_{ij} ∈ E between v_i and v_j has edge weight w_e, i.e. w_{ij}. The order of the graph Laplacian matrix L is about 216,000, occupying about 5.7 MB of memory. The graph Laplacian matrix L is:
$$L_{ij}=\begin{cases} d_i=\sum_{v_k\in V_a} w_{ik}, & \text{if } i=j \\ -w_{ij}, & \text{if } v_i \text{ and } v_j \text{ are adjacent} \\ 0, & \text{otherwise} \end{cases}\qquad(1)$$
where v_k ∈ V_a, and V_a is the set of points adjacent to the point v_i.
The point set V of the undirected edge-weighted graph is divided into two parts: V_m, the set of voxel points marked by the user, and V_u, the set of unmarked voxel points. Ordering the voxel points so that the marked voxel points come first in L and the unmarked points after, the Laplacian matrix is divided into blocks:
$$L=\begin{bmatrix} L_m & B \\ B^{T} & L_u \end{bmatrix}\qquad(2)$$
L_m is the matrix of the marked voxel points, L_u is the matrix of the unmarked points, and B is the transformation matrix.
Suppose Z_{i,1} is the probability that the point v_i is marked as foreground. Define a marker matrix Z_m of dimension |V_m| × 1 and an unmarked matrix Z_u of dimension |V_u| × 1. With the random walk algorithm, i.e. minimizing the function:
$$E(X)=X^{T}LX\qquad(3)$$
the label probabilities of the unmarked point set X are obtained. This yields a sparse linear system; during forward propagation, the equation is solved quickly using the conjugate gradient method:
$$L_{u}Z_{u}=-B^{T}Z_{m}\qquad(4)$$
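A sketch of this solve in Python, assuming a precomputed sparse graph Laplacian and index lists for the marked and unmarked voxels; SciPy's conjugate gradient solver is used, and the function name and interface are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import cg

def random_walk_foreground(L, marked_idx, unmarked_idx, z_marked):
    """Solve L_u Z_u = -B^T Z_m (eq. 4) for the foreground probability
    of every unmarked voxel.

    L            : sparse graph Laplacian (CSR) built from the CNN edge weights
    marked_idx   : indices of the user-marked voxels
    unmarked_idx : indices of the remaining voxels
    z_marked     : 1.0 for foreground seeds, 0.0 for background seeds
    """
    L_u = L[unmarked_idx][:, unmarked_idx]   # unmarked block L_u
    B_t = L[unmarked_idx][:, marked_idx]     # coupling block B^T
    rhs = -B_t @ z_marked
    z_u, info = cg(L_u, rhs, atol=1e-8)      # conjugate gradient solve
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")
    return z_u
```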
The minimization of the model's loss function ℓ with respect to the weight parameters θ of the deep neural network is defined as:
$$\min_{\theta}\;\ell\big(Z_u(\theta; I),\,\hat{Z}_u\big)\qquad(5)$$
where $\hat{Z}_u$ is the true label and I is the three-dimensional input image;
then the gradient update when the entire model propagates backwards is:
$$\frac{\partial \ell}{\partial \theta}=\frac{\partial \ell}{\partial Z_u}\,\frac{\partial Z_u}{\partial \theta}\qquad(6)$$
adding the characteristic knowledge about teeth in the CBCT image into a loss function as a regularization weight penalty to obtain a new loss function:
$$L_{new}=L_{0}+\sum_{i\in P} f_{i}(\theta)\qquad(7)$$
where L_0 is the basic loss function, P indexes the items of characteristic knowledge (one or more), f(·) is a regularization function, and θ are the network weight parameters. The new loss function L_new balances minimizing the basic loss function L_0 against maximizing consistency with the expected characteristic knowledge.
According to an embodiment of the invention, for example, the tooth characteristic knowledge is that, in the three-dimensional CBCT tooth image, the gray level inside the tooth region is higher than that of the background region; L2 regularization is used as the regularization function to obtain a weight penalty term:
$$f(\theta)=\lambda\,\lVert W_B\rVert_2^{2}\qquad(8)$$
where V_B is the set of background voxels, W_B is the weight applied to the background voxel points, and λ is a complexity-adjustment parameter. Using the tooth characteristic knowledge, the input data X take the Hu (gray) values of the unmarked points, and V_B = {X_i < threshold}; that is, the magnitude of the input data X depends on the gray values of the CT image, and V_B is the set of voxel points in the input data below a preset threshold. In the deep learning neural network, the forward propagation formula of each layer is y = σ(X, W), where σ is the propagation function and W the weights; the weights multiplying the voxels in V_B are W_B. Adding a weight penalty on low-gray-value voxels improves the network's generalization and reduces the probability of segmenting low-gray-value voxels as foreground. Combining formulas (5), (7) and (8), the final minimization of the loss function over the network parameters θ is defined as:
$$\min_{\theta}\;\frac{1}{N}\sum_{n=1}^{N}\ell\big(Z_u^{(n)},\hat{Z}_u^{(n)}\big)+\lambda\,\lVert W_B\rVert_2^{2}\qquad(9)$$
wherein N is the number of test data.
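One plausible reading of this penalty, sketched below in PyTorch: voxels darker than an intensity threshold are treated as the background set V_B, and the foreground probability assigned to them is penalized in an L2 fashion. The penalty form, the threshold, and λ here are assumptions for illustration:

```python
import torch

def loss_with_tooth_prior(base_loss, fg_probs, volume, threshold, lam=1e-3):
    """L_new = L_0 + regularization penalty (eq. 7), one possible reading.

    fg_probs  : predicted foreground probabilities, same shape as `volume`
    threshold : assumed intensity threshold separating teeth from background
    lam       : assumed penalty strength (lambda)
    """
    background = volume < threshold               # assumed background set V_B
    penalty = lam * fg_probs[background].pow(2).sum()
    return base_loss + penalty
```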
According to an embodiment of the present invention, the basic loss function may optionally be a cross-entropy loss, a Dice loss, or a Focal loss; network training may use any of the following three loss functions:
cross entropy loss function:
$$L_0=-\frac{1}{|V_u|}\sum_{i\in V_u}\Big[\hat{Z}_i\log Z_i+(1-\hat{Z}_i)\log\big(1-Z_i\big)\Big]$$
where |V_u| is the number of unlabeled voxel points, $\hat{Z}_i$ is the true label, and Z_i is the predicted label;
dice loss function:
$$L_0=1-\frac{2\sum_{i\in V_u} Z_i\hat{Z}_i}{\sum_{i\in V_u} Z_i+\sum_{i\in V_u}\hat{Z}_i}$$
focal loss function:
$$L_0=-\frac{1}{|V_u|}\sum_{i\in V_u}\Big[\alpha\,\hat{Z}_i\,(1-Z_i)^{\gamma}\log Z_i+(1-\alpha)\,(1-\hat{Z}_i)\,Z_i^{\gamma}\log(1-Z_i)\Big]$$
where α ∈ [0, 1] adjusts the balance between positive and negative samples, and γ ∈ {0, 1, 2, 3, …} is the focusing parameter that down-weights easy examples.
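For reference, the sketch below gives conventional PyTorch forms of the three candidate basic loss functions; the default α and γ values are common choices and are assumptions, not taken from the patent:

```python
import torch

def cross_entropy_loss(z, z_hat, eps=1e-7):
    """Binary cross-entropy over the unlabeled voxels."""
    z = z.clamp(eps, 1 - eps)
    return -(z_hat * z.log() + (1 - z_hat) * (1 - z).log()).mean()

def dice_loss(z, z_hat, eps=1e-7):
    """Soft Dice loss in its conventional form."""
    inter = (z * z_hat).sum()
    return 1 - (2 * inter + eps) / (z.sum() + z_hat.sum() + eps)

def focal_loss(z, z_hat, alpha=0.25, gamma=2.0, eps=1e-7):
    """Conventional focal loss: alpha balances classes, gamma focuses on
    hard examples (defaults are common choices, not the patent's)."""
    z = z.clamp(eps, 1 - eps)
    pos = -alpha * (1 - z) ** gamma * z_hat * z.log()
    neg = -(1 - alpha) * z ** gamma * (1 - z_hat) * (1 - z).log()
    return (pos + neg).mean()
```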
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the present invention, the present invention is not limited to the scope of these embodiments. Various changes will be apparent to those skilled in the art, and all inventive concepts that make use of the ideas set forth herein are intended to be protected, provided they do not depart from the spirit and scope of the present invention as defined by the appended claims.

Claims (10)

1. An end-to-end based interactive three-dimensional CBCT dental image segmentation algorithm is characterized by comprising the following steps:
step one, inputting a three-dimensional CBCT tooth image, where the user marks the teeth to be segmented in the three-dimensional CBCT tooth image;
step two, acquiring a tooth three-dimensional ROI image;
step three, establishing an end-to-end model of three-dimensional CBCT tooth image segmentation, wherein deep level features of the tooth three-dimensional ROI image obtained in the step two are extracted by utilizing a deep learning neural network CNN, solving is carried out by utilizing a three-dimensional Random Walk algorithm, tooth characteristic priori knowledge is added into a loss function, and the deep learning neural network CNN is optimized to obtain the optimized end-to-end model;
and step four, inputting the three-dimensional CBCT tooth image to the end-to-end model to obtain a tooth image segmentation result.
2. The end-to-end interactive three-dimensional CBCT dental image segmentation algorithm of claim 1, wherein the first step specifically comprises:
the user first marks the tooth region to be segmented with lines or points on a transverse, sagittal, or coronal plane of the three-dimensional CBCT tooth image; these marks serve as foreground labels, and the marked region around the tooth is labeled as background; the set of marked voxel points is {V_m}.
3. The end-to-end interactive three-dimensional CBCT dental image segmentation algorithm of claim 1, wherein the second step comprises:
the ROI region in the image is determined from the coordinates of the marked points: the minimum and maximum X-axis coordinates (X_min, X_max), Y-axis coordinates (Y_min, Y_max), and Z-axis coordinates (Z_min, Z_max) in the set of marked voxel points are obtained; the ROI region is [X_min - n : X_max + n, Y_min - n : Y_max + n, Z_min - n : Z_max + n], which yields the ROI image, where n is the pixel width corresponding to the actual tooth size.
4. The end-to-end interactive three-dimensional CBCT dental image segmentation algorithm as claimed in claim 1, wherein the step three of extracting the deep features of the dental three-dimensional ROI image obtained in the step two by using a deep learning neural network (CNN) comprises:
step (3.1), first taking the ROI image as the input image and building a convolution-deconvolution network using a three-dimensional convolutional deep learning neural network (CNN) as the framework; the convolution-deconvolution network generates deep-level features of the ROI and maps these features onto an undirected edge-weighted graph;
step (3.2), converting the undirected edge-weighted graph into a graph Laplacian matrix, calculating the label probabilities of the unmarked point set X with the random walk algorithm, and, combined with the tooth characteristic prior knowledge, adding a regularization weight penalty to the loss function to obtain a new loss function; the new loss function balances minimizing the basic loss function L_0 against maximizing consistency with the expected characteristic knowledge, and solving it yields the optimal deep learning neural network parameters and optimizes the end-to-end model.
5. The end-to-end interactive three-dimensional CBCT dental image segmentation algorithm as claimed in claim 4, wherein in step (3.2) the undirected edge-weighted graph is converted into a graph Laplacian matrix, where the undirected edge-weighted graph is G = (V, E), V is the set of points in the graph, and E is the set of edges; for adjacent points v_i, v_j ∈ V in graph G, the edge e_{ij} ∈ E between v_i and v_j has edge weight w_e, i.e. w_{ij}; the graph Laplacian matrix L is:
$$L_{ij}=\begin{cases} d_i=\sum_{v_k\in V_a} w_{ik}, & \text{if } i=j \\ -w_{ij}, & \text{if } v_i \text{ and } v_j \text{ are adjacent} \\ 0, & \text{otherwise} \end{cases}\qquad(1)$$
where v_k ∈ V_a, and V_a is the set of points adjacent to the point v_i;
the point set V of the undirected edge-weighted graph is divided into two parts: V_m, the set of voxel points marked by the user, and V_u, the set of unmarked voxel points; ordering the voxel points so that the marked voxel points come first in L and the unmarked points after, the Laplacian matrix is divided into blocks:
$$L=\begin{bmatrix} L_m & B \\ B^{T} & L_u \end{bmatrix}\qquad(2)$$
L_m is the matrix of the marked voxel points, L_u is the matrix of the unmarked points, and B is the transformation matrix.
6. The end-to-end interactive three-dimensional CBCT dental image segmentation algorithm according to claim 5,
suppose Z_{i,1} is the probability that the point v_i is marked as foreground; define a marker matrix Z_m of dimension |V_m| × 1 and an unmarked matrix Z_u of dimension |V_u| × 1; with the random walk algorithm, i.e. minimizing the function:
$$E(X)=X^{T}LX\qquad(3)$$
the label probabilities of the unmarked point set X are obtained; this yields a sparse linear system, and during forward propagation the equation is solved quickly using the conjugate gradient method:
$$L_{u}Z_{u}=-B^{T}Z_{m}\qquad(4)$$
the minimization of the model's loss function ℓ with respect to the weight parameters θ of the deep neural network is defined as:
$$\min_{\theta}\;\ell\big(Z_u(\theta; I),\,\hat{Z}_u\big)\qquad(5)$$
where $\hat{Z}_u$ is the true label and I is the three-dimensional input data;
then the gradient update when the entire model propagates backwards is:
$$\frac{\partial \ell}{\partial \theta}=\frac{\partial \ell}{\partial Z_u}\,\frac{\partial Z_u}{\partial \theta}\qquad(6)$$
7. the interactive three-dimensional CBCT dental image segmentation algorithm based on end-to-end according to claim 1,
adding the prior knowledge about the tooth characteristics in the CBCT image into a loss function as a regularization weight penalty to obtain a new loss function:
$$L_{new}=L_{0}+\sum_{i\in P} f_{i}(\theta)\qquad(7)$$
where L_0 is the basic loss function, P indexes the items of prior knowledge, f(·) is a regularization function, and θ are the network weight parameters; the new loss function L_new balances minimizing the basic loss function L_0 against maximizing consistency with the expected characteristic knowledge.
8. The interactive three-dimensional CBCT dental image segmentation algorithm based on end-to-end according to claim 1,
the priori knowledge of the teeth is that in the three-dimensional CBCT tooth image, the gray level in a tooth area is higher than the gray level in a background area; and using L2 regularization as a regularization function to obtain a weight penalty term:
$$f(\theta)=\lambda\,\lVert W_B\rVert_2^{2}\qquad(8)$$
where V_B is the set of background voxels, W_B is the weight applied to the background voxel points, and λ is a complexity-adjustment parameter; using the tooth characteristic knowledge, the input data X take the Hu (gray) values of the unmarked points, and V_B = {X_i < threshold}; that is, the magnitude of the input data X depends on the gray values of the CT image, and V_B is the set of voxel points in the input data below a preset threshold; in the deep learning neural network, the forward propagation formula of each layer is y = σ(X, W), where σ is the propagation function and W the weights, and the weights multiplying the voxels in V_B are W_B; adding a weight penalty on low-gray-value voxels improves the network's generalization and reduces the probability of segmenting low-gray-value voxels as foreground; combining formulas (5), (7) and (8), the final minimization of the loss function over the network parameters θ is defined as:
$$\min_{\theta}\;\frac{1}{N}\sum_{n=1}^{N}\ell\big(Z_u^{(n)},\hat{Z}_u^{(n)}\big)+\lambda\,\lVert W_B\rVert_2^{2}\qquad(9)$$
wherein N is the number of test data.
9. The end-to-end interactive three-dimensional CBCT dental image segmentation algorithm of claim 7,
the basic loss function is a cross-entropy loss, a Dice loss, or a Focal loss, and network training may use any of the following three loss functions:
cross entropy loss function:
$$L_0=-\frac{1}{|V_u|}\sum_{i\in V_u}\Big[\hat{Z}_i\log Z_i+(1-\hat{Z}_i)\log\big(1-Z_i\big)\Big]$$
where |V_u| is the number of unlabeled voxel points, $\hat{Z}_i$ is the true label, and Z_i is the predicted label;
dice loss function:
$$L_0=1-\frac{2\sum_{i\in V_u} Z_i\hat{Z}_i}{\sum_{i\in V_u} Z_i+\sum_{i\in V_u}\hat{Z}_i}$$
focal loss function:
$$L_0=-\frac{1}{|V_u|}\sum_{i\in V_u}\Big[\alpha\,\hat{Z}_i\,(1-Z_i)^{\gamma}\log Z_i+(1-\alpha)\,(1-\hat{Z}_i)\,Z_i^{\gamma}\log(1-Z_i)\Big]$$
where α ∈ [0, 1] adjusts the balance between positive and negative samples, and γ ∈ {0, 1, 2, 3, …} is the focusing parameter that down-weights easy examples.
10. The end-to-end interactive three-dimensional CBCT dental image segmentation algorithm of claim 7, wherein the network is a 3D U-Net network comprising convolutional layers, max-pooling layers, fully-connected layers, deconvolution layers, activation function layers, and a final output layer.
CN202011632774.8A 2020-12-31 2020-12-31 Interactive three-dimensional CBCT tooth image segmentation algorithm based on end-to-end Pending CN112614127A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011632774.8A CN112614127A (en) 2020-12-31 2020-12-31 Interactive three-dimensional CBCT tooth image segmentation algorithm based on end-to-end

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011632774.8A CN112614127A (en) 2020-12-31 2020-12-31 Interactive three-dimensional CBCT tooth image segmentation algorithm based on end-to-end

Publications (1)

Publication Number Publication Date
CN112614127A true CN112614127A (en) 2021-04-06

Family

ID=75252947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011632774.8A Pending CN112614127A (en) 2020-12-31 2020-12-31 Interactive three-dimensional CBCT tooth image segmentation algorithm based on end-to-end

Country Status (1)

Country Link
CN (1) CN112614127A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313722A (en) * 2021-06-10 2021-08-27 浙江传媒学院 Tooth root image interactive annotation method
CN113344950A (en) * 2021-07-28 2021-09-03 北京朗视仪器股份有限公司 CBCT image tooth segmentation method combining deep learning with point cloud semantics
CN113516784A (en) * 2021-07-27 2021-10-19 四川九洲电器集团有限责任公司 Tooth segmentation modeling method and device
CN114241173A (en) * 2021-12-09 2022-03-25 电子科技大学 Tooth CBCT image three-dimensional segmentation method and system
CN114757960A (en) * 2022-06-15 2022-07-15 汉斯夫(杭州)医学科技有限公司 Tooth segmentation and reconstruction method based on CBCT image and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105957066A (en) * 2016-04-22 2016-09-21 北京理工大学 CT image liver segmentation method and system based on automatic context model
CN107203998A (en) * 2016-03-18 2017-09-26 北京大学 A kind of method that denture segmentation is carried out to pyramidal CT image
US20200320685A1 (en) * 2017-10-02 2020-10-08 Promaton Holding B.V. Automated classification and taxonomy of 3d teeth data using deep learning methods
CN111968120A (en) * 2020-07-15 2020-11-20 电子科技大学 Tooth CT image segmentation method for 3D multi-feature fusion

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203998A (en) * 2016-03-18 2017-09-26 北京大学 A kind of method that denture segmentation is carried out to pyramidal CT image
CN105957066A (en) * 2016-04-22 2016-09-21 北京理工大学 CT image liver segmentation method and system based on automatic context model
US20200320685A1 (en) * 2017-10-02 2020-10-08 Promaton Holding B.V. Automated classification and taxonomy of 3d teeth data using deep learning methods
CN111968120A (en) * 2020-07-15 2020-11-20 电子科技大学 Tooth CT image segmentation method for 3D multi-feature fusion

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313722A (en) * 2021-06-10 2021-08-27 浙江传媒学院 Tooth root image interactive annotation method
CN113313722B (en) * 2021-06-10 2023-09-12 浙江传媒学院 Interactive labeling method for tooth root images
CN113516784A (en) * 2021-07-27 2021-10-19 四川九洲电器集团有限责任公司 Tooth segmentation modeling method and device
CN113516784B (en) * 2021-07-27 2023-05-23 四川九洲电器集团有限责任公司 Tooth segmentation modeling method and device
CN113344950A (en) * 2021-07-28 2021-09-03 北京朗视仪器股份有限公司 CBCT image tooth segmentation method combining deep learning with point cloud semantics
CN114241173A (en) * 2021-12-09 2022-03-25 电子科技大学 Tooth CBCT image three-dimensional segmentation method and system
CN114241173B (en) * 2021-12-09 2023-03-21 电子科技大学 Tooth CBCT image three-dimensional segmentation method and system
CN114757960A (en) * 2022-06-15 2022-07-15 汉斯夫(杭州)医学科技有限公司 Tooth segmentation and reconstruction method based on CBCT image and storage medium

Similar Documents

Publication Publication Date Title
CN112614127A (en) Interactive three-dimensional CBCT tooth image segmentation algorithm based on end-to-end
CN109903396B (en) Tooth three-dimensional model automatic segmentation method based on curved surface parameterization
CN110544264B (en) Temporal bone key anatomical structure small target segmentation method based on 3D deep supervision mechanism
CN107203998B (en) Method for carrying out dentition segmentation on cone beam CT image
CN108665463A (en) A kind of cervical cell image partition method generating network based on confrontation type
CN110689564B (en) Dental arch line drawing method based on super-pixel clustering
CN105389821B (en) It is a kind of that the medical image cutting method being combined is cut based on cloud model and figure
CN110555852B (en) Single tooth based on gray histogram and dental pulp segmentation method thereof
CN114066871B (en) Method for training new coronal pneumonia focus area segmentation model
CN113223010A (en) Method and system for fully automatically segmenting multiple tissues of oral cavity image
CN115953534A (en) Three-dimensional reconstruction method and device, electronic equipment and computer-readable storage medium
CN111724389B (en) Method, device, storage medium and computer equipment for segmenting CT image of hip joint
CN113344950A (en) CBCT image tooth segmentation method combining deep learning with point cloud semantics
Cui et al. Toothpix: Pixel-level tooth segmentation in panoramic x-ray images based on generative adversarial networks
CN116152500A (en) Full-automatic tooth CBCT image segmentation method based on deep learning
Du et al. Mandibular canal segmentation from CBCT image using 3D convolutional neural network with scSE attention
Caliskan et al. Three-dimensional modeling in medical image processing by using fractal geometry
CN117011318A (en) Tooth CT image three-dimensional segmentation method, system, equipment and medium
CN111986216A (en) RSG liver CT image interactive segmentation algorithm based on neural network improvement
CN114677516B (en) Automatic oral mandibular tube segmentation method based on deep neural network
CN109741360B (en) Bone joint segmentation method, device, terminal and readable medium
CN114758073A (en) Oral cavity digital system based on RGBD input and flexible registration
Jin et al. Oral Cone Beam Computed Tomography Images Segmentation Based On Multi-view Fusion
CN110930391A (en) Method, device and equipment for realizing medical image auxiliary diagnosis based on VggNet network model and storage medium
RU2783364C1 (en) Device for creation of multidimensional virtual images of human respiratory organs and method for creation of volumetric images, using device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 100084 room a800b, 8th floor, Tsinghua Tongfang building, Tsinghua garden, Haidian District, Beijing

Applicant after: Beijing Langshi Instrument Co.,Ltd.

Address before: 100084 a8008b, 8th floor, Tsinghua Tongfang building, Tsinghua garden, Haidian District, Beijing

Applicant before: LARGEV INSTRUMENT Corp.,Ltd.