CN114897786A - Automatic extraction method for mandible neural tube - Google Patents
Automatic extraction method for mandible neural tube
- Publication number
- CN114897786A (application CN202210388461.5A)
- Authority
- CN
- China
- Prior art keywords
- region
- slice
- neural tube
- centroid
- row
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 210000000276 neural tube Anatomy 0.000 title claims abstract description 103
- 210000004373 mandible Anatomy 0.000 title claims abstract description 17
- 238000000605 extraction Methods 0.000 title claims abstract description 15
- 238000000034 method Methods 0.000 claims abstract description 14
- 230000011218 segmentation Effects 0.000 claims abstract description 10
- 210000002455 dental arch Anatomy 0.000 claims description 19
- 238000012935 Averaging Methods 0.000 claims description 9
- 230000009467 reduction Effects 0.000 claims description 9
- 238000001914 filtration Methods 0.000 claims description 4
- 238000002591 computed tomography Methods 0.000 claims description 3
- 238000005070 sampling Methods 0.000 claims description 3
- 239000004053 dental implant Substances 0.000 abstract description 5
- 230000008569 process Effects 0.000 abstract description 3
- 238000001356 surgical procedure Methods 0.000 abstract description 3
- 210000003128 head Anatomy 0.000 description 3
- 239000007943 implant Substances 0.000 description 3
- 230000006872 improvement Effects 0.000 description 3
- 238000002372 labelling Methods 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 239000000203 mixture Substances 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000007547 defect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 210000002698 mandibular nerve Anatomy 0.000 description 1
- 210000000214 mouth Anatomy 0.000 description 1
- 210000005036 nerve Anatomy 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30036—Dental; Teeth
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention provides an automatic extraction method for the mandibular neural tube, comprising the following steps: generating a slice image sequence along the dental arch curve; cutting out of each slice a small image containing the neural tube, performing semantic segmentation, and extracting the neural tube region to obtain a filtered small image; restoring the filtered small image into its slice image to obtain a restored slice image; determining the centroid of the neural tube region in each restored slice image; identifying the neural tube exit region and splitting it, yielding the centroids of its upper and lower sub-regions. The centroids are ordered and cubic spline interpolation is performed, generating the final neural tube. The method replaces the current practice of dentists manually annotating the neural tube, thereby improving the efficiency of dental implant surgery and the accuracy of neural tube identification, and reducing the demands on the dentist's experience.
Description
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to an automatic extraction method of a mandible neural tube.
Background
At present, with the continuous improvement of living standards, dental implant surgery is chosen by more and more patients. Because the surgery requires implanting an implant in the patient's oral cavity, special attention must be paid to whether the implant presses on the neural tube of the patient's mandible; otherwise, unexpected harm is caused to the patient. The prerequisite for judging whether the implant presses on the neural tube is that the neural tube in the mandible be accurately identified. The existing approach relies on manual annotation by dentists, which has the following problems: (1) manual annotation consumes the dentist's time; (2) it places high demands on the dentist's experience, and dentists without professional training have difficulty identifying the neural tube and annotate it inaccurately.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an automatic extraction method of a mandible neural tube, which can effectively solve the problems.
The technical scheme adopted by the invention is as follows:
the invention provides an automatic extraction method of a mandible neural tube, which comprises the following steps:
step S1, generating a dental arch curve;
step S2, generating a slice image sequence along the dental arch curve, from start to end, in the direction perpendicular to the curve; the sequence comprises n slice images, namely slice images P1, P2, …, Pn, corresponding respectively to n sampling points from the start to the end of the dental arch curve; all slice images have the same size a1*b1, where a1 is the slice width and b1 is the slice height;
step S3, cutting out a small image containing the neural tube at the same interior position of each slice image;
thus, cutting slice images P1, P2, …, Pn yields the small images Q1, Q2, …, Qn respectively; all small images have the same size a2*b2, where a2 is the small-image width and b2 is the small-image height;
step S4, performing semantic segmentation on each small image Qi, identifying several three-dimensional connected components in the small images, thereby obtaining the semantically segmented small image Qi, where i = 1, 2, …, n;
step S5, analyzing the three-dimensional connected components of the semantically segmented small image Qi, retaining the connected component with the largest number of voxels as the extracted neural tube region Ci, and filtering out the other connected components to obtain the filtered small image Qi;
restoring each filtered small image Qi into the slice image Pi it was cut from in step S3, obtaining the restored slice image Wi;
step S6, in each restored slice image Wi, in which the neural tube region Ci has been identified, determining the centroid Oi of Ci and obtaining its abscissa xi and ordinate yi in Wi; thus, for the n restored slice images, n centroids are obtained in total;
step S7, comparing the circularity of the neural tube regions Ci across the restored slice images Wi, and selecting the restored slice image whose neural tube region has circularity below a circularity threshold, denoted We;
the neural tube region Ce of We is thereby judged to be the neural tube exit region;
the centroid of We is Oe; taking the horizontal line through Oe as the dividing line, the neural tube region Ce is split into two sub-regions, namely an upper sub-region and a lower sub-region; the centroid Oe1 of the upper sub-region and the centroid Oe2 of the lower sub-region are computed respectively;
step S8, from the n centroids identified in step S6, removing the centroid Oe and adding the upper sub-region centroid Oe1 and the lower sub-region centroid Oe2, thereby obtaining n+1 centroids in total, each with known abscissa and ordinate;
ordering the n+1 centroids by their slice images, plotting them in a three-dimensional coordinate system according to their plane coordinates, and performing cubic spline interpolation on them, thereby generating the final neural tube.
Preferably, step S2 specifically includes:
performing a CT scan along the dental arch curve, from start to end, in the direction perpendicular to the curve, to obtain an original slice sequence;
for each original slice in the original slice sequence, obtaining the slice image by bilinear interpolation according to the following formula:
v(x,y)=ax+by+cxy+d
where:
(x, y) are the coordinates at which bilinear interpolation is carried out, and v(x, y) is the pixel value at position (x, y) of the generated slice image after interpolation;
the four coefficients a, b, c and d are calculated from the pixel values of the four pixel points of the original slice nearest to (x, y).
Preferably, in step S7, taking the horizontal line through the centroid Oe as the dividing line, the neural tube region Ce is split into two sub-regions as follows:
1) averaging the row numbers Row of all pixel points in the neural tube region Ce to obtain the abscissa meanX of the centroid Oe, and averaging the column numbers Col of all pixel points in Ce to obtain the ordinate meanY of Oe, thereby giving the coordinates of the centroid Oe of the neural tube region Ce; the formula used is as follows:
[meanX,meanY]=mean(Row,Col)
2) within the neural tube region Ce, all pixel points whose row number is greater than meanX form the upper sub-region upRegion, and all pixel points whose row number is less than meanX form the lower sub-region downRegion; the formulas are as follows:
upRegion=Row>meanX
downRegion=Row<meanX
3) averaging the row numbers Row of all pixel points of the upper sub-region upRegion to obtain the abscissa upCenterX of the upper sub-region centroid Oe1, and averaging the column numbers Col of all pixel points of upRegion to obtain the ordinate upCenterY of Oe1;
averaging the row numbers Row of all pixel points of the lower sub-region downRegion to obtain the abscissa downCenterX of the lower sub-region centroid Oe2, and averaging the column numbers Col of all pixel points of downRegion to obtain the ordinate downCenterY of Oe2; the formulas are expressed as follows:
[upCenterX,upCenterY]=mean(upRegion(Row,Col))
[downCenterX,downCenterY]=mean(downRegion(Row,Col))
thereby obtaining the coordinates of the upper sub-region centroid Oe1 and the lower sub-region centroid Oe2.
The automatic extraction method of the mandible neural tube provided by the invention has the following advantages:
the invention provides an automatic extraction method of a mandible neural tube, which replaces the existing process of manually marking the neural tube by a dentist, thereby improving the efficiency of dental implant surgery, identifying the accuracy of the neural tube and reducing the requirements on the experience of the dentist.
Drawings
Fig. 1 is a schematic flow chart of the automatic extraction method of the mandibular neural tube provided by the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a method for automatically extracting the neural tube of the mandible; with reference to Fig. 1, the method comprises the following steps:
step S1, generating a dental arch curve;
step S2, generating a slice image sequence along the dental arch curve, from start to end, in the direction perpendicular to the curve; the sequence comprises n slice images, namely slice images P1, P2, …, Pn, corresponding respectively to n sampling points from the start to the end of the dental arch curve; all slice images have the same size a1*b1, where a1 is the slice width and b1 is the slice height;
step S2 specifically includes:
performing a CT scan along the dental arch curve, from start to end, in the direction perpendicular to the curve, to obtain an original slice sequence;
for each original slice in the original slice sequence, obtaining the slice image by bilinear interpolation according to the following formula:
v(x,y)=ax+by+cxy+d
where:
(x, y) are the coordinates at which bilinear interpolation is carried out, and v(x, y) is the pixel value at position (x, y) of the generated slice image after interpolation;
the four coefficients a, b, c and d are calculated from the pixel values of the four pixel points of the original slice nearest to (x, y).
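As an illustration of the bilinear formula above, the following sketch (a hypothetical helper, not code from the patent) samples a 2-D image at a fractional position from its four nearest pixels. The two-stage linear blend used here is algebraically equivalent to v(x, y) = ax + by + cxy + d with the coefficients fixed by the four neighbours.

```python
import math

def bilinear_interpolate(img, x, y):
    """Sample image `img` (list of rows, img[row][col]) at fractional
    position (x, y) by bilinear interpolation.  Equivalent to
    v(x, y) = a*x + b*y + c*x*y + d with a, b, c, d determined by the
    four nearest pixel values."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    dx, dy = x - x0, y - y0
    v00 = img[y0][x0]  # top-left neighbour
    v10 = img[y0][x1]  # top-right neighbour
    v01 = img[y1][x0]  # bottom-left neighbour
    v11 = img[y1][x1]  # bottom-right neighbour
    top = v00 * (1 - dx) + v10 * dx      # blend along x, upper row
    bottom = v01 * (1 - dx) + v11 * dx   # blend along x, lower row
    return top * (1 - dy) + bottom * dy  # blend along y
```

For example, sampling midway between four pixels returns their average; production code would additionally clamp (x, y) to the image borders.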
Step S3, cutting out a small image including the neural tube at the same internal area position of each slice image;
for example, determining four vertex coordinates of ABCD; then, in each section image, four vertexes of the ABCD are positioned, and a rectangular area surrounded by the four vertexes of the ABCD is cut off to obtain a small image.
The main purpose of this step is as follows: because the slice is large while the neural tube region is small, if the whole slice were used for subsequent semantic segmentation, it would be difficult to identify and segment the neural tube accurately. Therefore, the invention cuts each slice image so that the cut image mainly contains the neural tube.
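A minimal sketch of this cropping step (the function name and corner convention are hypothetical): the same rectangle, given by its top-left and bottom-right corners, is cut from every slice so that all small images share the size a2*b2.

```python
def crop_roi(slice_img, top_left, bottom_right):
    """Cut the rectangle [r0:r1, c0:c1] out of a slice image stored as
    a list of rows.  Reusing identical corners for every slice keeps
    all cropped small images the same size."""
    r0, c0 = top_left
    r1, c1 = bottom_right
    return [row[c0:c1] for row in slice_img[r0:r1]]
```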
Thus, cutting slice images P1, P2, …, Pn yields the small images Q1, Q2, …, Qn respectively; all small images have the same size a2*b2, where a2 is the small-image width and b2 is the small-image height;
step S4, performing semantic segmentation on each small image Qi, identifying several three-dimensional connected components in the small images, thereby obtaining the semantically segmented small image Qi, where i = 1, 2, …, n;
for example, semantic segmentation is performed using VggNet that is already trained.
Step S5, analyzing the three-dimensional connected components of the semantically segmented small image Qi, retaining the connected component with the largest number of voxels as the extracted neural tube region Ci, and filtering out the other connected components to obtain the filtered small image Qi;
Specifically, three-dimensional connected component analysis is performed on the neural tube regions segmented by VggNet, and the regions are then filtered according to the number of voxels in each connected component. Thus, the filtered small image Qi contains the connected component of the neural tube.
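The voxel-count filtering can be sketched as follows. This is an assumption about the implementation (the patent does not give code): segmented voxels are grouped into 6-connected 3-D components by flood fill, and only the largest component is kept as the neural tube.

```python
from collections import deque

def largest_component(voxels):
    """Keep only the 6-connected 3-D component with the most voxels.
    `voxels` is a set of (x, y, z) tuples marked as neural tube by the
    segmentation; smaller components are discarded as false positives."""
    remaining = set(voxels)
    best = set()
    while remaining:
        seed = remaining.pop()
        comp = {seed}
        queue = deque([seed])
        while queue:  # breadth-first flood fill over face neighbours
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (x + dx, y + dy, z + dz)
                if nb in remaining:
                    remaining.discard(nb)
                    comp.add(nb)
                    queue.append(nb)
        if len(comp) > len(best):
            best = comp
    return best
```

In practice a library labeller (e.g. a connected-component routine over the voxel grid) would be used; the flood fill above just makes the "retain the component with the most voxels" rule concrete.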
The filtered small image Qi is restored into the slice image Pi it was cut from in step S3, obtaining the restored slice image Wi;
Step S6, in each restored slice image Wi, in which the neural tube region Ci has been identified, determining the centroid Oi of Ci and obtaining its abscissa xi and ordinate yi in Wi; thus, for the n restored slice images, n centroids are obtained in total;
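The per-slice centroid of step S6 is simply the mean of the pixel coordinates of the region. A sketch (helper name hypothetical), following the patent's convention that the row mean is the abscissa and the column mean the ordinate:

```python
def region_centroid(pixels):
    """Centroid of a segmented region, given as (row mean, column mean)
    over its (row, col) pixel coordinates - the patent calls the row
    mean the abscissa and the column mean the ordinate."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return sum(rows) / len(rows), sum(cols) / len(cols)
```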
step S7, comparing the circularity of the neural tube regions Ci across the restored slice images Wi, and selecting the restored slice image whose neural tube region has circularity below a circularity threshold, denoted We;
the neural tube region Ce of We is thereby judged to be the neural tube exit region;
the centroid of We is Oe; taking the horizontal line through Oe as the dividing line, the neural tube region Ce is split into two sub-regions, namely an upper sub-region and a lower sub-region; the centroid Oe1 of the upper sub-region and the centroid Oe2 of the lower sub-region are computed respectively;
Specifically, the cross-section of the neural tube exit region is generally elliptical, while the other parts of the neural tube are generally circular. Therefore, to extract the neural tube accurately, the invention identifies the exit region and treats it specially: the judged exit region is split into two sub-regions, and for each sub-region the centroid of the pixel region forming the neural tube is computed.
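The patent does not spell out its circularity measure, so as an assumption the following sketch uses the common definition 4*pi*A / P^2, which is exactly 1.0 for a circle and falls below 1 for elongated, elliptical cross-sections; a slice whose region scores below the threshold would be flagged as the exit region.

```python
import math

def circularity(area, perimeter):
    """A common circularity measure, 4*pi*A / P**2: 1.0 for a perfect
    circle, smaller for elongated shapes.  The patent leaves its exact
    formula unspecified, so this definition is an assumption."""
    return 4.0 * math.pi * area / perimeter ** 2
```

For a circle of radius 10 (A = 100*pi, P = 20*pi) this returns exactly 1.0, while a thin 1x10 rectangle scores far lower, which is the behaviour the threshold test in step S7 relies on.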
In this step, taking the horizontal line through the centroid Oe as the dividing line, the neural tube region Ce is split into two sub-regions as follows:
1) averaging the row numbers Row of all pixel points in the neural tube region Ce to obtain the abscissa meanX of the centroid Oe, and averaging the column numbers Col of all pixel points in Ce to obtain the ordinate meanY of Oe, thereby giving the coordinates of the centroid Oe of the neural tube region Ce; the formula used is as follows:
[meanX,meanY]=mean(Row,Col)
2) within the neural tube region Ce, all pixel points whose row number is greater than meanX form the upper sub-region upRegion, and all pixel points whose row number is less than meanX form the lower sub-region downRegion; the formulas are as follows:
upRegion=Row>meanX
downRegion=Row<meanX
3) averaging the row numbers Row of all pixel points of the upper sub-region upRegion to obtain the abscissa upCenterX of the upper sub-region centroid Oe1, and averaging the column numbers Col of all pixel points of upRegion to obtain the ordinate upCenterY of Oe1;
averaging the row numbers Row of all pixel points of the lower sub-region downRegion to obtain the abscissa downCenterX of the lower sub-region centroid Oe2, and averaging the column numbers Col of all pixel points of downRegion to obtain the ordinate downCenterY of Oe2; the formulas are expressed as follows:
[upCenterX,upCenterY]=mean(upRegion(Row,Col))
[downCenterX,downCenterY]=mean(downRegion(Row,Col))
thereby obtaining the coordinates of the upper sub-region centroid Oe1 and the lower sub-region centroid Oe2.
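The three sub-steps above can be sketched in a few lines (helper names hypothetical). The region is split at the mean row number and the centroid of each half is returned, mirroring [meanX, meanY] = mean(Row, Col), upRegion = Row > meanX, and downRegion = Row < meanX:

```python
def split_exit_region(pixels):
    """Split an exit-region pixel list at the horizontal line through
    its centroid and return the centroids of the upper and lower
    halves, following the patent's convention that the row mean is the
    abscissa (meanX)."""
    mean_x = sum(r for r, _ in pixels) / len(pixels)
    up = [(r, c) for r, c in pixels if r > mean_x]    # upRegion
    down = [(r, c) for r, c in pixels if r < mean_x]  # downRegion

    def centroid(region):
        return (sum(r for r, _ in region) / len(region),
                sum(c for _, c in region) / len(region))

    return centroid(up), centroid(down)  # Oe1, Oe2
```

Note that pixels lying exactly on the dividing row fall in neither half under the strict inequalities the patent states; an implementation might assign them to one side by convention.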
At step S8, from the n centroids identified in step S6, the centroid Oe is removed and the upper sub-region centroid Oe1 and the lower sub-region centroid Oe2 are added, thereby obtaining n+1 centroids in total, each with known abscissa and ordinate;
the n+1 centroids are ordered by their slice images, plotted in a three-dimensional coordinate system according to their plane coordinates, and cubic spline interpolation is performed on them, thereby generating the final neural tube.
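The spline step can be sketched as follows, under the assumption (not stated explicitly in the patent) that each coordinate of the ordered centroids is interpolated separately against the slice index with a natural cubic spline; the implementation below is the standard tridiagonal construction, not the patent's own code.

```python
def natural_cubic_spline(xs, ys):
    """Build a natural cubic spline through the knots (xs, ys) and
    return it as a callable.  xs must be strictly increasing; in the
    method, xs would be slice indices and ys one centroid coordinate."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Right-hand side of the tridiagonal system for second derivatives.
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = (3.0 * (ys[i + 1] - ys[i]) / h[i]
                    - 3.0 * (ys[i] - ys[i - 1]) / h[i - 1])
    # Forward sweep (Thomas algorithm); natural boundary: c[0] = c[n] = 0.
    l = [1.0] * (n + 1)
    mu = [0.0] * (n + 1)
    z = [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2.0 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    # Back substitution for the per-interval polynomial coefficients.
    c = [0.0] * (n + 1)
    b = [0.0] * n
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (ys[j + 1] - ys[j]) / h[j] - h[j] * (c[j + 1] + 2.0 * c[j]) / 3.0
        d[j] = (c[j + 1] - c[j]) / (3.0 * h[j])

    def spline(x):
        # Locate the interval containing x, clamped to the last one.
        j = next((k for k in range(n) if x <= xs[k + 1]), n - 1)
        dx = x - xs[j]
        return ys[j] + b[j] * dx + c[j] * dx ** 2 + d[j] * dx ** 3

    return spline
```

Evaluating one such spline per coordinate (x, y, and slice position) yields a smooth 3-D curve through the n+1 centroids; a library routine such as scipy's CubicSpline would do the same job in production.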
The invention provides an automatic extraction method for the mandibular neural tube whose main steps are as follows: first, a slice image sequence perpendicular to the dental arch line is generated; then each slice image is cut so that the cut view mainly contains the neural tube; then VggNet-based semantic segmentation is performed on the cut views, segmenting regions containing the real neural tube as well as some pseudo neural tubes; then three-dimensional connected component analysis is performed on all regions judged to be neural tube, isolating the real neural tube in the slice images; then the segmented neural tube is restored into the slice image sequence and the centroids are extracted. At the neural tube exit, because the bending of the neural tube is involved, the segmented neural tube must be split into two regions, the centroids of the two regions recorded, and cubic spline interpolation performed between the centroid points, thereby generating the final neural tube. The whole method is therefore simple to implement, strongly real-time, highly robust, and highly accurate in identification.
The invention provides an automatic extraction method for the mandibular neural tube that replaces the current practice of dentists manually annotating the neural tube, thereby improving the efficiency of dental implant surgery and the accuracy of neural tube identification, and reducing the demands on the dentist's experience.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and such modifications and improvements should also be considered within the scope of the present invention.
Claims (3)
1. An automatic extraction method for a neural tube of a mandible is characterized by comprising the following steps:
step S1, generating a dental arch curve;
step S2, generating a slice image sequence along the dental arch curve, from start to end, in the direction perpendicular to the curve; the sequence comprises n slice images, namely slice images P1, P2, …, Pn, corresponding respectively to n sampling points from the start to the end of the dental arch curve; all slice images have the same size a1*b1, where a1 is the slice width and b1 is the slice height;
step S3, cutting out a small image containing the neural tube at the same interior position of each slice image;
thus, cutting slice images P1, P2, …, Pn yields the small images Q1, Q2, …, Qn respectively; all small images have the same size a2*b2, where a2 is the small-image width and b2 is the small-image height;
step S4, performing semantic segmentation on each small image Qi, identifying several three-dimensional connected components in the small images, thereby obtaining the semantically segmented small image Qi, where i = 1, 2, …, n;
step S5, analyzing the three-dimensional connected components of the semantically segmented small image Qi, retaining the connected component with the largest number of voxels as the extracted neural tube region Ci, and filtering out the other connected components to obtain the filtered small image Qi;
restoring each filtered small image Qi into the slice image Pi it was cut from in step S3, obtaining the restored slice image Wi;
Step S6, after reduction, the slice image W i In the neural tube region C has been identified i Determining the neural tube region C i Center of mass O i And obtaining a centroid O i Slice image W after reduction i Abscissa x of (1) i And ordinate y i (ii) a Therefore, for n restored slice images, n centroids are obtained in total;
step S7, reducing the slice images W i Neural canal region C i And (3) comparing the circularity of the neural tube region, and selecting a reduced section picture with the circularity of the neural tube region smaller than a circularity threshold value, wherein the reduced section picture is represented as: reduced section view W e ;
Thus, the slice W after reduction e Neural canal region C e Judging the area of the neural tube outlet;
reduced section view W e Is the centroid O e By the centroid O e The horizontal line is a dividing line, and the neural tube region C e Split into two sub-regions, respectively: an upper sub-region and a lower sub-region; respectively calculating to obtain the mass center O of the upper sub-region e1 And the centroid O of the lower subregion e2 ;
Step S8, removing the centroid O from the n centroids identified in step S6 e Plus the centroid O of the upper subregion e1 And the centroid O of the lower subregion e2 (ii) a Thereby obtaining n +1 centroids in total, each centroidThe abscissa and ordinate of (a) have been obtained;
according to the sequence of the slice images of the n +1 centroids, the n +1 centroids are identified in the three-dimensional coordinate system according to the plane coordinates of the n +1 centroids, and cubic spline interpolation is carried out on the n +1 centroids, so that the final neural tube is generated.
2. The method for automatically extracting the neural tube of the mandible according to claim 1, wherein step S2 specifically comprises:
performing a CT scan along the dental arch curve, from start to end, in the direction perpendicular to the curve, to obtain an original slice sequence;
for each original slice in the original slice sequence, obtaining the slice image by bilinear interpolation according to the following formula:
v(x,y)=ax+by+cxy+d
where:
(x, y) are the coordinates at which bilinear interpolation is carried out, and v(x, y) is the pixel value at position (x, y) of the generated slice image after interpolation;
the four coefficients a, b, c and d are calculated from the pixel values of the four pixel points of the original slice nearest to (x, y).
3. The method of claim 1, wherein in step S7, the center of mass O is used e The horizontal line is a dividing line, and the neural tube region C e Splitting into two subregions by the following specific method:
1) average the row numbers Row of all pixel points contained in the neural tube region C_e to obtain the abscissa meanX of the centroid O_e; average the column numbers Col of all pixel points contained in the neural tube region C_e to obtain the ordinate meanY of the centroid O_e, from which the coordinates of the centroid O_e of the neural tube region C_e are calculated; the formula used is as follows:
[meanX,meanY]=mean(Row,Col)
2) inside the neural tube region C_e, all pixel points whose row number is larger than meanX form the upper sub-region upRegion, and all pixel points whose row number is smaller than meanX form the lower sub-region downRegion, expressed as:
upRegion=Row>meanX
downRegion=Row<meanX
3) average the row numbers Row of all pixel points of the upper sub-region upRegion to obtain the abscissa upCenterX of the centroid O_e1 of the upper sub-region; average the column numbers Col of all pixel points of the upper sub-region upRegion to obtain the ordinate upCenterY of the centroid O_e1 of the upper sub-region;
average the row numbers Row of all pixel points of the lower sub-region downRegion to obtain the abscissa downCenterX of the centroid O_e2 of the lower sub-region; average the column numbers Col of all pixel points of the lower sub-region downRegion to obtain the ordinate downCenterY of the centroid O_e2 of the lower sub-region; the formulas are expressed as follows:
[upCenterX,upCenterY]=mean(upRegion(Row,Col))
[downCenterX,downCenterY]=mean(downRegion(Row,Col))

This yields the coordinates of the centroid O_e1 of the upper sub-region and the centroid O_e2 of the lower sub-region.
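The averaging and splitting of claim 3 can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the region is given as a list of (row, col) pixel coordinates, the abscissa comes from row numbers and the ordinate from column numbers as in the claim, and pixels lying exactly on the dividing line belong to neither sub-region, matching the strict inequalities of the claim.

```python
def split_neural_tube_region(points):
    """points: (row, col) pixels of region C_e; returns O_e, O_e1 (upper), O_e2 (lower)."""
    def centroid(pts):
        # Abscissa = mean row number, ordinate = mean column number, as in the claim.
        return (sum(r for r, _ in pts) / len(pts),
                sum(c for _, c in pts) / len(pts))

    meanX, meanY = centroid(points)
    up_region = [(r, c) for r, c in points if r > meanX]     # upRegion
    down_region = [(r, c) for r, c in points if r < meanX]   # downRegion
    return (meanX, meanY), centroid(up_region), centroid(down_region)

# A toy 4-pixel region: rows {0, 2}, columns {0, 2}.
region = [(0, 0), (0, 2), (2, 0), (2, 2)]
o_e, o_e1, o_e2 = split_neural_tube_region(region)
```

On this toy region the centroid is (1.0, 1.0), the upper sub-region centroid is (2.0, 1.0) and the lower sub-region centroid is (0.0, 1.0), so the single centroid is replaced by two vertically separated ones, as step S8 requires.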
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210388461.5A CN114897786B (en) | 2022-04-13 | 2022-04-13 | Automatic extraction method of mandibular nerve tube |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210388461.5A CN114897786B (en) | 2022-04-13 | 2022-04-13 | Automatic extraction method of mandibular nerve tube |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114897786A true CN114897786A (en) | 2022-08-12 |
CN114897786B CN114897786B (en) | 2024-04-16 |
Family
ID=82717095
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210388461.5A Active CN114897786B (en) | 2022-04-13 | 2022-04-13 | Automatic extraction method of mandibular nerve tube |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114897786B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110895816A (en) * | 2019-10-14 | 2020-03-20 | 广州医科大学附属口腔医院(广州医科大学羊城医院) | Method for measuring alveolar bone grinding amount before mandibular bone planting plan operation |
KR20210092974A (en) * | 2020-01-17 | 2021-07-27 | 오스템임플란트 주식회사 | Method for creating nerve tube line and dental implant surgery planning device therefor |
CN113643446A (en) * | 2021-08-11 | 2021-11-12 | 北京朗视仪器股份有限公司 | Automatic marking method and device for mandibular neural tube and electronic equipment |
CN113907903A (en) * | 2021-09-03 | 2022-01-11 | 张志宏 | Design method for implant position in edentulous area by using artificial intelligence technology |
Non-Patent Citations (1)
Title |
---|
杨晓莉; 邹德荣: "Research progress on the mandibular bifid neural canal" (下颌双神经管的研究进展), Journal of Oral and Maxillofacial Surgery (口腔颌面外科杂志), no. 05, 28 October 2015 (2015-10-28) *
Also Published As
Publication number | Publication date |
---|---|
CN114897786B (en) | 2024-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9439610B2 (en) | Method for teeth segmentation and alignment detection in CBCT volume | |
CN109377534B (en) | Nonlinear oral cavity CT panoramic image synthesis method capable of automatically sampling thickness detection | |
CN111415419B (en) | Method and system for making tooth restoration model based on multi-source image | |
CN113223010B (en) | Method and system for multi-tissue full-automatic segmentation of oral cavity image | |
CN112120810A (en) | Three-dimensional data generation method of tooth orthodontic concealed appliance | |
CN112102495B (en) | Dental arch surface generation method based on CBCT image | |
CN111260672B (en) | Method for guiding and segmenting teeth by using morphological data | |
CN106846346B (en) | Method for rapidly extracting pelvis outline of sequence CT image based on key frame mark | |
Poonsri et al. | Teeth segmentation from dental x-ray image by template matching | |
CN108269294B (en) | Oral CBCT image information analysis method and system | |
CN110610198A (en) | Mask RCNN-based automatic oral CBCT image mandibular neural tube identification method | |
WO2024046400A1 (en) | Tooth model generation method and apparatus, and electronic device and storage medium | |
CN116309302A (en) | Extraction method of key points of skull lateral position slice | |
CN110619646B (en) | Single tooth extraction method based on panorama | |
CN106846314B (en) | Image segmentation method based on postoperative cornea OCT image data | |
CN114897786A (en) | Automatic extraction method for mandible neural tube | |
Tong et al. | Landmarking of cephalograms using a microcomputer system | |
CN113786262B (en) | Preparation method of dental crown lengthening operation guide plate | |
EP4296944A1 (en) | Method for segmenting computed tomography image of teeth | |
US20220358740A1 (en) | System and Method for Alignment of Volumetric and Surface Scan Images | |
CN107564094B (en) | Tooth model feature point automatic identification algorithm based on local coordinates | |
CN112164075B (en) | Segmentation method for maxillary sinus membrane morphology change | |
CN109993754B (en) | Method and system for skull segmentation from images | |
CN114723765B (en) | Automatic extraction method of dental archwire | |
CN112927225A (en) | Wisdom tooth growth state auxiliary detection system based on artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||