CN114897786A - Automatic extraction method for mandible neural tube - Google Patents

Automatic extraction method for mandible neural tube

Info

Publication number
CN114897786A
Authority
CN
China
Prior art keywords
region
slice
neural tube
centroid
row
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210388461.5A
Other languages
Chinese (zh)
Other versions
CN114897786B (en)
Inventor
祝胜山
汪阳
房鹤
崔小飞
田忠正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Fengzhun Robot Technology Co ltd
Original Assignee
Sichuan Fengzhun Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Fengzhun Robot Technology Co ltd filed Critical Sichuan Fengzhun Robot Technology Co ltd
Priority to CN202210388461.5A priority Critical patent/CN114897786B/en
Publication of CN114897786A publication Critical patent/CN114897786A/en
Application granted granted Critical
Publication of CN114897786B publication Critical patent/CN114897786B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30036 Dental; Teeth
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides an automatic extraction method for the mandibular neural tube, which comprises the following steps: generating a sequence of slice images along the dental arch curve; cropping from each slice image a small image that includes the neural tube, performing semantic segmentation, and extracting the neural tube region to obtain a filtered small image; restoring the filtered small image into its slice image to obtain a restored slice image; determining the centroid of the neural tube region in each restored slice image; and identifying the neural tube exit region and splitting it, yielding the centroid of an upper sub-region and the centroid of a lower sub-region. The centroids are sorted and cubic spline interpolation is performed, generating the final neural tube. The method replaces the existing process of manual labeling of the neural tube by dentists, thereby improving the efficiency of dental implant surgery and the accuracy of neural tube identification, and reducing the demands on the dentist's experience.

Description

Automatic extraction method for mandible neural tube
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to an automatic extraction method for the mandibular neural tube.
Background
At present, with the continuous improvement of living standards, dental implant surgery is chosen by more and more patients. Because the surgery requires implanting a fixture in the patient's oral cavity, special attention must be paid to whether the implant presses on the neural tube of the patient's mandible; otherwise, unintended injury is caused to the patient. The prerequisite for judging whether the implant presses on the neural tube is that the neural tube in the mandible be accurately identified. The existing method depends on manual labeling by dentists, which has the following problems: (1) manual labeling consumes the dentist's time; (2) it places high demands on the dentist's experience, and a dentist without professional training has difficulty identifying the neural tube and labels it inaccurately.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an automatic extraction method for the mandibular neural tube, which can effectively solve the above problems.
The technical scheme adopted by the invention is as follows:
The invention provides an automatic extraction method for the mandibular neural tube, which comprises the following steps:
step S1, generating a dental arch curve;
step S2, generating a slice image sequence from head to tail along the dental arch curve, in the direction perpendicular to the dental arch curve; the slice image sequence comprises n slice images, respectively: slice image P₁, slice image P₂, …, slice image Pₙ, corresponding to n sampling points from the beginning to the end of the dental arch curve; all slice images have the same size a₁*b₁, where a₁ is the slice width and b₁ is the slice height;
step S3, cutting out, at the same internal region position of each slice image, a small image that includes the neural tube;
Thus, after cropping the slice images P₁, P₂, …, Pₙ, the small images Q₁, Q₂, …, Qₙ are obtained respectively; all small images have the same size a₂*b₂, where a₂ is the small-image width and b₂ is the small-image height;
step S4, performing semantic segmentation on each small image Qᵢ, identifying a plurality of three-dimensional connected domains in each small image Qᵢ, thereby obtaining the semantically segmented small image Qᵢ, where i = 1, 2, …, n;
step S5, analyzing the three-dimensional connected domains of the semantically segmented small image Qᵢ, keeping the three-dimensional connected domain with the largest number of voxels as the extracted neural tube region Cᵢ, and filtering out the other three-dimensional connected domains to obtain the filtered small image Qᵢ;
The filtered small image Qᵢ is restored into the slice image Pᵢ from which it was cropped in step S3, obtaining the restored slice image, denoted Wᵢ;
Step S6, after reduction, the slice image W i In the neural tube region C has been identified i Determining the neural tube region C i Center of mass O i And obtain the centroid O i Slice image W after reduction i Abscissa x of (1) i And ordinate y i (ii) a Therefore, for n restored slice images, n centroids are obtained in total;
step S7, comparing the circularity of the neural tube region Cᵢ of each restored slice image Wᵢ with a circularity threshold, and selecting the restored slice image whose neural tube region has a circularity smaller than the circularity threshold, denoted as the restored slice image Wₑ;
The neural tube region Cₑ of the restored slice image Wₑ is thereby judged to be the neural tube exit region;
The centroid of the restored slice image Wₑ is the centroid Oₑ; taking the horizontal line through the centroid Oₑ as the dividing line, the neural tube region Cₑ is split into two sub-regions: an upper sub-region and a lower sub-region; the centroid Oₑ₁ of the upper sub-region and the centroid Oₑ₂ of the lower sub-region are calculated respectively;
step S8, removing the centroid Oₑ from the n centroids determined in step S6, and adding the centroid Oₑ₁ of the upper sub-region and the centroid Oₑ₂ of the lower sub-region, thereby obtaining n+1 centroids in total, the abscissa and ordinate of each centroid having been obtained;
According to the slice-image order of the n+1 centroids and their plane coordinates, the n+1 centroids are marked in a three-dimensional coordinate system, and cubic spline interpolation is performed on them, thereby generating the final neural tube.
Preferably, step S2 specifically comprises:
performing CT scanning from head to tail along the dental arch curve, in the direction perpendicular to the dental arch curve, to obtain an original slice sequence;
for each original slice in the original slice sequence, obtaining the slice image by bilinear interpolation according to the following formula:
v(x, y) = ax + by + cxy + d
where:
(x, y) are the coordinates at which bilinear interpolation is carried out, and v(x, y) is the pixel value at position (x, y) of the interpolated slice image;
the four coefficients a, b, c and d are computed from the pixel values of the four pixels of the original slice nearest to (x, y).
Preferably, in step S7, taking the horizontal line through the centroid Oₑ as the dividing line, the neural tube region Cₑ is split into two sub-regions by the following specific method:
1) The row numbers Row of all pixel points contained in the neural tube region Cₑ are averaged to obtain the abscissa meanX of the centroid Oₑ; the column numbers Col of all pixel points contained in the neural tube region Cₑ are averaged to obtain the ordinate meanY of the centroid Oₑ; the coordinates of the centroid Oₑ of the neural tube region Cₑ are thereby calculated. The formula used is as follows:
[meanX, meanY] = mean(Row, Col)
2) Inside the neural tube region Cₑ, all pixel points whose row number is larger than meanX form the upper sub-region upRegion; all pixel points whose row number is smaller than meanX form the lower sub-region downRegion. The formulas are as follows:
upRegion = Row > meanX
downRegion = Row < meanX
3) The row numbers Row of all pixel points of the upper sub-region upRegion are averaged to obtain the abscissa upCenterX of the centroid Oₑ₁ of the upper sub-region; the column numbers Col of all pixel points of the upper sub-region upRegion are averaged to obtain the ordinate upCenterY of the centroid Oₑ₁;
the row numbers Row of all pixel points of the lower sub-region downRegion are averaged to obtain the abscissa downCenterX of the centroid Oₑ₂ of the lower sub-region; the column numbers Col of all pixel points of the lower sub-region downRegion are averaged to obtain the ordinate downCenterY of the centroid Oₑ₂. The formulas are expressed as follows:
[upCenterX, upCenterY] = mean(upRegion(Row, Col))
[downCenterX, downCenterY] = mean(downRegion(Row, Col))
The coordinates of the centroid Oₑ₁ of the upper sub-region and the centroid Oₑ₂ of the lower sub-region are thereby obtained.
The automatic extraction method for the mandibular neural tube provided by the invention has the following advantages:
It replaces the existing process of manual labeling of the neural tube by dentists, thereby improving the efficiency of dental implant surgery and the accuracy of neural tube identification, and reducing the demands on the dentist's experience.
Drawings
Fig. 1 is a schematic flow chart of the automatic extraction method for the mandibular neural tube provided by the present invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions, and the advantageous effects of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
With reference to Fig. 1, the invention provides a method for automatically extracting the mandibular neural tube, which comprises the following steps:
step S1, generating a dental arch curve;
step S2, generating a slice image sequence from head to tail along the dental arch curve, in the direction perpendicular to the dental arch curve; the slice image sequence comprises n slice images, respectively: slice image P₁, slice image P₂, …, slice image Pₙ, corresponding to n sampling points from the beginning to the end of the dental arch curve; all slice images have the same size a₁*b₁, where a₁ is the slice width and b₁ is the slice height;
Step S2 specifically comprises:
performing CT scanning from head to tail along the dental arch curve, in the direction perpendicular to the dental arch curve, to obtain an original slice sequence;
for each original slice in the original slice sequence, obtaining the slice image by bilinear interpolation according to the following formula:
v(x, y) = ax + by + cxy + d
where:
(x, y) are the coordinates at which bilinear interpolation is carried out, and v(x, y) is the pixel value at position (x, y) of the interpolated slice image;
the four coefficients a, b, c and d are computed from the pixel values of the four pixels of the original slice nearest to (x, y).
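As an illustration of this resampling step, the following Python sketch evaluates v(x, y) from the four nearest pixels of one original slice; the function name and the use of NumPy are assumptions for illustration, not code from the patent.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Evaluate v(x, y) = a*x + b*y + c*x*y + d from the four nearest pixels.

    img is a 2-D array holding one original slice; (x, y) are fractional
    (row, column) coordinates. Illustrative sketch, not the patent's code.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[0] - 1)
    y1 = min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    # The weighted combination of the four neighbours below is algebraically
    # equivalent to solving for the coefficients a, b, c, d and evaluating.
    return ((1 - dx) * (1 - dy) * img[x0, y0]
            + dx * (1 - dy) * img[x1, y0]
            + (1 - dx) * dy * img[x0, y1]
            + dx * dy * img[x1, y1])
```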
Step S3, cutting out a small image including the neural tube at the same internal area position of each slice image;
for example, determining four vertex coordinates of ABCD; then, in each section image, four vertexes of the ABCD are positioned, and a rectangular area surrounded by the four vertexes of the ABCD is cut off to obtain a small image.
The main purposes of the steps are as follows: because the section size is large and the neural tube area is small, if the whole section is used for subsequent semantic segmentation, the neural tube is difficult to accurately identify and segment. Therefore, the invention cuts each slice image, so that the cut image mainly comprises a neural tube.
Thus, for the slice P 1 Slice image P 2 …, slice P n After cutting, respectively obtaining small pictures Q 1 Small picture Q 2 …, panel Q n (ii) a The images of the respective panels are the same in size and are: a is 2 *b 2 (ii) a Wherein, a 2 Is a small image width; b 2 Is a small picture height;
step S4, performing semantic segmentation on each small image Qᵢ, identifying a plurality of three-dimensional connected domains in each small image Qᵢ, thereby obtaining the semantically segmented small image Qᵢ, where i = 1, 2, …, n;
For example, semantic segmentation is performed using an already-trained VggNet.
Step S5, the small graph Q after semantic division i Analyzing the three-dimensional connected domain, and reserving the three-dimensional connected domain with the maximum number of voxels as the extracted neural tube region C i Filtering other three-dimensional connected domains to obtain a filtered small graph Q i
Specifically, analysis of a three-dimensional connected domain is performed on a neural tube region segmented by VggNet, and then filtering of the neural tube region is performed according to the number of voxels in the three-dimensional connected domain. Thus, the filtered panel Q i A communicating domain comprising the neural tube.
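A sketch of this voxel-count filter using SciPy's connected-component labelling; stacking the segmented small images into one 3-D volume is an assumption consistent with the patent's three-dimensional connected domains.

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask_volume):
    """Keep only the 3-D connected domain with the most voxels.

    mask_volume: boolean array of shape (n, a2, b2) stacked from the
    semantically segmented small images Q_1..Q_n.
    """
    labels, num = ndimage.label(mask_volume)
    if num == 0:
        return mask_volume  # nothing was segmented
    counts = np.bincount(labels.ravel())[1:]  # voxel count per label; skip background
    largest = counts.argmax() + 1
    return labels == largest
```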
The filtered small image Qᵢ is restored into the slice image Pᵢ from which it was cropped in step S3, obtaining the restored slice image, denoted Wᵢ;
Step S6, after reduction, the slice image W i In the neural tube region C has been identified i Determining the neural tube region C i Center of mass O i And obtain the centroid O i Slice image W after reduction i Abscissa x of (1) i And ordinate y i (ii) a Therefore, for n restored slice images, n centroids are obtained in total;
step S7, comparing the circularity of the neural tube region Cᵢ of each restored slice image Wᵢ with a circularity threshold, and selecting the restored slice image whose neural tube region has a circularity smaller than the circularity threshold, denoted as the restored slice image Wₑ;
The neural tube region Cₑ of the restored slice image Wₑ is thereby judged to be the neural tube exit region;
The centroid of the restored slice image Wₑ is the centroid Oₑ; taking the horizontal line through the centroid Oₑ as the dividing line, the neural tube region Cₑ is split into two sub-regions: an upper sub-region and a lower sub-region; the centroid Oₑ₁ of the upper sub-region and the centroid Oₑ₂ of the lower sub-region are calculated respectively;
Specifically, the neural tube exit region is generally elliptical, while the other cross-sections of the neural tube are generally circular. Therefore, to extract the neural tube accurately, the invention identifies the neural tube exit region and treats it specially: the judged exit region is split into two sub-regions, and for each sub-region the centroid of the pixels making up the neural tube is calculated. A sketch of how the exit slice might be detected by its circularity follows.
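The patent does not define circularity; a common choice, 4πA/P² (close to 1 for a circle, lower for an ellipse), is assumed in the following sketch, together with an illustrative threshold value.

```python
import numpy as np
from scipy import ndimage

def circularity(mask2d):
    """Approximate 4*pi*A / P^2 for the neural tube region in one slice.

    Both this definition and the boundary-pixel perimeter estimate are
    assumptions; the patent only states that the exit region's
    circularity falls below a threshold.
    """
    area = mask2d.sum()
    boundary = mask2d & ~ndimage.binary_erosion(mask2d)  # pixels touching background
    perimeter = boundary.sum()
    if perimeter == 0:
        return 0.0
    return 4.0 * np.pi * area / perimeter ** 2

def is_exit_slice(mask2d, threshold=0.85):
    """The threshold value is illustrative; the patent leaves it open."""
    return circularity(mask2d) < threshold
```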
In this step, taking the horizontal line through the centroid Oₑ as the dividing line, the neural tube region Cₑ is split into two sub-regions by the following specific method:
1) The row numbers Row of all pixel points contained in the neural tube region Cₑ are averaged to obtain the abscissa meanX of the centroid Oₑ; the column numbers Col of all pixel points contained in the neural tube region Cₑ are averaged to obtain the ordinate meanY of the centroid Oₑ; the coordinates of the centroid Oₑ of the neural tube region Cₑ are thereby calculated. The formula used is as follows:
[meanX, meanY] = mean(Row, Col)
2) Inside the neural tube region Cₑ, all pixel points whose row number is larger than meanX form the upper sub-region upRegion; all pixel points whose row number is smaller than meanX form the lower sub-region downRegion. The formulas are as follows:
upRegion = Row > meanX
downRegion = Row < meanX
3) The row numbers Row of all pixel points of the upper sub-region upRegion are averaged to obtain the abscissa upCenterX of the centroid Oₑ₁ of the upper sub-region; the column numbers Col of all pixel points of the upper sub-region upRegion are averaged to obtain the ordinate upCenterY of the centroid Oₑ₁;
the row numbers Row of all pixel points of the lower sub-region downRegion are averaged to obtain the abscissa downCenterX of the centroid Oₑ₂ of the lower sub-region; the column numbers Col of all pixel points of the lower sub-region downRegion are averaged to obtain the ordinate downCenterY of the centroid Oₑ₂. The formulas are expressed as follows:
[upCenterX, upCenterY] = mean(upRegion(Row, Col))
[downCenterX, downCenterY] = mean(downRegion(Row, Col))
The coordinates of the centroid Oₑ₁ of the upper sub-region and the centroid Oₑ₂ of the lower sub-region are thereby obtained.
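The three numbered steps above map directly onto array operations; a sketch follows, with variable names following the patent's Row/Col notation (the use of NumPy is an assumption).

```python
import numpy as np

def split_exit_region(mask2d):
    """Split the exit region C_e at the centroid row and return the
    centroids O_e1 and O_e2 of the upper and lower sub-regions."""
    rows, cols = np.nonzero(mask2d)  # Row and Col of all region pixels
    mean_x = rows.mean()             # meanX, abscissa of the centroid O_e

    up = rows > mean_x               # upRegion: Row > meanX
    down = rows < mean_x             # downRegion: Row < meanX

    up_center = (rows[up].mean(), cols[up].mean())        # (upCenterX, upCenterY)
    down_center = (rows[down].mean(), cols[down].mean())  # (downCenterX, downCenterY)
    return up_center, down_center
```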
step S8, removing the centroid Oₑ from the n centroids determined in step S6, and adding the centroid Oₑ₁ of the upper sub-region and the centroid Oₑ₂ of the lower sub-region, thereby obtaining n+1 centroids in total, the abscissa and ordinate of each centroid having been obtained;
According to the slice-image order of the n+1 centroids and their plane coordinates, the n+1 centroids are marked in a three-dimensional coordinate system, and cubic spline interpolation is performed on them, thereby generating the final neural tube.
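A sketch of this final interpolation step with SciPy's CubicSpline, parameterising the ordered centroids by their slice index; representing each centroid as a (slice index, x, y) point in the three-dimensional coordinate system is an assumption consistent with the description.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fit_neural_tube(centroids, samples=500):
    """Fit a cubic spline through the n+1 ordered centroids.

    centroids: array of shape (m, 3) holding (slice_index, x, y) points,
    already sorted by slice order along the dental arch curve.
    Returns densely sampled points along the final neural tube curve.
    """
    t = np.arange(len(centroids))
    spline = CubicSpline(t, centroids, axis=0)  # one spline per coordinate
    return spline(np.linspace(0, len(centroids) - 1, samples))
```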
The invention provides an automatic extraction method for the mandibular neural tube whose main steps are as follows: first, a slice image sequence perpendicular to the dental arch curve is generated; the slice images are then cropped so that the cropped views mainly contain the neural tube portion; VggNet-based semantic segmentation is performed on the cropped views, segmenting regions that include the real neural tube as well as some pseudo neural tubes; three-dimensional connected-domain analysis is then performed on all regions considered to be neural tube, isolating the real neural tube in the slice images; the segmented neural tube is then mapped back into the slice image sequence, and centroid extraction is carried out. At the neural tube exit, because of the curvature of the neural tube, the segmented region must be split into two sub-regions and the centroids of both recorded; cubic spline interpolation is performed between the centroid points to generate the final neural tube. The whole method is therefore simple to implement, strongly real-time, highly robust, and highly accurate in identification.
The invention provides an automatic extraction method for the mandibular neural tube that replaces the existing process of manual labeling of the neural tube by dentists, thereby improving the efficiency of dental implant surgery and the accuracy of neural tube identification, and reducing the demands on the dentist's experience.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and such modifications and improvements should also be considered within the scope of the present invention.

Claims (3)

1. An automatic extraction method for the mandibular neural tube, characterized by comprising the following steps:
step S1, generating a dental arch curve;
step S2, generating a slice image sequence from head to tail along the dental arch curve, in the direction perpendicular to the dental arch curve; the slice image sequence comprises n slice images, respectively: slice image P₁, slice image P₂, …, slice image Pₙ, corresponding to n sampling points from the beginning to the end of the dental arch curve; all slice images have the same size a₁*b₁, where a₁ is the slice width and b₁ is the slice height;
step S3, cutting out, at the same internal region position of each slice image, a small image that includes the neural tube;
Thus, after cropping the slice images P₁, P₂, …, Pₙ, the small images Q₁, Q₂, …, Qₙ are obtained respectively; all small images have the same size a₂*b₂, where a₂ is the small-image width and b₂ is the small-image height;
step S4, performing semantic segmentation on each small image Qᵢ, identifying a plurality of three-dimensional connected domains in each small image Qᵢ, thereby obtaining the semantically segmented small image Qᵢ, where i = 1, 2, …, n;
step S5, analyzing the three-dimensional connected domains of the semantically segmented small image Qᵢ, keeping the three-dimensional connected domain with the largest number of voxels as the extracted neural tube region Cᵢ, and filtering out the other three-dimensional connected domains to obtain the filtered small image Qᵢ;
The filtered small image Qᵢ is restored into the slice image Pᵢ from which it was cropped in step S3, obtaining the restored slice image, denoted Wᵢ;
Step S6, after reduction, the slice image W i In the neural tube region C has been identified i Determining the neural tube region C i Center of mass O i And obtaining a centroid O i Slice image W after reduction i Abscissa x of (1) i And ordinate y i (ii) a Therefore, for n restored slice images, n centroids are obtained in total;
step S7, comparing the circularity of the neural tube region Cᵢ of each restored slice image Wᵢ with a circularity threshold, and selecting the restored slice image whose neural tube region has a circularity smaller than the circularity threshold, denoted as the restored slice image Wₑ;
The neural tube region Cₑ of the restored slice image Wₑ is thereby judged to be the neural tube exit region;
The centroid of the restored slice image Wₑ is the centroid Oₑ; taking the horizontal line through the centroid Oₑ as the dividing line, the neural tube region Cₑ is split into two sub-regions: an upper sub-region and a lower sub-region; the centroid Oₑ₁ of the upper sub-region and the centroid Oₑ₂ of the lower sub-region are calculated respectively;
step S8, removing the centroid Oₑ from the n centroids determined in step S6, and adding the centroid Oₑ₁ of the upper sub-region and the centroid Oₑ₂ of the lower sub-region, thereby obtaining n+1 centroids in total, the abscissa and ordinate of each centroid having been obtained;
According to the slice-image order of the n+1 centroids and their plane coordinates, the n+1 centroids are marked in a three-dimensional coordinate system, and cubic spline interpolation is performed on them, thereby generating the final neural tube.
2. The method for automatically extracting the mandibular neural tube as claimed in claim 1, wherein step S2 specifically comprises:
performing CT scanning from head to tail along the dental arch curve, in the direction perpendicular to the dental arch curve, to obtain an original slice sequence;
for each original slice in the original slice sequence, obtaining the slice image by bilinear interpolation according to the following formula:
v(x, y) = ax + by + cxy + d
where:
(x, y) are the coordinates at which bilinear interpolation is carried out, and v(x, y) is the pixel value at position (x, y) of the interpolated slice image;
the four coefficients a, b, c and d are computed from the pixel values of the four pixels of the original slice nearest to (x, y).
3. The method of claim 1, wherein in step S7, taking the horizontal line through the centroid Oₑ as the dividing line, the neural tube region Cₑ is split into two sub-regions by the following specific method:
1) the row numbers Row of all pixel points contained in the neural tube region Cₑ are averaged to obtain the abscissa meanX of the centroid Oₑ; the column numbers Col of all pixel points contained in the neural tube region Cₑ are averaged to obtain the ordinate meanY of the centroid Oₑ; the coordinates of the centroid Oₑ of the neural tube region Cₑ are thereby calculated, the formula used being as follows:
[meanX, meanY] = mean(Row, Col)
2) inside the neural tube region Cₑ, all pixel points whose row number is larger than meanX form the upper sub-region upRegion, and all pixel points whose row number is smaller than meanX form the lower sub-region downRegion, the formulas being as follows:
upRegion = Row > meanX
downRegion = Row < meanX
3) the row numbers Row of all pixel points of the upper sub-region upRegion are averaged to obtain the abscissa upCenterX of the centroid Oₑ₁ of the upper sub-region; the column numbers Col of all pixel points of the upper sub-region upRegion are averaged to obtain the ordinate upCenterY of the centroid Oₑ₁;
the row numbers Row of all pixel points of the lower sub-region downRegion are averaged to obtain the abscissa downCenterX of the centroid Oₑ₂ of the lower sub-region; the column numbers Col of all pixel points of the lower sub-region downRegion are averaged to obtain the ordinate downCenterY of the centroid Oₑ₂, the formulas being expressed as follows:
[upCenterX, upCenterY] = mean(upRegion(Row, Col))
[downCenterX, downCenterY] = mean(downRegion(Row, Col))
The coordinates of the centroid Oₑ₁ of the upper sub-region and the centroid Oₑ₂ of the lower sub-region are thereby obtained.
CN202210388461.5A 2022-04-13 2022-04-13 Automatic extraction method of mandibular nerve tube Active CN114897786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210388461.5A CN114897786B (en) 2022-04-13 2022-04-13 Automatic extraction method of mandibular nerve tube

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210388461.5A CN114897786B (en) 2022-04-13 2022-04-13 Automatic extraction method of mandibular nerve tube

Publications (2)

Publication Number Publication Date
CN114897786A (en) 2022-08-12
CN114897786B (en) 2024-04-16

Family

ID=82717095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210388461.5A Active CN114897786B (en) 2022-04-13 2022-04-13 Automatic extraction method of mandibular nerve tube

Country Status (1)

Country Link
CN (1) CN114897786B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110895816A (en) * 2019-10-14 2020-03-20 广州医科大学附属口腔医院(广州医科大学羊城医院) Method for measuring alveolar bone grinding amount before mandibular bone planting plan operation
KR20210092974A (en) * 2020-01-17 2021-07-27 오스템임플란트 주식회사 Method for creating nerve tube line and dental implant surgery planning device therefor
CN113643446A (en) * 2021-08-11 2021-11-12 北京朗视仪器股份有限公司 Automatic marking method and device for mandibular neural tube and electronic equipment
CN113907903A (en) * 2021-09-03 2022-01-11 张志宏 Design method for implant position in edentulous area by using artificial intelligence technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG Xiaoli; ZOU Derong: "Research Progress on the Bifid Mandibular Canal", Journal of Oral and Maxillofacial Surgery, no. 05, 28 October 2015 (2015-10-28) *

Also Published As

Publication number Publication date
CN114897786B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
US9439610B2 (en) Method for teeth segmentation and alignment detection in CBCT volume
CN109377534B (en) Nonlinear oral cavity CT panoramic image synthesis method capable of automatically sampling thickness detection
CN111415419B (en) Method and system for making tooth restoration model based on multi-source image
CN113223010B (en) Method and system for multi-tissue full-automatic segmentation of oral cavity image
CN112120810A (en) Three-dimensional data generation method of tooth orthodontic concealed appliance
CN112102495B (en) Dental arch surface generation method based on CBCT image
CN111260672B (en) Method for guiding and segmenting teeth by using morphological data
CN106846346B (en) Method for rapidly extracting pelvis outline of sequence CT image based on key frame mark
Poonsri et al. Teeth segmentation from dental x-ray image by template matching
CN108269294B (en) Oral CBCT image information analysis method and system
CN110610198A (en) Mask RCNN-based automatic oral CBCT image mandibular neural tube identification method
WO2024046400A1 (en) Tooth model generation method and apparatus, and electronic device and storage medium
CN116309302A (en) Extraction method of key points of skull lateral position slice
CN110619646B (en) Single tooth extraction method based on panorama
CN106846314B (en) Image segmentation method based on postoperative cornea OCT image data
CN114897786A (en) Automatic extraction method for mandible neural tube
Tong et al. Landmarking of cephalograms using a microcomputer system
CN113786262B (en) Preparation method of dental crown lengthening operation guide plate
EP4296944A1 (en) Method for segmenting computed tomography image of teeth
US20220358740A1 (en) System and Method for Alignment of Volumetric and Surface Scan Images
CN107564094B (en) Tooth model feature point automatic identification algorithm based on local coordinates
CN112164075B (en) Segmentation method for maxillary sinus membrane morphology change
CN109993754B (en) Method and system for skull segmentation from images
CN114723765B (en) Automatic extraction method of dental archwire
CN112927225A (en) Wisdom tooth growth state auxiliary detection system based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant