CN114897786B - Automatic extraction method of mandibular nerve tube - Google Patents

Automatic extraction method of mandibular nerve tube

Info

Publication number
CN114897786B
CN114897786B (Application CN202210388461.5A)
Authority
CN
China
Prior art keywords
slice
centroid
region
row
neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210388461.5A
Other languages
Chinese (zh)
Other versions
CN114897786A (en)
Inventor
祝胜山
汪阳
房鹤
崔小飞
田忠正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Fengzhun Robot Technology Co ltd
Original Assignee
Sichuan Fengzhun Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Fengzhun Robot Technology Co ltd filed Critical Sichuan Fengzhun Robot Technology Co ltd
Priority to CN202210388461.5A priority Critical patent/CN114897786B/en
Publication of CN114897786A publication Critical patent/CN114897786A/en
Application granted granted Critical
Publication of CN114897786B publication Critical patent/CN114897786B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides an automatic extraction method for the mandibular nerve tube, comprising the following steps: generating a sequence of slice images along the dental arch curve; cutting out of each slice a small image containing the neural tube, performing semantic segmentation, and extracting the neural tube region to obtain a filtered small image; restoring the filtered small image to its slice image to obtain a restored slice image; determining, in each restored slice image, the centroid of the neural tube region; and identifying the neural tube exit region and splitting it to obtain the centroids of its upper and lower sub-regions. The centroids are ordered and cubic spline interpolation is performed on them to generate the final neural tube. The method replaces the existing process in which dentists label the nerve tube manually, thereby improving the efficiency of dental implant surgery, improving the accuracy of nerve tube identification, and reducing the demands on dentist experience; the whole method is simple to implement, highly real-time, robust, and accurate.

Description

Automatic extraction method of mandibular nerve tube
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to an automatic extraction method of mandibular nerve tubes.
Background
At present, with the continuous improvement of living standards, dental implant surgery is chosen by more and more patients. Because the surgery implants an implant in the patient's mouth, special attention must be paid to whether the implant presses against the patient's mandibular nerve canal, which would otherwise cause unexpected injury to the patient. The precondition for judging whether the implant presses on the mandibular nerve canal is accurate identification of the nerve tube in the mandible. The existing practice relies on dentists to label it manually, which causes the following problems: (1) manual labeling wastes the dentist's time; (2) it places high demands on the dentist's experience, and for insufficiently trained dentists the nerve tube is difficult to identify and the labeling is inaccurate.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention provides an automatic extraction method of mandibular nerve tubes, which can effectively solve the problems.
The technical scheme adopted by the invention is as follows:
the invention provides an automatic extraction method of mandibular nerve tubes, which comprises the following steps:
step S1, generating a dental arch curve;
Step S2: generate a sequence of slice images along the dental arch curve, from start to end, in the direction perpendicular to the curve. The sequence contains n slice images, denoted P1, P2, …, Pn, corresponding respectively to n sampling points from the start to the end of the dental arch curve. All slice images have the same size a1*b1, where a1 is the slice width and b1 is the slice height;
Step S3: at the same interior position of every slice image, cut out a small image that contains the neural tube;
Thus, after cutting slice images P1, P2, …, Pn, small images Q1, Q2, …, Qn are obtained respectively; every small image has the same size a2*b2, where a2 is the small-image width and b2 is the small-image height;
Step S4: perform semantic segmentation on each small image Qi, identifying several three-dimensional connected domains in each Qi, thereby obtaining the semantically segmented small image Qi, where i = 1, 2, …, n;
Step S5: analyze the three-dimensional connected domains of each semantically segmented small image Qi; keep the connected domain with the largest number of voxels as the extracted neural tube region Ci and filter out the other connected domains, obtaining the filtered small image Qi;
Restore the filtered small image Qi to the position in slice image Pi from which it was cut in step S3, obtaining a restored slice image, denoted Wi;
Step S6: in the restored slice image Wi, in which the neural tube region Ci has been identified, determine the centroid Oi of Ci and obtain the abscissa xi and ordinate yi of Oi in Wi; thus, for the n restored slice images, n centroids are obtained in total;
Step S7: for the neural tube region Ci of each restored slice image Wi, compute the roundness of the region and select the restored slice image whose neural tube region has a roundness smaller than the roundness threshold; this slice image is denoted We, and its neural tube region Ce is judged to be the neural tube exit region;
The centroid of the restored slice image We is Oe. Taking the horizontal line through Oe as the dividing line, split the neural tube region Ce into two sub-regions, an upper sub-region and a lower sub-region, and compute the centroid Oe1 of the upper sub-region and the centroid Oe2 of the lower sub-region respectively;
Step S8: from the n centroids identified in step S6, remove the centroid Oe and add the centroid Oe1 of the upper sub-region and the centroid Oe2 of the lower sub-region, obtaining n+1 centroids in total, the abscissa and ordinate of each having already been obtained;
In slice-image order, place the n+1 centroids in a three-dimensional coordinate system according to their plane coordinates, and perform cubic spline interpolation on them, thereby generating the final neural tube.
Preferably, step S2 specifically includes:
CT scanning is carried out from beginning to end along the dental arch curve in the direction perpendicular to the dental arch curve, so as to obtain an original slice sequence;
for each original slice in the original slice sequence, the slice image is obtained by bilinear interpolation according to the following formula:
v(x,y)=ax+by+cxy+d
wherein:
x, y are the coordinates at which bilinear interpolation is performed, and v(x, y) is the pixel value at position (x, y) of the slice image generated after interpolation;
the four coefficients a, b, c, d are computed from the pixel values of the four pixels of the original slice nearest to (x, y).
Preferably, in step S7, the horizontal line through the centroid Oe is taken as the dividing line to split the neural tube region Ce into two sub-regions, as follows:
1) Average the row numbers Row of all pixels included in the neural tube region Ce to obtain the abscissa meanX of the centroid Oe; average the column numbers Col of all pixels included in Ce to obtain the ordinate meanY of Oe; the formula used is:
[meanX, meanY] = mean(Row, Col)
2) Within the neural tube region Ce, all pixels with row number greater than meanX form the upper sub-region upRegion, and all pixels with row number less than meanX form the lower sub-region downRegion; the formulas are:
upRegion = Row > meanX
downRegion = Row < meanX
3) Average the row numbers Row of all pixels of the upper sub-region upRegion to obtain the abscissa upCenterX of the centroid Oe1 of the upper sub-region; average the column numbers Col of all pixels of upRegion to obtain the ordinate upCenterY of Oe1;
average the row numbers Row of all pixels of the lower sub-region downRegion to obtain the abscissa downCenterX of the centroid Oe2 of the lower sub-region; average the column numbers Col of all pixels of downRegion to obtain the ordinate downCenterY of Oe2; the formulas are:
[upCenterX, upCenterY] = mean(upRegion(Row, Col))
[downCenterX, downCenterY] = mean(downRegion(Row, Col))
Thereby the coordinates of the centroid Oe1 of the upper sub-region and the centroid Oe2 of the lower sub-region are obtained.
The automatic extraction method of the mandibular nerve tube provided by the invention has the following advantages:
the invention provides an automatic extraction method of mandibular nerve tubes, which replaces the existing process of manually marking nerve tubes by dentists, thereby improving the efficiency of dental implantation operation, improving the accuracy of identifying the nerve tubes, reducing the requirements on dentist experience, and the whole method has the advantages of simple implementation, strong instantaneity, high robustness, high identification precision and the like.
Drawings
Fig. 1 is a schematic flow chart of an automatic extraction method of mandibular nerve tubes.
Detailed Description
In order to make the technical problems solved, the technical solutions, and the beneficial effects of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are for illustration only and are not intended to limit the scope of the invention.
The invention provides an automatic extraction method of the mandibular nerve tube which, referring to fig. 1, comprises the following steps:
step S1, generating a dental arch curve;
Step S2: generate a sequence of slice images along the dental arch curve, from start to end, in the direction perpendicular to the curve. The sequence contains n slice images, denoted P1, P2, …, Pn, corresponding respectively to n sampling points from the start to the end of the dental arch curve. All slice images have the same size a1*b1, where a1 is the slice width and b1 is the slice height;
the step S2 specifically comprises the following steps:
CT scanning is carried out from beginning to end along the dental arch curve in the direction perpendicular to the dental arch curve, so as to obtain an original slice sequence;
for each original slice in the original slice sequence, the slice image is obtained by bilinear interpolation according to the following formula:
v(x,y)=ax+by+cxy+d
wherein:
x, y are the coordinates at which bilinear interpolation is performed, and v(x, y) is the pixel value at position (x, y) of the slice image generated after interpolation;
the four coefficients a, b, c, d are computed from the pixel values of the four pixels of the original slice nearest to (x, y).
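The interpolation above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation; the function name and test image are hypothetical. Fitting v(x, y) = ax + by + cxy + d to the four nearest pixels reduces algebraically to the familiar weighted average of the 2×2 neighbourhood:

```python
import numpy as np

def bilinear_sample(img, x, y):
    # Evaluate v(x, y) = a*x + b*y + c*x*y + d fitted to the four
    # nearest pixels; this reduces to the weighted average below.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bottom = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bottom * fy

img = np.array([[0.0, 2.0],
                [4.0, 6.0]])
center = bilinear_sample(img, 0.5, 0.5)  # average of all four pixels
```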
Step S3: at the same interior position of every slice image, cut out a small image that contains the neural tube;
For example, the coordinates of the four vertices of a rectangle ABCD are determined; then, in each slice image, the four vertices of ABCD are located and the rectangular area they enclose is cut out, yielding the small image.
The main purpose of this step is as follows: because the slice image is large while the nerve tube occupies only a small area, it is difficult to identify and segment the nerve tube accurately if the whole slice image is fed to the subsequent semantic segmentation. The invention therefore cuts each slice image so that the resulting image mainly contains the neural tube.
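The fixed-rectangle cut described above can be sketched as follows; the function name, slice sizes, and rectangle coordinates are hypothetical, chosen only to show that the same (row, col) window is cut from every slice:

```python
import numpy as np

def crop_nerve_roi(slices, top_left, bottom_right):
    # Cut the same fixed rectangle (the region known to contain the
    # neural tube) out of every slice image in the sequence.
    r0, c0 = top_left
    r1, c1 = bottom_right
    return [s[r0:r1, c0:c1] for s in slices]

slices = [np.zeros((400, 300)) for _ in range(3)]   # n slice images of size a1*b1
small = crop_nerve_roi(slices, (120, 80), (280, 200))  # small images of size a2*b2
```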
Thus, after cutting slice images P1, P2, …, Pn, small images Q1, Q2, …, Qn are obtained respectively; every small image has the same size a2*b2, where a2 is the small-image width and b2 is the small-image height;
Step S4: perform semantic segmentation on each small image Qi, identifying several three-dimensional connected domains in each Qi, thereby obtaining the semantically segmented small image Qi, where i = 1, 2, …, n;
For example, semantic segmentation is performed using a trained VggNet.
Step S5: analyze the three-dimensional connected domains of each semantically segmented small image Qi; keep the connected domain with the largest number of voxels as the extracted neural tube region Ci and filter out the other connected domains, obtaining the filtered small image Qi;
Specifically, three-dimensional connected-domain analysis is performed on the neural tube regions segmented by VggNet, and the regions are then filtered according to the number of voxels in each connected domain. Thus the filtered small image Qi contains the connected domain of one neural tube.
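The largest-component filter of step S5 can be sketched with `scipy.ndimage.label`; this is an illustrative stand-in (the patent does not name a library), and the toy mask below is hypothetical:

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask):
    # Label the 3-D connected domains of the binary segmentation and
    # keep only the domain with the most voxels (the neural tube);
    # all other domains are filtered out, as in step S5.
    labels, num = ndimage.label(mask)
    if num == 0:
        return np.zeros_like(mask)
    counts = np.bincount(labels.ravel())
    counts[0] = 0                      # ignore the background label
    largest = counts.argmax()
    return (labels == largest).astype(mask.dtype)

mask = np.zeros((3, 5, 5), dtype=np.uint8)
mask[0, 0, 0] = 1                      # spurious single-voxel domain
mask[:, 2, 2] = 1                      # 3-voxel column: the "neural tube"
filtered = keep_largest_component(mask)
```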
Restore the filtered small image Qi to the corresponding position in slice image Pi from which it was cut in step S3, obtaining the restored slice image, denoted Wi;
Step S6: in the restored slice image Wi, in which the neural tube region Ci has been identified, determine the centroid Oi of Ci and obtain the abscissa xi and ordinate yi of Oi in Wi; thus, for the n restored slice images, n centroids are obtained in total;
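The per-slice centroid of step S6 is simply the mean row and mean column index over the region's foreground pixels. A minimal sketch (the function name and test mask are hypothetical):

```python
import numpy as np

def region_centroid(mask):
    # Centroid of the segmented neural tube region in one restored
    # slice: mean row index (abscissa) and mean column index
    # (ordinate) over all foreground pixels, as in step S6.
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 2] = 1                     # a small vertical region
cx, cy = region_centroid(mask)
```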
Step S7: for the neural tube region Ci of each restored slice image Wi, compute the roundness of the region and select the restored slice image whose neural tube region has a roundness smaller than the roundness threshold; this slice image is denoted We, and its neural tube region Ce is judged to be the neural tube exit region;
The centroid of the restored slice image We is Oe. Taking the horizontal line through Oe as the dividing line, split the neural tube region Ce into two sub-regions, an upper sub-region and a lower sub-region, and compute the centroid Oe1 of the upper sub-region and the centroid Oe2 of the lower sub-region respectively;
Specifically, the nerve tube exit region is generally oval, while other cross-sections of the nerve tube are generally circular. To extract the nerve tube accurately, the invention therefore identifies the exit region and treats it specially: the judged exit region is split into two sub-regions, and the centroid of the nerve-tube pixels in each sub-region is computed separately.
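The patent does not give its roundness formula. One common measure is the circularity 4πA/P², which is near 1 for a disc and smaller for elongated (oval) regions such as the exit cross-section; the sketch below approximates the perimeter P by counting foreground pixels that touch the background in the 4-neighbourhood, which is purely an assumption for illustration:

```python
import numpy as np

def roundness(mask):
    # 4*pi*A / P^2 circularity. P is approximated by the number of
    # foreground pixels with at least one 4-neighbour in the
    # background (an assumption; the patent does not specify this).
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = area - int((mask & interior).sum())
    return 4 * np.pi * area / perimeter ** 2

square = np.ones((10, 10), dtype=np.uint8)   # near-circular cross-section
oval = np.ones((4, 25), dtype=np.uint8)      # elongated "exit" cross-section
```

With this measure the elongated region scores well below the compact one, so a threshold between the two picks out the exit slice.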
In this step, the horizontal line through the centroid Oe is taken as the dividing line to split the neural tube region Ce into two sub-regions, as follows:
1) Average the row numbers Row of all pixels included in the neural tube region Ce to obtain the abscissa meanX of the centroid Oe; average the column numbers Col of all pixels included in Ce to obtain the ordinate meanY of Oe; the formula used is:
[meanX, meanY] = mean(Row, Col)
2) Within the neural tube region Ce, all pixels with row number greater than meanX form the upper sub-region upRegion, and all pixels with row number less than meanX form the lower sub-region downRegion; the formulas are:
upRegion = Row > meanX
downRegion = Row < meanX
3) Average the row numbers Row of all pixels of the upper sub-region upRegion to obtain the abscissa upCenterX of the centroid Oe1 of the upper sub-region; average the column numbers Col of all pixels of upRegion to obtain the ordinate upCenterY of Oe1;
average the row numbers Row of all pixels of the lower sub-region downRegion to obtain the abscissa downCenterX of the centroid Oe2 of the lower sub-region; average the column numbers Col of all pixels of downRegion to obtain the ordinate downCenterY of Oe2; the formulas are:
[upCenterX, upCenterY] = mean(upRegion(Row, Col))
[downCenterX, downCenterY] = mean(downRegion(Row, Col))
Thereby the coordinates of the centroid Oe1 of the upper sub-region and the centroid Oe2 of the lower sub-region are obtained.
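The meanX / upRegion / downRegion formulas above translate directly into NumPy. A minimal sketch (function name and test region are hypothetical; pixels whose row equals meanX fall in neither sub-region, matching the strict inequalities of the patent):

```python
import numpy as np

def split_exit_centroids(mask):
    # Split the exit region along the horizontal line through its
    # centroid (row == meanX) and return the centroids of the
    # upper and lower sub-regions, as in steps 1)-3).
    rows, cols = np.nonzero(mask)
    mean_x = rows.mean()                       # meanX
    up = rows > mean_x                         # upRegion
    down = rows < mean_x                       # downRegion
    up_centroid = (rows[up].mean(), cols[up].mean())       # (upCenterX, upCenterY)
    down_centroid = (rows[down].mean(), cols[down].mean()) # (downCenterX, downCenterY)
    return up_centroid, down_centroid

mask = np.zeros((6, 3), dtype=np.uint8)
mask[0:5, 1] = 1                               # an elongated, oval-like region
(up_x, up_y), (down_x, down_y) = split_exit_centroids(mask)
```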
Step S8: from the n centroids identified in step S6, remove the centroid Oe and add the centroid Oe1 of the upper sub-region and the centroid Oe2 of the lower sub-region, obtaining n+1 centroids in total, the abscissa and ordinate of each having already been obtained;
In slice-image order, place the n+1 centroids in a three-dimensional coordinate system according to their plane coordinates, and perform cubic spline interpolation on them, thereby generating the final neural tube.
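The final interpolation step can be sketched with `scipy.interpolate.CubicSpline`, splining the in-plane coordinates against the slice order. The centroid values below are hypothetical, and SciPy is an illustrative choice (the patent does not name a library):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Ordered centroids: the slice index supplies the third axis; the
# (x, y) plane coordinates come from steps S6-S7 (values hypothetical).
t = np.arange(5.0)                        # slice order of the n+1 centroids
xs = np.array([10.0, 11.0, 12.5, 13.0, 14.0])
ys = np.array([40.0, 40.5, 41.0, 40.2, 39.8])
spline_x, spline_y = CubicSpline(t, xs), CubicSpline(t, ys)

t_fine = np.linspace(0.0, 4.0, 81)        # densified centre line of the canal
neural_tube = np.stack([t_fine, spline_x(t_fine), spline_y(t_fine)], axis=1)
```

Because a cubic spline interpolates its knots exactly, the generated centre line passes through every recorded centroid.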
The automatic extraction method of the mandibular nerve tube provided by the invention mainly comprises: first generating a sequence of slice images perpendicular to the dental arch curve; then cutting each slice image so that the cut view mainly contains the nerve tube; then performing VggNet-based semantic segmentation on the cut views, which segments both the real nerve tube and some pseudo nerve-tube regions; then analyzing the three-dimensional connected domains of all regions considered to be the nerve tube, so as to isolate the real nerve tube in each slice image; and then restoring the segmented nerve tube to the slice-image sequence and extracting the centroids. At the nerve tube exit, because of the curved course of the nerve tube, the segmented region is split into two areas and the centroid of each is recorded separately; cubic spline interpolation is then performed through the centroid points to generate the final neural tube. The method is therefore simple to implement, highly real-time, robust, and accurate.
The invention provides an automatic extraction method for the mandibular nerve tube that replaces the existing process in which dentists label the nerve tube manually, thereby improving the efficiency of dental implant surgery, improving the accuracy of nerve tube identification, and reducing the demands on dentist experience; the whole method is simple to implement, highly real-time, robust, and accurate.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present invention, and such modifications are also intended to fall within the scope of the invention.

Claims (3)

1. An automatic extraction method of mandibular nerve tubes is characterized by comprising the following steps:
step S1, generating a dental arch curve;
Step S2: generate a sequence of slice images along the dental arch curve, from start to end, in the direction perpendicular to the curve. The sequence contains n slice images, denoted P1, P2, …, Pn, corresponding respectively to n sampling points from the start to the end of the dental arch curve. All slice images have the same size a1*b1, where a1 is the slice width and b1 is the slice height;
Step S3: at the same interior position of every slice image, cut out a small image that contains the neural tube;
Thus, after cutting slice images P1, P2, …, Pn, small images Q1, Q2, …, Qn are obtained respectively; every small image has the same size a2*b2, where a2 is the small-image width and b2 is the small-image height;
Step S4: perform semantic segmentation on each small image Qi, identifying several three-dimensional connected domains in each Qi, thereby obtaining the semantically segmented small image Qi, where i = 1, 2, …, n;
Step S5: analyze the three-dimensional connected domains of each semantically segmented small image Qi; keep the connected domain with the largest number of voxels as the extracted neural tube region Ci and filter out the other connected domains, obtaining the filtered small image Qi;
Restore the filtered small image Qi to the position in slice image Pi from which it was cut in step S3, obtaining a restored slice image, denoted Wi;
Step S6, slicing the image W after reduction i In (3) has been identified as a nerve tubeRegion C i Determination of neural area C i Centroid O of (2) i And obtain centroid O i Slice view W after reduction i X is the abscissa in (x) i And the ordinate y i The method comprises the steps of carrying out a first treatment on the surface of the Thus, for the n Zhang Haiyuan post-slice, n centroids are obtained altogether;
Step S7: for the neural tube region Ci of each restored slice image Wi, compute the roundness of the region and select the restored slice image whose neural tube region has a roundness smaller than the roundness threshold; this slice image is denoted We, and its neural tube region Ce is judged to be the neural tube exit region;
The centroid of the restored slice image We is Oe. Taking the horizontal line through Oe as the dividing line, split the neural tube region Ce into two sub-regions, an upper sub-region and a lower sub-region, and compute the centroid Oe1 of the upper sub-region and the centroid Oe2 of the lower sub-region respectively;
Step S8, removing centroid O from n centroids identified in step S6 e Plus centroid O of upper subregion e1 And centroid O of lower subregion e2 The method comprises the steps of carrying out a first treatment on the surface of the Thereby obtaining n+1 centroids in total, the abscissa and the ordinate of each centroid having been obtained;
In slice-image order, place the n+1 centroids in a three-dimensional coordinate system according to their plane coordinates, and perform cubic spline interpolation on them, thereby generating the final neural tube.
2. The automatic extraction method of mandibular nerve tubes according to claim 1, wherein step S2 specifically comprises:
CT scanning is carried out from beginning to end along the dental arch curve in the direction perpendicular to the dental arch curve, so as to obtain an original slice sequence;
for each original slice in the original slice sequence, bilinear interpolation is applied, and the slice image is obtained according to the following formula:
v(x,y)=ax+by+cxy+d
wherein:
x, y represent the coordinates at which bilinear interpolation is to be performed, and v(x, y) represents the pixel value at position (x, y) of the slice generated after interpolation;
a, b, c, d are the four coefficients calculated from the pixel values of the four nearest neighboring points of (x, y) in the original slice.
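Fitting v(x, y) = a·x + b·y + c·x·y + d to the four nearest pixels on the unit square has a standard closed form, sketched below; the function name and the row/column coordinate convention are assumptions for illustration:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at fractional position (x = row, y = column) using the
    v(x, y) = a*x + b*y + c*x*y + d model fitted to the 4 nearest pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[0] - 1)
    y1 = min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    # closed form equivalent to solving for a, b, c, d on the unit square
    return (img[x0, y0] * (1 - dx) * (1 - dy) +
            img[x1, y0] * dx * (1 - dy) +
            img[x0, y1] * (1 - dx) * dy +
            img[x1, y1] * dx * dy)
```

Evaluating this at each resampled position along a line perpendicular to the dental arch curve produces one interpolated slice image.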
3. The method according to claim 1, wherein in step S7, the horizontal line through centroid O_e is taken as a dividing line to split the neural-tube region C_e into two sub-regions, as follows:
1) The row numbers Row of all pixel points contained in the neural-tube region C_e are averaged to obtain the abscissa meanX of centroid O_e; the column numbers Col of all pixel points contained in the neural-tube region C_e are averaged to obtain the ordinate meanY of centroid O_e; from these, the coordinates of the centroid O_e of the neural-tube region C_e are calculated using the following formula:
[meanX,meanY]=mean(Row,Col)
2) Within the neural-tube region C_e, all pixel points whose row numbers are greater than meanX form the upper sub-region upRegion; all pixel points whose row numbers are smaller than meanX form the lower sub-region downRegion; the formulas are as follows:
upRegion=Row>meanX
downRegion=Row<meanX
3) The row numbers Row of all pixel points of the upper sub-region upRegion are averaged to obtain the abscissa upCenterX of the centroid O_e1 of the upper sub-region; the column numbers Col of all pixel points of the upper sub-region upRegion are averaged to obtain the ordinate upCenterY of the centroid O_e1 of the upper sub-region;
the row numbers Row of all pixel points of the lower sub-region downRegion are averaged to obtain the abscissa downCenterX of the centroid O_e2 of the lower sub-region; the column numbers Col of all pixel points of the lower sub-region downRegion are averaged to obtain the ordinate downCenterY of the centroid O_e2 of the lower sub-region; the formulas are as follows:
[upCenterX,upCenterY]=mean(upRegion(Row,Col))
[downCenterX,downCenterY]=mean(downRegion(Row,Col))
This yields the coordinates of the centroid O_e1 of the upper sub-region and the centroid O_e2 of the lower sub-region.
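The split and sub-region centroids of claim 3 can be sketched directly over a binary mask; the function name is an assumption. Note that the claim uses strict inequalities, so when meanX is an integer the pixels on the dividing row belong to neither sub-region:

```python
import numpy as np

def split_and_centroids(mask):
    """Split a region at the horizontal line through its centroid, as in
    claim 3: rows > meanX form upRegion, rows < meanX form downRegion.
    Returns the region centroid O_e and the sub-region centroids O_e1, O_e2."""
    rows, cols = np.nonzero(mask)
    meanX, meanY = rows.mean(), cols.mean()            # centroid O_e
    up = rows > meanX                                  # upRegion selector
    down = rows < meanX                                # downRegion selector
    up_centroid = (rows[up].mean(), cols[up].mean())       # O_e1
    down_centroid = (rows[down].mean(), cols[down].mean())  # O_e2
    return (meanX, meanY), up_centroid, down_centroid

# Example: a full 6x3 block splits evenly about its centroid row 2.5
mask = np.ones((6, 3), dtype=bool)
center, up_c, down_c = split_and_centroids(mask)
```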
CN202210388461.5A 2022-04-13 2022-04-13 Automatic extraction method of mandibular nerve tube Active CN114897786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210388461.5A CN114897786B (en) 2022-04-13 2022-04-13 Automatic extraction method of mandibular nerve tube

Publications (2)

Publication Number Publication Date
CN114897786A CN114897786A (en) 2022-08-12
CN114897786B true CN114897786B (en) 2024-04-16

Family

ID=82717095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210388461.5A Active CN114897786B (en) 2022-04-13 2022-04-13 Automatic extraction method of mandibular nerve tube

Country Status (1)

Country Link
CN (1) CN114897786B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110895816A (en) * 2019-10-14 2020-03-20 广州医科大学附属口腔医院(广州医科大学羊城医院) Method for measuring alveolar bone grinding amount before mandibular bone planting plan operation
KR20210092974A (en) * 2020-01-17 2021-07-27 오스템임플란트 주식회사 Method for creating nerve tube line and dental implant surgery planning device therefor
CN113643446A (en) * 2021-08-11 2021-11-12 北京朗视仪器股份有限公司 Automatic marking method and device for mandibular neural tube and electronic equipment
CN113907903A (en) * 2021-09-03 2022-01-11 张志宏 Design method for implant position in edentulous area by using artificial intelligence technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research progress on the bifid mandibular canal; Yang Xiaoli; Zou Derong; Journal of Oral and Maxillofacial Surgery; 2015-10-28 (Issue 05); full text *

Also Published As

Publication number Publication date
CN114897786A (en) 2022-08-12

Similar Documents

Publication Publication Date Title
US11020206B2 (en) Tooth segmentation based on anatomical edge information
JP7386215B2 (en) Method and device for removal of dental mesh orthodontic appliances
US9886748B2 (en) Alignment of mixed-modality data sets for reduction and removal of imaging artifacts
EP2560572B1 (en) Reduction and removal of artifacts from a three-dimensional dental x-ray data set using surface scan information
CN109377534B (en) Nonlinear oral cavity CT panoramic image synthesis method capable of automatically sampling thickness detection
CN112105315A (en) Automatic ectopic tooth detection based on scanning
CN112102495B (en) Dental arch surface generation method based on CBCT image
CN105608747B (en) A method of from three-dimensional dentistry conical beam CT extracting data panorama sketch
CN113223010B (en) Method and system for multi-tissue full-automatic segmentation of oral cavity image
CN112120810A (en) Three-dimensional data generation method of tooth orthodontic concealed appliance
CN111260672B (en) Method for guiding and segmenting teeth by using morphological data
CN110889850B (en) CBCT tooth image segmentation method based on central point detection
KR102054210B1 (en) Method and system for automatic generation of panoramic image using dental ct
CN108269294B (en) Oral CBCT image information analysis method and system
CN110619646B (en) Single tooth extraction method based on panorama
CN116309302A (en) Extraction method of key points of skull lateral position slice
CN114897786B (en) Automatic extraction method of mandibular nerve tube
CN106846314B (en) Image segmentation method based on postoperative cornea OCT image data
Tong et al. Landmarking of cephalograms using a microcomputer system
US20220358740A1 (en) System and Method for Alignment of Volumetric and Surface Scan Images
EP4296944A1 (en) Method for segmenting computed tomography image of teeth
CN115908361A (en) Method for identifying decayed tooth of oral panoramic film
JPWO2020136243A5 (en)
CN114723765B (en) Automatic extraction method of dental archwire
CN113786262A (en) Preparation method of dental crown lengthening operation guide plate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant