CN116152238A - Automatic temporomandibular joint gap area measurement method based on deep learning - Google Patents

Automatic temporomandibular joint gap area measurement method based on deep learning

Info

Publication number
CN116152238A
Authority
CN
China
Prior art keywords
area
image
gap area
temporomandibular joint
joint gap
Prior art date
Legal status
Granted
Application number
CN202310411589.3A
Other languages
Chinese (zh)
Other versions
CN116152238B (en)
Inventor
Li Xiaonan (李小囡)
Zhang Qian (张倩)
Dong Rui (董瑞)
Li Jing (李静)
Liu Zhiyang (刘之洋)
Yang Dong (杨东)
Current Assignee
STOMATOLOGICAL HOSPITAL TIANJIN MEDICAL UNIVERSITY
Nankai University
Original Assignee
STOMATOLOGICAL HOSPITAL TIANJIN MEDICAL UNIVERSITY
Nankai University
Priority date
Filing date
Publication date
Application filed by STOMATOLOGICAL HOSPITAL TIANJIN MEDICAL UNIVERSITY and Nankai University
Priority to CN202310411589.3A
Publication of CN116152238A
Application granted
Publication of CN116152238B
Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30036 Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Geometry (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides an automatic temporomandibular joint gap area measurement method based on deep learning, which realizes the segmentation of the temporomandibular joint and the automatic measurement of its gap areas, and relates to the field of medical image analysis. The method uses a VNet neural network for image segmentation, screens the sagittal planes and locates the regions to be measured on the basis of the segmentation map, measures the anterior and posterior areas of the temporomandibular joint gap on the selected slice, and labels the corresponding measurement regions in the CBCT image and the segmentation map. It thereby solves the problems of automatically measuring and labeling the temporomandibular joint gap area and, by realizing image segmentation and automatic measurement, alleviates to a certain extent the workload of stomatologists and the erroneous manual segmentation and measurement caused by differences in clinical experience.

Description

Automatic temporomandibular joint gap area measurement method based on deep learning
Technical Field
The invention belongs to the field of medical image analysis, and particularly relates to an automatic temporomandibular joint gap area measurement method based on deep learning.
Background
CBCT is the abbreviation of Cone Beam CT, a cone-beam projection computed tomography device; its principle is that an X-ray generator performs annular digital radiography (DR) around the subject at a low radiation dose (typically a tube current of about 10 milliamperes). Compared with conventional spiral CT, CBCT has the advantages of a small radiation dose, high spatial resolution, fast three-dimensional reconstruction, the ability to provide accurate image data, relatively low examination and equipment acquisition costs, and a small footprint, so it is increasingly widely used in oral clinics and has become an important auxiliary examination tool in the field of stomatology. In medical analysis, CBCT images must be segmented and measured manually by doctors with rich clinical experience and background knowledge; with the development of science and technology, segmentation and automatic measurement of medical images using deep learning is widely used as an auxiliary diagnostic method, which can greatly reduce the workload of doctors and, to a certain extent, reduce errors caused by individual differences.
Deep learning techniques, especially convolutional neural networks, can capture the feature relationship between input and output, extracting high-level and low-level semantics layer by layer; the VNet neural network fuses high-level and low-level semantic information through skip connections and therefore achieves a good segmentation effect.
Temporomandibular joint disorder is an important branch of osteoarthritis. The most commonly used examination method at present is to acquire an oral CBCT image, but the early clinical symptoms of the disorder are not obvious and are easily missed; given the heavy workload and the difficulty of measuring the temporomandibular joint gap area, automatic measurement by computer is helpful for the early detection and treatment of symptoms.
Disclosure of Invention
To address these problems, the invention provides an automatic temporomandibular joint gap area measurement method based on deep learning, a medical image segmentation technique that realizes the segmentation of CBCT images and the automatic measurement of the anterior and posterior areas of the temporomandibular joint gap.
The automatic temporomandibular joint gap area measurement method based on deep learning is used to segment CBCT images and measure them to obtain the anterior gap area and the posterior gap area of the temporomandibular joint space, and specifically comprises the following steps:
s0, acquiring an original CBCT image;
step S1, preprocessing the original CBCT image acquired in the step S0;
s2, inputting the image obtained in the step S1 into a VNet neural network to generate a three-dimensional segmentation map of the left temporomandibular joint gap;
s3, selecting the sagittal plane direction of the three-dimensional segmentation map obtained in the step S2 to obtain a segmentation map sagittal plane;
s4, screening and filtering the segmentation-map sagittal planes obtained in the step S3 to obtain candidate slice images;
step S5, carrying out region division on the candidate slice image obtained in the step S4, and determining the region to be measured for the anterior gap area and the region to be measured for the posterior gap area;
and S6, performing area measurement on the area to be measured obtained in the step S5 to obtain the anterior gap area and the posterior gap area of the temporomandibular joint gap.
Further, in the step S2, the VNet neural network is trained by:
s21, acquiring an original CBCT image;
s22, preprocessing the original CBCT image acquired in the S21;
s23, carrying out data enhancement on the CBCT image preprocessed in the S22;
s24, constructing a VNet neural network;
s25, training the VNet neural network constructed in the S24 by adopting the image enhanced by the data in the S23.
Further, the preprocessing is to sequentially perform resampling, HU value truncation and z-score normalization on the image.
Further, the data enhancement is to sequentially perform image rotation, image scaling, elastic deformation, coordinate axis flipping and intensity change on the image.
Furthermore, the VNet neural network is formed by sequentially connecting four encoders and four decoders; the encoders use convolution and downsampling operations, the decoders use convolution and upsampling operations, and semantic information of the encoders is transferred into the network structure of the decoders through skip connections.
Further, training the VNet neural network by using a Dice loss function and a CE loss function, where a loss function formula formed by the Dice loss function and the CE loss function is as follows:
L = L_{Dice} + L_{CE} = \left(1 - \frac{2\sum_{n=1}^{N}\sum_{c=1}^{C} p_{n,c}\,q_{n,c}}{\sum_{n=1}^{N}\sum_{c=1}^{C}\left(p_{n,c}+q_{n,c}\right)}\right) - \frac{1}{N}\sum_{n=1}^{N}\sum_{c=1}^{C} p_{n,c}\log q_{n,c}    (1)

wherein:
L_{Dice} is the Dice loss function between the real data and the predicted data;
L_{CE} is the cross-entropy loss function between the real data and the predicted data;
N is the number of samples;
p is the real data;
q is the predicted data;
C is the number of detection categories.
Further, the parameters of the VNet neural network are trained by adopting an RMSprop optimizer, the initial learning rate of the RMSprop optimizer is 0.0001, and the learning rate is expressed as follows:
lr' = lr_{init} \times \left(1 - \frac{epoch}{MAX\_EPOCH}\right)    (2)

wherein:
lr' is the updated learning rate;
lr_{init} is the initial learning rate;
epoch is the current round;
MAX_EPOCH is the maximum number of training rounds.
Further, in the step S4, the screening and filtering of the segmentation-map sagittal planes specifically includes:
s41, searching the highest point P at the top of the condyloid process in a segmented image sagittal plane, translating the ordinate of the P point downwards by 13 pixel points to obtain P ', translating the P ' leftwards by 20 pixel points to obtain P1, translating the P ' rightwards by 20 pixel points to obtain P2, taking the pixel points in a line segment P1P2 as reference points, recording the number of the pixel points belonging to the condyloid process in the reference points as the width of the top of the condyloid process, and screening out the segmented image sagittal plane with the width larger than 25 pixels;
s42, performing one-hot coding on the sagittal plane of the segmentation map screened in the S41 to obtain a single-channel map of the background, the condyloid process, the temporal bone and the external auditory canal;
s43, searching a normal contour in each single-channel chart by setting an upper limit value and a lower limit value of a contour area and an upper limit value and a lower limit value of a contour perimeter in each single-channel chart;
s44, carrying out contour searching, filtering and filling on the normal contour in each single-channel image to obtain a normal image of each single channel;
s45, combining the normal images of each single channel obtained in the S44 into a complete segmentation image sagittal plane serving as an alternative section image.
Further, in the step S5, the region division of the candidate slice image specifically includes the following steps:
s51, searching the lowest point of the external auditory canal and the lowest point on the left side of the temporal bone on the alternative section image, wherein a connecting line of the two lowest points intersects with the condyloid process at two points, the two points form a line segment, and the midpoint of the line segment is used as a foot drop;
s52, making a vertical line at the position of the foot, wherein the vertical line divides the area on the upper side of the connecting line into two half areas;
s53, respectively making trisection lines of two half areas, wherein the trisection lines, the condyloid process and the temporal bone in each half area enclose joint gaps into three closed areas;
and S54, taking the central areas of the three closed areas in the half area close to the temporal bone as the area to be measured of the anterior gap area, and taking the central areas of the three closed areas in the half area close to the external auditory meatus as the area to be measured of the posterior gap area.
Further, in the step S6, the area measurement is performed on the area to be measured, specifically:
s61, according to the area to be measured of the anterior gap area obtained in the step S5, making a minimum circumscribed rectangle, calculating the number of pixels in the rectangle as the number of points of the anterior gap area of the joint gap, and multiplying the area of a single pixel to obtain the anterior gap area of the temporomandibular joint gap, wherein the calculation formula is as follows:
S front part =N Front part *S Pixel arrangement (3)
wherein ,
S front part Is the anterior gap area of the temporomandibular joint space,
N front part The number of pixels in the circumscribed rectangle is the smallest for the region to be measured of the front gap area,
S pixel arrangement Is the area of a single pixel point;
s62, according to the area to be measured of the back gap area obtained in the step S5, making a minimum circumscribed rectangle, calculating the number of pixels in the rectangle as the number of points of the back gap area of the joint gap, and multiplying the area of a single pixel to obtain the back gap area of the temporomandibular joint gap, wherein the calculation formula is as follows:
S rear part (S) =N Rear part (S) *S Pixel arrangement (4)
wherein ,
S rear part (S) Is the posterior gap area of the temporomandibular joint space,
N rear part (S) The number of pixels in the bounding rectangle is the smallest for the region to be measured of the back gap area,
S pixel arrangement Is the area of a single pixel.
Compared with the prior art, the application has the following beneficial effects:
1. by constructing and training the VNet neural network, automatic extraction of the segmentation-map sagittal planes is realized, greatly improving the efficiency and precision compared with manual extraction;
2. the segmentation-map sagittal planes automatically extracted by the VNet neural network are further screened and divided: sagittal-plane screening and localization of the regions to be measured are carried out on the basis of the segmentation map, and the regions to be measured for the anterior and posterior gap areas are finally determined, which solves the problems of automatically measuring and labeling the temporomandibular joint gap area and alleviates, to a certain extent, the workload of stomatologists and the erroneous manual segmentation and measurement caused by differences in clinical experience.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description of embodiments, as illustrated in the accompanying drawings. The drawings provide a further understanding of the embodiments, are incorporated in and constitute a part of this specification, and serve to explain the invention together with its embodiments without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 shows a flow chart of the automatic temporomandibular joint gap area measurement method based on deep learning;
FIG. 2 shows a CBCT image segmentation result diagram of one embodiment, wherein FIG. 2 (a) is a cross-sectional view, FIG. 2 (b) is a sagittal view, and FIG. 2 (c) is a coronal view;
FIG. 3 illustrates a flow diagram for screening a segmentation map sagittal plane, according to one embodiment;
fig. 4 shows a schematic view of a temporomandibular joint gap measurement region of one embodiment;
fig. 5 shows a temporomandibular joint measurement result diagram of an embodiment: fig. 5 (a) is a partial enlarged view of the anterior gap area of the temporomandibular joint gap in the segmentation map, fig. 5 (b) is a partial enlarged view of the posterior gap area of the temporomandibular joint gap in the segmentation map, fig. 5 (c) is a complete view of the temporomandibular joint gap in the segmentation map, and fig. 5 (d) is a labeling diagram of the temporomandibular joint gap.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments of the invention will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention, and it should be understood that the invention is not limited by the exemplary embodiments described here. Based on the embodiments described in this application, all other embodiments obtained by a person skilled in the art without inventive effort shall fall within the scope of the invention.
In the method, VNet is used for image segmentation; on the basis of the segmentation map, the slice with the largest anterior and posterior joint gap areas can be found from the label map or prediction map of the temporomandibular joint, the areas are measured, and the CBCT image and segmentation map of that slice are labeled and output together as the result.
The invention provides an automatic temporomandibular joint gap area measurement method based on deep learning, which is used to segment CBCT images and measure them to obtain the anterior gap area and the posterior gap area of the temporomandibular joint space. Fig. 1 is a flow chart of the automatic temporomandibular joint gap area measurement method based on deep learning, which specifically includes the following steps:
and S0, acquiring an original CBCT image.
CBCT is cone-beam projection computed tomography equipment; its principle is that an X-ray generator performs annular digital projection around the subject at a relatively low radiation dose, and the data acquired from the multiple digital projections around the subject are then reconstructed in a computer to obtain a three-dimensional image. Compared with conventional spiral CT, CBCT has the advantages of a small radiation dose, high spatial resolution, fast three-dimensional reconstruction, the ability to provide accurate image data, relatively low examination and equipment acquisition costs, and a small footprint; it is therefore increasingly widely used in oral clinics and has become an important auxiliary examination tool in the field of stomatology.
And step S1, preprocessing the original CBCT image acquired in the step S0.
The preprocessing comprises sequentially resampling the image, truncating the HU values, and performing z-score normalization, so as to eliminate irrelevant information in the image, recover useful real information, enhance the detectability of relevant information, and simplify the data as much as possible, thereby improving the reliability of feature extraction, image segmentation, matching and recognition.
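As an illustration only, this preprocessing chain can be sketched in Python as below; the target spacing and the HU truncation window are assumed placeholder values, not parameters specified by the invention:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(volume, spacing, target_spacing=(0.4, 0.4, 0.4),
               hu_min=-1000.0, hu_max=2000.0):
    """Resampling -> HU value truncation -> z-score normalization (illustrative values)."""
    # 1. Resample the CBCT volume to a common target spacing (trilinear interpolation).
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    volume = zoom(volume.astype(np.float32), factors, order=1)
    # 2. Truncate HU values to a fixed window, discarding irrelevant intensities.
    volume = np.clip(volume, hu_min, hu_max)
    # 3. z-score normalization: zero-mean, unit-variance input for the network.
    return (volume - volume.mean()) / (volume.std() + 1e-8)
```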
And S2, inputting the image obtained in the step S1 into a VNet neural network to generate a three-dimensional segmentation map of the left temporomandibular joint gap.
The VNet neural network is formed by sequentially connecting four encoders and four decoders, wherein the encoders use convolution and downsampling operations, the decoders use convolution and upsampling operations, and semantic information of the encoders is transferred into the network structure of the decoders through skip connections.
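A compressed PyTorch sketch of such an encoder-decoder is given below. The four downsampling and four upsampling stages, the double-convolution residual blocks and the skip connections follow the description; the channel widths, normalization layers and the order of operations inside each stage are illustrative assumptions. The four output classes correspond to background, condyloid process, temporal bone and external auditory canal.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual module formed by double convolution layers."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)   # residual connection

class MiniVNet(nn.Module):
    """Four encoder stages (conv + downsample) and four decoder stages (conv + upsample),
    with skip connections carrying encoder semantics into the decoder."""
    def __init__(self, in_ch=1, n_classes=4, base=16):
        super().__init__()
        chs = [base * 2 ** i for i in range(5)]   # assumed channel widths
        self.stem = nn.Conv3d(in_ch, chs[0], 3, padding=1)
        self.enc = nn.ModuleList([ResBlock(c) for c in chs[:4]])
        self.down = nn.ModuleList([nn.Conv3d(chs[i], chs[i + 1], 2, stride=2) for i in range(4)])
        self.bottom = ResBlock(chs[4])
        self.up = nn.ModuleList([nn.ConvTranspose3d(chs[i + 1], chs[i], 2, stride=2) for i in range(4)])
        self.dec = nn.ModuleList([ResBlock(c) for c in chs[:4]])
        self.head = nn.Conv3d(chs[0], n_classes, 1)

    def forward(self, x):
        x = self.stem(x)
        skips = []
        for enc, down in zip(self.enc, self.down):
            x = enc(x)
            skips.append(x)          # feature kept for the skip connection
            x = down(x)              # encoder: convolution then downsampling
        x = self.bottom(x)
        for i in reversed(range(4)):
            x = self.dec[i](self.up[i](x) + skips[i])   # upsample, fuse skip, convolve
        return self.head(x)
```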
The VNet neural network used in the invention is trained by the following steps:
s21, acquiring an original CBCT image;
s22, preprocessing the original CBCT image acquired in the S21;
s23, carrying out data enhancement on the CBCT image preprocessed in the S22;
s24, constructing a VNet neural network;
s25, training the VNet neural network constructed in the S24 by adopting the image enhanced by the data in the S23.
The data enhancement consists of sequentially applying image rotation, image scaling, elastic deformation, coordinate-axis flipping and intensity changes to the image. Data enhancement is used because the amount of medical image data is relatively small: rotating an image by an angle, for example, produces a matrix that looks quite different to the computer, so the fundamental purpose is to increase the diversity of the data, since a neural network trained on too little data tends to overfit. During detection the trained model is applied directly without augmentation, because data enhancement can lose part of the detailed information.
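A minimal sketch of these five augmentations is shown below; the rotation range, scaling factors, deformation strength and noise level are assumed values chosen only for illustration:

```python
import numpy as np
from scipy.ndimage import rotate, zoom, gaussian_filter, map_coordinates

def augment(vol, rng):
    """Apply rotation, scaling, elastic deformation, axis flipping and intensity change."""
    # Image rotation by a small random angle (range assumed).
    vol = rotate(vol, angle=rng.uniform(-15, 15), axes=(1, 2), reshape=False, order=1)
    # Image scaling.
    vol = zoom(vol, rng.uniform(0.9, 1.1), order=1)
    # Elastic deformation: sample the volume along a smoothed random displacement field.
    disp = [gaussian_filter(rng.standard_normal(vol.shape), sigma=8) * 4 for _ in range(3)]
    grid = np.meshgrid(*[np.arange(n) for n in vol.shape], indexing="ij")
    vol = map_coordinates(vol, [g + d for g, d in zip(grid, disp)], order=1, mode="reflect")
    # Coordinate-axis flipping.
    if rng.random() < 0.5:
        vol = np.flip(vol, axis=2).copy()
    # Intensity change: multiplicative shift plus additive Gaussian noise.
    return vol * rng.uniform(0.9, 1.1) + rng.normal(0.0, 0.05)
```

Usage would be, for example, `augment(volume, np.random.default_rng(0))` on each training sample.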
The VNet neural network adopts the Dice loss function and the CE loss function to form a loss function, and the formula is as follows:
L = L_{Dice} + L_{CE} = \left(1 - \frac{2\sum_{n=1}^{N}\sum_{c=1}^{C} p_{n,c}\,q_{n,c}}{\sum_{n=1}^{N}\sum_{c=1}^{C}\left(p_{n,c}+q_{n,c}\right)}\right) - \frac{1}{N}\sum_{n=1}^{N}\sum_{c=1}^{C} p_{n,c}\log q_{n,c}    (1)

wherein:
L_{Dice} is the Dice loss function between the real data and the predicted data; it takes the degree of similarity between the real and predicted values as its index and copes well with the foreground-background imbalance of medical images;
L_{CE} is the cross-entropy loss function between the real data and the predicted data; it alleviates the training instability that arises when L_{Dice} is used alone;
N is the number of samples;
p is the real data;
q is the predicted data;
C is the number of detection categories; in the present invention there are C = 4 categories: external auditory meatus, temporal bone, condyloid process and background.
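A sketch of the combined loss of formula (1) follows; the smoothing term eps and the per-class averaging are implementation assumptions:

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-5):
    """L = L_Dice + L_CE for C-class segmentation.
    logits: (N, C, D, H, W) raw outputs; target: (N, D, H, W) integer labels."""
    n_classes = logits.shape[1]
    q = torch.softmax(logits, dim=1)                                  # predicted data
    p = F.one_hot(target, n_classes).permute(0, 4, 1, 2, 3).float()   # real data, one-hot

    # Dice term: measures overlap, robust to foreground-background imbalance.
    inter = (p * q).sum(dim=(0, 2, 3, 4))
    denom = (p + q).sum(dim=(0, 2, 3, 4))
    l_dice = (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

    # Cross-entropy term: stabilizes training compared with the Dice term alone.
    l_ce = F.cross_entropy(logits, target)
    return l_dice + l_ce
```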
Training the parameters of the VNet neural network by adopting an RMSprop optimizer, wherein the initial learning rate of the RMSprop optimizer is 0.0001, and the learning rate is as follows:
lr' = lr_{init} \times \left(1 - \frac{epoch}{MAX\_EPOCH}\right)    (2)

wherein:
lr' is the updated learning rate;
lr_{init} is the initial learning rate;
epoch is the current round;
MAX_EPOCH is the maximum number of training rounds.
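A training-loop skeleton consistent with this schedule is sketched below; the epoch count is an assumed value, and the decay follows formula (2) as reconstructed above:

```python
import torch

MAX_EPOCH = 300                       # assumed maximum number of training rounds
lr_init = 1e-4                        # initial learning rate 0.0001, as stated
model = MiniVNet()                    # the VNet sketch shown earlier
optimizer = torch.optim.RMSprop(model.parameters(), lr=lr_init)

for epoch in range(MAX_EPOCH):
    # Update the learning rate each round according to formula (2).
    for group in optimizer.param_groups:
        group["lr"] = lr_init * (1.0 - epoch / MAX_EPOCH)
    # ... one training epoch: forward pass, dice_ce_loss, backward, optimizer.step() ...
```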
Segmenting the CBCT image with the VNet neural network proceeds specifically as follows: the encoder extracts features with a residual module formed by double convolution layers and then performs a downsampling operation, and the decoder fuses the features with a residual module formed by double convolution layers and then restores the image size by an upsampling operation.
In one embodiment, the image is segmented using the VNet neural network with a sliding window of size 96 × 96. The encoder extracts features with a residual module formed by double convolution layers and then performs a downsampling operation, repeated four times, the successive sizes standing in the ratio [16:8:4:2:1] relative to the original image; the decoder fuses the features with a residual module formed by double convolution layers and then restores the image size by an upsampling operation, likewise repeated four times.
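The sliding-window inference itself can be sketched as below; the text gives a 96-pixel window, while the cubic patch shape, half-window stride and logit averaging are assumptions:

```python
import torch

@torch.no_grad()
def sliding_window_predict(model, volume, patch=96, stride=48, n_classes=4):
    """Tile the volume with overlapping patches, average the logits, take the argmax.
    Assumes every dimension of `volume` is at least `patch` voxels."""
    D, H, W = volume.shape
    logits = torch.zeros(n_classes, D, H, W)
    counts = torch.zeros(1, D, H, W)

    def offsets(n):
        offs = list(range(0, n - patch + 1, stride))
        if offs[-1] != n - patch:            # let the last patch touch the border
            offs.append(n - patch)
        return offs

    for z in offsets(D):
        for y in offsets(H):
            for x in offsets(W):
                tile = volume[z:z + patch, y:y + patch, x:x + patch]
                out = model(tile[None, None])[0]               # (n_classes, 96, 96, 96)
                logits[:, z:z + patch, y:y + patch, x:x + patch] += out
                counts[:, z:z + patch, y:y + patch, x:x + patch] += 1
    return (logits / counts).argmax(dim=0)                     # voxel-wise label map
```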
And S3, selecting the sagittal plane direction of the three-dimensional segmentation map obtained in the step S2, and obtaining the sagittal plane of the segmentation map.
The three-dimensional segmentation map generated by the VNet neural network comprises:
the cross section, a plane parallel to the ground that divides the human body into upper and lower parts; the sagittal plane, a plane that divides the body into left and right parts; and the coronal plane, a plane along the left-right direction of the body that divides it into front and rear parts. Fig. 2 shows a CBCT image segmentation result of an embodiment, in which fig. 2 (a) is a cross-sectional view, fig. 2 (b) is a sagittal view, and fig. 2 (c) is a coronal view. The positional relationship of the condyloid process, the temporal bone and the external auditory canal can be obtained clearly from the sagittal view, so the sagittal plane is selected for the next step of analysis.
And S4, screening and filtering the segmentation-map sagittal planes obtained in the step S3 to obtain candidate slice images. FIG. 3 illustrates a flow diagram for screening the segmentation-map sagittal planes, according to one embodiment.
The method specifically comprises the following steps:
s41, searching the highest point P at the top of the condyloid process in a segmented image sagittal plane, translating the ordinate of the P point downwards by 13 pixel points to obtain P ', translating the P ' leftwards by 20 pixel points to obtain P1, translating the P ' rightwards by 20 pixel points to obtain P2, taking the pixel points in a line segment P1P2 as reference points, recording the number of the pixel points belonging to the condyloid process in the reference points as the width of the top of the condyloid process, and screening out the segmented image sagittal plane with the width larger than 25 pixels;
s42, performing one-hot coding on the sagittal plane of the segmentation map screened in the S41 to obtain a single-channel map of the background, the condyloid process, the temporal bone and the external auditory canal;
s43, searching a normal contour in each single-channel chart by setting an upper limit value and a lower limit value of a contour area and an upper limit value and a lower limit value of a contour perimeter in each single-channel chart;
specifically, setting an upper limit value and a lower limit value of the contour area in each single-channel chart, and filtering out contours with contour areas larger than the upper limit value and contour areas smaller than the lower limit value;
setting an upper limit value and a lower limit value of the contour perimeter in each single-channel chart, and filtering out contours with the contour perimeter larger than the upper limit value and the contour perimeter smaller than the lower limit value;
s44, carrying out contour searching, filtering and filling on the normal contour in each single-channel image to obtain a normal image of each single channel;
s45, combining the normal images of each single channel obtained in the S44 into a complete segmentation image sagittal plane serving as an alternative section image.
The single-channel images can be selected to be placed under the same coordinate system, and then the next region division can be performed.
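A sketch of this per-channel contour cleaning with OpenCV follows; the label values and the area/perimeter bounds stand in for the upper and lower limit values mentioned above and are assumptions:

```python
import numpy as np
import cv2

# label -> (area_lo, area_hi, perimeter_lo, perimeter_hi); all bounds are placeholders.
LIMITS = {1: (200, 5000, 60, 800),    # condyloid process
          2: (200, 8000, 60, 1200),   # temporal bone
          3: (100, 3000, 40, 600)}    # external auditory canal

def clean_slice(seg2d):
    """One-hot each class, keep only contours in the normal range, refill them."""
    out = np.zeros(seg2d.shape, np.uint8)
    for label, (a_lo, a_hi, p_lo, p_hi) in LIMITS.items():
        mask = (seg2d == label).astype(np.uint8)   # single-channel map for this class
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            area = cv2.contourArea(c)
            perim = cv2.arcLength(c, True)
            if a_lo <= area <= a_hi and p_lo <= perim <= p_hi:
                # Drawing the kept contour filled also closes internal holes.
                cv2.drawContours(out, [c], -1, int(label), thickness=cv2.FILLED)
    return out   # recombined segmentation-map sagittal plane
```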
Specifically, among the numerous sagittal planes of the image, the present invention selects the sagittal plane with the largest temporomandibular joint gap area for measurement, using the screening flow shown in fig. 3. The three-dimensional segmentation map is cut along the sagittal direction, and each resulting two-dimensional sagittal plane is sent separately into the slice screening module; in the automatic joint-area measurement part, the transverse direction is defined as the x direction and the longitudinal direction as the y direction, so as not to conflict with the coordinate system of the three-dimensional map. First, the highest point P at the top of the condyloid process is found, and the y coordinate of P is moved downwards by 13 pixels to obtain y', which is defined as the reference height of the condyloid process top; the sagittal planes are screened using the width of the condyloid process top at this height. At the height y', twenty points on each side of the abscissa of P are taken as reference points, and the number of reference points belonging to the condyloid process is recorded as the width of the condyloid process top; slices whose top width is smaller than 25 pixels are filtered out, and slices whose width is greater than or equal to 25 pixels are kept as the slices to be measured.
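The width test of step S41 reduces to a few lines of NumPy; the condyle label value is an assumption:

```python
import numpy as np

CONDYLE = 1   # assumed label value of the condyloid process

def condyle_top_width(seg2d):
    """Width of the condyloid process top, measured 13 px below its highest point P
    over the reference segment P1P2 (20 px to each side of P)."""
    ys, xs = np.nonzero(seg2d == CONDYLE)
    if ys.size == 0:
        return 0
    i = ys.argmin()                               # smallest row index = highest point P
    py, px = int(ys[i]), int(xs[i])
    ry = min(py + 13, seg2d.shape[0] - 1)         # reference height y'
    x1, x2 = max(px - 20, 0), min(px + 20, seg2d.shape[1] - 1)
    return int(np.count_nonzero(seg2d[ry, x1:x2 + 1] == CONDYLE))

def is_candidate(seg2d):
    return condyle_top_width(seg2d) >= 25         # keep slices with top width >= 25 px
```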
And S5, carrying out region division on the candidate slice image obtained in the step S4, and determining the region to be measured for the anterior gap area and the region to be measured for the posterior gap area.
The method specifically comprises the following steps:
s51, searching the lowest point of the external auditory canal and the lowest point on the left side of the temporal bone on the alternative section image, wherein a connecting line of the two lowest points intersects with the condyloid process at two points, the two points form a line segment, and the midpoint of the line segment is used as a foot drop;
s52, making a vertical line at the position of the foot, wherein the vertical line divides the area on the upper side of the connecting line into two half areas;
s53, respectively making trisection lines of two half areas, wherein the trisection lines, the condyloid process and the temporal bone in each half area enclose joint gaps into three closed areas;
and S54, taking the central areas of the three closed areas in the half area close to the temporal bone as the area to be measured of the anterior gap area, and taking the central areas of the three closed areas in the half area close to the external auditory meatus as the area to be measured of the posterior gap area.
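For steps S51 and S52, the foot of the perpendicular can be located as sketched below; the label values are assumptions, and the connecting line is assumed to actually intersect the condyloid process:

```python
import numpy as np

CONDYLE, TEMPORAL, EAR = 1, 2, 3   # assumed label values

def lowest_point(seg2d, label, left_half_only=False):
    ys, xs = np.nonzero(seg2d == label)
    if left_half_only:                              # restrict to the structure's left half
        keep = xs <= (xs.min() + xs.max()) // 2
        ys, xs = ys[keep], xs[keep]
    i = ys.argmax()                                 # largest row index = lowest point
    return int(xs[i]), int(ys[i])

def foot_of_perpendicular(seg2d):
    """Midpoint of the chord the connecting line cuts on the condyloid process."""
    ax, ay = lowest_point(seg2d, EAR)                            # lowest ear-canal point
    bx, by = lowest_point(seg2d, TEMPORAL, left_half_only=True)  # lowest left temporal point
    t = np.linspace(0.0, 1.0, 512)                               # sample the connecting line
    px = np.round(ax + t * (bx - ax)).astype(int)
    py = np.round(ay + t * (by - ay)).astype(int)
    hit = np.nonzero(seg2d[py, px] == CONDYLE)[0]                # samples on the condyle
    if hit.size == 0:
        raise ValueError("connecting line does not intersect the condyloid process")
    mid = (hit[0] + hit[-1]) // 2
    return int(px[mid]), int(py[mid])

# The perpendicular through this foot point splits the area above the connecting line
# into two halves; the trisection lines of each half, together with the condyloid
# process and temporal bone contours, enclose the three closed regions per half.
```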
Fig. 4 shows a schematic diagram of the temporomandibular joint gap measurement regions according to an embodiment, in which the measurement regions of the temporomandibular joint gap are found in the segmentation map and the area measurement is performed: region (1) is the anterior gap area of the temporomandibular joint and region (2) is the posterior gap area of the temporomandibular joint; the sagittal plane with the largest temporomandibular joint gap area in the segmentation map is selected as the two-dimensional image to be measured.
And S6, performing area measurement on the area to be measured obtained in the step S5 to obtain the anterior gap area and the posterior gap area of the temporomandibular joint gap.
The method comprises the following steps:
s61, according to the area to be measured of the anterior gap area obtained in the step S5, making a minimum circumscribed rectangle, calculating the number of pixels in the rectangle as the number of points of the anterior gap area of the joint gap, and multiplying the area of a single pixel to obtain the anterior gap area of the temporomandibular joint gap, wherein the calculation formula is as follows:
S front part =N Front part *S Pixel arrangement (3)
wherein ,
S front part Is the anterior gap area of the temporomandibular joint space,
N front part The number of pixels in the circumscribed rectangle is the smallest for the region to be measured of the front gap area,
S pixel arrangement Is the area of a single pixel point;
s62, according to the area to be measured of the back gap area obtained in the step S5, making a minimum circumscribed rectangle, calculating the number of pixels in the rectangle as the number of points of the back gap area of the joint gap, and multiplying the area of a single pixel to obtain the back gap area of the temporomandibular joint gap, wherein the calculation formula is as follows:
S rear part (S) =N Rear part (S) *S Pixel arrangement (4)
wherein ,
S rear part (S) Is the posterior gap area of the temporomandibular joint space,
N rear part (S) The number of pixels in the bounding rectangle is the smallest for the region to be measured of the back gap area,
S pixel arrangement Is the area of a single pixel.
The anterior and posterior areas of the temporomandibular joint gap are calculated for each slice to be measured, the measurable slice with the largest measurement result is selected as the final measurement slice, and the measurement result is output; the area measurement result is shown in fig. 5.
The input image is one-hot encoded, and in each one-hot map the contours of the objects are searched; the obtained contours are preprocessed by filtering out objects whose area is too large or too small or whose perimeter is too long or too short, and the internal holes of the filtered image are filled to obtain a preprocessed image. The measurement range is then selected in the preprocessed image: the anchor points are found; the slope and intercept of the line connecting the anchor points are calculated to represent the connecting line; the slopes and intercepts of the perpendicular and of the trisection lines are calculated in turn; the intersections of the trisection lines with the contour lines of the condyloid process and the temporal bone are calculated; from these intersections the circumscribed rectangles of the anterior and posterior regions of the temporomandibular joint gap are obtained; and the number of background points inside each rectangle, i.e. the number of points of the joint gap area, is counted and multiplied by the pixel-to-square-millimetre conversion to obtain the real physical area.
The measured regions are labeled in the CBCT image and the segmentation map, and the labeled maps are output together with the measurement results as a picture. In fig. 5, slice is the measured sagittal plane, i.e. the sagittal plane with the largest temporomandibular joint gap area; area_1 is the anterior area of the temporomandibular joint space, where 110/6.875 mm² indicates that this area comprises 110 pixels with a physical area of 6.875 square millimetres; area_2 is the posterior area of the temporomandibular joint space, where 132/8.25 mm² indicates that this area comprises 132 pixels with a physical area of 8.25 square millimetres. Fig. 5 (a) and fig. 5 (b) are enlarged partial views of the labeled anterior and posterior gap areas of the temporomandibular joint in the segmentation map, fig. 5 (c) is a complete view of the temporomandibular joint gap in the segmentation map, and fig. 5 (d) labels the key points, auxiliary lines and measurement regions of the temporomandibular joint gap in the CBCT image.
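Consistent with the figure, 110 × 0.0625 = 6.875 and 132 × 0.0625 = 8.25, so the per-pixel area in this example is evidently 0.0625 mm² (a 0.25 mm pixel pitch); that inference, and the sketch below, are illustrative rather than part of the invention:

```python
import numpy as np
import cv2

def gap_area(region_mask, pixel_mm2=0.0625):
    """Count the gap pixels inside the region's minimum bounding rectangle and
    convert to physical area. `region_mask` is a binary mask of one region."""
    x, y, w, h = cv2.boundingRect(region_mask.astype(np.uint8))
    n = int(np.count_nonzero(region_mask[y:y + h, x:x + w]))   # N_front or N_rear
    return n, n * pixel_mm2                                    # (pixel count, area in mm^2)

# e.g. a mask of 110 pixels -> (110, 6.875), matching area_1 in fig. 5.
```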
In summary, the method of the invention uses VNet for image segmentation, performs sagittal-plane screening and localization of the regions to be measured on the basis of the segmentation map, measures the anterior and posterior areas of the temporomandibular joint gap on the selected slice, and labels the corresponding measurement regions in the CBCT image and the segmentation map, thereby solving the problems of automatically measuring and labeling the temporomandibular joint gap area. By realizing image segmentation and automatic measurement, the workload of stomatologists and the erroneous manual segmentation and measurement caused by differences in clinical experience are alleviated to a certain extent.
Although illustrative embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above embodiments are merely illustrative and are not intended to limit the scope of the invention. Various changes and modifications may be made by one of ordinary skill in the art without departing from the scope and spirit of the invention, and all such changes and modifications are intended to be included within the scope of the invention as set forth in the appended claims. In the description provided herein, numerous specific details are set forth; however, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure this description.
Those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The foregoing description is merely illustrative of specific embodiments of the present invention and the scope of the present invention is not limited thereto, and any person skilled in the art can easily think about variations or substitutions within the scope of the present invention. The protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. An automatic temporomandibular joint gap area measurement method based on deep learning, characterized by comprising the following steps:
s0, acquiring an original CBCT image;
step S1, preprocessing the original CBCT image acquired in the step S0;
s2, inputting the image obtained in the step S1 into a VNet neural network to generate a three-dimensional segmentation map of the left temporomandibular joint gap;
s3, selecting the sagittal plane direction of the three-dimensional segmentation map obtained in the step S2 to obtain a segmentation map sagittal plane;
s4, screening and filtering the segmentation-map sagittal planes obtained in the step S3 to obtain candidate slice images;
step S5, carrying out region division on the candidate slice image obtained in the step S4, and determining the region to be measured for the anterior gap area and the region to be measured for the posterior gap area;
and S6, performing area measurement on the area to be measured obtained in the step S5 to obtain the anterior gap area and the posterior gap area of the temporomandibular joint gap.
2. The method for automatically measuring the temporomandibular joint gap area based on deep learning according to claim 1, wherein in the step S2, the VNet neural network is trained by:
s21, acquiring an original CBCT image;
s22, preprocessing the original CBCT image acquired in the S21;
s23, carrying out data enhancement on the CBCT image preprocessed in the S22;
s24, constructing a VNet neural network;
s25, training the VNet neural network constructed in the S24 by adopting the image enhanced by the data in the S23.
3. The automatic temporomandibular joint gap area measurement method based on deep learning according to claim 2, wherein the preprocessing is to sequentially perform resampling, HU value truncation and z-score normalization on the image.
4. The automatic temporomandibular joint gap area measuring method based on deep learning according to claim 2, wherein the data enhancement is to sequentially perform image rotation, image scaling, elastic deformation, coordinate axis flipping, and intensity change on the image.
5. The automatic temporomandibular joint gap area measurement method based on deep learning according to claim 2, wherein the VNet neural network is composed of four encoders and four decoders connected in sequence, the encoders employ convolution and downsampling operations, the decoders employ convolution and upsampling operations, and semantic information of the encoders is transferred to the network structure of the decoders through skip connections.
6. The automatic temporomandibular joint gap area measurement method based on deep learning according to claim 5, wherein the VNet neural network is trained by using a Dice loss function and a CE loss function, and a loss function formula formed by the Dice loss function and the CE loss function is as follows:
L = L_{Dice} + L_{CE} = \left(1 - \frac{2\sum_{n=1}^{N}\sum_{c=1}^{C} p_{n,c}\,q_{n,c}}{\sum_{n=1}^{N}\sum_{c=1}^{C}\left(p_{n,c}+q_{n,c}\right)}\right) - \frac{1}{N}\sum_{n=1}^{N}\sum_{c=1}^{C} p_{n,c}\log q_{n,c}    (1)

wherein:
L_{Dice} is the Dice loss function between the real data and the predicted data;
L_{CE} is the cross-entropy loss function between the real data and the predicted data;
N is the number of samples;
p is the real data;
q is the predicted data;
C is the number of detection categories.
7. The automatic temporomandibular joint gap area measurement method based on deep learning according to claim 5, wherein parameters of the VNet neural network are trained by using an RMSprop optimizer, the RMSprop optimizer has an initial learning rate of 0.0001, and the learning rate has the following formula:
lr' = lr_{init} \times \left(1 - \frac{epoch}{MAX\_EPOCH}\right)    (2)

wherein:
lr' is the updated learning rate;
lr_{init} is the initial learning rate;
epoch is the current round;
MAX_EPOCH is the maximum number of training rounds.
8. The method for automatically measuring the temporomandibular joint gap area based on deep learning according to claim 1, wherein in the step S4, the screening and filtering of the segmentation-map sagittal planes specifically includes:
s41, searching the highest point P at the top of the condyloid process in a segmented image sagittal plane, translating the ordinate of the P point downwards by 13 pixel points to obtain P ', translating the P ' leftwards by 20 pixel points to obtain P1, translating the P ' rightwards by 20 pixel points to obtain P2, taking the pixel points in a line segment P1P2 as reference points, recording the number of the pixel points belonging to the condyloid process in the reference points as the width of the top of the condyloid process, and screening out the segmented image sagittal plane with the width larger than 25 pixels;
s42, performing one-hot coding on the sagittal plane of the segmentation map screened in the S41 to obtain a single-channel map of the background, the condyloid process, the temporal bone and the external auditory canal;
s43, searching a normal contour in each single-channel chart by setting an upper limit value and a lower limit value of a contour area and an upper limit value and a lower limit value of a contour perimeter in each single-channel chart;
s44, carrying out contour searching, filtering and filling on the normal contour in each single-channel image to obtain a normal image of each single channel;
s45, combining the normal images of each single channel obtained in the S44 into a complete segmentation image sagittal plane serving as an alternative section image.
9. The automatic temporomandibular joint gap area measurement method based on deep learning according to claim 1, wherein in the step S5, the region division of the candidate slice image specifically includes the following steps:
s51, searching the lowest point of the external auditory canal and the lowest point on the left side of the temporal bone on the alternative section image, wherein a connecting line of the two lowest points intersects with the condyloid process at two points, the two points form a line segment, and the midpoint of the line segment is used as a foot drop;
s52, making a vertical line at the position of the foot, wherein the vertical line divides the area on the upper side of the connecting line into two half areas;
s53, respectively making trisection lines of two half areas, wherein the trisection lines, the condyloid process and the temporal bone in each half area enclose joint gaps into three closed areas;
and S54, taking the central areas of the three closed areas in the half area close to the temporal bone as the area to be measured of the anterior gap area, and taking the central areas of the three closed areas in the half area close to the external auditory meatus as the area to be measured of the posterior gap area.
10. The automatic temporomandibular joint gap area measurement method according to claim 1, wherein in step S6, the area measurement is performed on the area to be measured, specifically:
s61, according to the area to be measured of the anterior gap area obtained in the step S5, making a minimum circumscribed rectangle, calculating the number of pixels in the rectangle as the number of points of the anterior gap area of the joint gap, and multiplying the area of a single pixel to obtain the anterior gap area of the temporomandibular joint gap, wherein the calculation formula is as follows:
S front part =N Front part *S Pixel arrangement (3)
wherein ,
S front part Is the anterior gap area of the temporomandibular joint space,
N front part The number of pixels in the circumscribed rectangle is the smallest for the region to be measured of the front gap area,
S pixel arrangement Is the area of a single pixel point;
s62, according to the area to be measured of the back gap area obtained in the step S5, making a minimum circumscribed rectangle, calculating the number of pixels in the rectangle as the number of points of the back gap area of the joint gap, and multiplying the area of a single pixel to obtain the back gap area of the temporomandibular joint gap, wherein the calculation formula is as follows:
S rear part (S) =N Rear part (S) *S Pixel arrangement (4)
wherein ,
S rear part (S) Is the posterior gap area of the temporomandibular joint space,
N rear part (S) The number of pixels in the bounding rectangle is the smallest for the region to be measured of the back gap area,
S pixel arrangement Is the area of a single pixel.
CN202310411589.3A 2023-04-18 2023-04-18 Automatic temporomandibular joint gap area measurement method based on deep learning Active CN116152238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310411589.3A CN116152238B (en) Automatic temporomandibular joint gap area measurement method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310411589.3A CN116152238B (en) Automatic temporomandibular joint gap area measurement method based on deep learning

Publications (2)

Publication Number Publication Date
CN116152238A 2023-05-23
CN116152238B 2023-07-18

Family

ID=86360363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310411589.3A Active CN116152238B (en) Automatic temporomandibular joint gap area measurement method based on deep learning

Country Status (1)

Country Link
CN (1) CN116152238B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040204760A1 (en) * 2001-05-25 2004-10-14 Imaging Therapeutics, Inc. Patient selectable knee arthroplasty devices
US20080101721A1 (en) * 2006-10-25 2008-05-01 Sanyo Electric Co., Ltd. Device and method for image correction, and image shooting apparatus
CN108025155A (en) * 2015-09-23 2018-05-11 瑞思迈有限公司 The patient interface of structure is formed including the sealing with different-thickness
CN108136149A (en) * 2015-09-23 2018-06-08 瑞思迈有限公司 The patient interface of structure is formed including sealing with different thickness
CN110895818A (en) * 2019-10-16 2020-03-20 南京大学 Knee joint contour feature extraction method and device based on deep learning
CN115515614A (en) * 2020-02-06 2022-12-23 尤尼根公司 Compositions comprising extracts of alpinia and other plants for improving joint health and treating arthritis
CN113177915A (en) * 2021-04-20 2021-07-27 中国科学院高能物理研究所 Tibial plateau caster angle measuring method and device and storage medium
CN114017347A (en) * 2021-11-27 2022-02-08 长沙中联泵业股份有限公司 Sectional type multistage centrifugal pump without balancing device
CN115530762A (en) * 2022-10-11 2022-12-30 南京瑞德医疗科技有限公司 CBCT temporomandibular joint automatic positioning method and system
CN115578406A (en) * 2022-12-13 2023-01-06 四川大学 CBCT jaw bone region segmentation method and system based on context fusion mechanism

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118039164A (en) * 2024-04-11 2024-05-14 四川大学 Temporomandibular joint bone data processing method and processing terminal based on skull lateral position plate

Also Published As

Publication number Publication date
CN116152238B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
US11464467B2 (en) Automated tooth localization, enumeration, and diagnostic system and method
US11443423B2 (en) System and method for constructing elements of interest (EoI)-focused panoramas of an oral complex
CN103607951B (en) Image processing apparatus and image processing method
CN107563383A (en) A kind of medical image auxiliary diagnosis and semi-supervised sample generation system
WO2022095612A1 (en) Method and system for extracting carotid artery vessel centerline in magnetic resonance image
CN116152238B (en) Temporal-mandibular joint gap area automatic measurement method based on deep learning
US20220084267A1 (en) Systems and Methods for Generating Quick-Glance Interactive Diagnostic Reports
Beaudet et al. Upper third molar internal structural organization and semicircular canal morphology in Plio-Pleistocene South African cercopithecoids
CN111402254A (en) CT image pulmonary nodule high-performance automatic detection method and device
CN112529909A (en) Tumor image brain region segmentation method and system based on image completion
Sheng et al. Transformer-based deep learning network for tooth segmentation on panoramic radiographs
CN112950595B (en) Human body part segmentation method and system based on SPECT imaging
CN115587977A (en) Alzheimer disease pathological area positioning and classification prediction method
CN110782427A (en) Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution
CN114638852A (en) Jaw bone and soft tissue identification and reconstruction method, device and medium based on CBCT image
Michel et al. Online brain attenuation correction in PET: towards a fully automated data handling in a clinical environment
CN104000618A (en) Breathing movement gating correction technology implemented with ring true photon number gating method
CN115375560B (en) Reconstruction method and system of 3D-DSA image
CN116402756A (en) X-ray film lung disease screening system integrating multi-level characteristics
CN115100306A (en) Four-dimensional cone-beam CT imaging method and device for pancreatic region
US20070053570A1 (en) Image processing method, and computer-readable recording medium in which image processing program is recorded
Chen et al. Automatic and visualized grading of dental caries using deep learning on panoramic radiographs
CN112967295A (en) Image processing method and system based on residual error network and attention mechanism
CN112734740A (en) Method for training target detection model, method for detecting target and device thereof
CN117830317B (en) Automatic orthodontic detection method and system based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant