CN112085028A - Tooth panorama semantic segmentation method based on feature map perturbation and boundary supervision - Google Patents
- Publication number
- CN112085028A (application number CN202010894993.7A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- tooth
- inputting
- disturbance
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V10/267—Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T5/73
- G06T7/11—Region-based segmentation
- G06T7/13—Edge detection
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30036—Dental; Teeth
Abstract
The invention discloses a tooth panorama semantic segmentation method based on feature map perturbation and boundary supervision. After a tooth panorama is acquired, it is sharpened to obtain a panorama with clearer tooth boundaries; a trained perturbation feature map extraction network then performs feature extraction on the sharpened panorama to obtain a deep perturbation feature map. Finally, the deep perturbation feature map is input into the trained mask network and boundary network respectively, yielding a tooth region segmentation result and a tooth contour segmentation result. The invention greatly enhances the generalization ability of the network, so that when the trained model encounters a special case it can still exploit the features that case shares with ordinary cases to obtain a more reasonable segmentation result.
Description
Technical Field
The invention relates to the field of medical image processing, and in particular to a tooth panorama semantic segmentation method based on feature map perturbation and boundary supervision.
Background
China's oral healthcare resources are in short supply: the number of dentists is severely insufficient, regional development is unbalanced, and the development of domestic oral medical devices and equipment lacks momentum. According to the 2019 China Oral Industry Trends Report, the World Health Organization (WHO) recommends a dentist-to-population ratio of 1:5000, rising to 1:2000 for developed countries. At present the ratio in China is below 1:8000, far lower than both the level of other countries and the WHO recommendation. In the developed eastern and northern regions, oral healthcare has grown quickly and the number of practicing dentists has risen markedly; in urban Beijing the dentist-to-resident ratio is about 1:2000, comparable to developed countries, but it is only about 1:8000 in the suburbs and 1:20000 to 1:30000 in western China. This regional imbalance is a major problem facing the country. Although the number of dentists per ten thousand people has risen slightly in recent years, the demand for oral diagnosis and treatment remains far from met. Beyond the shortage in numbers, the overall educational level of dentists is also low: according to statistics, as of 2015 only about 45% of Chinese dentists held an undergraduate degree or above, and most of them were concentrated in public or high-end medical institutions in large and medium-sized cities, while a considerable share of those working in oral healthcare and related fields had received no formal oral medicine education or only junior-college-level training.
In addition, even in public oral medical institutions with high public trust, patient volume far exceeds normal capacity. Doctors' workloads are heavy, so they can often address only the patient's chief complaint, while oral problems outside the chief complaint are overlooked, delaying or missing treatment. Moreover, differences in doctors' skill levels make some oral diseases prone to missed diagnosis or misdiagnosis. If artificial intelligence (AI) could therefore screen the full panorama in advance and automatically issue a preliminary diagnosis report, the efficiency and accuracy of oral disease diagnosis could be improved, and missed and erroneous diagnoses reduced. Segmentation of the teeth in the panorama is the basis for detecting all dental diseases.
Patent CN109949319A, "Deep-learning-based panoramic-film permanent tooth recognition method and apparatus", filed 2019-03-12, describes a method and apparatus in which an alveolar bone line segmentation model first produces an alveolar bone line segmentation result, image patches of the peri-dental region are cropped from the original panorama according to that result, and the patches are then fed into a deep-learning-based permanent tooth segmentation model to obtain a permanent tooth segmentation result and tooth position numbers.
Patent CN110473243A, "Tooth segmentation method, apparatus and computer equipment based on deep contour perception", filed 2019-08-09, describes a method in which a contour mask is extracted from the original mask by morphological processing and thickened; the thickened contour mask then serves as supervision to train a fully convolutional network on the preprocessed original tooth image by minimizing a loss function, yielding a contour prediction probability map. The preprocessed tooth image and the contour prediction probability map are then fused and passed through a U-shaped deep contour perception network supervised by the original mask to obtain the tooth segmentation result map.
In terms of enhancing model generalization and exploiting boundary information, the prior art lacks a targeted strategy for improving generalization, pays insufficient attention to boundary information, and does not treat boundary information and mask information on an equal footing, so the extracted features lack universality and the segmentation results are poor.
Disclosure of Invention
The application aims to provide a tooth panorama semantic segmentation method based on feature map perturbation and boundary supervision, solving the prior-art problem that, when a special case is encountered, segmentation cannot exploit the features the case shares with ordinary cases.
In order to achieve the purpose, the technical scheme of the application is as follows:
A tooth panorama semantic segmentation method based on feature map perturbation and boundary supervision comprises the following steps:
obtaining a tooth panorama and sharpening it to obtain a tooth panorama I with clearer tooth boundaries;
inputting the tooth panorama I into the trained perturbation feature map extraction network to obtain a deep perturbation feature map F_deep;
inputting the deep perturbation feature map F_deep into the trained mask network and the trained boundary network respectively to obtain a tooth region segmentation result and a tooth contour segmentation result.
Further, the obtaining a tooth panorama and sharpening it to obtain a tooth panorama I with clearer tooth boundaries includes:
inputting the original tooth panorama I_original and applying a sharpening filter with a 3×3 kernel to each tooth panorama to obtain the sharpened tooth panorama I.
Further, inputting the tooth panorama I into the trained perturbation feature map extraction network for feature extraction to obtain the deep perturbation feature map F_deep includes:
step 2.1, inputting the tooth panorama I into a simple feature extraction module with 3×3 convolution kernels to obtain an output feature map F1 of dimension C1×H1×W1;
step 2.2, pooling the feature map F1 and inputting it to a perturbation feature extraction module with 3×3 convolution kernels to obtain a feature map F2 of dimension C2×H2×W2;
step 2.3, pooling the feature map F2 and inputting it to a perturbation feature extraction module with 3×3 convolution kernels to obtain a feature map F3 of dimension C3×H3×W3;
step 2.4, pooling the feature map F3 and inputting it to a perturbation feature extraction module with 3×3 convolution kernels to obtain a feature map F4 of dimension C4×H4×W4;
step 2.5, pooling the feature map F4 and inputting it to a perturbation feature extraction module with 3×3 convolution kernels to obtain the deep perturbation feature map F_deep of dimension C5×H5×W5.
Further, inputting the deep perturbation feature map F_deep into the trained mask network and boundary network respectively to obtain the tooth region segmentation result and tooth contour segmentation result includes:
step 3.1, in the mask network, upsampling the deep perturbation feature map F_deep and inputting it together with F4 into a channel fusion module to obtain a channel-wise concatenated feature map with C5+C4 channels, then inputting it to a simple feature extraction module to obtain a feature map UP4 of dimension C4×H4×W4;
step 3.2, upsampling the feature map UP4 and inputting it together with F3 into a channel fusion module to obtain a channel-wise concatenated feature map with C4+C3 channels, then inputting it to a simple feature extraction module to obtain a feature map UP3 of dimension C3×H3×W3;
step 3.3, upsampling the feature map UP3 and inputting it together with F2 into a channel fusion module to obtain a channel-wise concatenated feature map with C3+C2 channels, then inputting it to a simple feature extraction module to obtain a feature map UP2 of dimension C2×H2×W2;
step 3.4, upsampling the feature map UP2 and inputting it together with F1 into a channel fusion module to obtain a channel-wise concatenated feature map with C2+C1 channels, then inputting it to a simple feature extraction module to obtain a feature map UP1 of dimension C1×H1×W1;
step 3.5, inputting the feature map UP1 to a 1×1 convolution block to obtain a feature map UP0 of dimension 32×H1×W1, where 32 corresponds to the 32 different teeth; each channel of UP0 is activated with the sigmoid function to obtain the probability that each pixel of UP0 belongs to a tooth region, and this probability is multiplied by 255 to obtain the final segmentation results for the 32 teeth;
step 3.6, applying the operations of steps 3.1 to 3.5 likewise in the boundary network, finally outputting the tooth contour segmentation result.
Further, the simple feature extraction module comprises two serially connected groups, each consisting of a convolution layer with 3×3 kernels, a batch normalization layer and an activation layer.
Further, the perturbation feature extraction module comprises two serially connected groups, each consisting of a convolution layer with 3×3 kernels, a feature perturbation operation, a batch normalization layer and an activation layer.
Further, the channel fusion module concatenates the upsampled lower-layer feature map and the current-layer feature map along the channel dimension, outputting a feature map whose spatial size is unchanged and whose number of channels is the sum of the channel counts of the two inputs.
Further, the feature perturbation operation perturbs the feature map using a formula of the form
f̃(x_i) = f(x_i) ⊙ (1 − λ_i·m_i)
where x_i is the feature map input to the i-th layer, f(x_i) and f̃(x_i) denote the feature map before and after perturbation respectively, m_i consists of 0s and 1s drawn from a Bernoulli distribution, λ_i controls the perturbation amplitude and is optimized automatically during training, and ⊙ denotes element-wise multiplication of the matrices.
According to the tooth panorama semantic segmentation method based on feature map perturbation and boundary supervision, on one hand, the perturbation feature extraction module perturbs the feature maps during feature extraction, so the perturbed feature maps lack part of the feature information and the neural network must learn to produce segmentation results from feature maps with partially missing features; this greatly enhances the generalization ability of the network, so that when a special case is encountered, a reasonable segmentation result can still be obtained from the features the case shares with ordinary cases. On the other hand, the introduction of the boundary network, which directly learns the characteristics of region boundaries, makes the boundaries of segmented regions easier to find and improves segmentation in the presence of intra-class difference and inter-class similarity.
Drawings
FIG. 1 is a flowchart of a tooth panorama semantic segmentation method based on feature map perturbation and boundary supervision according to the present application;
FIG. 2 is a schematic diagram of a network structure according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a simple feature extraction module architecture of the present application;
FIG. 4 is a schematic structural diagram of a disturbance feature extraction module of the present application;
fig. 5 is a schematic structural diagram of a channel fusion module according to the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a method for semantic segmentation of a tooth panorama based on feature map perturbation and boundary supervision is provided, which comprises:
Step S1, obtaining the tooth panorama and sharpening it to obtain a tooth panorama I with clearer tooth boundaries.
The application performs the necessary preprocessing on the acquired tooth panorama. Obtaining the tooth panorama and sharpening it to obtain a tooth panorama I with clearer tooth boundaries includes:
inputting the original tooth panorama I_original and applying a sharpening filter with a 3×3 kernel to each tooth panorama to obtain the sharpened tooth panorama I.
It should be noted that the original tooth panorama may also be used directly for subsequent processing without sharpening, and the filter kernel may be set to 5×5 or 7×7 as needed.
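The sharpening step can be sketched as a plain 2-D convolution. The patent fixes only the kernel size (3×3), not its coefficients, so the common sharpening kernel below and the function name are illustrative assumptions:

```python
import numpy as np

# A common 3x3 sharpening kernel (the patent fixes only the kernel
# size, not its values, so these coefficients are an assumption).
SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=np.float64)

def sharpen(panorama, kernel=SHARPEN_KERNEL):
    """Convolve a grayscale panorama with a sharpening kernel (zero padding)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(panorama.astype(np.float64), ((ph, ph), (pw, pw)))
    out = np.zeros(panorama.shape, dtype=np.float64)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + panorama.shape[0],
                                           dx:dx + panorama.shape[1]]
    return np.clip(out, 0, 255)  # keep the result in the 8-bit range
```

Because the kernel sums to 1, flat regions are unchanged while intensity edges are amplified, which is exactly the "clearer tooth boundary" effect described above.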
Step S2, inputting the tooth panorama I into the trained perturbation feature map extraction network for feature extraction to obtain the deep perturbation feature map F_deep.
Performing feature extraction on the tooth panorama I to obtain the deep perturbation feature map F_deep includes:
step 2.1, inputting the tooth panorama I into a simple feature extraction module with 3×3 convolution kernels to obtain an output feature map F1 of dimension C1×H1×W1;
step 2.2, pooling the feature map F1 and inputting it to a perturbation feature extraction module with 3×3 convolution kernels to obtain a feature map F2 of dimension C2×H2×W2;
step 2.3, pooling the feature map F2 and inputting it to a perturbation feature extraction module with 3×3 convolution kernels to obtain a feature map F3 of dimension C3×H3×W3;
step 2.4, pooling the feature map F3 and inputting it to a perturbation feature extraction module with 3×3 convolution kernels to obtain a feature map F4 of dimension C4×H4×W4;
step 2.5, pooling the feature map F4 and inputting it to a perturbation feature extraction module with 3×3 convolution kernels to obtain the deep perturbation feature map F_deep of dimension C5×H5×W5.
As shown in fig. 2, the perturbation feature map extraction network of the present application consists of one simple feature extraction module (CBR) followed by four perturbation feature extraction modules (CDBR). In other embodiments the network may adopt other stacking structures, for example three simple feature extraction modules followed by two perturbation feature extraction modules.
Compared with an unperturbed feature map, the perturbed deep feature map retains more of the features common across samples, which helps improve the generalization ability of the network.
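As a shape check for the encoder described above (one CBR module, then four pool-and-CDBR stages), the following sketch tracks the (C, H, W) dimensions through the network. The patent leaves C1 through C5 unspecified, so the U-Net-style channel doubling from a base of 64 is an assumption:

```python
# Shape bookkeeping for the encoder of fig. 2: one CBR module followed
# by four pool+CDBR stages. Each 2x2 pooling halves the spatial size;
# the channel doubling (64, 128, 256, 512, 1024) is an assumption.
def encoder_shapes(h, w, base_channels=64, stages=4):
    shapes = [(base_channels, h, w)]      # F1 from the CBR module
    c = base_channels
    for _ in range(stages):               # each stage: 2x2 pool, then CDBR
        c, h, w = c * 2, h // 2, w // 2
        shapes.append((c, h, w))
    return shapes                          # [F1, F2, F3, F4, F_deep]
```

For a 512×1024 panorama this yields F1 = (64, 512, 1024) down to F_deep = (1024, 32, 64), matching the C1×H1×W1 … C5×H5×W5 notation of steps 2.1 to 2.5.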
The simple feature extraction module, as shown in fig. 3, comprises two serially connected groups, each consisting of a convolution layer (conv3×3) with 3×3 kernels, a batch normalization layer (BN) and an activation layer (ReLU).
The input feature map first passes through the first convolution layer, batch normalization and ReLU activation; it then passes through the second group of convolution, batch normalization and ReLU layers, and the processed feature map is output.
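One conv-BN-ReLU group of this module can be sketched in NumPy as follows (fig. 3 chains two such groups). Passing the weights in explicitly and normalizing per channel without learned scale/shift parameters are simplifications, not the patent's exact training setup:

```python
import numpy as np

def conv3x3(x, w):
    """x: (C_in, H, W); w: (C_out, C_in, 3, 3); zero ('same') padding."""
    c_in, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for co in range(w.shape[0]):
        for ci in range(c_in):
            for dy in range(3):
                for dx in range(3):
                    out[co] += w[co, ci, dy, dx] * xp[ci, dy:dy + h, dx:dx + wd]
    return out

def bn_relu(x, eps=1e-5):
    """Per-channel normalization followed by ReLU."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return np.maximum((x - mean) / np.sqrt(var + eps), 0.0)

def cbr_group(x, w):
    """One conv3x3 -> BN -> ReLU group of the simple feature extraction module."""
    return bn_relu(conv3x3(x, w))
```

The spatial size is preserved by the 'same' padding, so only the channel count changes, consistent with the C×H×W bookkeeping in steps 2.1 to 2.5.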
The perturbation feature extraction module, as shown in fig. 4, comprises two serially connected groups, each consisting of a convolution layer (conv3×3) with 3×3 kernels, a feature perturbation operation, a batch normalization layer (BN) and an activation layer (ReLU).
The input feature map first passes through the first convolution layer, is perturbed by the feature perturbation operation, and then undergoes batch normalization and ReLU activation; it then passes through the second group of convolution, feature perturbation, batch normalization and ReLU layers, and the processed feature map is output.
The feature perturbation operation perturbs the feature map using a formula of the form
f̃(x_i) = f(x_i) ⊙ (1 − λ_i·m_i)
where x_i is the feature map input to the i-th layer of the network, f(x_i) and f̃(x_i) denote the feature map before and after perturbation respectively, m_i consists of 0s and 1s drawn from a Bernoulli distribution, λ_i controls the perturbation amplitude and is optimized automatically during training, and ⊙ denotes element-wise multiplication of the matrices.
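A minimal NumPy sketch of this operation follows. The patent defines the symbols (Bernoulli mask m_i, learned amplitude λ_i, element-wise product) but shows the formula only as an image, so the multiplicative masking form used here, the `drop_prob` parameter, and the function name are assumptions:

```python
import numpy as np

def perturb(feature_map, lam, drop_prob=0.1, rng=None):
    """Attenuate a Bernoulli-selected subset of feature-map entries.

    feature_map: array of shape (C, H, W)
    lam:         perturbation amplitude (a learned parameter in the patent)
    drop_prob:   Bernoulli probability that an entry is perturbed (assumed)
    """
    if rng is None:
        rng = np.random.default_rng()
    m = rng.binomial(1, drop_prob, size=feature_map.shape)  # 0/1 Bernoulli mask
    return feature_map * (1.0 - lam * m)  # masked entries lose information
```

With lam near 1 the masked entries are effectively erased, forcing the network to learn from feature maps with partially missing features, which is the mechanism the advantage paragraph describes.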
It should be noted that the convolution kernels of the simple feature extraction module and the perturbation feature extraction module are 3×3 here, and may also be set to 5×5 or 7×7 as needed.
Step S3, inputting the deep perturbation feature map F_deep into the trained mask network and the trained boundary network respectively to obtain the tooth region segmentation result and the tooth contour segmentation result.
Inputting the deep perturbation feature map F_deep into the trained mask network and boundary network respectively to obtain the tooth region segmentation result and tooth contour segmentation result includes:
step 3.1, in the mask network, upsampling the deep perturbation feature map F_deep and inputting it together with F4 into a channel fusion module to obtain a channel-wise concatenated feature map with C5+C4 channels, then inputting it to a simple feature extraction module to obtain a feature map UP4 of dimension C4×H4×W4;
step 3.2, upsampling the feature map UP4 and inputting it together with F3 into a channel fusion module to obtain a channel-wise concatenated feature map with C4+C3 channels, then inputting it to a simple feature extraction module to obtain a feature map UP3 of dimension C3×H3×W3;
step 3.3, upsampling the feature map UP3 and inputting it together with F2 into a channel fusion module to obtain a channel-wise concatenated feature map with C3+C2 channels, then inputting it to a simple feature extraction module to obtain a feature map UP2 of dimension C2×H2×W2;
step 3.4, upsampling the feature map UP2 and inputting it together with F1 into a channel fusion module to obtain a channel-wise concatenated feature map with C2+C1 channels, then inputting it to a simple feature extraction module to obtain a feature map UP1 of dimension C1×H1×W1;
step 3.5, inputting the feature map UP1 to a 1×1 convolution block to obtain a feature map UP0 of dimension 32×H1×W1, where 32 corresponds to the 32 different teeth; each channel of UP0 is activated with the sigmoid function given below to obtain the probability that each pixel of UP0 belongs to a tooth region, and this probability is multiplied by 255 to obtain the final segmentation results for the 32 teeth;
step 3.6, applying the operations of steps 3.1 to 3.5 likewise in the boundary network, finally outputting the tooth contour segmentation result.
Here sigmoid(x) = 1/(1 + e^(−x)) is the activation function and e is Euler's number.
In the present application, the channel fusion module (Copy), as shown in fig. 5, concatenates the upsampled lower-layer feature map and the current-layer feature map along the channel dimension, outputting a feature map whose spatial size is unchanged and whose number of channels is the sum of the channel counts of the two inputs.
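One decoder step (e.g. step 3.1) combined with the channel fusion module can be sketched as follows. Nearest-neighbour 2× upsampling is an assumption (the patent does not name the upsampling method), and the simple feature extraction module that follows the fusion is omitted:

```python
import numpy as np

def upsample2x(x):
    """x: (C, H, W) -> (C, 2H, 2W) by nearest-neighbour repetition."""
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

def channel_fusion(lower, skip):
    """Concatenate the upsampled lower-layer map with the same-size skip map."""
    up = upsample2x(lower)
    assert up.shape[1:] == skip.shape[1:], "spatial sizes must match"
    return np.concatenate([up, skip], axis=0)   # channels: C_low + C_skip

def sigmoid_to_mask(logits):
    """Step 3.5: per-pixel tooth probability, scaled to the 0-255 range."""
    return 255.0 / (1.0 + np.exp(-logits))
```

Fusing an 8-channel F_deep with a 4-channel F4 thus yields a 12-channel map of F4's spatial size, matching the C5+C4 channel count stated in step 3.1.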
Similarly, the convolution kernel size of the simple feature extraction module in this embodiment is 3 × 3, and may be set to 5 × 5 or 7 × 7 as needed.
In the present application, C is the number of channels and H and W are the height and width of the feature map; the letter subscripts are serial numbers that distinguish the dimensions of different feature maps.
The present application directly learns the characteristics of region boundaries through the boundary network, making the boundaries of segmented regions easier to find and improving segmentation under intra-class difference and inter-class similarity. In some images, two parts of the same semantic region may differ greatly in image features and are easily recognized as two classes of semantic region; this is called intra-class difference. Likewise, image features in different semantic regions may be highly similar, so that the two parts are easily recognized as one semantic region; this is called inter-class similarity. By learning boundary information, correct semantic boundaries can be found more reliably, which improves segmentation when images exhibit intra-class difference or inter-class similarity.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (8)
1. A tooth panorama semantic segmentation method based on feature map perturbation and boundary supervision, characterized by comprising the following steps:
obtaining a tooth panorama and sharpening it to obtain a tooth panorama I with clearer tooth boundaries;
inputting the tooth panorama I into a trained perturbation feature map extraction network for feature extraction to obtain a deep perturbation feature map F_deep;
inputting the deep perturbation feature map F_deep into the trained mask network and the trained boundary network respectively to obtain a tooth region segmentation result and a tooth contour segmentation result.
2. The tooth panorama semantic segmentation method based on feature map perturbation and boundary supervision as claimed in claim 1, wherein the obtaining a tooth panorama and sharpening it to obtain a tooth panorama I with clearer tooth boundaries comprises:
inputting the original tooth panorama I_original and applying a sharpening filter with a 3×3 kernel to each tooth panorama to obtain the sharpened tooth panorama I.
3. The method as claimed in claim 1, wherein the tooth panorama I is input to a trained disturbance feature map extraction network for feature extraction, and a deep disturbance feature map F is obtaineddeepThe method comprises the following steps:
step 2.1, inputting the tooth panoramic image I into a simple feature extraction module with a convolution kernel size of 3 × 3 to obtain an output feature map F1 of dimension C1 × H1 × W1;
step 2.2, inputting the feature map F1 into a disturbance feature extraction module with a convolution kernel size of 3 × 3 to obtain a feature map F2 of dimension C2 × H2 × W2;
step 2.3, inputting the feature map F2 into a disturbance feature extraction module with a convolution kernel size of 3 × 3 to obtain a feature map F3 of dimension C3 × H3 × W3;
step 2.4, inputting the feature map F3 into a disturbance feature extraction module with a convolution kernel size of 3 × 3 to obtain a feature map F4 of dimension C4 × H4 × W4;
step 2.5, inputting the feature map F4 into a disturbance feature extraction module with a convolution kernel size of 3 × 3 to obtain the deep disturbance feature map Fdeep of dimension C5 × H5 × W5.
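The five-stage encoder of claim 3 can be traced with a toy NumPy sketch. The channel counts, the stride-2 downsampling between stages, and the use of a random 1 × 1 projection in place of the real 3 × 3 conv + batch-norm + activation blocks are all illustrative assumptions; only the overall shape flow from F1 to Fdeep mirrors the claim:

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_feature_extraction(x, c_out):
    """Stand-in for the simple feature extraction module: a random 1x1
    channel projection plus ReLU, enough to trace tensor shapes."""
    wmat = rng.standard_normal((c_out, x.shape[0])) * 0.1
    return np.maximum(np.einsum("oc,chw->ohw", wmat, x), 0.0)

def disturbance_feature_extraction(x, c_out, p=0.3, eps=0.1):
    """Stand-in for a disturbance feature extraction module: projection,
    Bernoulli-gated multiplicative perturbation, then 2x downsampling
    (the stride-2 downsampling between stages is an assumption)."""
    y = simple_feature_extraction(x, c_out)
    m = rng.binomial(1, p, size=y.shape)      # Bernoulli mask m_i
    y = y * (1.0 + eps * m)                   # perturb the feature map
    return y[:, ::2, ::2]                     # halve H and W

# Trace the encoder of claim 3 on a toy 1-channel 128x128 panoramic image I.
x = rng.standard_normal((1, 128, 128))
f1 = simple_feature_extraction(x, 16)         # step 2.1: C1 x H1 x W1
f = f1
for c in (32, 64, 128, 256):                  # steps 2.2 - 2.5
    f = disturbance_feature_extraction(f, c)
f_deep = f                                    # deep disturbance map Fdeep
print(f1.shape, f_deep.shape)                 # (16, 128, 128) (256, 8, 8)
```

With four downsampling stages, Fdeep ends up at 1/16 of the input resolution, which is why the decoder in claim 4 needs four matching upsampling steps.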
4. The method for semantic segmentation of tooth panoramic images based on feature map disturbance and boundary supervision as claimed in claim 2, wherein inputting the deep disturbance feature map Fdeep into the trained mask network and boundary network respectively to obtain the tooth region segmentation result and the tooth contour segmentation result comprises:
step 3.1, in the mask network, upsampling the deep disturbance feature map Fdeep and inputting it together with F4 into a channel fusion module to obtain a feature map concatenated along the channel dimension with C5 + C4 channels, then inputting it into a simple feature extraction module to obtain a feature map UP4 of dimension C4 × H4 × W4;
step 3.2, upsampling the feature map UP4 and inputting it together with F3 into a channel fusion module to obtain a feature map concatenated along the channel dimension with C4 + C3 channels, then inputting it into a simple feature extraction module to obtain a feature map UP3 of dimension C3 × H3 × W3;
step 3.3, upsampling the feature map UP3 and inputting it together with F2 into a channel fusion module to obtain a feature map concatenated along the channel dimension with C3 + C2 channels, then inputting it into a simple feature extraction module to obtain a feature map UP2 of dimension C2 × H2 × W2;
step 3.4, upsampling the feature map UP2 and inputting it together with F1 into a channel fusion module to obtain a feature map concatenated along the channel dimension with C2 + C1 channels, then inputting it into a simple feature extraction module to obtain a feature map UP1 of dimension C1 × H1 × W1;
step 3.5, inputting the feature map UP1 into a 1 × 1 convolution block to obtain a feature map UP0 of dimension 32 × H1 × W1, where 32 corresponds to the 32 different teeth; activating each channel of UP0 with a sigmoid function to obtain the probability that each pixel of UP0 belongs to a tooth region, and multiplying by 255 to obtain the final segmentation results for the 32 teeth;
step 3.6, applying the operations of steps 3.1 to 3.5 likewise in the boundary network to finally output the tooth contour segmentation result.
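The decoder steps of claim 4 can likewise be sketched with toy NumPy tensors. Nearest-neighbour upsampling, the random 1 × 1 weights, and the sigmoid activation in step 3.5 are assumptions (the claim's activation formula does not survive in the text); the concatenate-then-project pattern and the 32-channel output do follow the claim:

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a C x H x W feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def channel_fuse(low, skip):
    """Channel fusion (claim 7): concatenate the upsampled lower-layer map
    with the same-resolution encoder map along the channel axis."""
    return np.concatenate([low, skip], axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One decoder stage (e.g. step 3.4): fuse upsampled UP2 with F1.
up2 = rng.standard_normal((32, 4, 4))
f1 = rng.standard_normal((16, 8, 8))
fused = channel_fuse(upsample2x(up2), f1)     # 32 + 16 = 48 channels
# (a simple feature extraction module would then map 48 -> 16 channels)

# Step 3.5: 1x1 convolution to 32 channels (one per tooth), sigmoid, x255.
up1 = rng.standard_normal((16, 8, 8))
w1x1 = rng.standard_normal((32, 16)) * 0.1    # assumed 1x1 conv weights
up0 = np.einsum("oc,chw->ohw", w1x1, up1)     # UP0: 32 x H1 x W1
masks = sigmoid(up0) * 255.0                  # per-tooth probability maps
print(fused.shape, masks.shape)               # (48, 8, 8) (32, 8, 8)
```

Note how fusion leaves the spatial size unchanged while the channel count is the sum of the two inputs, exactly the behaviour claim 7 requires of the channel fusion module.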
5. The method as claimed in claim 4, wherein the simple feature extraction module comprises two cascaded convolutional layers with a convolution kernel size of 3 × 3, a batch normalization layer and an activation layer.
6. The method as claimed in claim 3, wherein the disturbance feature extraction module comprises two cascaded convolutional layers with a convolution kernel size of 3 × 3, a feature map disturbance operation, a batch normalization layer and an activation layer.
7. The method as claimed in claim 4, wherein the channel fusion module is configured to concatenate the upsampled lower-layer feature map and the current-layer feature map along the channel dimension, and to output a feature map whose spatial size is unchanged and whose number of channels is the sum of the channel counts of the lower-layer and current-layer feature maps.
8. The method for semantic segmentation of tooth panoramic images based on feature map disturbance and boundary supervision as claimed in claim 6, wherein the feature map disturbance operation perturbs the feature map according to a formula of the form f̃(xi) = f(xi) ⊙ (1 + mi ⊙ εi), wherein xi is the input feature map, f(xi) and f̃(xi) respectively denote the feature map before and after the disturbance, mi consists of 0s and 1s and follows a Bernoulli distribution, εi controls the disturbance amplitude and its parameter values are optimized automatically during training, and ⊙ denotes element-wise multiplication of the matrices.
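Claim 8's disturbance operation can be sketched as follows. Since the claim's exact formula does not survive in the text, the multiplicative combination f(x) ⊙ (1 + ε · m) is an assumption built from the stated ingredients (a Bernoulli 0/1 mask mi, an amplitude εi, element-wise multiplication), and ε is a fixed scalar here rather than the learned parameter of the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def disturb(f, p=0.5, eps=0.1):
    """Feature-map disturbance: a Bernoulli 0/1 mask m selects which
    activations are perturbed and eps sets the amplitude. The exact
    combination f * (1 + eps * m) is an assumption."""
    m = rng.binomial(1, p, size=f.shape)   # m_i ~ Bernoulli(p)
    return f * (1.0 + eps * m)             # element-wise multiplication

f = np.ones((2, 4, 4))
f_tilde = disturb(f)
# Unperturbed positions keep the value 1.0; perturbed ones become 1.0 + eps.
print(np.unique(f_tilde))
```

Randomly scaling a subset of activations during training acts as a regularizer, similar in spirit to dropout, which is consistent with the patent's goal of making the extracted features robust.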
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010894993.7A CN112085028B (en) | 2020-08-31 | 2020-08-31 | Tooth full-scene semantic segmentation method based on feature map disturbance and boundary supervision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112085028A (en) | 2020-12-15
CN112085028B CN112085028B (en) | 2024-03-12 |
Family
ID=73731245
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010894993.7A Active CN112085028B (en) | 2020-08-31 | 2020-08-31 | Tooth full-scene semantic segmentation method based on feature map disturbance and boundary supervision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112085028B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109087327A (en) * | 2018-07-13 | 2018-12-25 | 天津大学 | A kind of thyroid nodule ultrasonic image division method cascading full convolutional neural networks |
CN109829900A (en) * | 2019-01-18 | 2019-05-31 | 创新奇智(北京)科技有限公司 | A kind of steel coil end-face defect inspection method based on deep learning |
CN110473243A (en) * | 2019-08-09 | 2019-11-19 | 重庆邮电大学 | Tooth dividing method, device and computer equipment based on depth profile perception |
US20200175678A1 (en) * | 2018-11-28 | 2020-06-04 | Orca Dental AI Ltd. | Dental image segmentation and registration with machine learning |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112750111A (en) * | 2021-01-14 | 2021-05-04 | 浙江工业大学 | Method for identifying and segmenting diseases in tooth panoramic picture |
CN112750111B (en) * | 2021-01-14 | 2024-02-06 | 浙江工业大学 | Disease identification and segmentation method in tooth full-view film |
CN114004831A (en) * | 2021-12-24 | 2022-02-01 | 杭州柳叶刀机器人有限公司 | Method for assisting implant replacement based on deep learning and auxiliary intelligent system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110223281A (en) | A kind of Lung neoplasm image classification method when in data set containing uncertain data | |
CN107247971B (en) | Intelligent analysis method and system for ultrasonic thyroid nodule risk index | |
CN112085028A (en) | Tooth panoramic semantic segmentation method based on feature map disturbance and boundary supervision | |
CN112750111B (en) | Disease identification and segmentation method in tooth full-view film | |
CN112164446B (en) | Medical image report generation method based on multi-network fusion | |
Kong et al. | Automated maxillofacial segmentation in panoramic dental x-ray images using an efficient encoder-decoder network | |
CN112837278B (en) | Tooth full-scene caries identification method based on depth boundary supervision | |
CN110415815A (en) | The hereditary disease assistant diagnosis system of deep learning and face biological information | |
CN114332123A (en) | Automatic caries grading method and system based on panoramic film | |
CN111899250B (en) | Remote disease intelligent diagnosis system based on block chain and medical image | |
Lin et al. | Tooth numbering and condition recognition on dental panoramic radiograph images using CNNs | |
CN107958472A (en) | PET imaging methods, device, equipment and storage medium based on sparse projection data | |
CN114299082A (en) | New coronary pneumonia CT image segmentation method, device and storage medium | |
CN112037212A (en) | Pulmonary tuberculosis DR image identification method based on deep learning | |
CN113221945B (en) | Dental caries identification method based on oral panoramic film and dual attention module | |
Kramarz et al. | New remains of Astraponotus (Mammalia, Astrapotheria) and considerations on astrapothere cranial evolution | |
CN113160151B (en) | Panoramic sheet decayed tooth depth identification method based on deep learning and attention mechanism | |
CN113344867B (en) | Periodontitis absorption degree identification method based on near-middle and far-middle key points | |
Kim et al. | Tooth-Related Disease Detection System Based on Panoramic Images and Optimization Through Automation: Development Study | |
CN116758090A (en) | Medical image segmentation method based on multi-scale subtraction | |
CN115482384A (en) | Visible light OCT image retina layer segmentation method and system | |
CN116362995A (en) | Tooth image restoration method and system based on standard prior | |
CN116205925A (en) | Tooth occlusion wing tooth caries segmentation method based on improved U-Net network | |
CN113763236A (en) | Method for dynamically adjusting facial features of commercial short video according to regions | |
CN113379697A (en) | Color image caries identification method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||