CN112085028A - Tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision - Google Patents

Tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision

Info

Publication number
CN112085028A
Authority
CN
China
Prior art keywords
feature map
tooth
inputting
disturbance
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010894993.7A
Other languages
Chinese (zh)
Other versions
CN112085028B (en)
Inventor
吴福理
张凡
郝鹏翼
陈大千
郑宇祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202010894993.7A priority Critical patent/CN112085028B/en
Publication of CN112085028A publication Critical patent/CN112085028A/en
Application granted granted Critical
Publication of CN112085028B publication Critical patent/CN112085028B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth

Abstract

The invention discloses a tooth panoramic image semantic segmentation method based on feature map perturbation and boundary supervision. After a tooth panoramic image is acquired, it is sharpened to obtain an image with clearer tooth boundaries, and a trained perturbation feature map extraction network then performs feature extraction to obtain a deep perturbation feature map. Finally, the deep perturbation feature map is input into the trained mask network and boundary network respectively to obtain a tooth region segmentation result and a tooth contour segmentation result. The invention greatly enhances the generalization ability of the network, so that when the trained model encounters an unusual case it can still obtain a reasonable segmentation result from the common features present in that case.

Description

Tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision
Technical Field
The invention relates to the field of medical image processing, and in particular to a tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision.
Background
The shortage of oral medical resources in China manifests mainly as a serious shortfall in the number of dentists, unbalanced regional development, and insufficient momentum in the development of domestic oral medical instruments and equipment. According to the 2019 China oral industry trend report, the World Health Organization (WHO) recommends a dentist-to-population ratio of 1:5000, rising to 1:2000 for developed countries. At present the ratio in China is below 1:8000, far lower than both the level of other countries and the WHO recommendation. In developed eastern regions, oral medical care has developed quickly and the number of practicing dentists has risen markedly: in urban Beijing the dentist-to-resident ratio is about 1:2000, similar to developed countries, but it is only about 1:8000 in the suburbs and 1:20000 to 1:30000 in western China. This imbalance in regional development is a major problem facing the country. Although the number of dentists per ten thousand people has increased slightly in recent years, it still falls far short of the demand for oral diagnosis and treatment. Beyond the shortfall in numbers, the overall educational level of dentists is low: according to statistics, in 2015 only about 45% of China's dentists held a junior college degree or above, most of them concentrated in public or high-end medical institutions in large and medium-sized cities, while a considerable share of those working in oral care and related fields have received no formal dental education or only elementary vocational training.
In addition, even in public oral medical institutions that enjoy high public trust, patient volume far exceeds normal capacity and doctors' workloads are heavy, so doctors can often only address the patient's chief complaint; oral problems outside the chief complaint are easily overlooked, delaying or missing treatment. On the other hand, some oral diseases are easily missed or misdiagnosed owing to differences in doctors' skill levels. Therefore, if the whole panoramic image could be assessed in advance by means of artificial intelligence (AI) and a preliminary diagnosis report issued automatically, the efficiency and accuracy of oral disease diagnosis could be improved and missed diagnoses and misdiagnoses reduced. Segmentation of the teeth in the panoramic image is the basis for the detection of all dental diseases.
The patent "Method and device for identifying permanent teeth in panoramic images based on deep learning" (application number CN109949319A, filed 2019-03-12) describes a method and device in which an alveolar bone line segmentation model first produces an alveolar bone line segmentation result; image patches of the peri-dental region are then cropped from the original panoramic image according to that result; finally, the patches are input into a deep-learning permanent tooth segmentation model to obtain the permanent tooth segmentation result and tooth position numbers.
The patent "Tooth segmentation method, device and computer equipment based on deep contour perception" (application number CN110473243A, filed 2019-08-09) describes a method in which a contour mask is extracted from the original mask by morphological processing and thickened; using the thickened contour mask as supervision information, a fully convolutional network is trained on the preprocessed original tooth images by minimizing a loss function, yielding a contour prediction probability map. The preprocessed tooth image and the contour prediction probability map are then fused and passed through a U-shaped deep contour-aware network, supervised by the original mask, to obtain the tooth segmentation result.
With respect to enhancing model generalization and exploiting boundary information, the prior art lacks a targeted strategy for improving generalization, pays insufficient attention to boundary information, and does not treat boundary information and mask information on an equal footing; as a result, the extracted features lack universality and the segmentation results are poor.
Disclosure of Invention
The present application aims to provide a tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision, solving the problem that prior-art methods, when encountering an unusual case, cannot exploit the common features that are still present in it.
To achieve this purpose, the technical solution of the present application is as follows:
A tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision comprises the following steps:
obtaining a tooth panoramic image and sharpening it to obtain a tooth panoramic image I with clearer tooth boundaries;
inputting the tooth panoramic image I into the trained perturbation feature map extraction network to obtain a deep perturbation feature map F_deep;
inputting the deep perturbation feature map F_deep into the trained mask network and the trained boundary network respectively, to obtain a tooth region segmentation result and a tooth contour segmentation result.
Further, obtaining the tooth panoramic image and sharpening it to obtain a tooth panoramic image I with clearer tooth boundaries includes:
inputting the original tooth panoramic image I_original and applying a sharpening filter with a 3 × 3 kernel to each tooth panoramic image to obtain the sharpened tooth panoramic image I.
Further, inputting the tooth panoramic image I into the trained perturbation feature map extraction network for feature extraction to obtain the deep perturbation feature map F_deep comprises the following steps:
Step 2.1, inputting the tooth panoramic image I into a simple feature extraction module with a 3 × 3 convolution kernel to obtain an output feature map F1 of dimension C1 × H1 × W1;
Step 2.2, after pooling, inputting feature map F1 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain feature map F2 of dimension C2 × H2 × W2;
Step 2.3, after pooling, inputting feature map F2 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain feature map F3 of dimension C3 × H3 × W3;
Step 2.4, after pooling, inputting feature map F3 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain feature map F4 of dimension C4 × H4 × W4;
Step 2.5, after pooling, inputting feature map F4 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain the deep perturbation feature map F_deep of dimension C5 × H5 × W5.
Further, inputting the deep perturbation feature map F_deep into the trained mask network and boundary network respectively to obtain the tooth region segmentation result and tooth contour segmentation result includes:
Step 3.1, in the mask network, upsampling the deep perturbation feature map F_deep and inputting it together with F4 into the channel fusion module to obtain a channel-wise concatenated feature map with C5 + C4 channels, which is input into a simple feature extraction module to obtain feature map UP4 of dimension C4 × H4 × W4;
Step 3.2, upsampling feature map UP4 and inputting it together with F3 into the channel fusion module to obtain a channel-wise concatenated feature map with C4 + C3 channels, which is input into a simple feature extraction module to obtain feature map UP3 of dimension C3 × H3 × W3;
Step 3.3, upsampling feature map UP3 and inputting it together with F2 into the channel fusion module to obtain a channel-wise concatenated feature map with C3 + C2 channels, which is input into a simple feature extraction module to obtain feature map UP2 of dimension C2 × H2 × W2;
Step 3.4, upsampling feature map UP2 and inputting it together with F1 into the channel fusion module to obtain a channel-wise concatenated feature map with C2 + C1 channels, which is input into a simple feature extraction module to obtain feature map UP1 of dimension C1 × H1 × W1;
Step 3.5, inputting feature map UP1 into a 1 × 1 convolution block to obtain feature map UP0 of dimension 32 × H1 × W1, where 32 corresponds to the 32 different teeth; each channel of UP0 is activated using the following formula to obtain the probability that each pixel belongs to a tooth region, which is multiplied by 255 to obtain the final segmentation results for the 32 teeth;
sigmoid(x) = 1 / (1 + e^(−x))
Step 3.6, applying the operations of steps 3.1 to 3.5 likewise in the boundary network, finally outputting the tooth contour segmentation result.
Further, the simple feature extraction module comprises two serially connected groups, each consisting of a convolution layer with a 3 × 3 kernel, a batch normalization layer and an activation layer.
Further, the perturbation feature extraction module comprises two serially connected groups, each consisting of a convolution layer with a 3 × 3 kernel, a feature perturbation operation, a batch normalization layer and an activation layer.
Further, the channel fusion module is configured to concatenate the upsampled lower-layer feature map and the current-layer feature map along the channel dimension, outputting a feature map whose spatial size is unchanged and whose channel count is the sum of those of the two input feature maps.
Further, the feature perturbation operation perturbs the feature map using the following formula;
f̃(x_i) = f(x_i) ⊙ (1 + ε_i · m_i)
where x_i is the input feature map, f(x_i) and f̃(x_i) denote the feature map before and after perturbation respectively, m_i is a mask consisting of 0s and 1s that follows a Bernoulli distribution, ε_i controls the perturbation amplitude and its value is optimized automatically during training, and ⊙ denotes element-wise multiplication of the matrices.
According to the tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision, on the one hand the feature map is perturbed by the perturbation feature extraction modules during feature extraction, so the perturbed feature map lacks part of the feature information and the neural network learns to produce segmentation results from feature maps with partial features missing; this greatly enhances the generalization ability of the network, so that when an unusual case is encountered a reasonable segmentation result can still be obtained from the common features present in it. On the other hand, because a boundary network is introduced, the features of segmentation region boundaries are learned directly, region boundaries are found more easily, and the segmentation of cases exhibiting intra-class difference and inter-class similarity is improved.
Drawings
FIG. 1 is a flowchart of a tooth panorama semantic segmentation method based on feature map perturbation and boundary supervision according to the present application;
FIG. 2 is a schematic diagram of a network structure according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a simple feature extraction module architecture of the present application;
FIG. 4 is a schematic structural diagram of a perturbation feature extraction module of the present application;
FIG. 5 is a schematic structural diagram of a channel fusion module according to the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a method for semantic segmentation of a tooth panorama based on feature map perturbation and boundary supervision is provided, which comprises:
Step S1, obtaining a tooth panoramic image and sharpening it to obtain a tooth panoramic image I with clearer tooth boundaries.
The present application performs the necessary preprocessing on the acquired tooth panoramic image. Obtaining the tooth panoramic image and sharpening it to obtain a tooth panoramic image I with clearer tooth boundaries includes:
inputting the original tooth panoramic image I_original and applying a sharpening filter with a 3 × 3 kernel to each tooth panoramic image to obtain the sharpened tooth panoramic image I.
It should be noted that in the present application the original tooth panoramic image may also be used directly for subsequent processing without sharpening. The filter kernel may also be set to 5 × 5 or 7 × 7 as needed.
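As an illustrative sketch of this step, a 3 × 3 sharpening filter can be applied with OpenCV. The patent states only that a filter with a 3 × 3 kernel is used, so the Laplacian-style kernel weights and the file name below are assumptions:

```python
import cv2
import numpy as np

# Hypothetical 3x3 sharpening kernel: the patent fixes only the kernel
# size (3x3), not the weights.
SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=np.float32)

def sharpen_panorama(image: np.ndarray) -> np.ndarray:
    """Sharpen a tooth panoramic image, emphasizing tooth boundaries."""
    return cv2.filter2D(image, -1, SHARPEN_KERNEL)

# Usage (hypothetical file name):
# I = sharpen_panorama(cv2.imread("panorama.png", cv2.IMREAD_GRAYSCALE))
```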
Step S2, inputting the tooth panoramic image I into the trained perturbation feature map extraction network for feature extraction to obtain a deep perturbation feature map F_deep.
Carrying out feature extraction on the tooth panoramic image I to obtain the deep perturbation feature map F_deep comprises the following steps:
Step 2.1, inputting the tooth panoramic image I into a simple feature extraction module with a 3 × 3 convolution kernel to obtain an output feature map F1 of dimension C1 × H1 × W1;
Step 2.2, after pooling, inputting feature map F1 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain feature map F2 of dimension C2 × H2 × W2;
Step 2.3, after pooling, inputting feature map F2 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain feature map F3 of dimension C3 × H3 × W3;
Step 2.4, after pooling, inputting feature map F3 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain feature map F4 of dimension C4 × H4 × W4;
Step 2.5, after pooling, inputting feature map F4 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain the deep perturbation feature map F_deep of dimension C5 × H5 × W5.
As shown in fig. 2, the perturbation feature map extraction network of the present application comprises a simple feature extraction module (CBR) followed by four perturbation feature extraction modules (CDBR). In other embodiments, the perturbation feature map extraction network may adopt other arrangements, for example three simple feature extraction modules followed by two perturbation feature extraction modules in sequence.
Compared with an unperturbed feature map, the perturbed deep feature map contains more common features, which helps improve the generalization ability of the network.
The simple feature extraction module, as shown in fig. 3, comprises two serially connected groups, each consisting of a convolution layer with a 3 × 3 kernel (conv3 × 3), a batch normalization layer (BN) and an activation layer (ReLU).
The input feature map is first processed by the convolution layer, then by batch normalization and the ReLU activation layer; it then passes through the second group of convolution, batch normalization and ReLU layers, and the processed feature map is output.
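A minimal PyTorch sketch of this module follows; the class name CBR, the channel arguments, and the padding choice (which keeps the spatial size unchanged) are illustrative assumptions not fixed by the patent text:

```python
import torch.nn as nn

class CBR(nn.Module):
    """Simple feature extraction module: two serial
    (conv3x3 -> batch norm -> ReLU) groups."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2  # preserve H and W
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size, padding=pad),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```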
The perturbation feature extraction module, as shown in fig. 4, comprises two serially connected groups, each consisting of a convolution layer with a 3 × 3 kernel (conv3 × 3), a feature perturbation operation, a batch normalization layer (BN) and an activation layer (ReLU).
The input feature map is first processed by a convolution layer, then perturbed by the feature perturbation operation, then processed by batch normalization and the ReLU activation layer; the feature map is then output after the second group of convolution, feature perturbation, batch normalization and ReLU layers.
The feature perturbation operation perturbs the feature map using the following formula;
f̃(x_i) = f(x_i) ⊙ (1 + ε_i · m_i)
where i denotes the i-th layer of the network and x_i the feature map input to that layer, f(x_i) and f̃(x_i) denote the feature map before and after perturbation respectively, m_i is a mask consisting of 0s and 1s that follows a Bernoulli distribution, ε_i controls the perturbation amplitude and its value is optimized automatically during training, and ⊙ denotes element-wise multiplication of the matrices.
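A sketch of the feature perturbation operation and of the CDBR module built on it, continuing the PyTorch sketch above. The drop probability p, the reconstructed form f̃(x) = f(x) ⊙ (1 + ε · m), and the choice to perturb only during training are assumptions; the patent fixes only that m follows a Bernoulli distribution and that ε is a learned amplitude:

```python
import torch
import torch.nn as nn

class FeaturePerturbation(nn.Module):
    """Feature perturbation: x * (1 + eps * m), with m ~ Bernoulli(p).

    m is a mask of 0s and 1s; eps is a learnable amplitude optimized
    together with the rest of the network.
    """

    def __init__(self, p: float = 0.1):
        super().__init__()
        self.p = p  # probability that a position is perturbed (assumption)
        self.eps = nn.Parameter(torch.tensor(0.1))  # learnable amplitude

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:  # perturb only while training (assumption)
            return x
        m = torch.bernoulli(torch.full_like(x, self.p))  # 0/1 Bernoulli mask
        return x * (1.0 + self.eps * m)

class CDBR(nn.Module):
    """Perturbation feature extraction module: two serial
    (conv3x3 -> feature perturbation -> batch norm -> ReLU) groups."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3,
                 p: float = 0.1):
        super().__init__()
        pad = kernel_size // 2
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad),
            FeaturePerturbation(p),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size, padding=pad),
            FeaturePerturbation(p),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```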
It should be noted that the convolution kernels of the simple feature extraction module and the perturbation feature extraction module in the present application are 3 × 3; they may also be set to 5 × 5 or 7 × 7 as needed.
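Putting these modules together, the perturbation feature map extraction network of fig. 2 could be sketched as below, continuing the CBR and CDBR sketches above. The channel widths, the 2 × 2 max pooling between stages, and the single-channel input are assumptions chosen to be consistent with the step dimensions described earlier:

```python
import torch.nn as nn

class PerturbedEncoder(nn.Module):
    """Perturbation feature map extraction network: one CBR stage (step 2.1)
    followed by four pooling + CDBR stages (steps 2.2-2.5)."""

    def __init__(self, in_ch: int = 1, chs=(64, 128, 256, 512, 1024)):
        super().__init__()
        self.stage1 = CBR(in_ch, chs[0])
        self.pool = nn.MaxPool2d(2)
        self.cdbr_stages = nn.ModuleList(
            CDBR(chs[i], chs[i + 1]) for i in range(4)
        )

    def forward(self, x):
        feats = [self.stage1(x)]             # F1
        for stage in self.cdbr_stages:       # F2, F3, F4, F_deep
            feats.append(stage(self.pool(feats[-1])))
        return feats                         # [F1, F2, F3, F4, F_deep]
```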
Step S3, inputting the deep perturbation feature map F_deep into the trained mask network and the trained boundary network respectively to obtain a tooth region segmentation result and a tooth contour segmentation result.
Inputting the deep perturbation feature map F_deep into the trained mask network and boundary network respectively to obtain the tooth region segmentation result and tooth contour segmentation result includes:
Step 3.1, in the mask network, upsampling the deep perturbation feature map F_deep and inputting it together with F4 into the channel fusion module to obtain a channel-wise concatenated feature map with C5 + C4 channels, which is input into a simple feature extraction module to obtain feature map UP4 of dimension C4 × H4 × W4;
Step 3.2, upsampling feature map UP4 and inputting it together with F3 into the channel fusion module to obtain a channel-wise concatenated feature map with C4 + C3 channels, which is input into a simple feature extraction module to obtain feature map UP3 of dimension C3 × H3 × W3;
Step 3.3, upsampling feature map UP3 and inputting it together with F2 into the channel fusion module to obtain a channel-wise concatenated feature map with C3 + C2 channels, which is input into a simple feature extraction module to obtain feature map UP2 of dimension C2 × H2 × W2;
Step 3.4, upsampling feature map UP2 and inputting it together with F1 into the channel fusion module to obtain a channel-wise concatenated feature map with C2 + C1 channels, which is input into a simple feature extraction module to obtain feature map UP1 of dimension C1 × H1 × W1;
Step 3.5, inputting feature map UP1 into a 1 × 1 convolution block to obtain feature map UP0 of dimension 32 × H1 × W1, where 32 corresponds to the 32 different teeth; each channel of UP0 is activated using the following formula to obtain the probability that each pixel belongs to a tooth region, which is multiplied by 255 to obtain the final segmentation results for the 32 teeth;
sigmoid(x) = 1 / (1 + e^(−x))
Step 3.6, applying the operations of steps 3.1 to 3.5 likewise in the boundary network, finally outputting the tooth contour segmentation result.
where sigmoid is the activation function and e is the base of the natural logarithm.
In the present application, the channel fusion module (Copy), as shown in fig. 5, is configured to concatenate the upsampled lower-layer feature map and the current-layer feature map along the channel dimension, outputting a feature map whose spatial size is unchanged and whose channel count is the sum of those of the two input feature maps.
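Continuing the sketch, a mask network decoder matching steps 3.1 to 3.5 could look like this; bilinear upsampling and the channel widths mirror the encoder sketch above and are assumptions, and the boundary network is assumed to share the same structure with its own weights:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskDecoder(nn.Module):
    """Mask network: four (upsample -> channel fusion -> CBR) stages
    (steps 3.1-3.4), then a 1x1 convolution and sigmoid (step 3.5)."""

    def __init__(self, chs=(64, 128, 256, 512, 1024), n_teeth: int = 32):
        super().__init__()
        # Stage i fuses the upsampled lower-layer map with skip feature Fi.
        self.up_stages = nn.ModuleList(
            CBR(chs[i + 1] + chs[i], chs[i]) for i in reversed(range(4))
        )
        self.head = nn.Conv2d(chs[0], n_teeth, kernel_size=1)

    def forward(self, feats):
        f1, f2, f3, f4, f_deep = feats
        x = f_deep
        for stage, skip in zip(self.up_stages, (f4, f3, f2, f1)):
            x = F.interpolate(x, scale_factor=2, mode="bilinear",
                              align_corners=False)   # upsample lower layer
            x = stage(torch.cat([x, skip], dim=1))   # channel fusion + CBR
        probs = torch.sigmoid(self.head(x))          # per-pixel probability
        return probs * 255                           # 32-tooth masks

# Usage sketch:
# encoder, mask_net = PerturbedEncoder(), MaskDecoder()
# masks = mask_net(encoder(sharpened_panorama_tensor))
```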
Likewise, the convolution kernel size of the simple feature extraction module in this embodiment is 3 × 3 and may be set to 5 × 5 or 7 × 7 as needed.
In the present application, C denotes the number of channels, H the height and W the width of a feature map; the letter subscripts are serial numbers that distinguish the dimensions of different feature maps.
According to the present application, the features of segmentation region boundaries are learned directly through the boundary network, so region boundaries are found more easily and the segmentation of cases exhibiting intra-class difference and inter-class similarity is improved. In some images, two parts of the same semantic region may differ greatly in image features and are easily recognized as two different semantic classes; this is called intra-class difference. Similarly, the image features of different semantic regions may be highly similar, so that two regions are easily recognized as one semantic region; this is called inter-class similarity. By learning boundary information, correct semantic boundaries can be found more reliably, which improves segmentation when images exhibit intra-class difference and inter-class similarity.
The above embodiments express only several implementations of the present application; although their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (8)

1. A tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision, characterized by comprising the following steps:
obtaining a tooth panoramic image and sharpening it to obtain a tooth panoramic image I with clearer tooth boundaries;
inputting the tooth panoramic image I into the trained perturbation feature map extraction network for feature extraction to obtain a deep perturbation feature map F_deep;
inputting the deep perturbation feature map F_deep into the trained mask network and the trained boundary network respectively, to obtain a tooth region segmentation result and a tooth contour segmentation result.
2. The tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision as claimed in claim 1, wherein obtaining the tooth panoramic image and sharpening it to obtain the tooth panoramic image I with clearer tooth boundaries comprises:
inputting the original tooth panoramic image I_original and applying a sharpening filter with a 3 × 3 kernel to each tooth panoramic image to obtain the sharpened tooth panoramic image I.
3. The tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision as claimed in claim 1, wherein inputting the tooth panoramic image I into the trained perturbation feature map extraction network for feature extraction to obtain the deep perturbation feature map F_deep comprises:
Step 2.1, inputting the tooth panoramic image I into a simple feature extraction module with a 3 × 3 convolution kernel to obtain an output feature map F1 of dimension C1 × H1 × W1;
Step 2.2, after pooling, inputting feature map F1 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain feature map F2 of dimension C2 × H2 × W2;
Step 2.3, after pooling, inputting feature map F2 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain feature map F3 of dimension C3 × H3 × W3;
Step 2.4, after pooling, inputting feature map F3 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain feature map F4 of dimension C4 × H4 × W4;
Step 2.5, after pooling, inputting feature map F4 into a perturbation feature extraction module with a 3 × 3 convolution kernel to obtain the deep perturbation feature map F_deep of dimension C5 × H5 × W5.
4. The tooth panoramic semantic segmentation method based on feature map perturbation and boundary supervision as claimed in claim 2, wherein inputting the deep perturbation feature map F_deep into the trained mask network and boundary network respectively to obtain the tooth region segmentation result and tooth contour segmentation result comprises:
Step 3.1, in the mask network, upsampling the deep perturbation feature map F_deep and inputting it together with F4 into the channel fusion module to obtain a channel-wise concatenated feature map with C5 + C4 channels, which is input into a simple feature extraction module to obtain feature map UP4 of dimension C4 × H4 × W4;
Step 3.2, upsampling feature map UP4 and inputting it together with F3 into the channel fusion module to obtain a channel-wise concatenated feature map with C4 + C3 channels, which is input into a simple feature extraction module to obtain feature map UP3 of dimension C3 × H3 × W3;
Step 3.3, upsampling feature map UP3 and inputting it together with F2 into the channel fusion module to obtain a channel-wise concatenated feature map with C3 + C2 channels, which is input into a simple feature extraction module to obtain feature map UP2 of dimension C2 × H2 × W2;
Step 3.4, upsampling feature map UP2 and inputting it together with F1 into the channel fusion module to obtain a channel-wise concatenated feature map with C2 + C1 channels, which is input into a simple feature extraction module to obtain feature map UP1 of dimension C1 × H1 × W1;
Step 3.5, inputting feature map UP1 into a 1 × 1 convolution block to obtain feature map UP0 of dimension 32 × H1 × W1, where 32 corresponds to the 32 different teeth; each channel of UP0 is activated using the following formula to obtain the probability that each pixel belongs to a tooth region, which is multiplied by 255 to obtain the final segmentation results for the 32 teeth;
sigmoid(x) = 1 / (1 + e^(−x))
Step 3.6, applying the operations of steps 3.1 to 3.5 likewise in the boundary network, finally outputting the tooth contour segmentation result.
5. The method as claimed in claim 4, wherein the simple feature extraction module comprises two serially connected groups, each consisting of a convolution layer with a 3 × 3 kernel, a batch normalization layer and an activation layer.
6. The method as claimed in claim 3, wherein the perturbation feature extraction module comprises two serially connected groups, each consisting of a convolution layer with a 3 × 3 kernel, a feature map perturbation operation, a batch normalization layer and an activation layer.
7. The method as claimed in claim 4, wherein the channel fusion module is configured to concatenate the upsampled lower-layer feature map and the current-layer feature map along the channel dimension, outputting a feature map whose spatial size is unchanged and whose channel count is the sum of those of the two input feature maps.
8. The method as claimed in claim 6, wherein the feature map perturbation operation perturbs the feature map using the following formula;
f̃(x_i) = f(x_i) ⊙ (1 + ε_i · m_i)
where x_i is the input feature map, f(x_i) and f̃(x_i) denote the feature map before and after perturbation respectively, m_i is a mask consisting of 0s and 1s that follows a Bernoulli distribution, ε_i controls the perturbation amplitude and its value is optimized automatically during training, and ⊙ denotes element-wise multiplication of the matrices.
CN202010894993.7A 2020-08-31 2020-08-31 Tooth full-scene semantic segmentation method based on feature map disturbance and boundary supervision Active CN112085028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010894993.7A CN112085028B (en) 2020-08-31 2020-08-31 Tooth full-scene semantic segmentation method based on feature map disturbance and boundary supervision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010894993.7A CN112085028B (en) 2020-08-31 2020-08-31 Tooth full-scene semantic segmentation method based on feature map disturbance and boundary supervision

Publications (2)

Publication Number Publication Date
CN112085028A true CN112085028A (en) 2020-12-15
CN112085028B CN112085028B (en) 2024-03-12

Family

ID=73731245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010894993.7A Active CN112085028B (en) 2020-08-31 2020-08-31 Tooth full-scene semantic segmentation method based on feature map disturbance and boundary supervision

Country Status (1)

Country Link
CN (1) CN112085028B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112750111A (en) * 2021-01-14 2021-05-04 浙江工业大学 Method for identifying and segmenting diseases in tooth panoramic picture
CN114004831A (en) * 2021-12-24 2022-02-01 杭州柳叶刀机器人有限公司 Method for assisting implant replacement based on deep learning and auxiliary intelligent system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087327A (en) * 2018-07-13 2018-12-25 天津大学 A kind of thyroid nodule ultrasonic image division method cascading full convolutional neural networks
CN109829900A (en) * 2019-01-18 2019-05-31 创新奇智(北京)科技有限公司 A kind of steel coil end-face defect inspection method based on deep learning
CN110473243A (en) * 2019-08-09 2019-11-19 重庆邮电大学 Tooth dividing method, device and computer equipment based on depth profile perception
US20200175678A1 (en) * 2018-11-28 2020-06-04 Orca Dental AI Ltd. Dental image segmentation and registration with machine learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087327A (en) * 2018-07-13 2018-12-25 天津大学 A kind of thyroid nodule ultrasonic image division method cascading full convolutional neural networks
US20200175678A1 (en) * 2018-11-28 2020-06-04 Orca Dental AI Ltd. Dental image segmentation and registration with machine learning
CN109829900A (en) * 2019-01-18 2019-05-31 创新奇智(北京)科技有限公司 A kind of steel coil end-face defect inspection method based on deep learning
CN110473243A (en) * 2019-08-09 2019-11-19 重庆邮电大学 Tooth dividing method, device and computer equipment based on depth profile perception

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112750111A (en) * 2021-01-14 2021-05-04 浙江工业大学 Method for identifying and segmenting diseases in tooth panoramic picture
CN112750111B (en) * 2021-01-14 2024-02-06 浙江工业大学 Disease identification and segmentation method in tooth full-view film
CN114004831A (en) * 2021-12-24 2022-02-01 杭州柳叶刀机器人有限公司 Method for assisting implant replacement based on deep learning and auxiliary intelligent system

Also Published As

Publication number Publication date
CN112085028B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN110223281A (en) A kind of Lung neoplasm image classification method when in data set containing uncertain data
CN107247971B (en) Intelligent analysis method and system for ultrasonic thyroid nodule risk index
CN112085028A (en) Tooth panoramic semantic segmentation method based on feature map disturbance and boundary supervision
CN112750111B (en) Disease identification and segmentation method in tooth full-view film
CN112164446B (en) Medical image report generation method based on multi-network fusion
Kong et al. Automated maxillofacial segmentation in panoramic dental x-ray images using an efficient encoder-decoder network
CN112837278B (en) Tooth full-scene caries identification method based on depth boundary supervision
CN110415815A (en) The hereditary disease assistant diagnosis system of deep learning and face biological information
CN114332123A (en) Automatic caries grading method and system based on panoramic film
CN111899250B (en) Remote disease intelligent diagnosis system based on block chain and medical image
Lin et al. Tooth numbering and condition recognition on dental panoramic radiograph images using CNNs
CN107958472A (en) PET imaging methods, device, equipment and storage medium based on sparse projection data
CN114299082A (en) New coronary pneumonia CT image segmentation method, device and storage medium
CN112037212A (en) Pulmonary tuberculosis DR image identification method based on deep learning
CN113221945B (en) Dental caries identification method based on oral panoramic film and dual attention module
Kramarz et al. New remains of Astraponotus (Mammalia, Astrapotheria) and considerations on astrapothere cranial evolution
CN113160151B (en) Panoramic sheet decayed tooth depth identification method based on deep learning and attention mechanism
CN113344867B (en) Periodontitis absorption degree identification method based on near-middle and far-middle key points
Kim et al. Tooth-Related Disease Detection System Based on Panoramic Images and Optimization Through Automation: Development Study
CN116758090A (en) Medical image segmentation method based on multi-scale subtraction
CN115482384A (en) Visible light OCT image retina layer segmentation method and system
CN116362995A (en) Tooth image restoration method and system based on standard prior
CN116205925A (en) Tooth occlusion wing tooth caries segmentation method based on improved U-Net network
CN113763236A (en) Method for dynamically adjusting facial features of commercial short video according to regions
CN113379697A (en) Color image caries identification method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant