CN110473243B - Tooth segmentation method and device based on depth contour perception and computer equipment

Info

Publication number: CN110473243B
Application number: CN201910733040.XA
Authority: CN (China)
Prior art keywords: contour, image, tooth, network, mask
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110473243A
Inventors: 高陈强 (Gao Chenqiang), 陈乔伊 (Chen Qiaoyi), 李鹏程 (Li Pengcheng), 刘芳岑 (Liu Fangcen), 冉洁 (Ran Jie), 陈昱帆 (Chen Yufan)
Current and original assignee: Chongqing University of Posts and Telecommunications
Application filed by Chongqing University of Posts and Telecommunications; priority to CN201910733040.XA

Classifications

    • G06N3/045 Neural networks: combinations of networks
    • G06N3/08 Neural networks: learning methods
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • G06T7/11 Segmentation: region-based segmentation
    • G06T7/13 Segmentation: edge detection
    • G06T7/155 Segmentation; edge detection involving morphological operators
    • G06T7/50 Depth or shape recovery
    • G06T2207/20221 Image fusion; image merging
    • G06T2207/30036 Biomedical image processing: dental; teeth

Abstract

The invention belongs to the technical fields of medical image processing and computer vision, and relates to a tooth segmentation method and device based on depth contour perception, and to computer equipment. The segmentation method comprises: preprocessing the images in a data set; extracting a contour mask from the original mask and dilating it to thicken the contour; taking the contour mask as supervision information and passing the original image through a fully convolutional network to obtain a contour prediction probability map; constructing a U-shaped depth contour perception network; taking the original mask as supervision information, fusing the original image with the contour prediction probability map, and training the U-shaped network on the fused input to obtain the segmentation network; and feeding the test-set images into the trained U-shaped network to obtain the tooth segmentation result, which a disc filter then sharpens. By adding contour information, the invention improves both the segmentation accuracy and the visual quality of the segmentation when the boundaries between the teeth and the surrounding tissue are unclear.

Description

Tooth segmentation method and device based on depth contour perception and computer equipment
Technical Field
The invention belongs to the technical fields of medical image processing and computer vision, and relates to a tooth segmentation method and device based on depth contour perception, and to computer equipment.
Background
In the dental field, radiographic images are a basic data source for assisting diagnosis: X-ray images are used in dentistry to examine the condition of the teeth, gums, jaw, and bone structures of the oral cavity. Tooth segmentation results can be widely applied in scenarios such as orthodontics, dental implantation, and forensic identification.
Under certain conditions, the limitations of X-ray imaging pose great challenges to automatic tooth segmentation: the contrast between a tooth and the surrounding tissue (especially in the root region) is low, boundaries are unclear, missing teeth leave gaps, the radiographs carry considerable noise, and fillings cause metal artifacts.
Conventional methods for medical image segmentation fall into five major categories: threshold-based, edge-based, region-based, clustering-based, and watershed-based methods. Threshold-based methods generally consider only pixel gray values and ignore spatial characteristics, so they are sensitive to noise; edge-based methods struggle to balance noise resistance against detection precision, which can shift contours away from their true positions; region-based methods tend to over-segment the image; clustering-based methods depend on the choice of cluster centers, so their results may deviate from the global optimum; and watershed-based methods respond well to weak edges, but image noise causes the watershed algorithm to over-segment.
In recent years, deep-learning-based image analysis has made good progress on medical image segmentation tasks and has attracted wide attention in the medical field.
Currently, the deep-learning-based tooth segmentation task faces two major challenges:
(1) Teeth in contact with each other are difficult to segment. First, the contact makes tooth boundaries hard to delineate; second, because of how dental radiographs are formed, the contrast between teeth and surrounding tissue is low, noise is plentiful, and metal artifacts may appear at the crown, all of which degrade the final segmentation.
(2) Spatial information is lost. A shallow network has a small receptive field; limited by it, the network attends only to local information, and it is difficult to combine global information to obtain a segmentation consistent with the spatial structure of the teeth.
Disclosure of Invention
In view of the above, the present invention provides a tooth segmentation method and device based on depth contour perception, as well as computer equipment; in particular, it relates to a tooth segmentation method based on a fully convolutional network and a U-shaped depth contour perception network. The fully convolutional network predicts the tooth contour and supplies additional edge information to assist the prediction of the U-shaped depth contour perception network. The U-shaped network adds to each unit a transposed convolution layer that can upsample directly to the original image size, which prevents the added boundary features from being lost; with the fine edge information introduced, the network obtains pixel-level image segmentation results with clear boundaries.
The invention discloses a tooth segmentation method based on depth contour perception, which comprises the following steps:
S1, acquiring a tooth image data set, preprocessing the tooth images in the data set, and using them as training set images;
S2, extracting a contour mask from the artificially labeled binary original mask of the training set images through morphological processing, and thickening the contour mask;
S3, taking the thickened contour mask as first supervision information, passing the preprocessed original tooth images through a fully convolutional network, training the network by minimizing a first loss function, and obtaining a contour prediction probability map;
S4, constructing a U-shaped depth contour perception network comprising a contraction path and an expansion path;
S5, taking the original mask as second supervision information, fusing the preprocessed tooth image with the contour prediction probability map, passing the fused result through the U-shaped depth contour perception network to obtain a tooth segmentation result map, and training the network by minimizing a second loss function;
S6, acquiring the captured tooth image to be segmented, applying the same preprocessing as in step S1, and feeding the preprocessed image into the trained U-shaped depth contour perception network to obtain a coarse segmentation result of the tooth image to be segmented;
S7, smoothing the coarse segmentation result to obtain a fine segmentation result of the tooth image.
Optionally, the tooth image data set includes, but is not limited to, tooth images taken by X-ray; other tooth image data sets, whether of natural or of medical images, in which the teeth can be segmented may also serve as the tooth image data set of the invention.
Preferably, the preprocessing operation may convert the original tooth images to a uniform size, for example 512 × 1024 or 600 × 800.
Further, the edges of the artificially labeled binary original mask are extracted with the Canny operator to obtain a contour mask, and the contour mask is dilated to thicken the contour; the dilation includes thickening the contour with a disc filter.
The Canny operator can be implemented by the following procedure:
1) Smooth the image with a Gaussian filter to remove noise.
2) Compute the gradient magnitude and direction at each pixel of the image.
3) Apply non-maximum suppression to eliminate spurious edge responses.
4) Apply double-threshold detection to determine true and potential edges.
5) Finally, complete the edge detection by suppressing isolated weak edges.
Preferably, the data expansion operation comprises thickening the extracted contour mask with a disc filter.
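A minimal sketch of this contour extraction and thickening, assuming OpenCV; the Canny thresholds and the radius-2 disc are illustrative values, not ones fixed by the invention:

```python
import cv2
import numpy as np

def extract_thick_contour_mask(original_mask, disc_radius=2):
    """Canny edges of a binary ground-truth mask, dilated with a disc filter."""
    mask_u8 = (original_mask > 0).astype(np.uint8) * 255
    edges = cv2.Canny(mask_u8, 100, 200)          # illustrative thresholds
    d = 2 * disc_radius + 1                       # disc-shaped structuring element
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (d, d))
    thick = cv2.dilate(edges, disc)               # thicken the contour
    return (thick > 0).astype(np.uint8)           # binary contour mask
```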
Further, obtaining the contour prediction probability map in step S3 comprises passing the tooth images of the training set, supervised by the morphologically extracted contour mask, through the fully convolutional network and minimizing a first loss function; the first loss function is the cross-entropy loss, calculated as

$$L_1 = -\frac{1}{N}\sum_{i=1}^{N}\left[y^{(i)}\log\hat{y}^{(i)} + \left(1 - y^{(i)}\right)\log\left(1 - \hat{y}^{(i)}\right)\right]$$

where N is the number of pixels, \(\hat{y}^{(i)}\) is the prediction for pixel i, and \(y^{(i)}\) is the true label of pixel i.
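For illustration, this pixel-wise binary cross-entropy can be written as a short PyTorch sketch; the clamping epsilon is an assumption for numerical stability, and torch.nn.functional.binary_cross_entropy computes the same quantity:

```python
import torch

def first_loss(pred_prob, target):
    """Mean binary cross-entropy over all N pixels of the contour map."""
    eps = 1e-7
    p = pred_prob.clamp(eps, 1.0 - eps)   # avoid log(0)
    loss = -(target * torch.log(p) + (1.0 - target) * torch.log(1.0 - p))
    return loss.mean()
```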
Further, the U-shaped depth contour perception network comprises a contraction path and an expansion path. The contraction path comprises 5 repeated units, each containing 2 convolution layers and 1 pooling layer; the expansion path comprises 5 units matching the feature depths of the contraction path, each containing 2 convolution layers and 1 transposed convolution layer. Contraction path units and expansion path units of corresponding depth are concatenated, the concatenation being an overlay of feature maps. The U-shaped depth contour perception network is trained by minimizing a second loss function.
Optionally, the second loss function comprises the Dice loss

$$L_2 = 1 - \frac{2\,|A \cap B|}{|A| + |B|}$$

where A denotes the predicted segmented tooth image and B denotes the original mask; the Dice loss is introduced to increase the overlap between the predicted segmentation result and the true original mask.
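A corresponding sketch of the Dice loss in PyTorch; the smoothing constant is a common addition that avoids division by zero and is an assumption, not part of the patent's formula:

```python
import torch

def second_loss(pred, mask, smooth=1.0):
    """Dice loss: 1 - 2|A∩B| / (|A| + |B|), with A = pred, B = mask."""
    inter = (pred * mask).sum()
    return 1.0 - (2.0 * inter + smooth) / (pred.sum() + mask.sum() + smooth)
```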
Further, the smoothing, i.e., erosion, specifically comprises narrowing the boundary of the predicted segmentation result with a disc filter matching the one used for dilation, which makes the segmentation boundary clearer.
Optionally, the disc filter has a radius of 2, 2.5, or 3.
The invention also provides a tooth segmentation device based on depth contour perception, the device comprising:
an image acquisition module, used for acquiring a tooth image data set and a tooth image to be segmented;
a morphological processing module, used for extracting a contour mask from the original mask of the training set images through morphological processing;
a contour prediction probability module, used for obtaining a contour prediction probability map from the contour mask;
a network construction module, used for constructing a U-shaped depth contour perception network comprising a contraction path and an expansion path;
an image fusion module, used for fusing the tooth image with the contour prediction probability map;
an image coarse segmentation module, used for passing the tooth image to be segmented through the U-shaped depth contour perception network to obtain a coarse segmentation result of the tooth image;
and an image fine segmentation module, used for smoothing the coarse segmentation result of the tooth image to obtain a fine segmentation result.
Further, the morphological processing module comprises:
a gray processing unit, used for graying the color tooth image;
a Gaussian filter, used for smoothing and denoising the grayed tooth image;
an edge detection unit, used for extracting the contour mask of the tooth image;
and a dilation processing unit, used for thickening the contour mask.
Further, the contour prediction probability module comprises:
a fully convolutional network unit, used for predicting the contour prediction probability map;
a first supervision unit, used for taking the thickened contour mask as the first supervision information;
and a first loss function unit, used for optimizing and training the fully convolutional network unit according to the per-pixel predictions and the true labels of the corresponding pixels.
Further, the network construction module comprises:
a second loss function unit, used for training the U-shaped depth contour perception network according to the predicted segmented tooth image and the original mask;
a second supervision unit, used for taking the original mask as the second supervision information;
and the U-shaped depth contour perception network, which comprises a contraction path layer and an expansion path layer:
the contraction path layer comprises 5 repeated units, each containing 2 convolution layers and 1 pooling layer;
the expansion path layer comprises 5 units matching the feature depths of the contraction path, each containing two convolution layers and one transposed convolution layer;
and contraction path units and expansion path units of corresponding depth are concatenated.
The invention also proposes a computer device comprising at least one processor and at least one memory communicatively coupled to the processor, wherein the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the method provided by the invention.
The beneficial effects of the invention are:
1) The tooth segmentation method based on depth contour perception provided by the invention improves both the accuracy and the visual quality of segmentation when the tooth boundaries are unclear.
2) The invention fuses local and global information, which improves the segmentation precision for objects covering large areas; it can be widely applied to segmenting larger tissues or cells.
Drawings
To make the purpose, technical solution, and beneficial effects of the invention clearer, the following drawings are provided for explanation:
FIG. 1 is a schematic overall flow diagram of the invention;
FIG. 2 is a schematic diagram of the fully convolutional neural network structure employed in the invention;
FIG. 3 is a schematic diagram of the U-shaped depth contour perception network structure employed in the invention;
FIG. 4 is a data flow diagram of the invention;
FIG. 5 is a plot of tooth prediction results using the U-shaped depth contour perception network of the invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer and more complete, the technical solutions of its embodiments are described below with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, of the possible embodiments of the invention.
As shown in FIG. 1, the tooth segmentation method based on depth contour perception may specifically include the following steps:
S1, acquiring a tooth image data set, preprocessing the tooth images in the data set, and using them as training set images;
S2, extracting a contour mask from the artificially labeled binary original mask of the training set images through morphological processing, and thickening the contour mask;
S3, taking the thickened contour mask as first supervision information, passing the preprocessed original tooth images through a fully convolutional network, training the network by minimizing a first loss function, and obtaining a contour prediction probability map;
In one embodiment, the fully convolutional network can be seen in FIG. 2. This embodiment provides a segmentation network with a fully convolutional structure. The network comprises 5 repeated units, each containing 2 convolution layers and 1 pooling layer with a LeakyReLU activation, whose role is to prevent some neurons from being permanently suppressed; a BN (batch normalization) layer is added between the convolution layers and the pooling layer to prevent exploding or vanishing gradients. Directly after the 5 repeated units come 2 convolution layers, which extract more abstract features without increasing the number of feature maps. The output of the last convolution layer is then upsampled 2 times and added to the output of the 4th pooling layer; the result of this addition is added to the output of the 3rd pooling layer and upsampled to the original image size. All upsampling operations are performed with transposed convolution layers. The resulting network achieves a refined segmentation result by minimizing the first loss function.
Wherein the first loss function is defined as shown in equation (1):

$$L_1 = -\frac{1}{N}\sum_{i=1}^{N}\left[y^{(i)}\log\hat{y}^{(i)} + \left(1 - y^{(i)}\right)\log\left(1 - \hat{y}^{(i)}\right)\right] \qquad (1)$$

where i denotes a pixel (x, y), N the number of pixels, \(\hat{y}^{(i)}\) the prediction for pixel i, and \(y^{(i)}\) the true label of pixel i.
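For illustration, a sketch of such a contour network in PyTorch follows, assuming a 1-channel input. The channel widths, the 1 × 1 score convolutions added so the element-wise additions have matching channel counts, and the transposed-convolution kernel sizes are assumptions; the patent fixes none of these.

```python
import torch
import torch.nn as nn

def fcn_unit(cin, cout):
    """One repeated unit: 2 convs, each with BN and LeakyReLU, then 2x2 max-pooling."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.LeakyReLU(0.1),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.LeakyReLU(0.1),
        nn.MaxPool2d(2))

class ContourFCN(nn.Module):
    """Contour prediction network: 5 repeated units, 2 extra convs, then
    FCN-style fusion with the outputs of the 4th and 3rd pooling layers."""
    def __init__(self):
        super().__init__()
        widths = [1, 64, 128, 256, 512, 512]          # assumed channel widths
        self.units = nn.ModuleList(fcn_unit(widths[i], widths[i + 1])
                                   for i in range(5))
        self.extra = nn.Sequential(                   # 2 extra convs: more abstract
            nn.Conv2d(512, 512, 3, padding=1), nn.LeakyReLU(0.1),  # features, same
            nn.Conv2d(512, 512, 3, padding=1), nn.LeakyReLU(0.1))  # feature count
        self.score5 = nn.Conv2d(512, 1, 1)            # 1x1 convs map each feature
        self.score4 = nn.Conv2d(512, 1, 1)            # map to a single contour-score
        self.score3 = nn.Conv2d(256, 1, 1)            # channel so additions match
        self.up2a = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1)  # x2 upsample
        self.up2b = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1)  # x2 upsample
        self.up8 = nn.ConvTranspose2d(1, 1, 16, stride=8, padding=4)  # x8, to input size

    def forward(self, x):
        pools = []
        for unit in self.units:
            x = unit(x)
            pools.append(x)                 # pools[2] / pools[3]: 3rd / 4th pooling
        x = self.extra(x)
        x = self.up2a(self.score5(x)) + self.score4(pools[3])  # fuse with 4th pooling
        x = self.up2b(x) + self.score3(pools[2])                # fuse with 3rd pooling
        return torch.sigmoid(self.up8(x))   # contour prediction probability map
```

The input height and width must be divisible by 32 (five poolings), which the 512 × 1024 preprocessing size above satisfies; the sigmoid output is what equation (1) supervises.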
S4, constructing a U-shaped depth contour perception network comprising a contraction path and an expansion path;
In one embodiment, the U-shaped depth contour perception network can be seen in FIG. 3. The network has a symmetric U-shaped structure composed of a contraction path, which extracts features, and an expansion path, which restores resolution. The contraction path consists of 5 repeated units, each containing 2 convolution layers and 1 pooling layer with a LeakyReLU activation; the expansion path contains units matching the feature depths of the contraction path, each with 2 convolution layers and 1 transposed convolution layer; finally, the network uses a softmax activation function as the per-pixel class discriminator. In addition, units of corresponding feature depth in the contraction and expansion paths are spliced together, the splicing being an overlay of feature maps, which combines the feature information and the position information of the image.
A transposed convolution layer that can upsample directly to the original image size is added to each unit of the contraction path and spliced with the 1st convolution layer of the last unit of the expansion path;
In one embodiment, as shown in FIG. 3, a transposed convolution layer is first added to each unit of the contraction path, to prevent the additionally supplied contour prediction probability map, whose features are relatively fine, from fading out while downsampling extracts features. Next, the outputs of the 4 transposed convolution layers are spliced with the 1st convolution layer of the last unit of the expansion path; this splicing, too, overlays feature maps. Finally, the overlap between the predicted result and the real result is improved by minimizing the second loss function.
Wherein the second loss function is defined as shown in equation (2):

$$L_2 = 1 - \frac{2\,|A \cap B|}{|A| + |B|} \qquad (2)$$

where A denotes the predicted segmentation result and B the original mask.
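Combining this with the structure described above, a sketch of the U-shaped depth contour perception network follows. The channel widths (64 to 1024) and the 16-channel direct-upsampling branches are assumptions; the splice points follow the description, not values the patent fixes.

```python
import torch
import torch.nn as nn

def double_conv(cin, cout):
    """The 2 convolution layers of one unit, each with BN and LeakyReLU."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.LeakyReLU(0.1),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.LeakyReLU(0.1))

class ContourAwareUNet(nn.Module):
    """5-level U-Net whose contraction units also feed transposed convs that
    upsample straight to the input resolution; those 4 maps are spliced with
    the first conv output of the last expansion unit."""
    def __init__(self, in_ch=2, n_classes=2):
        super().__init__()
        w = [64, 128, 256, 512, 1024]                 # assumed channel widths
        self.enc = nn.ModuleList(double_conv(i, o)
                                 for i, o in zip([in_ch] + w[:-1], w))
        self.pool = nn.MaxPool2d(2)
        # Direct-to-input-size transposed convs from the pooled outputs of
        # contraction units 1-4 (strides 2, 4, 8, 16); kernel = stride gives
        # exact x2, x4, x8, x16 upsampling.
        self.direct = nn.ModuleList(
            nn.ConvTranspose2d(w[i], 16, 2 ** (i + 1), stride=2 ** (i + 1))
            for i in range(4))
        self.up = nn.ModuleList(                      # x2 upsampling between levels
            nn.ConvTranspose2d(w[i + 1], w[i], 2, stride=2) for i in range(4))
        self.dec = nn.ModuleList(double_conv(2 * w[i], w[i]) for i in range(1, 4))
        # Last expansion unit, split so the 4 direct branches can be spliced
        # in after its 1st conv (64 + 4 * 16 = 128 input channels for the 2nd).
        self.last1 = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1),
                                   nn.BatchNorm2d(64), nn.LeakyReLU(0.1))
        self.last2 = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1),
                                   nn.BatchNorm2d(64), nn.LeakyReLU(0.1))
        self.head = nn.Conv2d(64, n_classes, 1)       # per-pixel class scores

    def forward(self, x):
        skips, directs = [], []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            if i < 4:                                 # the 5th unit is the bottom
                skips.append(x)
                x = self.pool(x)
                directs.append(self.direct[i](x))     # branch to full resolution
        for i in range(3, 0, -1):                     # expansion units 4..2
            x = self.dec[i - 1](torch.cat([skips[i], self.up[i](x)], dim=1))
        x = self.up[0](x)                             # last expansion unit
        x = self.last1(torch.cat([skips[0], x], dim=1))
        x = self.last2(torch.cat([x] + directs, dim=1))   # splice direct branches
        return torch.softmax(self.head(x), dim=1)     # softmax class discriminator
```

A ContourAwareUNet(in_ch=2) then consumes the two-channel fusion of image and contour probability map built in step S5 below.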
S5, taking the original mask as second supervision information, fusing the preprocessed tooth image with the contour prediction probability map, passing the fused result through the U-shaped depth contour perception network to obtain a tooth segmentation result map, and training the network by minimizing the second loss function;
The fusion operation splices the feature maps; a 1 × 1 convolution kernel fuses the features of the probability map to reduce the dimensionality, and after the reduction a softmax activation function predicts the class of each pixel.
As shown in FIG. 4, the tooth image training stage of this embodiment may include: extracting the contour from the training set images of the tooth image dataset through morphological dilation, obtaining a contour prediction probability map through the network with the fully convolutional structure, fusing the original tooth image with the contour prediction probability map, inputting the fusion into the U-shaped depth contour perception network, and training that network according to the second loss function.
S6, acquiring the captured tooth image to be segmented, applying the same preprocessing as in step S1, and feeding the preprocessed image into the trained U-shaped depth contour perception network to obtain a coarse segmentation result of the tooth image to be segmented;
This step belongs to the testing stage: the tooth image to be predicted only needs to be input into the U-shaped depth contour perception network, and no contour supervision information needs to be used.
S7, smoothing the segmentation result to obtain the fine segmentation result of the tooth image shown in FIG. 5; as the figure illustrates, the method provided by the invention segments the tooth image effectively.
Specifically, the boundary of the predicted segmentation result is smoothed with a disc filter of radius 2, making the segmentation boundary clearer.
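As a sketch of this post-processing (assuming OpenCV; the radius-2 disc mirrors the one used to thicken the contour masks):

```python
import cv2
import numpy as np

def refine_segmentation(coarse_mask, disc_radius=2):
    """Erode the coarse binary segmentation to narrow and sharpen boundaries."""
    d = 2 * disc_radius + 1
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (d, d))
    return cv2.erode((coarse_mask > 0).astype(np.uint8), disc)
```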
Of course, it should be understood that features of the method, device, and computer equipment of the invention may be combined with one another; for brevity they are not recited individually here.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware instructed by a program, and that the program may be stored in a computer-readable storage medium such as a ROM, a RAM, a magnetic disk, or an optical disk.
The above embodiments further illustrate the objects, technical solutions, and advantages of the invention. It should be understood that they are only preferred embodiments and do not limit the invention; any modifications, equivalents, or improvements made within the spirit and principles of the invention fall within its scope of protection.

Claims (10)

1. A tooth segmentation method based on depth contour perception, characterized by comprising the following steps:
S1, acquiring a tooth image data set, preprocessing the tooth images in the data set, and using them as training set images;
S2, extracting a contour mask from the artificially labeled binary original mask of the training set images through morphological processing, and thickening the contour mask;
S3, taking the thickened contour mask as first supervision information, passing the preprocessed original tooth images through a fully convolutional network, training the network by minimizing a first loss function, and obtaining a contour prediction probability map;
S4, constructing a U-shaped depth contour perception network comprising a contraction path and an expansion path;
S5, taking the original mask as second supervision information, fusing the preprocessed tooth image with the contour prediction probability map, passing the fused result through the U-shaped depth contour perception network to obtain a tooth segmentation result map, and training the network by minimizing a second loss function;
S6, acquiring the captured tooth image to be segmented, applying the same preprocessing as in step S1, and feeding the preprocessed image into the trained U-shaped depth contour perception network to obtain a coarse segmentation result of the tooth image to be segmented;
S7, smoothing the coarse segmentation result to obtain a fine segmentation result of the tooth image.
2. The tooth segmentation method based on depth contour perception according to claim 1, wherein step S2 comprises extracting the edges of the artificially labeled binary original mask with the Canny operator to obtain a contour mask, and dilating the contour mask to thicken its contour, the dilation comprising thickening the contour with a disc filter.
3. The tooth segmentation method based on depth contour perception according to claim 1, wherein obtaining the contour prediction probability map in step S3 comprises passing the tooth images of the training set, supervised by the morphologically extracted contour mask, through the fully convolutional network and minimizing the first loss function; the first loss function is the cross-entropy loss, calculated as

$$L_1 = -\frac{1}{N}\sum_{i=1}^{N}\left[y^{(i)}\log\hat{y}^{(i)} + \left(1 - y^{(i)}\right)\log\left(1 - \hat{y}^{(i)}\right)\right]$$

where N is the number of pixels, \(\hat{y}^{(i)}\) is the prediction for pixel i, and \(y^{(i)}\) is the true label of pixel i.
4. The tooth segmentation method based on depth contour perception according to claim 1, wherein the U-shaped depth contour perception network comprises a contraction path and an expansion path; the contraction path comprises 5 repeated units, each containing 2 convolution layers and 1 pooling layer; the expansion path comprises 5 units matching the feature depths of the contraction path, each containing 2 convolution layers and 1 transposed convolution layer; contraction path units and expansion path units of corresponding depth are concatenated, the concatenation comprising an overlay of feature maps; and the U-shaped depth contour perception network is trained by minimizing a second loss function.
5. The method of claim 1 or 4, wherein the second loss function comprises
$$L_2 = 1 - \frac{2\,|A \cap B|}{|A| + |B|}$$

where A represents the predicted segmented tooth image and B represents the original mask.
6. A tooth segmentation device based on depth contour perception, characterized in that the device comprises:
an image acquisition module, used for acquiring a tooth image dataset and a tooth image to be segmented;
a morphological processing module, used for extracting a contour mask from the original mask of the training set images through morphological processing;
a contour prediction probability module, used for obtaining a contour prediction probability map from the contour mask, taking the thickened contour mask as first supervision information, passing the preprocessed original tooth images through a fully convolutional network, training the network by minimizing a first loss function, and obtaining the contour prediction probability map;
an image fusion module, used for fusing the preprocessed tooth image with the contour prediction probability map;
a network construction module, used for constructing a U-shaped depth contour perception network comprising a contraction path and an expansion path, taking the original mask as second supervision information, passing the image fused by the image fusion module through the U-shaped depth contour perception network to obtain a tooth segmentation result map, and training the network by minimizing a second loss function;
an image coarse segmentation module, used for passing the tooth image to be segmented through the U-shaped depth contour perception network to obtain a coarse segmentation result of the tooth image;
and an image fine segmentation module, used for smoothing the coarse segmentation result of the tooth image to obtain a fine segmentation result.
7. The tooth segmentation device based on depth contour perception according to claim 6, wherein the morphological processing module comprises:
a gray processing unit, used for graying the color tooth image;
a Gaussian filter, used for smoothing and denoising the grayed tooth image;
an edge detection unit, used for extracting the contour mask of the tooth image;
and a dilation processing unit, used for thickening the contour mask.
8. The tooth segmentation device based on depth contour perception according to claim 6, wherein the contour prediction probability module comprises:
a fully convolutional network unit, used for predicting the contour prediction probability map;
a first supervision unit, used for taking the thickened contour mask as the first supervision information;
and a first loss function unit, used for optimizing and training the fully convolutional network unit according to the per-pixel predictions and the true labels of the corresponding pixels.
9. The tooth segmentation device based on depth contour perception according to claim 6, wherein the network construction module comprises:
a second supervision unit, used for taking the original mask as the second supervision information;
a second loss function unit, used for training the U-shaped depth contour perception network according to the predicted segmentation probability image and the original mask;
and the U-shaped depth contour perception network, which comprises a contraction path layer and an expansion path layer:
the contraction path layer comprises 5 repeated units, each containing 2 convolution layers and 1 pooling layer;
the expansion path layer comprises 5 units matching the feature depths of the contraction path, each containing two convolution layers and one transposed convolution layer;
and contraction path units and expansion path units of corresponding depth are concatenated.
10. A computer device comprising at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1 to 5.
CN201910733040.XA, filed 2019-08-09 (priority date 2019-08-09): Tooth segmentation method and device based on depth contour perception and computer equipment. Status: Active; granted as CN110473243B.


Publications

CN110473243A, published 2019-11-19
CN110473243B, granted 2021-11-30

Family ID: 68510557




Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant