CN113470045B - Oral cavity CBCT image segmentation method based on superpixel statistical features and a graph attention network


Info

Publication number: CN113470045B
Authority: CN (China)
Application number: CN202110666189.8A
Other versions: CN113470045A (original language: Chinese (zh))
Prior art keywords: super, pixel, graph, value, oral cavity
Inventors: 徐新黎, 邢少恒, 龙海霞, 吴福理, 管秋, 杨旭华
Assignee (current and original): Zhejiang University of Technology ZJUT
Priority/filing date: 2021-06-16
Legal status: Active (granted)

Classifications

    • G06T7/11 — Image analysis; segmentation: region-based segmentation
    • G06N3/08 — Computing arrangements based on biological models; neural networks: learning methods
    • G06T7/13 — Image analysis; segmentation: edge detection
    • G06T7/136 — Image analysis; segmentation involving thresholding
    • G06T7/194 — Image analysis; segmentation involving foreground-background segmentation
    • G06T2207/10081 — Image acquisition modality: computed X-ray tomography [CT]
    • G06T2207/20081 — Special algorithmic details: training; learning
    • G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T2207/30036 — Subject of image: biomedical image processing; dental; teeth

Abstract

An oral cavity CBCT image segmentation method based on superpixel statistical features and a graph attention network comprises the following steps: step one, CT value initialization; step two, superpixel segmentation; step three, superpixel graph construction and ground-truth label assignment; step four, superpixel statistical feature extraction; step five, graph attention network model construction; step six, model training; step seven, CBCT image segmentation. The invention provides an oral cavity CBCT image segmentation method with high segmentation accuracy and high computational efficiency, which reduces the data-processing scale of the oral CBCT image segmentation task and improves the training speed of the segmentation model.

Description

Oral cavity CBCT image segmentation method based on superpixel statistical features and a graph attention network
Technical Field
The invention relates to the fields of medical image processing and machine learning, and in particular to an image segmentation method for oral cavity CBCT (cone-beam computed tomography).
Background
Oral cavity CBCT images are an important reference for dentists in dental implantation, and organ segmentation supports subsequent operation-planning tasks such as 3D modeling and distance measurement. Existing methods for segmenting the organ tissues in oral CBCT images suffer from low accuracy and long running times, owing to imaging differences across CT devices, pathological differences between patients' organs, and similar factors. In 2003, Ren et al. proposed superpixels: image blocks composed of adjacent pixels with similar texture, color, and brightness. Superpixel segmentation is an important preprocessing stage in image processing; it effectively reduces the number of basic units in subsequent processing and improves the performance and efficiency of segmentation algorithms. Applied to oral CBCT images, superpixel segmentation can effectively locate the boundary information between organs and reduce segmentation time.
As one of the important tasks in medical image processing, medical image segmentation is a widely studied direction in deep learning for vision. Traditional medical image segmentation methods, such as thresholding, region growing, and watershed algorithms, rely on gradient or gray-scale information. The deep-learning-based U-Net combines the low-level structural features and high-level semantic features of images well through skip connections and a fully convolutional network; however, like other deep learning methods, it uses pixels as the basic unit and stacks of convolutional and pooling layers as the network structure, so training time grows multiplicatively with depth and the model is very sensitive to the data distribution. Superpixels obtained by superpixel segmentation adhere well to the true edges of objects. Taking the superpixel result of an oral CBCT image as the basic unit of segmentation and classifying the superpixels correctly realizes image segmentation while greatly reducing segmentation time and complexity.
Disclosure of Invention
In order to realize the segmentation of oral cavity CBCT images and improve the accuracy and computational efficiency of the segmentation result, the invention provides an oral cavity CBCT image segmentation method based on superpixel statistical features and a graph attention network.
The technical solution adopted to solve this technical problem is as follows:
an oral cavity CBCT image segmentation method based on superpixel statistical features and a graph attention network comprises the following steps:
step one: input the target superpixel number K for the oral CBCT image, and initialize each pixel's CT value l with a window transform defined by the window width WW and the window level WL;
step two: normalize the CT value l, the edge strength value e, and the spatial coordinates (x, y) of each pixel of the oral cavity image to the interval [0, 1], where e is the result of the Canny algorithm without non-maximum suppression; represent each pixel as a four-dimensional vector p_xy = [l, e, x, y], and divide the oral CBCT image into K superpixels with an edge-probability-based superpixel generation algorithm;
step three: represent the K superpixels as K nodes of a graph and construct the graph topology from the adjacency relations between the superpixels: if superpixel S_i borders superpixel S_k in the image plane, i.e. there is at least one pair of adjacent pixels between the edge pixels of the two superpixels, the two superpixels are regarded as direct neighbors; assign S_k the ground-truth label

y_k = 1 if ||S_k ∩ gt|| / ||S_k|| ≥ θ, otherwise y_k = 0,

where gt is the ground-truth region of the foreground part to be segmented, ||S_k|| is the number of pixels inside superpixel S_k, ||S_k ∩ gt|| is the number of pixels in the intersection of S_k with the ground-truth foreground gt, and θ is a threshold;
step four: extract the statistical features of the superpixel nodes, representing each superpixel S_k as an 8-dimensional feature vector

f_k = [μ_k, σ_k², sal_k, x̄_k, ȳ_k, x_min, y_min, n_k],

where μ_k is the mean gray value of S_k, σ_k² is the gray-value variance of S_k, sal_k is the saliency of S_k, computed from the mean gray values μ_l of its neighboring superpixels S_l, x̄_k and ȳ_k are the abscissa and ordinate of the centroid of S_k, x_min and y_min are the minimum abscissa and ordinate of S_k, and n_k is the number of pixels contained in S_k;
step five: construct a superpixel-based graph attention network model in which the graph attention module adopts L layers (L > 2), with k_1 attention heads in each intermediate layer and k_2 attention heads in the output layer, where k_1 < k_2; from the set of superpixel ground-truth labels y_true and the set of predicted labels y_pred, compute the loss function Loss and update the network model parameters:

Loss = 1 − 2·|y_pred ∩ y_true| / (|y_pred| + |y_true|),

where |y_pred ∩ y_true| is the number of elements in the intersection of the ground-truth and predicted sets, |y_pred| is the number of elements in the predicted set, and |y_true| is the number of elements in the ground-truth set;
step six: set the maximum number of iterations, the learning rate, and the mini-batch size; train the superpixel-based graph attention network on the labeled superpixel node-graph data with 5-fold cross-validation until the loss function Loss converges or the maximum number of iterations is reached;
step seven: input the oral CBCT image into the trained superpixel-based graph attention network and predict the class of each superpixel node to obtain the oral CBCT image segmentation result.
The technical concept of the invention is as follows: the superpixels of the oral CBCT image are represented as graph nodes, the graph topology is constructed from their adjacency relations, and statistical features such as the mean gray value, internal gray-value variance, saliency, and centroid coordinates of each superpixel are extracted. A graph attention network is trained to learn the connection weights between superpixels of different natures, completing the classification of the superpixels and thereby realizing the segmentation of regions of interest, such as the maxillary sinus, in oral CBCT images.
The beneficial effects of the invention are as follows: using superpixels, which fit object edges well, as the basic unit of image segmentation, the invention provides an oral CBCT image segmentation method with high segmentation accuracy and high computational efficiency, reduces the data-processing scale of the oral CBCT image segmentation task, and improves the training speed of the segmentation model.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1, an oral cavity CBCT image segmentation method based on superpixel statistical features and a graph attention network includes the following steps:
step one: input the target superpixel number K for the oral CBCT image, and initialize each pixel's CT value l with a window transform defined by the window width WW and the window level WL;
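As a minimal illustration of such a window transform (the linear ramp with clipping to [0, 1] and the function name `window_ct` are conventional radiology practice assumed here, not the patent's exact equation, which is not reproduced in this text):

```python
import numpy as np

def window_ct(l, ww, wl):
    """Map raw CT values l into [0, 1] using window width WW and window
    level WL: values below the window floor become 0, values above the
    window ceiling become 1, and values in between are scaled linearly."""
    lo, hi = wl - ww / 2.0, wl + ww / 2.0   # window bounds
    return np.clip((np.asarray(l, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
```

For example, with a window of WW = 2000 and WL = 0, a CT value equal to the window level maps to 0.5.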
step two: normalize the CT value l, the edge strength value e, and the spatial coordinates (x, y) of each pixel of the oral cavity image to the interval [0, 1], where e is the result of the Canny algorithm without non-maximum suppression; represent each pixel as a four-dimensional vector p_xy = [l, e, x, y], and divide the oral CBCT image into K superpixels with an edge-probability-based superpixel generation algorithm;
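The per-pixel four-dimensional descriptor can be sketched as follows; the simple gradient magnitude standing in for "Canny without non-maximum suppression" and the min-max normalization are assumptions for illustration, not the patent's exact procedure:

```python
import numpy as np

def pixel_features(img):
    """Build the per-pixel 4-D vectors p_xy = [l, e, x, y], each channel
    min-max normalized to [0, 1]. Edge strength e is approximated by a
    gradient magnitude (a stand-in for Canny without NMS)."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)                 # gradients along rows, columns
    e = np.hypot(gx, gy)                      # edge strength per pixel
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)

    def norm(a):                              # min-max scale to [0, 1]
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)

    return np.stack([norm(img), norm(e), norm(xs), norm(ys)], axis=-1)
```

The output has shape (H, W, 4), one normalized vector per pixel, ready to feed a superpixel generation algorithm.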
step three: represent the K superpixels as K nodes of a graph and construct the graph topology from the adjacency relations between the superpixels: if superpixel S_i borders superpixel S_k in the image plane, i.e. there is at least one pair of adjacent pixels between the edge pixels of the two superpixels, the two superpixels are regarded as direct neighbors; assign S_k the ground-truth label

y_k = 1 if ||S_k ∩ gt|| / ||S_k|| ≥ θ, otherwise y_k = 0,

where gt is the ground-truth region of the foreground part to be segmented, ||S_k|| is the number of pixels inside superpixel S_k, ||S_k ∩ gt|| is the number of pixels in the intersection of S_k with the ground-truth foreground gt, and θ is a threshold;
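Assuming 4-connectivity and a label image in which each pixel stores the index of its superpixel, the graph construction and ground-truth labeling of this step can be sketched as (the default θ = 0.5 is an assumed value, not specified in the text):

```python
import numpy as np

def superpixel_adjacency(labels):
    """Graph topology: two superpixels are direct neighbors when at least
    one pair of 4-adjacent pixels straddles their common boundary."""
    labels = np.asarray(labels)
    edges = set()
    # compare every pixel with its right and lower neighbor
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        for i, j in zip(a.ravel(), b.ravel()):
            if i != j:
                edges.add((int(min(i, j)), int(max(i, j))))
    return sorted(edges)

def superpixel_label(sp_mask, gt_mask, theta=0.5):
    """Ground-truth label: S_k is foreground (1) when the fraction of its
    pixels lying inside the ground-truth region gt reaches threshold theta."""
    overlap = np.logical_and(sp_mask, gt_mask).sum()
    return 1 if overlap / sp_mask.sum() >= theta else 0
```

The edge list can then be handed to any graph library, and the labels form the node-level supervision signal.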
step four: extract the statistical features of the superpixel nodes, representing each superpixel S_k as an 8-dimensional feature vector

f_k = [μ_k, σ_k², sal_k, x̄_k, ȳ_k, x_min, y_min, n_k],

where μ_k is the mean gray value of S_k, σ_k² is the gray-value variance of S_k, sal_k is the saliency of S_k, computed from the mean gray values μ_l of its neighboring superpixels S_l, x̄_k and ȳ_k are the abscissa and ordinate of the centroid of S_k, x_min and y_min are the minimum abscissa and ordinate of S_k, and n_k is the number of pixels contained in S_k;
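A sketch of this feature extraction follows. The patent's exact saliency formula is not reproduced in the text; here saliency is approximated as the absolute difference between the superpixel's mean gray value and the mean gray of all other pixels, which is an assumption:

```python
import numpy as np

def superpixel_features(img, labels, k):
    """8-D statistical feature vector of superpixel S_k:
    [mean gray, gray variance, saliency, centroid x, centroid y,
     min x, min y, pixel count].
    Saliency here = |mean gray of S_k - mean gray of the rest of the
    image| (an assumed stand-in for the patent's saliency measure)."""
    img = np.asarray(img, dtype=float)
    mask = labels == k
    ys, xs = np.nonzero(mask)
    mean, var = img[mask].mean(), img[mask].var()
    others = img[~mask]
    sal = abs(mean - others.mean()) if others.size else 0.0
    return np.array([mean, var, sal, xs.mean(), ys.mean(),
                     xs.min(), ys.min(), mask.sum()], dtype=float)
```

Stacking these vectors over all K superpixels yields the K × 8 node-feature matrix consumed by the graph model.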
step five: construct a superpixel-based graph attention network model in which the graph attention module adopts L layers (L > 2), with k_1 attention heads in each intermediate layer and k_2 attention heads in the output layer, where k_1 < k_2; from the set of superpixel ground-truth labels y_true and the set of predicted labels y_pred, compute the loss function Loss and update the network model parameters:

Loss = 1 − 2·|y_pred ∩ y_true| / (|y_pred| + |y_true|),

where |y_pred ∩ y_true| is the number of elements in the intersection of the ground-truth and predicted sets, |y_pred| is the number of elements in the predicted set, and |y_true| is the number of elements in the ground-truth set;
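The loss equation survives in the text only through the three quantities it defines (the intersection size and the two set sizes), from which a Dice-style loss is the natural reading; the following is a sketch under that assumption, over binary node-label arrays:

```python
import numpy as np

def dice_loss(y_pred, y_true):
    """Dice-style loss built from the three set sizes defined in the text:
    Loss = 1 - 2*|y_pred & y_true| / (|y_pred| + |y_true|).
    y_pred, y_true: binary arrays over the superpixel nodes."""
    y_pred = np.asarray(y_pred, dtype=bool)
    y_true = np.asarray(y_true, dtype=bool)
    inter = np.logical_and(y_pred, y_true).sum()
    denom = y_pred.sum() + y_true.sum()
    return 1.0 - 2.0 * inter / denom if denom else 0.0
```

A perfect prediction gives a loss of 0; disjoint predicted and true foregrounds give a loss of 1.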
step six: set the maximum number of iterations, the learning rate, and the mini-batch size; train the superpixel-based graph attention network on the labeled superpixel node-graph data with 5-fold cross-validation until the loss function Loss converges or the maximum number of iterations is reached;
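The 5-fold cross-validation split over the labeled graph data can be sketched as follows (the shuffling seed and the split-by-node-index scheme are assumptions for illustration):

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Shuffle n node indices and split them into 5 folds; each fold serves
    once as the validation set while the remaining four form the training
    set, as in standard 5-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, 5)
    for i in range(5):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        yield train, val
```

Each (train, val) pair drives one training run; the model whose validation loss converges best can then be kept.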
step seven: input the oral CBCT image into the trained superpixel-based graph attention network and predict the class of each superpixel node to obtain the oral CBCT image segmentation result.
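Turning the per-node predictions back into a pixel-level segmentation map is a simple lookup: every pixel inherits the predicted class of the superpixel it belongs to. A minimal sketch, assuming a superpixel label image and a per-node prediction array:

```python
import numpy as np

def labels_to_mask(sp_labels, node_pred):
    """Broadcast per-superpixel class predictions to a pixel-level mask:
    pixel (i, j) receives node_pred[sp_labels[i, j]]."""
    node_pred = np.asarray(node_pred)
    return node_pred[np.asarray(sp_labels)]
```

This is the final step that converts the graph-level classification into the oral CBCT segmentation result.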
The specific implementation steps described above make the present invention clearer. Any modifications and changes made to the present invention fall within the spirit of the invention and the scope of the appended claims.

Claims (1)

1. An oral cavity CBCT image segmentation method based on superpixel statistical features and a graph attention network, characterized by comprising the following steps:
step one: input the target superpixel number K for the oral CBCT image, and initialize each pixel's CT value l with a window transform defined by the window width WW and the window level WL;
step two: normalize the CT value l, the edge strength value e, and the spatial coordinates (x, y) of each pixel of the oral cavity image to the interval [0, 1], where e is the result of the Canny algorithm without non-maximum suppression; represent each pixel as a four-dimensional vector p_xy = [l, e, x, y], and divide the oral CBCT image into K superpixels with an edge-probability-based superpixel generation algorithm;
step three: represent the K superpixels as K nodes of a graph and construct the graph topology from the adjacency relations between the superpixels: if superpixel S_i borders superpixel S_k in the image plane, i.e. there is at least one pair of adjacent pixels between the edge pixels of the two superpixels, the two superpixels are regarded as direct neighbors; assign S_k the ground-truth label

y_k = 1 if ||S_k ∩ gt|| / ||S_k|| ≥ θ, otherwise y_k = 0,

where gt is the ground-truth region of the foreground part to be segmented, ||S_k|| is the number of pixels inside superpixel S_k, ||S_k ∩ gt|| is the number of pixels in the intersection of S_k with the ground-truth foreground gt, and θ is a threshold;
step four: extract the statistical features of the superpixel nodes, representing each superpixel S_k as an 8-dimensional feature vector

f_k = [μ_k, σ_k², sal_k, x̄_k, ȳ_k, x_min, y_min, n_k],

where μ_k is the mean gray value of S_k, σ_k² is the gray-value variance of S_k, sal_k is the saliency of S_k, computed from the mean gray values μ_l of its neighboring superpixels S_l, x̄_k and ȳ_k are the abscissa and ordinate of the centroid of S_k, x_min and y_min are the minimum abscissa and ordinate of S_k, and n_k is the number of pixels contained in S_k;
step five: construct a superpixel-based graph attention network model in which the graph attention module adopts L layers, L > 2, with k_1 attention heads in each intermediate layer and k_2 attention heads in the output layer, where k_1 < k_2; from the set of superpixel ground-truth labels y_true and the set of predicted labels y_pred, compute the loss function Loss and update the network model parameters:

Loss = 1 − 2·|y_pred ∩ y_true| / (|y_pred| + |y_true|),

where |y_pred ∩ y_true| is the number of elements in the intersection of the ground-truth and predicted sets, |y_pred| is the number of elements in the predicted set, and |y_true| is the number of elements in the ground-truth set;
step six: set the maximum number of iterations, the learning rate, and the mini-batch size; train the superpixel-based graph attention network on the labeled superpixel node-graph data with 5-fold cross-validation until the loss function Loss converges or the maximum number of iterations is reached;
step seven: input the oral CBCT image into the trained superpixel-based graph attention network and predict the class of each superpixel node to obtain the oral CBCT image segmentation result.
CN202110666189.8A 2021-06-16 2021-06-16 Oral cavity CBCT image segmentation method based on superpixel statistical features and a graph attention network Active CN113470045B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110666189.8A CN113470045B (en) 2021-06-16 2021-06-16 Oral cavity CBCT image segmentation method based on superpixel statistical features and a graph attention network


Publications (2)

Publication Number Publication Date
CN113470045A CN113470045A (en) 2021-10-01
CN113470045B true CN113470045B (en) 2024-04-16

Family

ID=77870255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110666189.8A Active CN113470045B (en) 2021-06-16 2021-06-16 Oral cavity CBCT image segmentation method based on superpixel statistical features and a graph attention network

Country Status (1)

Country Link
CN (1) CN113470045B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741341A (en) * 2018-12-20 2019-05-10 East China Normal University An image segmentation method based on superpixels and a long short-term memory network
WO2019104767A1 (en) * 2017-11-28 2019-06-06 Hohai University Changzhou Campus Fabric defect detection method based on deep convolutional neural network and visual saliency
CN109875863A (en) * 2019-03-14 2019-06-14 Hohai University Changzhou Campus Head-mounted VR vision improvement system based on binocular vision and mental image training
CN110717956A (en) * 2019-09-30 2020-01-21 Chongqing University L0-norm optimization reconstruction method guided by finite-angle projection superpixels


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tao Yongpeng; Jing Yu; Xu Cong. CT image segmentation method fusing superpixels and CNN. Computer Engineering and Applications. (05), pp. 204-209. *

Also Published As

Publication number Publication date
CN113470045A (en) 2021-10-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant