CN114419064A - Mammary gland duct region image segmentation method based on RN-DoubleU-Net network - Google Patents


Info

Publication number
CN114419064A
CN114419064A
Authority
CN
China
Prior art keywords
network
size
sub
convolution
multiplied
Prior art date
Legal status
Granted
Application number
CN202210021366.1A
Other languages
Chinese (zh)
Other versions
CN114419064B (en)
Inventor
陈昱莅
周耀
陆铖
马苗
裴炤
武杰
Current Assignee
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date
Filing date
Publication date
Application filed by Shaanxi Normal University
Priority to CN202210021366.1A
Publication of CN114419064A
Application granted
Publication of CN114419064B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast

Abstract

A mammary duct region image segmentation method based on an RN-DoubleU-Net network comprises the steps of data set preprocessing, RN-DoubleU-Net network model construction, RN-DoubleU-Net network training, model saving, RN-DoubleU-Net network validation and RN-DoubleU-Net network testing. The RN-DoubleU-Net network model is formed by connecting two Unet sub-networks in series, where each Unet sub-network consists of an encoder, a dilated convolution and a decoder connected sequentially in series. A deep residual network module consisting of one ordinary convolution block and three residual convolution blocks serves as the encoder of the second Unet sub-network, yielding the RN-DoubleU-Net network, which segments the mammary duct region image and exploits the effective information in the image to segment the mammary duct region accurately. Compared with existing image segmentation methods, the method offers accurate segmentation of the mammary duct region, high segmentation precision and high segmentation speed, and can be used for automatic segmentation of mammary duct region images.

Description

Mammary gland duct region image segmentation method based on RN-DoubleU-Net network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to segmentation of a mammary gland duct region image.
Background
Image segmentation means detecting the regions or targets in an image that are meaningful to people. With the spread of intelligent devices and the arrival of the big-data era, people generate, store and use large numbers of pictures, and image segmentation technology helps them further identify and analyze those pictures. Deep learning has shown strong performance in image segmentation: through continuous training it learns to segment the features of an image, retaining the region of interest. Deep learning therefore has broad research value and significance in the field of image segmentation.
Many deep learning methods have been applied to image segmentation, such as the U-Net network and various improvements of it. The U-Net network's segmentation of the mammary duct region is indistinct and unstable; although it segments common cells well, its segmentation of the mammary duct region still needs improvement. Improved variants such as the DoubleU-Net network segment the mammary duct region more precisely than U-Net, but still fall short of certain requirements.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art and provide an RN-DoubleU-Net-based mammary duct region image segmentation method with an accurate segmentation region, high precision and high segmentation speed.
The technical scheme adopted for solving the technical problems comprises the following steps:
(1) data set preprocessing
398 pictures of the mammary duct region data set are taken; each picture is 2000 × 2000 pixels.
1) The pixel values of the mammary duct region data set are normalized to [0,1], and the pictures are cut into 512 × 512 pixel patches.
2) The cut data set is divided into a training set, a validation set and a test set in an 8:1:1 ratio.
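The preprocessing step above can be sketched as follows (a minimal NumPy illustration; how the ragged border of a 2000 × 2000 picture is handled when tiling into 512 × 512 patches is an assumption, since the patent does not specify it, and full tiles on a regular grid are kept here):

```python
import numpy as np

def preprocess(image, tile=512):
    """Normalize pixel values to [0, 1] and cut the image into tile x tile patches.

    Patches are taken on a regular grid; the ragged border left over when the
    image side is not a multiple of `tile` (2000 = 3 * 512 + 464) is dropped
    here for simplicity -- an assumption, not stated in the patent.
    """
    img = image.astype(np.float32) / 255.0          # normalize to [0, 1]
    h, w = img.shape[:2]
    return [img[r:r + tile, c:c + tile]
            for r in range(0, h - tile + 1, tile)
            for c in range(0, w - tile + 1, tile)]

def split_8_1_1(items, seed=0):
    """Shuffle and split the patches into training/validation/test sets, 8:1:1."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(items))
    n_train = int(0.8 * len(items))
    n_val = int(0.1 * len(items))
    train = [items[i] for i in idx[:n_train]]
    val = [items[i] for i in idx[n_train:n_train + n_val]]
    test = [items[i] for i in idx[n_train + n_val:]]
    return train, val, test

# a single 2000 x 2000 grayscale picture yields 3 x 3 = 9 full 512 x 512 tiles
tiles = preprocess(np.zeros((2000, 2000), dtype=np.uint8))
```

Splitting at the picture level rather than the patch level would avoid near-duplicate tiles crossing the train/test boundary; the patent does not say which is used.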
(2) Construction of RN-DoubleU-Net network model
The RN-DoubleU-Net network model is formed by connecting a first Unet sub-network and a second Unet sub-network, where the output of the first Unet sub-network is connected to the input of the second Unet sub-network.
The first Unet sub-network consists of a first sub-network encoder, a first sub-network dilated convolution and a first sub-network decoder connected sequentially in series; the second Unet sub-network consists of a second sub-network encoder, a second sub-network dilated convolution and a second sub-network decoder connected sequentially in series.
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function comprises a loss function $L_{dice}$ and an evaluation function F1. The loss function $L_{dice}$ is determined as
$$L_{dice} = 1 - \frac{2|X \cap Y|}{|X| + |Y|}$$
where $X = \{x_1, x_2, \ldots, x_n\}$ denotes the true values, $Y = \{y_1, y_2, \ldots, y_n\}$ denotes the predicted values, and $n$ is the number of elements, a finite positive integer.
The evaluation function F1 is determined as follows:
$$F1 = \frac{2PR}{P + R}, \qquad P = \frac{T}{T + F}, \qquad R = \frac{T}{T + N}$$
where P is the precision rate, P ∈ [0,1]; R is the recall rate, R ∈ [0,1]; T is the true-positive quantity, T ∈ [0,1]; F is the false-positive quantity, F ∈ [0,1]; N is the false-negative quantity, N ∈ [0,1]; and P, R, T, F, N are not all 0 at the same time.
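For illustration, the loss function and evaluation function above can be computed from binary masks as follows (a NumPy sketch; the small constant `eps` is an added assumption to guard against division by zero and is not part of the patent's formulas):

```python
import numpy as np

def dice_loss(x, y, eps=1e-7):
    """L_dice = 1 - 2|X ∩ Y| / (|X| + |Y|) for binary masks x (truth), y (prediction)."""
    x = x.astype(bool)
    y = y.astype(bool)
    inter = np.logical_and(x, y).sum()
    return 1.0 - 2.0 * inter / (x.sum() + y.sum() + eps)

def f1_score(x, y, eps=1e-7):
    """F1 = 2PR/(P+R) with P = T/(T+F), R = T/(T+N), counted from binary masks."""
    x = x.astype(bool)
    y = y.astype(bool)
    t = np.logical_and(x, y).sum()         # true positives
    f = np.logical_and(~x, y).sum()        # false positives
    n = np.logical_and(x, ~y).sum()        # false negatives
    p = t / (t + f + eps)                  # precision
    r = t / (t + n + eps)                  # recall
    return 2 * p * r / (p + r + eps)

truth = np.array([[1, 1, 0, 0]])
pred  = np.array([[1, 0, 1, 0]])
# |X ∩ Y| = 1, |X| = 2, |Y| = 2  ->  L_dice = 1 - 2/4 = 0.5, and F1 = 0.5
```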
2) Training RN-DoubleU-Net network
The training set is fed into the RN-DoubleU-Net network for training. The learning rate γ of the RN-DoubleU-Net network satisfies γ ∈ [10⁻⁵, 10⁻³]; the optimizer is the adaptive moment estimation (Adam) optimizer, and training iterates until the loss function converges.
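The optimizer named above, adaptive moment estimation, is Adam. A minimal NumPy sketch of one Adam parameter update, demonstrated on a toy quadratic loss rather than the actual network (β₁, β₂ and ε below are Adam's customary defaults, an assumption; the learning rate is taken from the patent's range γ ∈ [10⁻⁵, 10⁻³]):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# iterate until the loss converges; here loss(w) = (w - 3)^2, gradient 2(w - 3)
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 20001):
    grad = 2 * (w - 3.0)
    w, m, v = adam_step(w, grad, m, v, t)
```

In practice the framework's built-in Adam optimizer would be used; the loop only makes the "iterate until the loss converges" step concrete.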
(4) Saving the model
During the training of the RN-DoubleU-Net network, the weights are continuously updated with the deep learning framework and a weight file is saved.
(5) Validating the RN-DoubleU-Net network
The validation set is input into the RN-DoubleU-Net network for validation.
(6) Testing the RN-DoubleU-Net network
The test set is input into the RN-DoubleU-Net network for testing; the saved weight file is loaded to obtain the segmented mammary duct region image.
In step (2) of constructing the RN-DoubleU-Net network model, the first sub-network encoder is the VGG19 network of the deep learning framework; the VGG19 network consists of 16 serially connected encoding convolutional layers, each with an encoding convolution kernel of size 3 × 3 and stride 1.
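For reference, VGG19's 16 convolutional layers can be counted from its standard published configuration (the channel widths below come from the original VGG paper, not from the patent):

```python
# VGG19 feature-extractor configuration: numbers are conv output channels,
# 'M' marks a 2x2 max-pooling layer between stages
vgg19_cfg = [64, 64, 'M', 128, 128, 'M',
             256, 256, 256, 256, 'M',
             512, 512, 512, 512, 'M',
             512, 512, 512, 512, 'M']

n_conv = sum(1 for v in vgg19_cfg if v != 'M')
# 16 convolutional layers in total, each 3x3 with stride 1, as the text states
```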
The first sub-network dilated convolution consists of a dilated convolutional layer formed by sequentially connecting 1 ordinary convolution kernel and 5 dilated convolution kernels in series.
The first sub-network decoder consists of 4 decoding convolution blocks; each decoding convolution block contains 1 upsampling layer of size 2 × 2, 2 decoding convolution kernels of size 3 × 3 with stride 1, and 1 attention mechanism block. The input of the first sub-network decoder is connected to the output of the first sub-network dilated convolution.
The ordinary convolution kernel is 1 × 1 with stride 1; the first dilated convolution kernel is 1 × 1 with stride 1; the second is 3 × 3 with stride 1 and dilation rate 6; the third is 3 × 3 with stride 1 and dilation rate 12; the fourth is 3 × 3 with stride 1 and dilation rate 18; the fifth is 1 × 1 with stride 1 and dilation rate 1. The input of the dilated convolutional layer is connected to the output of the first sub-network encoder.
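With the kernel sizes and dilation rates listed above, each dilated convolution preserves the spatial size of its feature map under "same" padding, whether the kernels are applied in series as the text states or as parallel ASPP-style branches. A sketch using the standard convolution output-size arithmetic (the padding values are assumptions chosen to give "same" output; the patent does not state them):

```python
def conv_out(size, k, s=1, d=1, p=0):
    """Output spatial size of a convolution: floor((size + 2p - d*(k-1) - 1)/s) + 1."""
    return (size + 2 * p - d * (k - 1) - 1) // s + 1

# the five dilated-convolution kernels: (kernel k, dilation d, assumed padding p)
kernels = [
    (1, 1, 0),    # first:  1x1, dilation 1
    (3, 6, 6),    # second: 3x3, dilation 6
    (3, 12, 12),  # third:  3x3, dilation 12
    (3, 18, 18),  # fourth: 3x3, dilation 18
    (1, 1, 0),    # fifth:  1x1, dilation 1
]
sizes = [conv_out(32, k, s=1, d=d, p=p) for k, d, p in kernels]
# a 32 x 32 feature map stays 32 x 32 through every kernel
```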
In step (2) of constructing the RN-DoubleU-Net network model, the second sub-network dilated convolution has the same structure as the first sub-network dilated convolution, and its input is connected to the output of the second sub-network encoder. The second sub-network decoder is formed by connecting 4 decoding convolution blocks in series; each decoding convolution block contains 1 upsampling layer of size 2 × 2, 2 decoding convolution kernels of size 3 × 3 with stride 1, and 1 attention mechanism block. The input of the second sub-network decoder is connected to the output of the second sub-network dilated convolution.
In step (2) of constructing the RN-DoubleU-Net network, the second sub-network encoder is formed by sequentially connecting an ordinary convolution block, a first residual convolution block, a second residual convolution block and a third residual convolution block in series.
The ordinary convolution block is formed by connecting in series 1 encoding convolution kernel of size 7 × 7 with stride 2 and 1 pooling layer of pooling size 3 × 3 with stride 2.
The first residual convolution block consists of 3 residual units connected sequentially in series. The first residual unit consists of 4 encoding convolution kernels connected sequentially in series: the second is 3 × 3 with stride 1, and the first, third and fourth are 1 × 1 with stride 1. The second and third residual units each consist of 3 encoding convolution kernels connected sequentially in series: the second kernel of each unit is 3 × 3 with stride 1, and the first and third are 1 × 1 with stride 1.
The second residual convolution block consists of 4 residual units connected sequentially in series. The first residual unit consists of 4 encoding convolution kernels connected sequentially in series: the first and fourth are 1 × 1 with stride 2, the second is 3 × 3 with stride 1, and the third is 1 × 1 with stride 1. The second through fourth residual units each consist of 3 encoding convolution kernels connected sequentially in series: the second kernel of each unit is 3 × 3 with stride 1, and the first and third are 1 × 1 with stride 1.
The third residual convolution block consists of 6 residual units connected sequentially in series. The first residual unit consists of 4 encoding convolution kernels connected sequentially in series: the first and fourth are 1 × 1 with stride 2, the second is 3 × 3 with stride 1, and the third is 1 × 1 with stride 1. The second through sixth residual units each consist of 3 encoding convolution kernels connected sequentially in series: the second kernel of each unit is 3 × 3 with stride 1, and the first and third are 1 × 1 with stride 1. The input of the second sub-network encoder is connected to the output of the first sub-network decoder.
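The second sub-network encoder above follows a ResNet-style layout: a 7 × 7 stride-2 stem with pooling, then three bottleneck stages, the last two opening with a stride-2 kernel. A sketch tracing the spatial size of a 512 × 512 input through it (the padding values are assumptions for conventional "same"-style behavior; channel counts are not given in the text and are omitted):

```python
def conv_out(size, k, s=1, p=0):
    """Output spatial size of a convolution or pooling: floor((size + 2p - k)/s) + 1."""
    return (size + 2 * p - k) // s + 1

size = 512
size = conv_out(size, k=7, s=2, p=3)  # ordinary block: 7x7 conv, stride 2  -> 256
size = conv_out(size, k=3, s=2, p=1)  # ordinary block: 3x3 pooling, stride 2 -> 128
# first residual block: all strides 1, spatial size unchanged            -> 128
size = conv_out(size, k=1, s=2, p=0)  # second residual block: 1x1, stride 2 -> 64
size = conv_out(size, k=1, s=2, p=0)  # third residual block: 1x1, stride 2  -> 32
# (the stride-2 fourth 1x1 kernel of each downsampling unit is read here as the
#  parallel shortcut projection, as in ResNet -- an assumption)
```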
Because the invention adopts an ordinary convolution block and three residual convolution blocks as the second sub-network encoder to construct the RN-DoubleU-Net network for segmenting the mammary duct region image, the RN-DoubleU-Net network can fully exploit the effective information in the image to segment the mammary duct region accurately. Compared with existing image segmentation methods, the method of the invention offers accurate segmentation of the mammary duct region, high segmentation precision and high segmentation speed, and can be used for automatic segmentation of mammary duct region images.
Drawings
FIG. 1 is a flowchart of example 1 of the present invention.
FIG. 2 is a schematic diagram of the RN-DoubleU-Net network.
Fig. 3 is a schematic diagram of a second sub-network encoder of the RN-DoubleU-Net network.
Detailed Description
The invention will be further described with reference to the drawings and examples, but the invention is not limited to the examples described below.
Example 1
Fig. 1 shows the flowchart of this embodiment. As shown in Fig. 1, the RN-DoubleU-Net-based mammary duct region image segmentation method of this embodiment comprises the following steps:
(1) data set preprocessing
398 pictures of the mammary duct region data set are taken; each picture is 2000 × 2000 pixels.
1) The pixel values of the mammary duct region data set are normalized to [0,1]; in this embodiment they are normalized to 0.5, and the pictures are cut into 512 × 512 pixel patches.
2) The cut data set is divided into a training set, a validation set and a test set in an 8:1:1 ratio.
(2) Construction of RN-DoubleU-Net network model
In Fig. 2, the RN-DoubleU-Net network of this embodiment is formed by connecting a first Unet sub-network and a second Unet sub-network, with the output of the first Unet sub-network connected to the input of the second Unet sub-network.
The first Unet sub-network consists of a first sub-network encoder, a first sub-network dilated convolution and a first sub-network decoder connected sequentially in series; the second Unet sub-network consists of a second sub-network encoder, a second sub-network dilated convolution and a second sub-network decoder connected sequentially in series.
The first sub-network encoder of this embodiment is the VGG19 network of the deep learning framework; the VGG19 network consists of 16 serially connected encoding convolutional layers, each with an encoding convolution kernel of size 3 × 3 and stride 1. The first sub-network dilated convolution consists of a dilated convolutional layer formed by sequentially connecting 1 ordinary convolution kernel and 5 dilated convolution kernels in series. The first sub-network decoder consists of 4 decoding convolution blocks; each decoding convolution block contains 1 upsampling layer of size 2 × 2, 2 decoding convolution kernels of size 3 × 3 with stride 1, and 1 attention mechanism block. The input of the first sub-network decoder is connected to the output of the first sub-network dilated convolution.
In this embodiment, the ordinary convolution kernel is 1 × 1 with stride 1; the first dilated convolution kernel is 1 × 1 with stride 1; the second is 3 × 3 with stride 1 and dilation rate 6; the third is 3 × 3 with stride 1 and dilation rate 12; the fourth is 3 × 3 with stride 1 and dilation rate 18; the fifth is 1 × 1 with stride 1 and dilation rate 1. The input of the dilated convolutional layer is connected to the output of the first sub-network encoder.
In Fig. 2, the second sub-network dilated convolution has the same structure as the first sub-network dilated convolution, and its input is connected to the output of the second sub-network encoder. The second sub-network decoder is formed by connecting 4 decoding convolution blocks in series; each decoding convolution block contains 1 upsampling layer of size 2 × 2, 2 decoding convolution kernels of size 3 × 3 with stride 1, and 1 attention mechanism block. The input of the second sub-network decoder is connected to the output of the second sub-network dilated convolution.
In Fig. 3, the second sub-network encoder of this embodiment is formed by sequentially connecting an ordinary convolution block, a first residual convolution block, a second residual convolution block and a third residual convolution block in series.
The ordinary convolution block of this embodiment is formed by connecting in series 1 encoding convolution kernel of size 7 × 7 with stride 2 and 1 pooling layer of pooling size 3 × 3 with stride 2. The first residual convolution block consists of 3 residual units connected sequentially in series: the first residual unit consists of 4 encoding convolution kernels connected sequentially in series, the second being 3 × 3 with stride 1 and the first, third and fourth being 1 × 1 with stride 1; the second and third residual units each consist of 3 encoding convolution kernels connected sequentially in series, the second kernel of each unit being 3 × 3 with stride 1 and the first and third being 1 × 1 with stride 1.
The second residual convolution block consists of 4 residual units connected sequentially in series: the first residual unit consists of 4 encoding convolution kernels connected sequentially in series, the first and fourth being 1 × 1 with stride 2, the second 3 × 3 with stride 1 and the third 1 × 1 with stride 1; the second through fourth residual units each consist of 3 encoding convolution kernels connected sequentially in series, the second kernel of each unit being 3 × 3 with stride 1 and the first and third being 1 × 1 with stride 1. The third residual convolution block consists of 6 residual units connected sequentially in series: the first residual unit consists of 4 encoding convolution kernels connected sequentially in series, the first and fourth being 1 × 1 with stride 2, the second 3 × 3 with stride 1 and the third 1 × 1 with stride 1; the second through sixth residual units each consist of 3 encoding convolution kernels connected sequentially in series, the second kernel of each unit being 3 × 3 with stride 1 and the first and third being 1 × 1 with stride 1. The input of the second sub-network encoder is connected to the output of the first sub-network decoder.
Because the invention adopts an ordinary convolution block and three residual convolution blocks as the second sub-network encoder to construct the RN-DoubleU-Net network, the mammary duct region images in the data set are segmented using the effective information in the images, yielding higher segmentation precision. Compared with the prior art, the method offers accurate segmentation of the mammary duct region, high segmentation precision and high segmentation speed.
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function comprises a loss function $L_{dice}$ and an evaluation function F1. The loss function $L_{dice}$ is determined as
$$L_{dice} = 1 - \frac{2|X \cap Y|}{|X| + |Y|}$$
where $X = \{x_1, x_2, \ldots, x_n\}$ denotes the true values, $Y = \{y_1, y_2, \ldots, y_n\}$ denotes the predicted values, and $n$ is the number of elements, a finite positive integer. In this embodiment n is 512; the specific value of n should be determined by the image size in pixels, usually a multiple of 512 or a divisor of 512.
The evaluation function F1 is determined as follows:
$$F1 = \frac{2PR}{P + R}, \qquad P = \frac{T}{T + F}, \qquad R = \frac{T}{T + N}$$
where P is the precision rate, P ∈ [0,1]; R is the recall rate, R ∈ [0,1]; T is the true-positive quantity, T ∈ [0,1]; F is the false-positive quantity, F ∈ [0,1]; and N is the false-negative quantity, N ∈ [0,1]. In this embodiment the value of P, R, T, F, N is 0.5.
2) Training RN-DoubleU-Net network
The training set is fed into the RN-DoubleU-Net network for training. The learning rate γ of the RN-DoubleU-Net network satisfies γ ∈ [10⁻⁵, 10⁻³]; in this embodiment γ = 10⁻³. The optimizer is the adaptive moment estimation (Adam) optimizer, and training iterates until the loss function converges.
(4) Saving the model
During the training of the RN-DoubleU-Net network, the weights are continuously updated with the deep learning framework and a weight file is saved.
(5) Validating the RN-DoubleU-Net network
The validation set is input into the RN-DoubleU-Net network for validation.
(6) Testing the RN-DoubleU-Net network
The test set is input into the RN-DoubleU-Net network for testing; the saved weight file is loaded to obtain the segmented mammary duct region image.
This completes the RN-DoubleU-Net-based mammary duct region image segmentation method.
Example 2
The method for segmenting the image of the mammary duct region based on the RN-DoubleU-Net network comprises the following steps:
(1) data set preprocessing
398 pictures of the mammary duct region data set are taken; each picture is 2000 × 2000 pixels.
1) The pixel values of the mammary duct region data set are normalized to [0,1]; in this embodiment they are normalized to 0, and the pictures are cut into 512 × 512 pixel patches.
2) The cut data set is divided into a training set, a validation set and a test set in an 8:1:1 ratio.
(2) Construction of RN-DoubleU-Net network model
This procedure is the same as in example 1.
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function comprises a loss function $L_{dice}$ and an evaluation function F1. The loss function $L_{dice}$ is determined as
$$L_{dice} = 1 - \frac{2|X \cap Y|}{|X| + |Y|}$$
where $X = \{x_1, x_2, \ldots, x_n\}$ denotes the true values, $Y = \{y_1, y_2, \ldots, y_n\}$ denotes the predicted values, and $n$ is the number of elements, a finite positive integer. In this embodiment n is 512; the specific value of n should be determined by the image size in pixels, usually a multiple of 512 or a divisor of 512.
The evaluation function F1 is determined as follows:
$$F1 = \frac{2PR}{P + R}, \qquad P = \frac{T}{T + F}, \qquad R = \frac{T}{T + N}$$
where P is the precision rate, P ∈ [0,1]; R is the recall rate, R ∈ [0,1]; T is the true-positive quantity, T ∈ [0,1]; F is the false-positive quantity, F ∈ [0,1]; and N is the false-negative quantity, N ∈ [0,1]. In this embodiment P is 0.5 and R, T, F, N are 0.
2) Training RN-DoubleU-Net network
The training set is fed into the RN-DoubleU-Net network for training. The learning rate γ of the RN-DoubleU-Net network satisfies γ ∈ [10⁻⁵, 10⁻⁴]; in this embodiment γ = 10⁻⁵. The optimizer is the adaptive moment estimation (Adam) optimizer, and training iterates until the loss function converges.
The other steps are the same as in Example 1. This completes the RN-DoubleU-Net-based mammary duct region image segmentation method.
Example 3
The method for segmenting the image of the mammary duct region based on the RN-DoubleU-Net network comprises the following steps:
(1) data set preprocessing
398 pictures of the mammary duct region data set are taken; each picture is 2000 × 2000 pixels.
1) The pixel values of the mammary duct region data set are normalized to [0,1]; in this embodiment they are normalized to 1, and the pictures are cut into 512 × 512 pixel patches.
2) The cut data set is divided into a training set, a validation set and a test set in an 8:1:1 ratio.
(2) Construction of RN-DoubleU-Net network model
This procedure is the same as in example 1.
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function comprises a loss function $L_{dice}$ and an evaluation function F1. The loss function $L_{dice}$ is determined as
$$L_{dice} = 1 - \frac{2|X \cap Y|}{|X| + |Y|}$$
where $X = \{x_1, x_2, \ldots, x_n\}$ denotes the true values, $Y = \{y_1, y_2, \ldots, y_n\}$ denotes the predicted values, and $n$ is the number of elements, a finite positive integer. In this embodiment n is 512; the specific value of n should be determined by the image size in pixels, usually a multiple of 512 or a divisor of 512.
The evaluation function F1 is determined as follows:
$$F1 = \frac{2PR}{P + R}, \qquad P = \frac{T}{T + F}, \qquad R = \frac{T}{T + N}$$
where P is the precision rate, P ∈ [0,1]; R is the recall rate, R ∈ [0,1]; T is the true-positive quantity, T ∈ [0,1]; F is the false-positive quantity, F ∈ [0,1]; and N is the false-negative quantity, N ∈ [0,1]. In this embodiment the value of P, R, T, F, N is 1.
2) Training RN-DoubleU-Net network
The training set is fed into the RN-DoubleU-Net network for training. The learning rate γ of the RN-DoubleU-Net network satisfies γ ∈ [10⁻⁵, 10⁻⁴]; in this embodiment γ = 10⁻⁴. The optimizer is the adaptive moment estimation (Adam) optimizer, and training iterates until the loss function converges.
The other steps are the same as in Example 1. This completes the RN-DoubleU-Net-based mammary duct region image segmentation method.
Example 4
The method for segmenting the image of the mammary duct region based on the RN-DoubleU-Net network comprises the following steps:
(1) data set preprocessing
This procedure is the same as in example 1.
(2) Construction of RN-DoubleU-Net network model
This procedure is the same as in example 1.
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function comprises a loss function $L_{dice}$ and an evaluation function F1. The loss function $L_{dice}$ is determined as
$$L_{dice} = 1 - \frac{2|X \cap Y|}{|X| + |Y|}$$
where $X = \{x_1, x_2, \ldots, x_n\}$ denotes the true values, $Y = \{y_1, y_2, \ldots, y_n\}$ denotes the predicted values, and $n$ is the number of elements, a finite positive integer. In this embodiment n is 512; the specific value of n should be determined by the image size in pixels, usually a multiple of 512 or a divisor of 512.
The evaluation function F1 is determined as follows:
$$F1 = \frac{2PR}{P + R}, \qquad P = \frac{T}{T + F}, \qquad R = \frac{T}{T + N}$$
where P is the precision rate, P ∈ [0,1]; R is the recall rate, R ∈ [0,1]; T is the true-positive quantity, T ∈ [0,1]; F is the false-positive quantity, F ∈ [0,1]; and N is the false-negative quantity, N ∈ [0,1]. In this embodiment P is 0 and R, T, F, N are 0.5.
2) Training RN-DoubleU-Net network
This procedure is the same as in example 1.
The other steps are the same as in example 1. This completes the RN-DoubleU-Net network-based mammary duct region image segmentation method.
In order to verify the beneficial effects of the invention, the inventors carried out comparative simulation experiments using the RN-DoubleU-Net network-based mammary duct region image segmentation method of example 1 of the invention, the DoubleU-Net method, and the U-Net method. The experimental conditions are as follows:
the trained models are tested on the same test set, and the evaluation code is used to measure model accuracy with the evaluation function F1 as the evaluation metric; the larger the F1 value, the better the method. The experimental results for the evaluation function F1 are shown in Table 1.
TABLE 1 Evaluation function F1 values for example 1, the DoubleU-Net method and the U-Net method

Test method            Evaluation function F1
Example 1              0.7271
DoubleU-Net method     0.7167
U-Net method           0.6206
As can be seen from Table 1, the evaluation function F1 value of the method of example 1 is 0.7271, that of the DoubleU-Net method is 0.7167, and that of the U-Net method is 0.6206. The F1 value of the method of example 1 is therefore 1.45% higher, in relative terms, than that of the DoubleU-Net method and 17.16% higher than that of the U-Net method.
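The relative improvements quoted above can be verified with a minimal check:

```python
def rel_improvement(new, base):
    """Relative improvement of `new` over `base`, in percent."""
    return (new - base) / base * 100

gain_over_doubleunet = rel_improvement(0.7271, 0.7167)  # about 1.45 %
gain_over_unet = rel_improvement(0.7271, 0.6206)        # about 17.16 %
```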

Claims (5)

1. A mammary gland duct region image segmentation method based on an RN-DoubleU-Net network is characterized by comprising the following steps:
(1) data set preprocessing
398 pictures of the mammary gland duct region data set are taken, each of size 2000 × 2000 pixels;
1) normalizing the pixel value of the mammary gland duct region data set to [0,1] and cutting into pictures with the size of 512 x 512 pixels;
2) dividing the segmented data set into a training set, a verification set and a test set in the ratio 8:1:1;
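The preprocessing arithmetic of steps 1) and 2) can be sketched as follows. Whether the 2000 × 2000 images are padded up to a multiple of 512 or the remainder is discarded is not specified in the claim, so both variants are shown; applying the 8:1:1 split by truncation is likewise an assumption.

```python
import math

def n_tiles(image_size=2000, tile=512, pad=True):
    """Number of tile x tile crops cut from an image_size x image_size picture.

    pad=True rounds each axis up (remainder padded); pad=False keeps only
    full tiles. The patent does not state which variant is used.
    """
    per_axis = math.ceil(image_size / tile) if pad else image_size // tile
    return per_axis * per_axis

def split_811(n_items):
    """Split n_items into training/verification/test sets in the ratio 8:1:1."""
    n_train = int(n_items * 0.8)
    n_val = int(n_items * 0.1)
    n_test = n_items - n_train - n_val
    return n_train, n_val, n_test
```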
(2) construction of RN-DoubleU-Net network model
The RN-DoubleU-Net network model is formed by connecting a first Unet sub-network and a second Unet sub-network, wherein the output of the first Unet sub-network is connected with the input of the second Unet sub-network;
the first Unet sub-network is formed by sequentially connecting a first sub-network encoder, a first sub-network hole convolution and a first sub-network decoder in series, and the second Unet sub-network is formed by sequentially connecting a second sub-network encoder, a second sub-network hole convolution and a second sub-network decoder in series;
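The serial wiring described above (encoder → hole convolution → decoder inside each U-Net sub-network, with the output of the first sub-network feeding the input of the second) can be sketched abstractly; the stage functions here are placeholders that record the data path, not the patent's actual layers.

```python
def chain(*stages):
    """Compose stages left-to-right: chain(f, g)(x) == g(f(x))."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

def stage(name):
    # Placeholder layer: appends its name to a trace of the data path.
    return lambda trace: trace + [name]

unet1 = chain(stage("enc1"), stage("aspp1"), stage("dec1"))  # first U-Net sub-network
unet2 = chain(stage("enc2"), stage("aspp2"), stage("dec2"))  # second U-Net sub-network
rn_double_unet = chain(unet1, unet2)  # output of sub-network 1 feeds sub-network 2

path = rn_double_unet([])
```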
(3) training RN-DoubleU-Net network
1) Determining an objective function
The objective function includes a loss function L_dice and an evaluation function F1; the loss function L_dice is determined according to the following formula:

$$L_{dice} = 1 - \frac{2\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i}$$
wherein X represents the true values, X = {x_1, x_2, ..., x_n}; Y represents the predicted values, Y = {y_1, y_2, ..., y_n}; and n is the number of elements, a finite positive integer;
the evaluation function F1 is determined as follows:

$$F1 = \frac{2PR}{P + R}$$

$$P = \frac{T}{T + F}$$

$$R = \frac{T}{T + N}$$
wherein P is the precision rate, P ∈ [0,1]; R is the recall rate, R ∈ [0,1]; T is the true positive value, T ∈ [0,1]; F is the false positive value, F ∈ [0,1]; N is the false negative value, N ∈ [0,1]; and P, R, T, F, N are not all 0 at the same time;
2) training RN-DoubleU-Net network
sending the training set into the RN-DoubleU-Net network for training, wherein the learning rate γ of the RN-DoubleU-Net network satisfies γ ∈ [10^-5, 10^-3]; the optimizer adopts the adaptive moment estimation (Adam) optimizer, iterating until the loss function converges;
(4) preservation model
In the process of training the RN-DoubleU-Net network, continuously updating the weight by using a deep learning framework, and storing a weight file;
(5) validating RN-DoubleU-Net networks
Inputting the verification set into an RN-DoubleU-Net network for verification;
(6) testing RN-DoubleU-Net networks
And inputting the test set into an RN-DoubleU-Net network for testing, and loading the stored weight file to obtain a mammary gland duct region image.
2. The RN-DoubleU-Net network-based mammary duct region image segmentation method of claim 1, wherein: in step (2) of constructing the RN-DoubleU-Net network model, the first sub-network encoder is the VGG19 network of the deep learning framework; the VGG19 network is composed of 16 serially connected coding convolutional layers, whose coding convolution kernels have size 3 × 3 and step size 1;
the first sub-network cavity convolution is composed of cavity convolution layers, and each cavity convolution layer is formed by sequentially connecting 1 common convolution kernel and 5 cavity convolution kernels in series;
the first sub-network decoder is composed of 4 decoding volume blocks, each decoding volume block comprises 1 upsampling layer with the size of 2 multiplied by 2, 2 decoding convolution kernels with the size of 3 multiplied by 3 and the step length of 1 and 1 attention mechanism block, and the input of the first sub-network decoder is connected with the output of the first sub-network hole convolution.
3. The RN-DoubleU-Net network-based mammary duct region image segmentation method of claim 2, wherein: the size of the common convolution kernel is 1 × 1 with the step length of 1, the size of the first cavity convolution kernel is 1 × 1 with the step length of 1 and the size of a cavity is 1, the size of the convolution kernel of the second cavity convolution kernel is 3 × 3 with the step length of 1 and the size of a cavity is 6, the size of the convolution kernel of the third cavity convolution kernel is 3 × 3 with the step length of 1 and the size of a cavity is 12, the size of the convolution kernel of the fourth cavity convolution kernel is 3 × 3 with the step length of 1 and the size of a cavity is 18, and the size of the convolution kernel of the fifth cavity convolution kernel is 1 × 1 with the step length of 1 and the size of a cavity is 1; the input of the hole convolutional layer is connected to the output of the first sub-network encoder.
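The hole (dilated) convolutions listed in this claim preserve the spatial size of the feature map when the padding equals the dilation rate, which can be checked with the standard convolution output-size formula. The padding choice is a common ASPP convention and an assumption, since the claim does not state it.

```python
def conv_out_size(n, k, stride=1, pad=0, dilation=1):
    """Output size of a convolution: floor((n + 2p - d*(k-1) - 1)/s) + 1."""
    return (n + 2 * pad - dilation * (k - 1) - 1) // stride + 1

# 3x3 kernels with hole sizes (dilations) 6, 12 and 18 keep a 32x32 feature
# map at 32x32 when the padding equals the dilation.
sizes = [conv_out_size(32, 3, pad=d, dilation=d) for d in (6, 12, 18)]
```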
4. The RN-DoubleU-Net network-based mammary duct region image segmentation method of claim 1, wherein: in the step (2) of constructing the RN-DoubleU-Net network model, the second sub-network hole convolution has the same structure as the first sub-network hole convolution, and the input of the second sub-network hole convolution is connected with the output of the second sub-network encoder;
the second sub-network decoder is formed by connecting 4 decoding convolution blocks in series, each decoding convolution block comprises 1 upsampling layer with the size of 2 multiplied by 2, 2 decoding convolution kernels with the size of 3 multiplied by 3 and the step length of 1 and 1 attention mechanism block, and the input of the second sub-network decoder is connected with the output of the second sub-network hole convolution.
5. The RN-DoubleU-Net network-based breast duct region image segmentation method according to claim 1 or 4, wherein: in the step (2) of constructing the RN-DoubleU-Net network, the second sub-network encoder is formed by sequentially connecting a common convolution block, a first residual convolution block, a second residual convolution block and a third residual convolution block in series;
the common convolution block is formed by connecting 1 coding convolution kernel with the size of 7 multiplied by 7 and the step length of 2 and 1 pooling layer with the size of 3 multiplied by 3 and the step length of 2 in series;
the first residual error convolution block is composed of 3 residual error units which are sequentially connected in series, the first residual error unit is composed of 4 coding convolution kernels which are sequentially connected in series, the size of the second coding convolution kernel is 3 multiplied by 3, the step length is 1, and the sizes of the first coding convolution kernel, the third coding convolution kernel and the fourth coding convolution kernel are 1 multiplied by 1, and the step length is 1; the second and third residual error units are formed by sequentially connecting 3 coding convolution kernels in series, the size of the second coding convolution kernel of each residual error unit is 3 multiplied by 3, the step length is 1, and the size of the first and third coding convolution kernels is 1 multiplied by 1, and the step length is 1;
the second residual convolution block is composed of 4 residual units which are sequentially connected in series, the first residual unit is composed of 4 coding convolution kernels which are sequentially connected in series, the size of the first coding convolution kernel and the size of the fourth coding convolution kernel are 1 multiplied by 1 step length and are 2, the size of the second coding convolution kernel is 3 multiplied by 3 step length and is 1, and the size of the third coding convolution kernel is 1 multiplied by 1 step length and is 1; the second to the four residual error units are formed by sequentially connecting 3 coding convolution kernels in series, the size of the second coding convolution kernel of each residual error unit is 3 multiplied by 3, the step length is 1, and the sizes of the first coding convolution kernel and the third coding convolution kernel are 1 multiplied by 1, and the step length is 1;
the third residual volume block consists of 6 residual units which are sequentially connected in series; the first residual unit is formed by sequentially connecting 4 coding convolution kernels in series, the size of the first coding convolution kernel and the size of the fourth coding convolution kernel are 1 multiplied by 1, the step size is 2, the size of the second coding convolution kernel is 3 multiplied by 3, the step size is 1, and the size of the third coding convolution kernel is 1 multiplied by 1, and the step size is 1; the second to sixth residual error units are formed by sequentially connecting 3 coding convolution kernels in series, the size of the second coding convolution kernel of each residual error unit is 3 multiplied by 3, the step length is 1, the sizes of the first and third coding convolution kernels are 1 multiplied by 1, and the step length is 1; the input of the second sub-network encoder is connected to the output of the first sub-network decoder.
CN202210021366.1A 2022-01-10 2022-01-10 Mammary gland area image segmentation method based on RN-DoubleU-Net network Active CN114419064B (en)

Publications (2)

Publication Number Publication Date
CN114419064A true CN114419064A (en) 2022-04-29
CN114419064B CN114419064B (en) 2024-04-05


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020118618A1 (en) * 2018-12-13 2020-06-18 深圳先进技术研究院 Mammary gland mass image recognition method and device
CN113487615A (en) * 2021-06-29 2021-10-08 上海海事大学 Retina blood vessel segmentation method and terminal based on residual error network feature extraction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DEBESH JHA等: "DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation", ARXIV, 8 June 2020 (2020-06-08) *
WANG Shuai; LIU Juan; BI Yaoyao; CHEN Zhe; ZHENG Qunhua; DUAN Huifang: "Automatic identification method of mammary ducts based on two-step clustering and random forest", Computer Science, no. 03, 15 March 2018 (2018-03-15) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant